Power Buys You Distance From The Crime

post by Elizabeth (pktechgirl) · 2019-08-02T20:50:01.106Z · LW · GW · 75 comments

Contents

  Introduction
  Examples 1 + 2: Corporate Malfeasance
  Example/Exception 2.5: Corporate Malfeasance Gone Wrong
  Example 3: Foreign Medical Care
  Example 4: My Dating an Artist Experience
  Summary

Introduction

Taxes are typically meant to be proportional to money (or negative externalities, but that's not what I'm focusing on). But one thing money buys you is flexibility, which can be used to avoid taxes. Because of this, taxes aimed at the wealthy tend to end up hitting the well-off-or-rich-but-not-truly-wealthy harder, and tax cuts aimed at the poor end up helping the middle class. Examples (feel free to stop reading these when you get the idea, this is just the analogy section of the essay):

Note that most of these are perfectly legal and the rest are borderline. But we're still not getting the result we want, of taxes being proportional to income.

When we assess moral blame for a situation, we typically want it to be roughly in proportion to how much power a person has to change said situation. But just like money can be used to evade taxes, power can be used to avoid blame. This results in a distorted blame-distribution apparatus which assigns the least blame to the person most able to change the situation. Allow me a few examples to demonstrate this.

Examples 1 + 2: Corporate Malfeasance

Amazon.com provides a valuable service by letting any idiot sell a book, with minimal overhead. One of the costs of this complete lack of verification is that people will sell things that wouldn't pass verification, such as counterfeits, at great cost to publishers and authors. Amazon could never sell counterfeits directly: they're a large company that's easy to sue. But by setting themselves up as a platform on which other people sell, they enable themselves to profit from counterfeits.

Or take slavery. No company goes “I’m going to go out and enslave people today” (especially not publicly), but not paying people is sometimes cheaper than paying them, so financial pressure will push towards slavery. Public pressure pushes in the opposite direction, so companies try not to visibly use slave labor. But they can’t control what their subcontractors do, and especially not what their subcontractors’ subcontractors’ subcontractors do, and sometimes this results in workers being unpaid and physically blocked from leaving.

Who’s at fault for the subcontractor³’s slave labor? One obvious answer is “the person locking them in during the fire” or “the parent who gives their kid piecework”, and certainly it couldn’t happen without them. But if we say “Nike’s lack of knowledge makes them not responsible”, we give them an incentive to subcontract without asking follow up questions. The executive is probably benefiting more from the system of slave labor than the factory owner is from his little domain, and has more power to change what is happening. If the small factory owner pays fair wages, he gets outcompeted by a factory that does use slave labor. If the Nike CEO decides to insource their manufacturing to ensure fair working conditions, something actually changes.

...Unless consumers switch to a cheaper, slavery-driven shoe brand.

Which is actually really hard to not do. You could choose more expensive shoes, but the profit margin is still bigger if you shrink expenses, so that doesn’t help (which is why Fairtrade was a failure from the workers’ perspective). You can’t investigate the manufacturing conditions of everything you buy-- it’s just too time consuming. But if you punish obvious enslavement and conduct no follow up studies, what you get is obscured enslavement, not decent working conditions.

Moral Mazes describes the general phenomenon on page 21:

Moreover, pushing down details relieves superiors of the burden of too much knowledge, particularly guilty knowledge. A superior will say to a subordinate, for instance: “Give me your best thinking on the problem with [X].” When the subordinate makes his report, he is often told: “I think you can do better than that,” until the subordinate has worked out all the details of the boss’s predetermined solution, without the boss being specifically aware of “all the eggs that have to be broken.” It is also not at all uncommon for very bald and extremely general edicts to emerge from on high. For example, “Sell the plant in [St. Louis]; let me know when you’ve struck a deal,” or “We need to get higher prices for [fabric X]; see what you can work out,” or “Tom, I want you to go down there and meet with those guys and make a deal and I don’t want you to come back until you’ve got one.” This pushing down of details has important consequences.
First, because they are unfamiliar with—indeed deliberately distance themselves from—entangling details, corporate higher echelons tend to expect successful results without messy complications. This is central to top executives’ well-known aversion to bad news and to the resulting tendency to kill the messenger who bears the news.
Second, the pushing down of details creates great pressure on middle managers not only to transmit good news but, precisely because they know the details, to act to protect their corporations, their bosses, and themselves in the process. They become the “point men” of a given strategy and the potential “fall guys” when things go wrong. From an organizational standpoint, overly conscientious managers are particularly useful at the middle levels of the structure. Upwardly mobile men and women, especially those from working-class origins who find themselves in higher status milieux, seem to have the requisite level of anxiety, and perhaps tightly controlled anger and hostility, that fuels an obsession with detail. Of course, such conscientiousness is not necessarily, and is certainly not systematically, rewarded; the real organizational premiums are placed on other, more flexible, behavior.

These examples differ in an important way from tax structuring: structuring requires seeking out advice and acting on it to achieve the goal. It’s highly agentic. The Wells Fargo and apparel-outsourcing cases required no such agency on the part of executives. They vaguely wished for something (more revenue, fewer expenses), and somehow it happened. An employee who tried to direct the executives’ attention to the fact that they were indirectly employing slaves would probably be fired before they ever reached the executives. Executives are not only outsourcing their dirty work, they’re outsourcing knowledge of their dirty work. 

[Details of personal anecdotes changed both intentionally and by the vagaries of human memory]

Example/Exception 2.5: Corporate Malfeasance Gone Wrong

The Wells Fargo account fraud scandal: in order to meet quotas, entry level Wells Fargo employees created millions of unauthorized accounts (typically extra services for existing customers). I originally included this as an example of "executives incentivizing entry level employees to commit fraud on their behalf", but it turns out Wells Fargo made almost no money off the fraud-- $2m over five years, which hardly seems worth the employees' time, much less the $185m fine. I've left this in as an example of how the incentives-not-orders system doesn't always work in powerful people's favor.

Thanks to Larks [LW(p) · GW(p)] for pointing this out.

Example 3: Foreign Medical Care

My cousin Angela broke her leg while traveling in Thailand, and was delighted by the level of care she received at the Thai hospital-- not just medically, but socially. Nurses brought her flowers and were just generally nicer than their American counterparts. Her interpretation was that Thailand was a place motivated by love and kindness, not money, and Americans should aspire to this level of regard for their fellow human being. My interpretation was that she had enough money to buy the goodwill of everyone in the room without noticing, so what she should have learned is that being rich is awesome, and that being an American who travels internationally is enough to qualify you as rich.

This is mostly a success story for the free market: Angela got good medical care and the nurses got money (I’m assuming). Any crimes in this story were committed off-screen. But Angela was certainly benefiting from the nurses’ constrained choices in life. And had she had actual power to affect healthcare in the US, trying to fix it based on what she learned in Thailand would have done a lot of damage.

Example 4: My Dating an Artist Experience

My starving-artist ex-boyfriend, Connor, stayed with me for two months after a little bad luck and a lot of bad decisions cost him his job and then his apartment (this was back when I had a two bedroom apartment to myself-- I miss Seattle). During this time we had one big fight. My view on the fight now is that I was locally in the right, but globally the disagreement was indicative of irreconcilable differences that should have led us to break up. That breakup was delayed by months when he capitulated.

One possibility is that he genuinely thought he could change and that I was worth the attempt. Another is that he saw the incompatibility, or knew things that should have led him to see it, but lied or blocked out the knowledge so that he could keep living with me. This would be a shitty, manipulative thing for him to do. On the other hand, what did I expect? If the punishment for breaking up with me was, best case scenario, moving into a homeless shelter, of course he felt pressure to appease me. 

It wasn’t my fault he felt that pressure, any more than it was Angela’s fault her nurses were born with fewer options than her. Time in my spare bedroom was a gift to him I had no obligation to keep giving. But if I’d really valued a coercion free decision, I would have committed to housing him independent of our relationship. Although if that becomes common knowledge, it just means people can’t make an uncoerced decision to date me at all. And if helping Connor at all meant a commitment to do so forever, he would get a lot less help.

This case is more like the Wells Fargo case than the Amazon or Nike ones. I was getting only the appearance of what I wanted (a genuine relationship with a compatible person), not the real thing. Nonetheless, the universe was contorting itself to give me that appearance.

Summary

What all of these stories have in common is that (relatively) powerful people’s desires were met by people less powerful than them, without them having to take responsibility for the action or sometimes even the desire. Society conspired to give them what they wanted (or in the case of Connor and Wells Fargo, a facsimile of what they wanted) without them having to articulate the want, even to themselves. That’s what power means: ability to make the game come out like you want. Disempowered people are forced to consciously notice things (e.g., this budget is unreachable) and make plans (e.g., slavery) where a powerful person wouldn’t. And it’s unfair to judge them for doing so while ignoring the morality of the powerful who never consider the system that brings them such nice things. 

Take home message:

  1. The most agentic person in a situation is not necessarily most morally culpable. One of the things power buys you is distance from the crime.
  2. Power obscures information flow. If you are not proactively looking to see how your wants and needs are being met, you are probably benefiting from something immoral or being tricked.

This piece was inspired by a conversation with and benefited from comments by Ben Hoffman. I'd also like to thank several commenters on Facebook for comments on an earlier draft and Justis Mills for copyediting.

75 comments


comment by Viliam · 2019-08-04T21:45:34.480Z · LW(p) · GW(p)

Just thinking loudly about the boundaries of this effect...

Suppose I have a garden, and a robot that can pick apples. I instruct the robot to bring me the "nearest big apple" (where "big" is exactly defined as having a diameter of at least X). Coincidentally, all apples in my garden are small, so the robot picks the nearest big apple from the neighbor's garden and brings it to me, without saying anything.

When the neighbor notices it, he will be angry at me, but I can defend myself by saying I made an innocent mistake; I didn't mean to steal apples. This may be a good excuse the first time, but if the thing keeps happening, the excuse no longer works; it was based on me not knowing.

Also, the robot will not even try to be inconspicuous; it will climb straight over the fence, in the middle of the day, even when the neighbor is there and looking at it. If I try to avoid this by giving instructions like "bring me the nearest apple, but make sure no one notices you", I lose the plausible deniability. (I am assuming here that if my neighbor decides to sue me, the judge will see the robot's programming.)

Now why, specifically, does the situation change when instead of a robot I use a slave? It seems to me the only difference is that I can (knowingly or not, or anything in between) achieve an outcome where the slave will steal for me, using his intelligence to avoid detection; and if caught anyway, I can afterwards claim that the slave decided to steal of his own free will; not because of my commands (either ordering him to steal, or just failing to mention that theft is prohibited), but specifically and intentionally against them. That relieves me of all blame.

Okay, so how specifically do I make a slave steal for me, without giving the explicit order? I have to give a command that is impossible (or highly unlikely) to accomplish otherwise. For example, my garden only contains small apples, and I command the slave to bring me a big apple, threatening to whip him if he fails. The only way to avoid whipping is to steal, but hey, I never mentioned that! Also, it is common knowledge that (unlike the robot) the slave is supposed to know that stealing is wrong.

But this naive strategy fails if the slave checks my garden and then tells me "master, your garden only contains small apples; what am I supposed to do?" Saying "I don't care, just bring the f-ing apple or feel my wrath" will achieve the purpose, but it puts a dent in my deniability. To do things properly, I must be able to claim afterwards that I believed that my garden contained big apples, and therefore I believed that my orders could be fulfilled legitimately. (Bonus points if afterwards, despite the slave's testimony to the contrary, I can insist with indignation that my garden still contains big apples, i.e. the slave is just lying to save his ass.)

Therefore, I need to remove the communication channel where the slave could tell me that there are no big apples in my garden. A simple approach is to give the command, along with the threat, and then leave. The slave knows that he gets whipped if the apple is not on my table by sunset, and I am not there to communicate the fact that the order cannot be achieved legitimately. So he either takes the risk and waits for my return -- betting on my sense of fairness, that I wouldn't punish him for not having accomplished the impossible -- or decides that it is relatively safer to simply steal the apple. This is better, but still too random. I could increase the probability of stealing by cultivating an image of a hot-tempered master who punishes first and asks questions later. But there are still ways this could fail (e.g. the slave could say afterwards "master, this was the last big apple in your garden, the remaining ones are all small", which would remove my deniability for giving the same command the next day).

A more complex approach is to make it known that talking about small apples is taboo, and all violations of this taboo will be punished. Now the slave will not dare mention that the apples in my garden are small, but I need a plausible excuse for the taboo. Perhaps my garden is a source of my pride, and thus I take any criticism of my garden as a personal offense? That could work. I only need to establish the rule "I am proud of my garden and any kind of criticism will be severely punished" sufficiently long before I start ordering my slaves to steal, to avoid the impression that I established that rule exactly for that purpose.

Summary: a command that is impossible (or unlikely) to be accomplished legitimately; a threat of punishment; and proactively destroying the feedback channel under some pretense.

Now I'd like to give some advice on how to notice when you are in a similar situation -- where the feedback channels are sabotaged, and perhaps it's just a question of time before you receive the impossible command and have to use your own will to break the rules and take responsibility for the decision -- but actually, all situations with a power differential are to a smaller or greater degree like this. People usually can't communicate with their superiors openly. (Not even when the boss says "actually, I prefer if you communicate with me openly". Seriously, don't. What that statement actually means is that talking about difficulties in communication to superiors is also a taboo. Generally, a company statement "X is true" is usually best translated as "saying 'X is false' will be punished".)

A piece of practical advice would perhaps be to notice the local taboos, and maneuver to avoid coming into contact with them. If a person X cannot be criticized, avoid being on the same team as X. If software Y cannot be criticized, avoid work that involves using software Y. If ideology Z cannot be criticized, avoid projects where Z applies most strongly. Yeah, I know, this is easier said than done, and it does not work reliably (e.g. you can choose a different team, and the next day X switches to that team, too).

Replies from: fictionfan42, Benquo
comment by fictionfan42 · 2019-09-02T16:36:26.215Z · LW(p) · GW(p)

Some context: In the past I had a job as a quality assurance inspector. I realized very soon after I started that a machine could easily do my job with fewer errors and for less than I was being paid, so I wondered, "Why do they pay a human to do this job?" My conclusion was that if a machine makes a mistake, as it is bound to do eventually, they can't really fire it or yell at it the way they can a human. A human can be blamed.

So I agree with you. In the future I can see robots doing all the jobs except being the scapegoats.

Replies from: Viliam
comment by Viliam · 2019-09-02T23:47:43.867Z · LW(p) · GW(p)

Self-driving cars have a similar problem. Even if the car causes 100 times fewer accidents than a human driver, the problem is that when an accident happens, we need a human to blame.

How will we determine who goes to jail? Elon Musk? The poor programmer who wrote the piece of software that will be identified as having caused the bug? Or maybe someone like you, who "should have checked that the car is 100% safe", even if everyone knows it is impossible. Most likely, it will be someone at the bottom of the corporate structure.

For now, as far as I know, the solution is that there must be a human driver in a self-driving car. In case of an accident, that human will be blamed for not avoiding it by taking over control.

But I suppose that moving the blame from the customer to some low-wage employee of the producer would be better for sales, so the legislation will likely change this way some day. We just need to find the proper scapegoat.

Replies from: mikkel-wilson
comment by MikkW (mikkel-wilson) · 2020-10-09T04:51:27.077Z · LW(p) · GW(p)

How will we determine who goes to jail? Elon Musk? The poor programmer who wrote the piece of software that will be identified as having caused the bug? Or maybe someone like you, who "should have checked that the car is 100% safe", even if everyone knows it is impossible. Most likely, it will be someone at the bottom of the corporate structure.

It seems to me that the correct answer to your question is "no-one should go to jail"

Replies from: FireStormOOO
comment by FireStormOOO · 2021-03-10T02:27:07.222Z · LW(p) · GW(p)

Or more completely: in the absence of malice or extreme negligence there's nothing criminal to punish at all, and money damages should suffice. Given a 100x lower occurrence of accidents, this should be insurable for ~1% of the cost. The default answer is that drivers remain financially responsible for damages (but insurance gets cheaper) and the driver can't be criminally negligent short of modifying or damaging the car in an obviously bad way (e.g. failing to fix, in a reasonable amount of time, a safety-critical sensor that would have prevented the crash, or bypassing one or more safety features that could have prevented it). Car companies would be smart to lobby to keep it that way, as letting every car accident become a product liability case would be much more expensive.

comment by Benquo · 2019-08-07T17:23:04.963Z · LW(p) · GW(p)

This is an excellent analytical account of the underlying dynamics. It also VERY strongly resembles the series of blame-deflections described in Part II Chapter VII of Atlas Shrugged (the train-in-the-tunnel part), where this sort of information suppression ultimately backfires on the nominal beneficiary.

comment by johnswentworth · 2021-01-06T17:02:22.871Z · LW(p) · GW(p)

ETA 1/12: This review is critical and at times harsh, not because I want to harshly criticize the post or the author, but because I did not consider harshness of criticism when writing. I still think the post is positive-net-value, and might even vote it up in the review. I especially want to emphasize that I do not think it is in any way useful to blame or punish the author for the things I complain about below; this is intended as a "pointing out a problematic habit which a lot of people have and society often encourages" criticism, not a "bad thing must be punished" criticism.

When this post first came out, I said something felt off about it. The same thing still feels off about it, but I no longer endorse my original explanation of what-felt-off. So here's another attempt.

First, what this post does well. There's a core model which says something like "people with the power to structure incentives tend to get the appearance of what they ask for, which often means bad behavior is hidden". It's a useful and insightful model, and the post presents it with lots of examples, producing a well-written and engaging explanation. The things which the post does well more than outweigh the problems below; it's a great post.

On to the problem. Let's use the slave labor example, because that's the first spot where the problem comes up:

No company goes “I’m going to go out and enslave people today” (especially not publicly), but not paying people is sometimes cheaper than paying them, so financial pressure will push towards slavery. Public pressure pushes in the opposite direction, so companies try not to visibly use slave labor. But they can’t control what their subcontractors do, and especially not what their subcontractors’ subcontractors’ subcontractors do, and sometimes this results in workers being unpaid and physically blocked from leaving.

... so far, so good. This is generally solid analysis of an interesting phenomenon.

But then we get to the next sentence:

Who’s at fault for the subcontractor³’s slave labor?

... and this is where I want to say NO. My instinct says DO NOT EVER ASK THAT QUESTION, it is a WRONG QUESTION, you will be instantly mindkilled every time you ask "who should be blamed for X?".

... on reflection, I do not want to endorse this as an all-the-time heuristic, but I do want to endorse it whenever good epistemic discussion is an objective. Asking "who should we blame?" is always engaging in a status fight. Status fights are generally mindkillers, and should be kept strictly separate from modelling and epistemics.

Now, this does not mean that we shouldn't model status fights. Rather, it means that we should strive to avoid engaging in status fights when modelling them. Concretely: rather than ask "who should we blame?", ask "what incentives do we create by blaming <actor>?". This puts the question in an analytical frame, rather than a "we're having a status fight right now" frame.

Some sections of the post do discuss things in that sort of analytical frame, and those sections are where I see most of the value. Unfortunately, they're mixed in with parts which don't use an analytical frame. For instance, in the "Dating an Artist" example, we see "It wasn’t my fault he felt that pressure..." -- a sentence which is engaging in a status fight. But then two sentences later, there's a great analysis of the incentives created by precommitting to housing provision. If the sentences about how "Time in my spare bedroom was a gift to him I had no obligation to keep giving" were stripped out, and just the analysis were left behind, then the post would be dramatically better.

Now, I'm sure somebody's going to come along and object that we can never fully separate the status fights from the status-fight-models in reality; people will always sneak their status fights into epistemic discussions. My answer is that this is the fallacy of gray [? · GW]:

The Sophisticate: “The world isn’t black and white. No one does pure good or pure bad. It’s all gray. Therefore, no one is better than anyone else.”

The Zetet: “Knowing only gray, you conclude that all grays are the same shade. You mock the simplicity of the two-color view, yet you replace it with a one-color view . . .”

—Marc Stiegler, David’s Sling

Yes, the status fights will sometimes sneak into epistemic discussions. Ideally, we want epistemic discussion norms which are robust to the presence of status fights. "Don't engage in explicit status fights, use an analytic frame instead" is one such norm - it is not sufficient on its own, but it seems to take a large step in the right direction, and in practice it is the norm which seems most important for keeping epistemic discussion sane.

I can still see where there's room to quibble about using this norm all the time, but at the very least it is a norm which would improve overall epistemics if applied to posts like this.

Replies from: Raemon, Slider, pktechgirl
comment by Raemon · 2021-01-13T20:36:05.543Z · LW(p) · GW(p)

... on reflection, I do not want to endorse this as an all-the-time heuristic, but I do want to endorse it whenever good epistemic discussion is an objective. Asking "who should we blame?" is always engaging in a status fight. Status fights are generally mindkillers, and should be kept strictly separate from modelling and epistemics.

Now, this does not mean that we shouldn't model status fights. Rather, it means that we should strive to avoid engaging in status fights when modelling them. Concretely: rather than ask "who should we blame?", ask "what incentives do we create by blaming <actor>?". This puts the question in an analytical frame, rather than a "we're having a status fight right now" frame.

This was a pretty important couple of points. I'm not sure I agree with them as worded, but they point towards something that I think is close to a pareto improvement, at least for LessWrong and maybe for the whole world.

I do not want to endorse this as an all-the-time heuristic, but I do want to endorse it whenever good epistemic discussion is an objective

The key problem is... sometimes you actually just do need to have status fights, and you still want to have as-good-epistemics-as-possible given that you're in a status fight. So a binary distinction of "trying to have good epistemics" vs "not" isn't the right frame.

I think this might actually be a pretty good distinction for LessWrong's frontpage – "status fight or no?" is close to the question that our Frontpage 'politics' distinction is aiming at. I do think it is probably reasonable that if you're trying to write a frontpage post, you follow the "what incentives do we create by blaming?" rule, and if you want to more directly talk about "no actually we should blame Bob for X" then you write a personal blogpost.

Replies from: johnswentworth
comment by johnswentworth · 2021-01-14T02:47:07.908Z · LW(p) · GW(p)

The key problem is... sometimes you actually just do need to have status fights, and you still want to have as-good-epistemics-as-possible given that you're in a status fight. So a binary distinction of "trying to have good epistemics" vs "not" isn't the right frame.

Part of my model here is that moral/status judgements (like "we should blame X for Y") like to sneak into epistemic models and masquerade as weight-bearing components of predictions. The "virtue theory of metabolism", which Yudkowsky jokes about a few times in the sequences, is an excellent example of this sort of thing, though I think it happens much more often and usually much more subtly than that.

My answer to that problem on a personal level is to rip out the weeds wherever I notice them, and build a dome around the garden to keep the spores out. In other words: keep morality/status fights strictly out of epistemics in my own head. In principle, there is zero reason why status-laden value judgements should ever be directly involved in predictive matters. (Even when we're trying to model our own value judgements, the analysis/engagement distinction still applies.)

Epistemics will still be involved in status fights, but the goal is to make that a one-way street as much as possible. Epistemics should influence status, not the other way around.

In practice it's never that precise even when it works, largely because value connotations in everyday language can compactly convey epistemically-useful information -- e.g. the weeds analogy above. But it's still useful to regularly check that the value connotations can be taboo [LW · GW]'d without the whole model ceasing to make sense, and it's useful to perform that sort of check automatically when value judgements play a large role.

Replies from: pktechgirl
comment by Elizabeth (pktechgirl) · 2021-01-15T07:50:08.772Z · LW(p) · GW(p)

John and I had a fantastic offline discussion and I'm currently revising this in light of that. We're also working on a postmortem on the whole thing that I expect to be very informative. I keep mission creeping on my edits and response and it's going to take a while so I'm writing the bare minimum comment to register that this is happening.

comment by Slider · 2021-01-11T15:11:38.839Z · LW(p) · GW(p)

That explanation of what is suspicious about the post drives two intuitions in me: that it gets at something, and that it is too black and white.

The danger is having a logic of "We had a bad harvest this year. We need to burn more witches so that our harvests are better". Having a guilty party makes it easy to stop being curious about the mechanics and fuels a very flawed theory of remedy.

But then, if there is a car accident and somebody tries to find out whose insurance company should be paying the repair bills, going into that situation and saying "You are committing a grievous error if you try to find a party to blame" seems wrong. Even there, it still seems there are more and less productive ways to go about it. A court probably would not think about whom we should bill, but rather about who is on the hook for the bill. Likewise, an airplane crash investigation is very interested in the causes and is likely to be the basis for future preventative action. A question like "a plane crashed and we have no clue why" screams for a high-quality, correct answer. It also seems typical that in such investigations multiple contributing hypotheses are examined closely.

In the TV series The Boys, one of the characters spins a plane crash into political grief over airspace control. That fictional situation seems like an example of how to do it wrong where it is pretty clear how to do it right.

I guess it might just be that "How did this happen?" is a way more justifiable question than "Whose life should we make difficult based on this?"

comment by Elizabeth (pktechgirl) · 2021-01-11T05:59:05.023Z · LW(p) · GW(p)

I've been thinking a lot about this comment, and wanted to think more, but it seems useful to have something up as voting starts, so....

I think there's A Thing JW and I both agree is harmful (around assigning people moral responsibility when they're responding to incentives), and that I was trying to fight against. One thing I took from this comment is there's a good chance I had only a partial victory against Harmful Thing, and tried to pull down the master's house with the master's tools. I'd be very interested in exploring that further. (I also think it's possible JW is doing the same thing... it's a hard trap to escape)

I don't think giving up the question "Who should we blame?" entirely is a good idea. Possibly the benefits of the norm would outweigh the costs for LessWrong in particular, but I don't believe such a norm would be a pareto improvement. 

comment by Larks · 2019-08-03T16:19:46.446Z · LW(p) · GW(p)

This case is more complicated than the corporate cases because the powerful person (me) was getting merely the appearance of what she wanted (a genuine relationship with a compatible person), not the real thing. And because the exploited party was either me or Connor, not a third party like bank customers. No one thinks the Wells Fargo CEO was a victim the way I arguably was.

I think you have misunderstood the Wells Fargo case. These fake accounts generally didn't bring in any material revenue; they were just about internal "number of new accounts" targets. It was directly a case of bank employees being incentivised to defraud management and investors, which they then did. If ordinary Wells employees had not behaved fraudulently, all the targets would have been missed, informing management/investors about their miscalibration, and more appropriate targets would have been set. In this case power didn't buy distance from the crime; it only meant you couldn't tell you were being cheated.

For more on this I recommend the prolific Matt Levine:

There's a standard story in most bank scandals, in which small groups of highly paid traders gleefully and ungrammatically conspire to rip-off customers and make a lot of money for themselves and their bank. This isn't that. This looks more like a vast uprising of low-paid and ill-treated Wells Fargo employees against their bosses.
...
So that's about 2.1 million fake deposit and credit-card accounts, of which about 100,000 -- fewer than 5 percent -- brought in any fee income to Wells Fargo. The total fee income was $2.4 million, or about $1.14 per fake account. And that overstates the profitability: Wells Fargo also enrolled people for debit cards and online banking, but the CFPB doesn't bother to count those incidents, or suggest that any of them led to any fees. Which makes sense: You'd expect online banking and debit cards to be free, if you never use them or even know about them. Meanwhile, all this dumb stuff seems to have occupied huge amounts of employee time that could have been spent on more productive activities. If you divide the $2.4 million among the 5,300 employees fired for setting up fake accounts, you get about $450 per employee. Presumably it cost Wells Fargo way more than that just to replace them.
In the abstract, you can see why Wells Fargo would emphasize cross-selling of multiple "solutions" to customers. It is a good sales practice; it both indicates and encourages customer loyalty. If your customers have a checking account, and a savings account, and a credit card and online banking, all in one place, then they'll probably use each of those products more than if they had only one. And when they want a new, lucrative product -- a mortgage, say, or investment advice -- they're more likely to turn to the bank where they keep the rest of their financial life. 
But obviously no one in senior management wanted this. Signing customers up for online banking without telling them about it doesn't help Wells Fargo at all. No one feels extra loyalty because they have a banking product that they don't use or know about. Even signing them up for a credit card without telling them about it generally doesn't help Wells Fargo, because people don't use credit cards that they don't know about. Cards with an annual fee are a different story -- at least you can charge them the fee! -- but it seems like customers weren't signed up for many of those.  This isn't a case of management pushing for something profitable and getting what they asked for, albeit in a regrettable and illegal way. This is a case of management pushing for something profitable but difficult, and the workers pushing back with something worthless but easy.

Replies from: Raemon, pktechgirl
comment by Raemon · 2019-08-03T17:43:22.681Z · LW(p) · GW(p)

Hmm. I'm not sure I fully understand the Wells Fargo case but I interpreted it as a concern between four parties:

1. The people who got fake accounts signed up for them.

2. The employees doing the fake signups

3. A middle management tier, which set quotas

4. A higher level management tier that (presumably?) wanted middle management to actually be making money.

So, the people being defrauded are not the customers, but the higher management tier, basically. (But, also, this entire thing might just be a weird game that middle management tiers play with each other for complex, mostly orthogonal reasons. Elizabeth references the book Moral Mazes, which Zvi provides an abridged version of here [LW · GW], which I suspect is important background here, although not 100% sure where Elizabeth was coming from)

[note: I hadn't necessarily interpreted the original Wells Fargo story with the subtext I outline here, but when you point out the correct subtext it doesn't feel like it shifts much how the example relates to the overall point]

comment by Elizabeth (pktechgirl) · 2019-08-03T17:41:02.529Z · LW(p) · GW(p)

Fixed, thank you for pointing this out.

comment by johnswentworth · 2019-08-03T17:33:58.954Z · LW(p) · GW(p)

Something about this piece felt off to me, like I couldn't see anything specifically wrong with it but still had a strong instinctive prior that lots of things were wrong.

After thinking about it for a bit, I think my main heuristic is: this whole piece sounds like it's built on a conflict-theory worldview. The whole question of the essay is basically "who should we be angry at"? Based on that, I'd expect that many or most of the individual examples are probably inaccurately understood or poorly analyzed. Larks' comment about the Wells Fargo case confirms that instinct for one of the examples.

Then I started thinking about the "conflict theory = predictably wrong" heuristic. We say "politics is the mindkiller [LW · GW]", but I don't think that's quite right - people have plenty of intelligent discussions about policy, even when those discussions inherently involve politics. "Tribalism is the mindkiller" is another obvious formulation, but I'd also propose "conflict theory is the mindkiller". Models like "arguments are soldiers [? · GW]" or "our enemies are evil [? · GW]" are the core of Yudkowsky's original argument [? · GW] for viewing politics as a mind-killer. But these sort of models are essentially synonymous with conflict theory; if we could somehow have a tribalistic or political discussion without those conflict-theoretic elements, I'd expect it wouldn't be so mindkiller-ish.

Looping back to the main topic of the OP: what would be a more mistake-theoretic way to view the same examples? One theme that jumps out to me is principal-agent problems: when something is outsourced, it's hard to align incentives. That topic has a whole literature in game theory, and I imagine more useful insight could be had by thinking about how it applies to the examples above, rather than thinking about "moral culpability" - a.k.a. who to be angry at.

Replies from: Benito, Benquo, jessica.liu.taylor, Raemon
comment by Ben Pace (Benito) · 2019-08-03T20:00:03.067Z · LW(p) · GW(p)

I changed my mind about conflict/mistake theory recently, after thinking about Scott's comments on Zvi's post [LW(p) · GW(p)]. I previously thought that people were either conflict theorists or mistake theorists. But I now do not use it to label people, but instead to label individual theories.

To point to a very public example, I don't think Sam Harris is a conflict theorist or a mistake theorist, but instead uses different theories to explain different disagreements. I think Sam Harris views any disagreements with people like Stephen Pinker or Daniel Dennett as primarily them making reasoning mistakes, or otherwise failing to notice strong arguments against their position. And I think that Sam Harris views his disagreements with people like <quickly googles Sam Harris controversies> Glenn Greenwald and Ezra Klein as primarily them attacking him for pushing different goals to their tribes.

I previously felt some not-insubstantial pull to pick sides in the conflict vs mistake theorist tribes, but I don't actually think this is a helpful way of talking, not least because I think that sometimes I will build a mistake theory for why a project failed, and sometimes I will build a conflict theory.

To push back on this part:

Models like "arguments are soldiers [? · GW]" or "our enemies are evil [? · GW]" are the core of Yudkowsky's original argument [? · GW] for viewing politics as a mind-killer. But these sort of models are essentially synonymous with conflict theory; if we could somehow have a tribalistic or political discussion without those conflict-theoretic elements, I'd expect it wouldn't be so mindkiller-ish.

"Arguments are soldiers" and "our enemies are evil" are not imaginary phenomena, they exist and people use such ideas regularly, and it's important that I don't prevent myself from describing reality accurately when this happens. I should be able to use a conflict theory.

I have a model of a common type of disagreement where people get angry at someone walking in with a mistake theory that goes like this: Alice has some power over Bob, and kinda deceives themselves into a situation where it's right for them to take resources from Bob, and as Bob gets angry at Alice and tries to form a small political force to punish Alice, Charlie comes along and is like "No you don't understand, Alice just made an error of reasoning and if I explain this to them they won't make that mistake again!" and Bob gets really angry at Charlie and thinks they're maybe trying to secretly help Alice or else are strikingly oblivious / conflict averse to an unhealthy degree. (Note this is a mistake theory about the disagreement between Bob and Charlie, and a conflict theory about the disagreement between Bob and Alice. And Charlie is wrong to use a mistake theory.)

I think the reason I'm tempted to split mistake and conflict into tribes, is because I do know people that largely fit into one or the other. I knew people at school who always viewed interpersonal conflict as emanating from tribal self-interest, and would view my attempt to show a solution that didn't require someone being at fault as me trying to make them submit to some kinda weird technicality, and got justifiably irritated. I also know people who are very conflict averse but also have an understanding of the complexity of reality, and so always assume it is merely a principal-agent problem or information flow problem, as opposed to going "Yeah, Alice is just acting out of self-interest here, we need to let her know that's not okay, and let's not obfuscate this unnecessarily." But I think the goal is to have one's beliefs correspond to reality - to use a conflict theory when that's true, a mistake theory when that's true, and not pre-commit to one side or the other regardless of how reality actually is.

I do think that conflict theories are often pretty derailing to bring up when trying to have a meaningful 1-1 public debate, and that it's good to think carefully about specific norms for how to do such a thing. I do think that straight-up banning them is likely the wrong move though. Well, I think that there are many places where they have no place, such as a math journal. However the mathematical community will need a place to be able to discuss internal politics + norm-violations where these can be raised.

Replies from: Viliam, johnswentworth, Gurkenglas
comment by Viliam · 2019-08-04T20:10:06.491Z · LW(p) · GW(p)

I think the whole "mistake theory vs conflict theory" thing needs to be examined and explained in greater detail, because there is a lot of potential to get confused about things (at least for me). For example:

Both "mistake statements" and "conflict statements" can be held sincerely, or can be lies strategically used against an enemy. For example, I may genuinely believe that X is racist, and then I would desire to make people aware of a danger X poses. The fact that I do not waste time explaining and examining specific details of X's beliefs is simply because time is a scarce resource, and warning people against a dangerous person is a priority. Or, I may knowingly falsely accuse X of being racist, because I assume that gives me higher probability of winning the tribal fight, compared to a honest debate about our opinions. (Note: The fact that I assume my opponent would win a debate doesn't necessarily imply that I believe he it right. Maybe his opinions are simply more viral; more compatible with existing biases and prejudices of listeners.) Same goes for the mistake theory: I can sincerely explain how most people are not evil and yet Moloch devours everything; or I may be perfectly aware that the people of my tribe are at this moment fighting for our selfish collective interest, and yet present an ad-hoc theory to confuse the nerds of the opposing tribe into inaction.

Plus, there is always a gray zone between knowingly lying and beliefs sincerely held. Unconscious biases, plausible deniability, all this "this person seems to be genuinely mistaken, but at the same time they resist all attempts to explain" which seems to be the behavior of most people most of the time. This balancing act of "aware on some level, but unaware on another level" allows us to navigate towards achieving our selfish goals while maintaining the image of innocence (including the self-image).

Then, we have different levels of meta. For example, suppose that Alice takes Bob's apple and eats it. This is a factual description. On the first level, Charlie the conflict theorist might say "she knowingly stole the apple", while Diana the mistake theorist might say "she just made a mistake and believed the apple was actually hers". Now on the second level, a conflict theorist could say "of course Charlie accuses Alice of acting badly; he is a misogynist" (conflict explanation of conflict explanation), or "of course Diana would defend Alice; women have a strong in-group bias" (conflict explanation of mistake explanation). A mistake theorist could say "Charlie is a victim of illusion of transparency, just because he noticed the apple belongs to Bob, doesn't mean Alice had to notice it, too" (mistake explanation of conflict explanation), or "Diana seems to be a nice person who would never steal, and she projects her attitude on Alice" (mistake explanation of mistake explanation). On the third level... well, it gets complicated quickly. And yet, people make models of each other, and make models of models other people have about them, so the higher levels will get constructed.

By the way, notice that "mistake theorists" and "conflict theorists" are not two opposing tribes, in the sense of tribal conflict. The same political tribe may contain both of them: some people believe their opponents are evil, others believe they are making a tragic mistake; both believe the opponents have to be stopped, by force if necessary. There may be conflict theorists on both sides: both explaining why the other side is making a power grab and needs to be stopped; or mistake theorists on both sides: both explaining why the other side is deluded.

...and I feel pretty sure there are other complications that I forgot at the moment.

EDIT:

For example, a conflict theory can be expressed in mistake-theory lingo. Instead of saying "my evil opponent is just trying to get more power", say "my uneducated opponent is unaware of his unconscious biases that make him believe that things that get him more power are the right ones". You accused him of pretty much the same thing, but it makes your statement acceptable among mistake theorists.

Replies from: Stag
comment by Stag · 2019-08-09T23:02:14.830Z · LW(p) · GW(p)

I might be missing the forest for the trees, but all of those still feel like they end up making some kinds of predictions based on the model, even if they're not trivial to test. Something like:

If Alice were informed by some neutral party that she took Bob's apple, Charlie would predict that she would not show meaningful remorse or try to make up for the damage done beyond trivial gestures like an off-hand "sorry", as well as claiming that some other minor extraction of resources is likely to follow, while Diana would predict that Alice would treat her overreach more seriously when informed of it. Something similar can be done on the meta-level.

None of these are slamdunks, and there are a bunch of reasons why the predictions might turn out exactly as laid out by Charlie or Diana, but that just feels like how Bayesian cookies crumble, and I would definitely expect evidence to accumulate over time in one direction or the other.

Strong opinion weakly held: it feels like an iterated version of this prediction-making and tracking over time is how our native bad actor detection algorithms function. It seems to me that shining more light on this mechanism would be good.

comment by johnswentworth · 2019-08-04T02:13:05.759Z · LW(p) · GW(p)

After reading this and the comments you linked, I think people mean several different things by conflict/mistake theory.

I mostly think of conflict theory as a worldview characterized by (a) assuming that bad things mostly happen because of bad people, and (b) assuming that the solution is mostly to punish them and/or move power away from them. I think of mistake theory as a worldview characterized by assuming that people do not intend to be evil (although they can still have bad incentives). I see mechanism design as the prototypical mistake theory approach: if people are misaligned, then restructure the system to align their incentives. It's a technical problem, and getting angry at people is usually unhelpful.

In the comment thread you linked, Scott characterizes conflict theory as "the main driver of disagreement is self-interest rather than honest mistakes". That view matches up more with the example you give: the mistake theorist assumes that people have "good" intent, and if you just explain that their actions are harmful, then they'll stop. Under this interpretation, mechanism design is conflict-theory-flavored; it's thinking of people as self-interested and then trying to align them anyway.

(I think part of the confusion is that some people are coming in with the assumption that acting in self-interest is automatically bad, and others are coming in with more of an economic/game theory mindset. Like, from an economic viewpoint, there's no reason why "the main driver of disagreement is self-interest" would lead to arguing that public choice theory is racist, which was one of Scott's original examples.)

So I guess one good question to think about is: how do we categorize mechanism design? Is it conflict, is it mistake, is it something else? Different answers correspond to different interpretations of what "conflict" and "mistake" theory mean. I'm pretty sure my interpretation is a much better fit to the examples and explanations in Scott's original post on the topic, and it seems like a natural categorization to me. On the other hand, it also seems like there's another natural category of naive-mistake-theorists who just assume honest mistakes, as in your Bob-Charlie example, and apparently some people are using the terms to capture that category.

Personally, my view is that mechanism design is more-or-less-always the right way to think about these kinds of problems. Sometimes that will lead to the conclusion that someone is making an honest mistake, sometimes it will lead to the conclusion that punishment is an efficient strategy, and often it will lead to other conclusions.

Replies from: jessica.liu.taylor, Wei_Dai
comment by jessicata (jessica.liu.taylor) · 2019-08-04T02:50:24.447Z · LW(p) · GW(p)

Like, from an economic viewpoint, there’s no reason why “the main driver of disagreement is self-interest” would lead to arguing that public choice theory is racist, which was one of Scott’s original examples.

I don't share this intuition. The Baffler article argues:

IN DECEMBER 1992, AN OBSCURE ACADEMIC JOURNAL published an article by economists Alexander Tabarrok and Tyler Cowen, titled “The Public Choice Theory of John C. Calhoun.” Tabarrok and Cowen, who teach in the notoriously libertarian economics department at George Mason University, argued that the fire-breathing South Carolinian defender of slaveholders’ rights had anticipated “public choice theory,” the sine qua non of modern libertarian political thought.

...

Astutely picking up on the implications of Buchanan’s doctrine, Tabarrok and Cowen enumerated the affinities public choice shared with Calhoun’s fiercely anti-democratic political thought. Calhoun, like Buchanan a century and a half later, had theorized that majority rule tended to repress a select few. Both Buchanan and Calhoun put forward ideas meant to protect an aggrieved if privileged minority. And just as Calhoun argued that laws should only be approved by a “concurrent majority,” which would grant veto power to a region such as the South, Buchanan posited that laws should only be made by unanimous consent. As Tabarrok and Cowen put it, these two theories had “the same purpose and effect”: they oblige people with different interests to unite—and should these interested parties fail to achieve unanimity, government is paralyzed.

In marking Calhoun’s political philosophy as the crucial antecedent of public choice theory, Tabarrok and Cowen unwittingly confirmed what critics have long maintained: libertarianism is a political philosophy shot through with white supremacy. Public choice theory, a technical language nominally about human behavior and incentives, helps ensure that blacks remain shackled.

...

In her 2017 book, Democracy in Chains: The Deep History of the Radical Right’s Stealth Plan for America, historian Nancy MacLean argues that Buchanan developed his ideas in service of a Virginia elite hell-bent on preserving Jim Crow.

The overall argument is something like:

  • Calhoun and Buchanan both had racist agendas (maintaining slavery and segregation). (They may have these agendas due to some combination of personal self-interest and class self-interest)
  • They promoted ideas about democratic governance (e.g. that majority rule is insufficient) that were largely motivated by these agendas.
  • These ideas are largely the same as the ones of public choice theory (as pointed out by Cowen and Tabarrok)
  • Therefore, it is likely that public choice theory is advancing a racist agenda, and continues being advocated partially for this reason.

Overall, this is an argument that personal self-interest, or class self-interest, are driving the promotion of public choice theory. (Such interests and their implications could be studied within economics; though, economics typically avoids discussing group interests except in the context of discrete organizational units such as firms)

Another way of looking at this is:

  • Economics, mechanism design, public choice theory, etc are meta-level theories about how to handle conflicts of interest.
  • It would be desirable to have agreement on good meta-level principles in order to resolve object-level conflicts.
  • However, the choice of meta-level principles (and, the mapping between those principles and reality) is often itself political or politicized.
  • Therefore, there will be conflicts over these meta-level principles.

Replies from: johnswentworth, OphilaDros
comment by johnswentworth · 2019-08-04T06:54:14.252Z · LW(p) · GW(p)

Let's imagine for a minute that we didn't know any of the background, and just think about what we might have predicted ahead of time.

Frame 1: conflict theory is characterized by the idea that problems mostly come from people following their own self-interest. Not knowing anything else, what do we expect conflict theorists to think about public choice theory - a theory whose central premise is modeling public servants as following their own self-interests/incentives? Like, the third sentence of the wikipedia article is "it is the subset of positive political theory that studies self-interested agents (voters, politicians, bureaucrats) and their interactions".

If conflict theory is about problems stemming from people following their self-interest, public choice theory ought to be right up the conflict theorist's alley. This whole "meta-level conflict" thing sounds like a rather contrived post-hoc explanation; a-priori there doesn't seem to be much reason for all this meta stuff. And conflict theorists in practice seem to be awfully selective about when to go meta, in a way that we wouldn't predict just based on "problems mostly stem from people following their self-interest".

On the other hand...

Frame 2: conflict theory is characterized by the idea that bad things mostly happen because of bad people, and the solution is to punish them. In this frame, what would we expect conflict theorists to think of public choice theory?

Well, we'd expect them to dismiss it as obviously wrong - it doesn't denounce any bad people - and therefore also probably an attempt by bad people to steer things the way they want.

If conflict theory is characterized by "bad things happen because of bad people", then an article about how racism secretly underlies public choice theory is exactly the sort of thing we'd predict.

Replies from: Benito, clone of saturn, Kenny
comment by Ben Pace (Benito) · 2019-08-04T10:38:33.975Z · LW(p) · GW(p)

I think it's a genuinely difficult problem to draw the boundary between a conflict and a mistake theory, in no small part due to the difficulties in drawing the boundary between lies and unconscious biases (which I rambled a bit about here [LW(p) · GW(p)]). You can also see the discussion on No, it's not The Incentives - it's you [LW · GW] as a disagreement over where this boundary should be.

That said, one thing I'll point out is that explaining Calhoun and Buchanan's use of public choice theory as entirely a rationalisation for their political goals, is a conflict theory. It's saying that them bringing public choice theory into the conversation was not a good faith attempt to convey how they see the world, but obfuscation in favour of their political side winning. And more broadly saying that public choice theory is racist is a theory that says the reason it is brought up in general is not due to people having differing understandings of economics, but due to people having different political goals and trying to win.

I find for myself that thinking of 'conflict theorists' as a single coherent group is confusing me, and that I should instead replace the symbol with the substance when I'm tempted to use it, because there are many types of people who sometimes use conflict theories, and it is confusing to lump them in with people who always use them, since they often have different reasons for using them when they do.

To give one example of people who always use it: there are certain people who have for most of their lives found that the main determinant of outcomes for them is political conflict by people above them, who are only really able to understand the world using theories of conflict. They've also never gained a real understanding of any of the fascinating and useful different explanations for how social reality works (example [? · GW], example [? · GW]), or a sense that you often can expand massively rather than fight over existing resources. And when they're looking at someone bringing in public choice theory to argue one side of a social fight, they get an impression that the person is finding clever arguments for their position, rather than being honest.

(This is a mistake theory of why some people primarily reason using conflict theories. There are conflict theories that explain it as well.)

I think it's good to be able to describe what such people are doing, and what experiences have led them to that outlook on life. But I also think that there are many reasons for holding a conflict theory about a situation, and these people are not at all the only examples of people who use such theories regularly.

Added: clone of saturn’s 3 point explanation seems right to me.

Replies from: johnswentworth
comment by johnswentworth · 2019-08-04T16:11:40.487Z · LW(p) · GW(p)

I get what you're saying about theories vs theorists. I agree that there are plenty of people who hold conflict theories about some things but not others, and that there are multiple reasons for holding a conflict theory.

None of this changes the original point: explaining a problem by someone being evil is still a mind-killer. Treating one's own arguments as soldiers is still a mind-killer. Holding a conflict theory about any particular situation is still a mind-killer, at least to the extent that we're talking about conflict theory in the form of "bad thing happens because of this bad person" as opposed to "this person's incentives are misaligned". We can explain other peoples' positions by saying they're using a conflict theory, and that has some predictive power, but we should still expect those people to usually be mind-killed by default - even if their arguments happen to be correct.

As you say, explaining Calhoun and Buchanan's use of public choice theory as entirely a rationalisation for their political goals, is a conflict theory. Saying that people bring up public choice theory not due to differing economic understanding but due to different political goals, is a conflict theory. And I expect people using either of those explanations to be mind-killed by default, even if the particular interpretation were correct.

Even after all this discussion of theories vs theorists, "conflict theory = predictably wrong" still seems like a solid heuristic.

Replies from: Benito
comment by Ben Pace (Benito) · 2019-08-11T01:15:48.291Z · LW(p) · GW(p)

Sorry for the delay, a lot has happened in the last week.

Let me point to where I disagree with you.

Holding a conflict theory about any particular situation is still a mind-killer, at least to the extent that we're talking about conflict theory in the form of "bad thing happens because of this bad person" as opposed to "this person's incentives are misaligned".

My sense is you are underestimating the cost of not being able to use conflict theories. Here are some examples, where I feel like prohibiting me from even considering that a bad thing happened because a person was bad will severely limit my ability to think and talk freely about what is actually happening.

There's something very valuable that you're pointing at, and I agree with a lot of it. There shouldn't be conflict theories in a math journal. It's plausible to me there shouldn't be conflict theories in an economics journal. And it's plausible to me that the goal should be for the frontpage of LessWrong to be safe from them too, because they do bring major costs in terms of their mind-killing nature, and furthermore because several of the above bullet points are simply off-topic for LessWrong. We're not here to discuss current-day tribal politics in various institutions, industries and communities.

And if I were writing publicly about any of the above topics, I would heavily avoid bringing conflict theories - and have in the past re-written whole essays to be making only object-level points about a topic rather than attacking a particular person's position, because I felt the way I had written it would come across as a bias-argument / conflict theory and destroy my ability to really dialogue with people who disagreed with me. Rather than calling them biased or self-interested, I prefer to use the most powerful of rebuttals in the pursuit of truth, which is showing that they're wrong.

But ruling it out wholly in one's discourse and life seems way too much. I think there are cases where wholly censoring conflict theories will cost far more than it's worth, and that removing them entirely from your discourse will cripple you and allow you to be taken over by outside forces that want your resources.

For example, I can imagine a relatively straightforward implementation of "no conflict theories" in a nearby world meaning that I am not able to say that study after study is suspect, or that a position is being pushed by political actors, unless I first reinvent mechanism theory and a bunch of microeconomics and a large amount of technical language to discuss bias. If I assume the worst about all of the above bullet points, not being able to talk about bad people causing bad things could mean we are forced to believe lots of false study results and ignore a new theory of fundamental physics, plus silence economists, bloggers, and public intellectuals.

The Hanson example above feels the strongest to me because it's the one that's a central example of something able to lead to a universal, deep insight about reality and be a central part of LessWrong's mission of understanding human rationality, whereas the others are mostly about current tribal politics. But I think they all substantially affect how much to trust our info sources.

My current sense is that I should think of posing conflict theories as a highly constrained, limited communal resource, and that while spending it will often cause conflict and people to be mind-killed, a rule that says one can never use that resource will mean that when that resource is truly necessary, it won’t be available.

***

Hmm.

I re-read the OP, and realise I actually identify a lot with your initial comment, and that I gave Elizabeth similar feedback when I read an earlier draft of hers a month ago. The wording of the OP crosses a few of my personal lines such that I would not publish it. And it's actually surprisingly accurate to say that the key thing I'd be doing if I were editing the OP would be turning it from things that had a hint of being like a conflict theory (aren't people with power bad!) to things that felt like a mistake theory (here's an interesting mechanism where you might mistakenly allocate responsibility). Conflict theories tend to explode and eat up communal resources in communities and on the internet generally, and are a limited (though necessary) resource that I want to use with great caution.

As I said above, when writing publicly about topics where I have conflict theories, I heavily avoid bringing them in. When I get really irritated with someone's position and have a conflict theory about the source of the disagreement, I still write mistake-theory posts like this [LW · GW], a post with no mention of the original source of motivation.

I think that one of the things that's most prominent to me on the current margin is that I feel like there are massive blockers on public discourse, stopping people from saying or writing anything, and I have a model whereby telling people who write things like the OP to do more work [LW · GW] to make it all definitely mistake theory (which is indeed a standard I hold myself to) will not improve the current public discourse, but on the current margin simply stop public discourse. I feel similarly about Jessicata's post on AI timelines, where it is likely to me that the main outcome has been quite positive - even though I think I disagree with each of the three arguments in the post and its conclusion - because the current alternative is almost literally zero public conversation about plans for long AI timelines. I already am noticing personal benefits [LW(p) · GW(p)] from the discourse on the subject.

In the first half of this comment I kept arguing against the position "We should ban all conflict theories" rather than "Conflict theories are the mind-killer", which are two very different claims, only one of which you've been making. Right now I want to defend people's ability to write down their thoughts in public, and I think the OP is strongly worth publishing in the situation we're in. I could imagine a world where there was loads of great discussion of topics like what the OP is about, where the OP stands out as not having met the higher standard of effort to avoid mind-killing anyone that the other posts have, where I'd go "this is unnecessarily likely to make people feel defensive and like there's subtle tribal politics underpinning its conclusions, consider these changes?" but right now I'm very pro "Cool idea, let me share my thoughts on the subject too."

(Some background: The OP was discussed about 2 weeks ago on Elizabeth's FB wall, where someone else proposed a different reason why this post needed re-writing, for PR reasons, and there I argued that they shouldn't put such high bars on people's writing. I think that person's specific suggestion, if taken seriously, would be incredibly harmful to public discourse regardless of its current health, whereas in this case I think your literal claims are just right. Regardless, I am strongly pro the post and others like it being published.)

Replies from: Zack_M_Davis, jessica.liu.taylor, johnswentworth
comment by Zack_M_Davis · 2019-08-11T19:25:19.486Z · LW(p) · GW(p)

Conflict theories tend to explode and eat up communal resources in communities and on the internet generally, and are a limited (though necessary) resource that I want to use with great caution.

But are theories that tend to explode and eat up communal resources therefore less likely to be true? If not, then avoiding them for the sake of preserving communal resources is a systematic distortion on the community's beliefs.

The distortion is probably fine for most human communities: keeping the peace with your co-religionists is more important than doing systematically correct reasoning, because religions aren't trying to succeed by means of doing systematically correct reasoning. But if there is to be such a thing as a rationality community specifically, maybe communal resources that can be destroyed by the truth, should be.

(You said this elsewhere in the thread: "the goal is to have one's beliefs correspond to reality—to use a conflict theory when that's true, a mistake theory when that's true" [LW(p) · GW(p)].)

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2019-08-11T23:15:27.384Z · LW(p) · GW(p)

But are theories that tend to explode and eat up communal resources therefore less likely to be true? If not, then avoiding them for the sake of preserving communal resources is a systematic distortion on the community's beliefs.

Expected infrequent discussion of a theory shouldn't lower estimates of its probability. (Does the intuition that such theories should be seen as less likely follow from most natural theories predicting discussion of themselves? Erroneous theorizing also predicts that, for example "If this statement is correct, it will be the only topic of all future discussions.")

In general, it shouldn't be possible to expect well-known systematic distortions for any reason, because they should've been recalibrated away immediately. What not discussing a theory should cause is lack of precision (or progress), not systematic distortion.
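A toy numerical sketch of this recalibration point, with made-up probabilities (the prior and the two suppression rates below are assumptions for illustration only, not figures from the thread): if a taboo suppresses statements of a theory at nearly the same rate whether the theory is true or false, then observing silence barely moves a reader who knows about the taboo.

```python
# Toy illustration: suppose a taboo means a conflict-theoretic explanation of X
# usually goes unstated whether or not it is true. A reader who knows this can
# account for it, so "nobody voiced a conflict theory of X" is only weak evidence.

prior = 0.4                      # assumed prior that a conflict theory explains X
p_silence_if_true = 0.9          # usually unstated even when true (taboo)
p_silence_if_false = 0.95        # unstated slightly more often when false

likelihood_ratio = p_silence_if_true / p_silence_if_false
posterior_odds = (prior / (1 - prior)) * likelihood_ratio
posterior = posterior_odds / (1 + posterior_odds)

print(round(posterior, 3))       # ~0.387: barely below the 0.4 prior
```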

Replies from: jessica.liu.taylor, Zack_M_Davis
comment by jessicata (jessica.liu.taylor) · 2019-08-12T06:32:28.259Z · LW(p) · GW(p)

Consider a situation where:

  • People are discussing phenomenon X.
  • In fact, a conflict theory is a good explanation for phenomenon X.
  • However, people only state mistake theories for X, because conflict theories are taboo.

Is your prediction that the participants in the conversation, readers, etc, are not misled by this? Would you predict that, if you gave them a survey afterwards asking for how they would explain X, they in fact give a conflict theory rather than a mistake theory, since they corrected for the distortion due to the conflict theory taboo?

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2019-08-12T09:07:13.790Z · LW(p) · GW(p)

Would you correct your response so? (Should you?) If the target audience tends to act similarly, so would they.

Aside from that, "How do you explain X?" is really ambiguous and anchors on well-understood rather than apt framing. "Does mistake theory explain this case well?" is better, because you may well use a bad theory to think about something while knowing it's a bad theory for explaining it. If it's the best you can do, at least this way you have gears to work with. Not having a counterfactually readily available good theory because it's taboo and wasn't developed is of course terrible, but it's not a reason to embrace the bad theory as correct.

Replies from: jessica.liu.taylor, mr-hire
comment by jessicata (jessica.liu.taylor) · 2019-08-12T18:15:37.144Z · LW(p) · GW(p)

Would you correct your response so?

Perhaps (75% chance?), in part because I've spent >100 hours talking about, reading about, and thinking about good conflict theories. I would have been very likely misled 3 years ago. I was only able to get to this point because enough people around me were willing to break conflict theory taboos.

It is not the case that everybody knows. To get from a state where not everybody knows to a state where everybody knows, it must be possible to talk openly about such things. (I expect the average person on this website to make the correction with <50% probability, even with the alternative framing "Does mistake theory explain this case well?")

It actually does have to be a lot of discussion. Over-attachment to mistake theory (even when a moderate amount of contrary evidence is presented) is a systematic bias I've observed, and it can be explained by factors such as: conformity, social desirability bias (incl. fear), conflict-aversion, desire for a coherent theory that you can talk about with others, getting theories directly from others' statements, being bad at lying (and at detecting lying), etc. (This is similar to (and may even be considered a special case of) the question of why people are misled by propaganda, even when there is some evidence that the propaganda is propaganda; see Gell-Mann amnesia.)

comment by Matt Goldenberg (mr-hire) · 2019-08-12T10:02:05.461Z · LW(p) · GW(p)

This seems a bit off as Jessica clearly knows about conflict theory. The whole thing about making a particular type of theory taboo is that it can't become common knowledge.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2019-08-12T10:40:51.377Z · LW(p) · GW(p)

That's relevant to the example, but not to the argument. Consider a hypothetical Jessica less interested in conflict theory or a topic other than conflict theory. Also, common knowledge doesn't seem to play a role here, and "doesn't know about" is a level of taboo that contradicts the assumption I posited about the argument from selection effect being "well-known".

comment by Zack_M_Davis · 2019-08-12T00:11:57.895Z · LW(p) · GW(p)

In general, it shouldn't be possible to expect well-known systematic distortions for any reason, because they should've been recalibrated away immediately.

Hm. Is "well-known" good enough here, or do you actually need common knowledge? (I expect you to be better than me at working out the math here.) If it's literally the case that everybody knows [LW · GW] that we're not talking about conflict theories, then I agree that everyone can just take that into account and not be confused. But the function of taboos, silencing tactics, &c. among humans would seem to be maintaining a state where everyone doesn't know.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2019-08-12T00:27:51.281Z · LW(p) · GW(p)

Is "well-known" good enough here, or do you actually need common knowledge?

There is no need for coordination or dependence on what others think. If you expect yourself to be miscalibrated, you just fix that. If most people act this way and accept the argument that convinced you, then you expect them to have done the same.

comment by jessicata (jessica.liu.taylor) · 2019-08-11T05:43:33.085Z · LW(p) · GW(p)

My current sense is that I should think of posing conflict theories as a highly constrained, limited communal resource, and that while spending it will often cause conflict and people to be mind-killed, a rule that says one can never use that resource will mean that when that resource is truly necessary, it won’t be available.

"Talking about conflict is a limited resource" seems very, very off to me.

There are two relevant resources in a community. One is actual trustworthiness: how often do people inform each other (rather than deceive each other), help each other (rather than cheat each other), etc. The other is correct beliefs about trustworthiness: are people well-calibrated and accurate about how trustworthy others (both in particular and in general) are. These are both resources. It's strictly better to have more of each of them.

If Bob deceives me,
I desire to believe that Bob deceives me;
If Bob does not deceive me,
I desire to believe that Bob does not deceive me;
Let me not become attached to beliefs I may not want.

Talking about conflict in ways that are wrong is damaging a resource (it's causing people to have incorrect beliefs). Using clickbaity conflict-y titles without corresponding evidence is spending a resource (attention). Talking about conflict informatively/accurately is not spending a resource, it's producing a resource.

EDIT: also note, informative discussion of conflict, such as in Robin Hanson's work, makes it easier to talk informatively about conflict in the future, as it builds up theoretical framework and familiarity. Which means "talking about conflict is a limited resource" is backwards.

Replies from: Benito
comment by Ben Pace (Benito) · 2019-08-11T18:09:25.136Z · LW(p) · GW(p)

I’m hearing you say “Politics is not the mind-killer, talking inaccurately and carelessly about politics is the mind-killer! If we all just say true things and don’t try to grab attention with misleading headlines then we’ll definitely just have a great and net positive conversation and nobody will feel needlessly threatened or attacked”. I feel like you are aware of how toxic things like bravery debates are, and I expect you agree they’d be toxic even if everyone tried very hard to only say true things. I’m confused.

I’m saying it always bears a cost, and a high one, but not a cost that cannot be overcome. I think that the cost is different in different communities, and this depends on the incentives, norms and culture in those communities, and you can build spaces where a lot of good discussion can happen with low cost.

You’re right that Hanson feels to me pretty different than my other examples, in that I don’t feel like marginal Overcoming Bias blogposts are paying a cost. I suspect this might have to do with the fact that Hanson has sent a lot of very costly signals that he is not fighting a side but is just trying to be an interested scientist. But I’m not sure why I feel differently in this case.

Replies from: jessica.liu.taylor
comment by jessicata (jessica.liu.taylor) · 2019-08-12T06:27:13.135Z · LW(p) · GW(p)

I'm going to try explaining my view and how it differs from the "politics is the mind killer" slogan.

  • People who are good at talking about conflict, like Robin Hanson, can do it in a way that improves the ability for people to further talk rationally about conflict. Such discussions are not only not costly, they're the opposite of costly.
  • Some people (most people?) are bad at talking about conflict. They're likely to contribute disinformation to these discussions. The discussions may or may not be worth having, but, it's not surprising if high-disinformation conversations end up quite costly.
  • My view: people who are actually trying can talk rationally enough about conflict for it to be generally positive. The issue is not a question of ability so much as a question of intent-alignment. (Though, getting intent aligned could be thought of as a kind of skill). (So, I do think political discussions generally go well when people try hard to only say true things!)
  • Why would I believe this? The harms from talking about conflict aren't due to people making simple mistakes, the kind that are easily corrected by giving them more information (which could be uncovered in the course of discussions of conflict). Rather, they're due to people enacting conflict in the course of discussing conflict, rather than using denotative speech.
  • Yes, I am advocating a conflict theory, rather than a mistake theory, for why discussions of conflict can be bad. I think, if you consider conflict vs mistake theories, you will find that a conflict theory makes better predictions for what sorts of errors people make in the course of discussing conflict, than a mistake theory does. (Are errors random, or do they favor fighting on a given side / appeasing local power structures / etc?)
  • Basically, if the issue is adversarial/deceptive action (conscious or subconscious) rather than simple mistakes, then "politics is the mind-killer" is the wrong framing. Rather, "politics is a domain where people often try to kill each other's minds" is closer.
  • In such a circumstance, building models of which optimization pressures are harming discourse in which ways is highly useful, and actually critical for social modeling. (As I said in my previous comment, it's strictly positive for an epistemic community to have better information about the degree of trustworthiness of different information systems)
  • If you see people making conflict theory models, and those models seem correct to you (or at least, you don't have any epistemic criticism of them), then shutting down the discussions (on the basis that they're conflict-theorist) is actively doing harm to this model-building process. You're keeping everyone confused about where the adversarial optimization pressures are. That's like preventing people from turning on the lights in a room that contains monsters.
  • Therefore, I object to talking about conflict theory models as "inherently costly to talk about" rather than "things some (not all!) people would rather not be talked about for various reasons". They're not inherently costly. They're costly because some optimization pressures are making them costly. Modeling and opposing (or otherwise dealing with) these is the way out. Insisting on epistemic discourse even when such discourse is about conflict is a key way of doing so.
Replies from: Benito, Hazard, Kenny, Kenny
comment by Ben Pace (Benito) · 2019-08-16T19:03:09.456Z · LW(p) · GW(p)

Thank you, this comment helped me understand your position quite a bit. You're right, discussing conflict theories are not inherently costly, it's that they're often costly because powerful optimization pressures are punishing discussion of them.

I strongly agree with you here:

I am advocating a conflict theory, rather than a mistake theory, for why discussions of conflict can be bad. I think, if you consider conflict vs mistake theories, you will find that a conflict theory makes better predictions for what sorts of errors people make in the course of discussing conflict, than a mistake theory does.

This is also a large part of my model of why discussions of conflict often go bad - power struggles are being enacted out through (and systematically distorting the use of) language and reasoning.

(I am quite tempted to add that even in a room with mostly scribes, the incentive on actors to pretend to be scribes can make it very hard for a scribe to figure out whether someone is a scribe or an actor, and this information asymmetry can lead to scribes distrusting all attempts to discuss conflict theories and reading such discussions as political coordination.

Yet I notice that I pretty reflexively looked for a mistake theory there, and my model of you suggested to me the hypothesis that I am much less comfortable with conflict theories than mistake theories. I guess I'll look out for this further in my thinking, and consider whether it's false. Perhaps, in this case, it is way easier than I'm suggesting for scribes to recognise each other, and the truth is we just have very few scribes.)

The next question is under what norms, incentives and cultures one can have discussions of conflict theories where people are playing the role of Scribe, and where that is common knowledge. I'm not sure we agree on the answer to that question, or what the current norms in this area should be. I'm working on a longer answer, maybe post-length, to Zack's comment below, so I'll see if I can present my thoughts on that.

comment by Hazard · 2020-12-03T22:34:38.703Z · LW(p) · GW(p)

This is a very helpful comment, thank you!

comment by Kenny · 2019-08-22T04:00:31.768Z · LW(p) · GW(p)

By-the-way, this is a fantastic comment and would make a great post pretty much by itself (with maybe a little context about that to which it's replying).

comment by Kenny · 2019-08-22T01:06:55.157Z · LW(p) · GW(p)

enacting conflict in the course of discussing conflict

... seems to be exactly why it's so difficult to discuss a conflict theory with someone already convinced that it's true – any discussion is necessarily an attack in that conflict as it in effect presupposes that it might be false.

But that also makes me think that maybe the best rhetorical counter to someone enacting a conflict is to explicitly claim that one's unconvinced of the truth of the corresponding conflict theory or to explicitly claim that one's decoupling the current discussion from a (or any) conflict theory.

comment by johnswentworth · 2019-08-12T15:09:50.061Z · LW(p) · GW(p)

I generally endorse this line of reasoning.

Replies from: Benito
comment by Ben Pace (Benito) · 2019-08-12T16:47:49.739Z · LW(p) · GW(p)

Nice :-)

comment by clone of saturn · 2019-08-04T08:34:18.016Z · LW(p) · GW(p)

This seems like dramatically over-complicating the idea. I would expect a prototypical conflict theorist to reason like this:

  1. Political debates have winners and losers—if a consensus is reached on a political question, one group of people will be materially better off and another group will be worse off.

  2. Public choice theory makes black people worse off. (I don't know if the article is right about this, but I'll assume it's true for the sake of argument.)

  3. Therefore, one ought to promote public choice theory if one wants to hurt black people, and disparage public choice theory if one wants to help black people.

Replies from: johnswentworth
comment by johnswentworth · 2019-08-04T15:46:55.224Z · LW(p) · GW(p)

This explanation loses predictive power compared to the explanation I gave above. In particular, if we think of conflict theory as "bad things happen because of bad people", then it makes sense why conflict theorists would think public choice theory makes black people worse off, rather than better off. In your explanation, we need that as an additional assumption.

comment by Kenny · 2019-08-21T21:23:38.113Z · LW(p) · GW(p)

I don't think it's useful to talk about 'conflict theory', i.e. as a general theory of disagreement. It's more useful in a form like 'Marxism is a conflict theory'.

And then a 'conflict theorist' is someone who, in some context, believes a conflict theory, but not that disagreements generally are due to conflict (let alone in all contexts).

So, from the perspective of a 'working class versus capital class' conflict theory, public choice theory is obviously a weapon used by the capital class against the working class. But other possible conflict theories might be neutral about public choice theory.

Maybe what makes 'conflict theory' seem like a single thing is the prevalence of Marxism-like political philosophies.

comment by OphilaDros · 2019-08-06T09:53:23.684Z · LW(p) · GW(p)

This example looks like yet another instance of conflict theory imputing bad motives where they don't exist and generally leading you wrong.

A large part of this example relies on "Buchanan having a racist political agenda and using public choice theory as a vehicle for achieving this agenda" being a true proposition. I cannot assign a high degree of credibility to this proposition, though, considering Buchanan is the same guy who wrote this:

"Given the state monopoly as it exists, I surely support the introduction of vouchers. And I do support the state financing of vouchers from general tax revenues. However, although I know the evils of state monopoly, I would also want, somehow, to avoid the evils of race-class-cultural segregation that an unregulated voucher scheme might introduce. In principle, there is, after all, much in the ”melting pot“ notion of America. And there is also some merit in the notion that the education of all children should be a commonly shared experience in terms of basic curriculum, etc. We should not want a voucher scheme to reintroduce the elite that qualified for membership only because they have taken Latin and Greek classics. Ideally, and in principle, it should be possible to secure the beneficial effects of competition, in providing education, via voucher support, and at the same time to secure the potential benefits of commonly shared experiences, including exposure to other races, classes and cultures. In practise, we may not be able to accomplish the latter at all. But my main point is, I guess, to warn against dismissing the comprehensive school arguments out of hand too readily. "

Source: http://www.independent.org/issues/article.asp?id=9115

Replies from: Benquo
comment by Benquo · 2019-08-07T17:27:01.315Z · LW(p) · GW(p)
  1. Talk is cheap, especially when claiming not to hold opinions widely considered blameworthy.
  2. Buchanan's academic career (and therefore ability to get our attention) can easily depend on racists' appetite for convenient arguments regardless of his personal preferences.
comment by Wei Dai (Wei_Dai) · 2019-08-12T09:55:56.239Z · LW(p) · GW(p)

I mostly think of conflict theory as a worldview characterized by (a) assuming that bad things mostly happen because of bad people, and (b) assuming that the solution is mostly to punish them and/or move power away from them. I think of mistake theory as a worldview characterized by assuming that people do not intend to be evil (although they can still have bad incentives).

Why not integrate both perspectives: people make genuine mistakes due to cognitive limitations, and they also genuinely have different values that are in conflict with each other, and the right way to frame these problems is "bargaining by bounded rationalists" where "bargaining" can include negotiation, politics, and war. (I made a 2012 post [LW · GW] suggesting this frame, but maybe should have given it a catchy name...)

Personally, my view is that mechanism design is more-or-less-always the right way to think about these kinds of problems. Sometimes that will lead to the conclusion that someone is making an honest mistake, sometimes it will lead to the conclusion that punishment is an efficient strategy, and often it will lead to other conclusions.

(I wrote the above before seeing this part.) I guess "mechanism design" is similar to "bargaining by bounded rationalists" so you seem to have reached a similar conclusion, but "mechanism design" kind of assumes there's a disinterested third party who has the power to impose a "mechanism" that is designed to be socially optimal, but often you're one of the involved parties and "bargaining" is a more general framing that also makes sense in that case.

comment by Gurkenglas · 2019-08-06T13:53:41.227Z · LW(p) · GW(p)

You mean, we mistake theorists are not in perpetual conflict with conflict theorists, they are just making a mistake? O_o

comment by Benquo · 2019-08-12T06:42:48.464Z · LW(p) · GW(p)

If your concern is that this is evidence that the OP is wrong (since it has conflict-theoretic components, which are mindkillers), it seems important to establish that there are important false object-level claims, not just things that make such mistakes likely. If you can't do that, maybe change your mind about how much conflict theory introduces mistakes?

If you're just arguing that laying out such models are likely to have bad consequences for readers, this is an important risk to track, but it's also changing the subject [LW · GW] from the question of whether the OP's models do a good job explaining the data.

Replies from: johnswentworth, habryka4
comment by johnswentworth · 2019-08-12T15:23:24.667Z · LW(p) · GW(p)

This is a really good point and a great distinction to make.

As an example, suppose I hear a claim that some terrorist group likes to eat babies. Such a claim may very well be true. On the other hand, it's the sort of claim which I would expect to hear even in cases where it isn't true. In general, I expect claims of the form "<enemy> is/wants/does <evil thing>", regardless of whether those claims have any basis.

Now, clearly looking into the claim is an all-around solid solution, but it's also an expensive solution - it takes time and effort. So, a reasonable question to ask is: should the burden of proof be on writer or critic? One could imagine a community norm where that sort of statement needs to come with a citation, or a community norm where it's the commenters' job to prove it wrong. I don't think either of those standards are a good idea, because both of them require the expensive work to be done. There's a correct Bayesian update whether or not the work of finding a citation is done, and community norms should work reasonably well whether or not the work is done.

A norm which makes more sense to me: there's nothing wrong with writers occasionally dropping conflict-theory-esque claims. But readers should be suspicious of such claims a-priori, and just as it's reasonable for authors to make the claim without citation, it's reasonable for readers to question the claim on a-priori grounds. It makes sense to say "I haven't specifically looked into whether <enemy> wants <evil thing>, but that sounds suspicious a-priori."
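A toy numerical sketch of the update described above, with made-up probabilities (all three numbers are assumptions for illustration): if the claim would circulate nearly as readily when false as when true, hearing it is only weak evidence.

```python
# Toy sketch: a claim of the form "<enemy> eats babies" that would be made
# almost as often when false as when true barely moves the posterior.

prior = 0.05                   # assumed prior that the enemy really does the evil thing
p_claim_if_true = 0.9          # such a claim is very likely to circulate if true
p_claim_if_false = 0.6         # ...and still fairly likely to circulate if false

posterior_odds = (prior / (1 - prior)) * (p_claim_if_true / p_claim_if_false)
posterior = posterior_odds / (1 + posterior_odds)

print(round(posterior, 3))     # ~0.073: a real but modest update from 0.05
```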

Replies from: pktechgirl, Benquo
comment by Elizabeth (pktechgirl) · 2019-08-12T20:49:37.862Z · LW(p) · GW(p)

This feels very similar to the debate on the MTG color system a while ago, which went (as half-remembered so much later that I don't remember how long it's been, and it's since been deleted):

A: [proposal of personality sorting system.]

B: [statement/argument that personality sorting systems are typically useless-to-harmful]

A: but this doesn't respond to my particular personality system.


I'm sympathetic to B (equivalent to johnswentworth) here. If members of category X are generally useless-to-harmful, it's unfair and anti-truth to disallow incorporating that knowledge into your evaluations of an X. On the other hand, A could have provided rich evidence of why their particular system was good, and B could have made the exact same statement, and it would still be true. If there are ever exceptions to the rule of "category X is useless-to-harmful", you need to have a system for identifying them.

[I'm going to keep talking about this in the MTG case because I think a specific case is easier to read than "category X", and it's less loaded for me than talking about my own piece; if the correspondences aren't obvious let me know and I can clarify]

A partial solution would be for B to outline not only that they're skeptical of personality systems, but why, and what specific things would increase their estimation of a particular system. This is a lot to ask, which is a tax on this particular form of criticism. But if the problem is as described, there's a lot of utility in writing it up once, well, and linking to it as necessary.

@johnswentworth, if you're up for it I think for this and other reasons there's a lot of value in doing a full post on your general principle (with a link to this discussion). People clearly want to talk about it, and it seems valuable for it to have its own, easily-discoverable, space instead of being hidden behind my post. I would also like to resolve the general principle before discussing how to apply it to this post, which is one reason I've held back on participating in this sub-thread.

Replies from: johnswentworth
comment by johnswentworth · 2019-08-13T20:40:19.035Z · LW(p) · GW(p)

I probably won't get to that soon, but I'll put it on the list.

I also want to say that I'm sorry for kicking off this giant tangential thread on your post. I know this sort of thing can be a disincentive to write in the future, so I want to explicitly say that you're a good writer, this was a piece worth reading, and I would like to read more of your posts in the future.

comment by Benquo · 2019-08-14T17:08:52.148Z · LW(p) · GW(p)

Who, specifically, is the enemy here, and what, specifically, is the evil thing they want?

It seems to me as though you’re describing motives as evil which I’d consider pretty relatable, so as far as I can tell, you’re calling me an enemy with evil motives. Are people like me (and Elizabeth’s cousin, and Elizabeth herself, both of whom are featured in examples) a special exception whom it’s nonsuspect to call evil, or is there some other reason why this is less suspect than the OP?

Replies from: johnswentworth
comment by johnswentworth · 2019-08-15T21:01:55.042Z · LW(p) · GW(p)

By "enemy" I meant the hypothetical terrorist in the "some terrorist group likes to eat babies" example.

I'm very confused about what you're perceiving here, so I think some very severe miscommunication has occurred. Did you accidentally respond to a different comment than you thought?

Replies from: Benquo
comment by Benquo · 2019-08-18T21:04:56.949Z · LW(p) · GW(p)

How is that relevant to the OP?

comment by habryka (habryka4) · 2019-08-12T06:52:40.192Z · LW(p) · GW(p)

I do think that I tend to update downwards on the likelihood of a piece being true if it seems to have obvious alternative generators for how it was constructed that are unlikely to be very truth tracking. Obvious examples here are advertisements and political campaign speeches.

I do think in that sense it's reasonable to distrust pieces of writing that seem like they are part of some broader conflict, and as such are unlikely to be generated in anything close to an unbiased way. A lot of conflict-theory-heavy pieces tend to be part of some conflict, since accusing your enemies of being evil is memetic warfare 101.

I am not sure (yet) what the norms for discussion around these kinds of updates should be though, but did want to bring up that there exist some valid bayesian inferences here.

comment by jessicata (jessica.liu.taylor) · 2019-08-03T17:50:42.802Z · LW(p) · GW(p)

The whole question of the essay is basically “who should we be angry at”?

While the post has a few sentences about moral blame, the main thesis is that power allows people to avoid committing direct crime while having less-powerful people commit those crimes instead (and hiding this from the powerful people). This is a denotative statement that can be evaluated independent of "who should we be angry at".

Such denotative statements are very useful when considering different mechanisms for resolving principal-agent problems. Mechanism design is, to a large extent, a conflict theory, because it assumes conflicts of interest between different agents, and is determining what consequences should happen to different agents, e.g. in some cases "who we should be angry at" if that's the best available implementation.

Replies from: johnswentworth, Benquo
comment by johnswentworth · 2019-08-03T18:14:48.918Z · LW(p) · GW(p)
Mechanism design is, to a large extent, a conflict theory

I would say that mechanism design is how mistake theorists respond to situations where conflict theory is relevant - i.e., where there really is a "bad guy". Mechanism design is not about "what consequences should happen to different agents", it's about designing a system to achieve a goal using unaligned agents - "consequences" are just one tool in the tool box, and mechanism design (and mistake theory) is perfectly happy to use other tools as well.

the main thesis is that power allows people to avoid committing direct crime while having less-powerful people commit those crimes instead ... This is a denotative statement that can be evaluated independent of "who should we be angry at".

There's certainly a denotative idea in the OP which could potentially be useful. On the other hand, saying "the post has a few sentences about moral blame" seems like a serious understatement of the extent to which the OP is about who to be angry at.

in some cases "who we should be angry at" if that's the best available implementation

The OP didn't talk about any other possible implementations, which is part of why it smells like conflict theory. Framing it through principal-agent problems would at least have immediately suggested others.

comment by Benquo · 2019-08-04T17:26:01.929Z · LW(p) · GW(p)

Mechanism design is, to a large extent, a conflict theory, because it assumes conflicts of interest between different agents, and is determining what consequences should happen to different agents, e.g. in some cases “who we should be angry at” if that’s the best available implementation.

"Conflict theory" is specifically about the meaning of speech acts. This not the general question of conflicting interests. The question of conflict vs mistake theory is fundamentally, what are we doing when we talk? Are we fighting over the exact location of a contested border, or trying to refine our compression of information to better empower us to reason about things we care about [LW · GW]?

Replies from: jessica.liu.taylor
comment by jessicata (jessica.liu.taylor) · 2019-08-04T18:39:04.709Z · LW(p) · GW(p)

Quoting Scott's post:

Mistake theorists treat politics as science, engineering, or medicine. The State is diseased. We’re all doctors, standing around arguing over the best diagnosis and cure. Some of us have good ideas, others have bad ideas that wouldn’t help, or that would cause too many side effects.

Conflict theorists treat politics as war. Different blocs with different interests are forever fighting to determine whether the State exists to enrich the Elites or to help the People.

Part of what seems strange about drawing the line at denotative vs. enactive speech is that there are conflict theorists who can speak coherently/articulately in a denotative fashion (about conflict), e.g.:

It seems both coherent and consistent with conflict theory to believe "some speech is denotative and some speech is enacting conflict."

(I do see a sense in which mechanism design is a mistake theory, in that it assumes that deliberation over the mechanism is possible and desirable; however, once the mechanism is in place, it assumes agents never make mistakes, and differences in action are due to differences in values)

Replies from: Benquo
comment by Benquo · 2019-08-05T02:09:27.434Z · LW(p) · GW(p)

I don't quite draw the line at denotative vs enactive speech - command languages which are not themselves contested would fit into neither "conflict theory" nor "mistake theory."

"War is the continuation of politics by other means" is a very different statement than its converse, that politics is a kind of war. Clausewitz is talking about states with specific, coherent policy goals, achieving those goals through military force, in a context where there's comparatively little pretext of a shared discourse. This is very different from the kind of situation described in Rao where a war is being fought in the domain of ostensibly "civilian" signal processing.

comment by Raemon · 2019-08-03T17:51:25.910Z · LW(p) · GW(p)

I'm not sure I endorse this comment as written, but just wanted to note that I appreciate trying to tease out why the article felt subtly off to you.

Something about framing it through mistake theory still feels off to me, too, though. I see where you're coming from with the naive-conflict-theory feeling off. But something important that the article seemed to be grappling with (or at least that I was grappling with as I read the article, especially through the lens of your comment) was something like:

"We have a bunch of naive intuitions about who to blame. Those naive intuitions get weird in sufficiently complex systems, and it's not obvious what to do. One thing you might do is discard the blame concept. But, this feels a bit unsatisfying because many people are still playing the blame game, and directing the blame at someone, and it's rarely the privileged people who were able to purchase distance from the blameworthy things. And maybe the solution here is to get everyone out of conflict theory, but it's not obvious to me that this is a tractable or even optimal-given-buy-in approach, because people in fact do fight over things." [edit: and jessicata's note that incentive alignment is conflict theory feels relevant]

comment by Hazard · 2020-12-03T22:44:31.278Z · LW(p) · GW(p)

This post makes a fairly straightforward point that has been very helpful for thinking about power. Having several grounding concrete examples really helped as well. The quote from Moral Mazes giving examples of the sorts of wiping-hands-of-knowledge things executives actually say really helped make this more real to me.

comment by Chris_Leong · 2020-12-04T06:19:53.468Z · LW(p) · GW(p)

Explains a particular social phenomenon that I hadn't previously been aware of.

comment by areiamus · 2019-08-02T21:13:14.881Z · LW(p) · GW(p)

Thanks for this insightful piece.

It seems to me that there's a third key message, or possibly a reframing of #1, which is that people without power should be considered less morally culpable for their actions - eg the Wells Fargo employees should be judged less harshly.

The concept of "human error" is often invoked to explain system breakdown as resulting from individual deficiencies (eg, early public discussion of the Boeing 737 MAX crashes had an underlying theme of "Ethiopian and Indonesian pilots are just not as skilled as American pilots") - but a human factors / resilient engineering perspective recognises that humans' roles in technical systems can be empowered or constrained by the system design. And of course it was other humans who designed (approved, built, ...) the system in the first place.

Replies from: Dagon, pktechgirl
comment by Dagon · 2019-08-02T23:38:35.120Z · LW(p) · GW(p)
that people without power should be considered less morally culpable for their actions

I strongly disagree with this. People without power often have less impact from their actions, and actions that do less harm should be judged less harshly. But this is a judgement of the degree of wrongness of the action, not the blame-ability of the person.

Also, moral culpability is not zero-sum. There's plenty of blame for everyone making harmful decisions, and "just following orders" is not a valid defense. Giving bad orders is clearly more harmful than following, but in fact more followers adds to the total and to the individual blame, rather than distributing it.

comment by Elizabeth (pktechgirl) · 2019-08-02T21:32:58.956Z · LW(p) · GW(p)
eg the Wells Fargo employees should be judged less harshly.

I go back and forth on this, and I think the answer might depend on exactly what question you're asking. If what you want to know is "how do we get Wells Fargo to stop defrauding customers?", the answer is obviously to focus on executives, not entry level employees. But if the question is "Do I want to go into business with Dave, who defrauded customers as part of his role as a teller at Wells Fargo? Or Jill, who sliced and diced her data to get her paper count up [LW · GW]"? That answer is going to depend a lot on particulars and context.

comment by Slider · 2019-08-06T15:36:10.396Z · LW(p) · GW(p)

Stumbled across a legal concept that splits "power" in this context into finer distinctions.

There is a concept of "qualified immunity": if a police officer acts in the course of official duties, lawsuits regarding that act should be addressed to the police department, not to the individual officer. Power there does not buy distance from the crime; on the contrary, great power leads to great accountability, even responsibility for things that have no rules governing them but maybe should. It can be clear that a police officer used his own judgement and took initiative of his own accord, yet he can still use the organization's blame shield.

It would be silly to say "It was not me who stole, it was my hand". Focusing on which individual effected the blameworthy action within an organization can distract from the framework in which the organization is responsible for the result. So saying that "it was not the hand that was at fault, it was the brain" might in some sense be more accurate, but it is important to catch the point that the human stole, not individual organs.


comment by Dagon · 2019-08-02T22:49:49.925Z · LW(p) · GW(p)

Thanks for writing this, but I don't believe there's broad agreement on either of your main examples.

I don't know anyone who claims taxes should be only proportional to wealth or income. There are those who say it should be super-linear with income or wealth (tax the rich even more than their proportion of income), and those who say not to tax based on wealth or income, but on consumption (my preference) or the hyper-pragmatic "tax everyone, we need the money". And of course many more nuanced variations.

I likewise don't think that most blame goes to people who had power but failed to stop a harm - it goes to people who were active in the harm (ideally; but often people who are active near the harm, even if not causal. https://www.lesswrong.com/posts/YRgMCXMbkKBZgMz4M/asymmetric-justice [LW · GW] ).

Agreed that in both cases, some dimensions of power can distort the straightforward application of principles, but I argue that that's true mostly BECAUSE the principles are not clearly agreed to by most people. Power exploits the disagreements by diverting attention toward the interpretations that favor the power. "Obscures information flow" is a misleading framing. It's closer to "diverts attention toward different models". And mostly on topics complicated enough that it's hard to say what's factually best, only what preferences are prioritized.

Replies from: pktechgirl, pktechgirl
comment by Elizabeth (pktechgirl) · 2019-08-02T23:58:05.839Z · LW(p) · GW(p)

I'm having trouble responding because I don't understand your cruxes.

I likewise don't think that most blame goes to people who had power but failed to stop a harm - it goes to people who were active in the harm (ideally; but often people who are active near the harm, even if not causal. https://www.lesswrong.com/posts/YRgMCXMbkKBZgMz4M/asymmetric-justice [LW · GW] ).

Are you arguing that in practice people blame the nearest person rather than the most powerful, or that this is theoretically correct or optimal, or some third thing? Because I agree that that's what happens; my argument is that it is wrong. If you disagree, can you share your cruxes for why I am wrong/something else is correct?


Replies from: Dagon
comment by Dagon · 2019-08-03T05:29:36.720Z · LW(p) · GW(p)

Mostly that this is what happens, and it wasn't clear whether you were describing the same thing, or your preferred configuration.

comment by Elizabeth (pktechgirl) · 2019-08-06T20:27:51.394Z · LW(p) · GW(p)
I don't know anyone who claims taxes should be only proportional to wealth or income. There are those who say it should be super-linear with income or wealth (tax the rich even more than their proportion of income), and those who say not to tax based on wealth or income, but on consumption (my preference) or the hyper-pragmatic "tax everyone, we need the money". And of course many more nuanced variations.

I am confused about why you are bringing this up. I don't see how the fact that there are multiple kinds of taxes, and that they are sometimes marginally increasing, changes my points about taxes or power.

Replies from: Dagon
comment by Dagon · 2019-08-06T20:59:09.177Z · LW(p) · GW(p)

I don't find either of your main examples (taxes or blame apportionment) particularly compelling, and gave some reasons for that. And this makes me less likely to accept your thesis that power allows an incorrect perception of moral distance, or that it (necessarily) obscures information flow.

There probably is a relationship in there - power as a measure of potential impact on almost any topic means that power can do these things. It's not clear that it automatically or always does, nor that power is the problem as opposed to bad intentions of the powerful.