Blackmail, continued: communal blackmail, uncoordinated responses

post by Stuart_Armstrong · 2014-10-22T17:53:01.245Z · LW · GW · Legacy · 33 comments

The heuristic that one should always resist blackmail seems a good one (no matter how tricky blackmail is to define). And one should be public about this, too; then, one is very unlikely to be blackmailed. Even if one speaks like an emperor.

But there's a subtlety: what if the blackmail is being used against a whole group, not just against one person? The US justice system is often seen to function like this: prosecutors pile on a ridiculous number of charges, threatening uncounted millennia in jail, in order to get the accused to settle for a lesser charge and avoid the expense of a trial.

But for this to work, they need to occasionally find someone who rejects the offer, put them on trial, and slap them with a ridiculous sentence. Therefore, by standing up to them (or proclaiming in advance that you will reject such offers), you are not actually making yourself immune to their threats. You're setting yourself up to be the sacrificial victim, the one made an example of.

Of course, if everyone were a UDT agent, the correct decision would be for everyone to reject the threat. That would ensure that the threats are never made in the first place. But - and apologies if this shocks you - not everyone in the world is a perfect UDT agent. So the threats will get made, and those resisting them will get slammed to the maximum.

Of course, if everyone could read everyone's mind and was perfectly rational, then they would realise that making examples of UDT agents wouldn't affect the behaviour of non-UDT agents. In that case, UDT agents should resist the threats, and the perfectly rational prosecutor wouldn't bother threatening UDT agents. However - and sorry to shock your views of reality three times in one post - not everyone is perfectly rational. And not everyone can read everyone's minds.

So even a perfect UDT agent must, it seems, sometimes succumb to blackmail.
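
A toy numerical sketch of this (the payoff numbers below are invented purely for illustration, not derived from any decision theory): as long as resisters are a small minority of the target population, a policy of threatening everyone and punishing the refusers stays profitable, so the threats keep getting made.

# Toy model: a blackmailer threatening a mixed population of targets.
# All payoff numbers are made up for illustration only.

def blackmailer_payoff(resist_fraction, gain_if_caved=10.0,
                       punish_cost=3.0, threat_cost=0.1):
    """Expected payoff per target of a 'threaten everyone' policy.

    A fraction resist_fraction of targets refuse and must be punished at a
    cost; the rest cave and hand over gain_if_caved.
    """
    cave_fraction = 1.0 - resist_fraction
    return (cave_fraction * gain_if_caved
            - resist_fraction * punish_cost
            - threat_cost)

# With only a handful of resisters, threatening everyone stays profitable,
# so the threat gets made and the resisters eat the maximum punishment anyway.
for f in (0.0, 0.01, 0.1, 0.5, 0.9):
    print(f"resist fraction {f:4.2f}: expected payoff per target {blackmailer_payoff(f):6.2f}")

With these made-up numbers, the policy only stops paying once more than about three quarters of the population resists.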

33 comments

Comments sorted by top scores.

comment by hyporational · 2014-10-24T09:00:41.005Z · LW(p) · GW(p)

It's quite common for elderly patients' relatives to threaten me with all kinds of time-consuming or reputation-hurting bullshit, like complaints involving a lot of paperwork or bad press in the local newspaper, unless I wedge their relatively healthy grandpa straight from the hospital past the line to a nursing home as fast as possible, instead of sending them back home where they'd do just fine. I haven't caved in, and so far nothing bad has happened except some relatively harmless badmouthing. It's common for doctors to be blackmailed into doing all kinds of stuff, like unnecessary investigations or treatments.

You could say that in the US there have been enough trials and enough bad press for doctors that blackmail often works. I'm glad that this isn't the case in Finland and we're still practicing cost-effective medicine instead of covering our asses from all angles out of fear. The flip side of this is that incompetent doctors roam a bit too freely.

In a couple of cases I've made decisions that I've been blackmailed to make, not because of the blackmailing but because the blackmailer's interests happened to coincide with my medical reasoning. I find this problematic for signalling reasons.

comment by dthunt · 2014-10-25T16:14:18.817Z · LW(p) · GW(p)

I have made a prosecutor turn pale by suggesting that courthouses should be places where people with plea bargains shop their offers around with each other, so that they know what's a good deal and what's a bad deal.

comment by bogus · 2014-10-25T20:29:51.504Z · LW(p) · GW(p)

what if the blackmail is being used against a whole group, not just against one person?

If the group is made up of UDT agents, then they clearly coordinate. If CDT agents are a small fraction of the group (assuming that transaction costs make perfect bargaining infeasible for CDT agents, as usual), then UDT agents' (meta-)incentive to reject blackmail will be muted to some degree, depending on the fraction of CDT agents. The opposite consideration applies to the blackmailer's side: when faced with rejection, she has to expend resources on a costly punishment that will only affect the fraction of agents that's CDT. So her incentive to engage in blackmail in the first place rises as the fraction of UDT agents drops.

On a different note, assuming that the informational environment is favorable, the best response to "group blackmail" is probably not for each agent to reject blackmail individually, but for all agents to coordinate on incenting whoever can reject blackmail at lowest cost. Under this assumption, UDT agents will have a (meta-)incentive to incent rejection by any agents in their group, including CDT agents. But still, the main result is unchanged: as the fraction of UDT agents falls, the resources expended in providing such incentives will drop proportionally.
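
A small sketch of that coordination point (the costs below are made up, and it assumes, rather generously, that a single credible rejection is enough to make the group blackmail unprofitable):

# Compare an uncoordinated response, where each member bears their own cost
# of rejecting, with a coordinated one, where the group subsidises whoever
# can reject most cheaply. Costs are invented for illustration.

rejection_costs = [9.0, 7.5, 4.0, 1.5, 6.0]  # cost each member would bear by rejecting

everyone_rejects = sum(rejection_costs)      # uncoordinated: everyone pays their own cost
cheapest_rejector = min(rejection_costs)     # coordinated: fund only the cheapest rejection

print(f"total cost if everyone rejects individually: {everyone_rejects:.1f}")
print(f"cost if the group subsidises the cheapest rejector: {cheapest_rejector:.1f}")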

comment by Gunnar_Zarncke · 2014-10-22T22:48:37.925Z · LW(p) · GW(p)

If this is a communal setting, the logical step for the UDT agents is to coordinate, build a mutual blackmail prevention fund, and clearly signal their membership. And I'd guess such a thing exists.

Replies from: somnicule
comment by somnicule · 2014-10-23T04:26:25.584Z · LW(p) · GW(p)

That only works if UDT agents make up a significant proportion of agents in the setting. With 10 UDT agents plus 1000 CDT agents, say, the UDT agents are still vulnerable.

Replies from: Stuart_Armstrong, Gunnar_Zarncke
comment by Stuart_Armstrong · 2014-10-23T09:17:20.422Z · LW(p) · GW(p)

It also works if UDT agents can credibly distinguish themselves from non-UDT agents, whatever the proportions.

Replies from: SilentCal
comment by SilentCal · 2014-10-23T17:28:20.690Z · LW(p) · GW(p)

This requires not only that the UDT agents can reliably signal their UDT-ness to the blackmailers, but that the blackmailers can reliably signal to the non-UDTers that they can tell the difference. That is, letting the UDTers off might make the non-UDTers think that if they refuse the blackmail they'll also be let off.

So the ability of UDTers to resist blackmail depends not just on the properties of the UDTers and the blackmailers but also on those of the non-UDTers.

Replies from: Lumifer
comment by Lumifer · 2014-10-23T17:38:07.388Z · LW(p) · GW(p)

All y'all are assuming smart blackmailers.

The original example is of US prosecutors, right? I bet a standard prosecutor functions equivalently to a simple script:

threaten_multiple_charges();
if (pleads_guilty) { convict_reasonably() } else { throw_book() }

You can signal whatever you want to an agent executing this script, it's not going to care.

Replies from: SilentCal
comment by SilentCal · 2014-10-23T17:59:45.248Z · LW(p) · GW(p)

Right, the condition 'UDT agents can credibly distinguish themselves' sounds like a property of UDT agents but is actually a joint property of UDT agents and blackmailers.

That said, prosecutors ultimately follow that script because it works. I say 'ultimately' because it might be mediated by effects like 'they follow the script because they are rewarded for following it, and their bosses reward them for following it because it works'. The justice system is far from a rational agent, but it's also not an unincentivisable rock.

Replies from: Lumifer
comment by Lumifer · 2014-10-23T18:08:21.900Z · LW(p) · GW(p)

That said, prosecutors ultimately follow that script because it works

Yes, but note that here we are treating "works" as a binary variable and the presence of a minority of UDT agents in the target population is not going to switch "works" from true to false. In order for the prosecutors to care about signals, either a majority of the target population needs to credibly signal, or the throw_book() branch needs to have noticeable costs for prosecutors associated with it.

Replies from: Azathoth123, SilentCal
comment by Azathoth123 · 2014-10-25T07:58:29.314Z · LW(p) · GW(p)

or the throw_book() branch needs to have noticeable costs for prosecutors associated with it.

It does, otherwise they would simply do it to all suspects.

Replies from: Lumifer
comment by Lumifer · 2014-10-27T01:07:27.601Z · LW(p) · GW(p)

otherwise they would simply do it to all suspects.

What makes you think they don't?

Replies from: gwern
comment by gwern · 2014-10-27T01:26:47.484Z · LW(p) · GW(p)

Courts are generally heavily booked, trials take forever, it's a perennial news issue that courts are underfunded (this seems to be a major factor behind the incredibly nasty and abusive rise in 'offender-funded' court systems & treating traffic violations & civil asset seizures as normal funding sources to be maximized) and I've seen estimates that as much as 90%+ of all cases resolve as plea bargains. There's no way the court system could handle a sudden 10-20x increase in workload, which is what would happen if prosecutors stopped settling for somewhat reasonable plea bargains and tried to throw the book at suspects who would then have little choice but to take it to trial.
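
The rough arithmetic behind that multiplier, assuming a plea-bargained case consumes negligible court time compared with a full trial:

# If 90-95% of cases currently end in pleas, universal refusal to plead
# multiplies the number of trials by roughly 10-20x.
for plea_rate in (0.90, 0.95):
    trial_rate = 1.0 - plea_rate
    multiplier = 1.0 / trial_rate
    print(f"plea rate {plea_rate:.0%}: roughly {multiplier:.0f}x as many trials")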

(I recall reading about an attempt to organize defendants in one US court district to agree to not plea bargain, overloading the system so badly that most of the cases would have to be dropped; but I don't recall what happened and can't seem to refind it. I'm guessing it didn't work out, given that this is almost literally the prisoner's dilemma.)

Replies from: Lumifer
comment by Lumifer · 2014-10-27T01:52:29.444Z · LW(p) · GW(p)

Oh, sorry, I think I was unclear or probably even confusing. I didn't mean prosecutors actually just ship off all suspects to the courts with a long list of charges. I meant that they threaten everyone.

Obviously, a plea bargain makes things much easier for prosecutors so their usual goal is to obtain one. However if the accused is sufficiently stubborn, their choice is (a) to assemble a case and prosecute for a few charges; or (b) to assemble a case and prosecute for many charges. I don't think there is a major cost-to-prosecutors difference between (a) and (b) so they go for (b).

Replies from: Azathoth123
comment by Azathoth123 · 2014-10-30T06:58:10.538Z · LW(p) · GW(p)

I didn't mean prosecutors actually just ship off all suspects to the courts with a long list of charges. I meant that they threaten everyone.

In that case, the argument you made here makes no sense.

Replies from: Lumifer
comment by Lumifer · 2014-10-30T14:39:04.985Z · LW(p) · GW(p)

Why is that?

comment by SilentCal · 2014-10-23T21:10:03.704Z · LW(p) · GW(p)

You mean because prosecutors' incentives are mediated by the justice system, and the justice system has friction such that it won't react to a small change? Makes sense.

The extent to which this is actually true is a complicated factual question about the US justice system.

comment by Gunnar_Zarncke · 2014-10-23T06:00:12.582Z · LW(p) · GW(p)

Agreed. But still less so than before.

comment by DeterminateJacobian · 2014-10-24T00:50:15.910Z · LW(p) · GW(p)

I was thinking about this question in regard to whether CDT agents might have a simply Bayesian reason to mimic UDT agents, not only in any pre-decision signaling, but also in the actual decision. And I realized an important feature of these problems is that the game ends precisely when the agent submits a decision, which highlights the feature of UDT that distinguishes its cooperation from simple Bayesian reasoning: a distinction that becomes important when you start adding qualifiers that include unknowns about other agents' source code. The game may have any number of confounders and additional decision steps before the final step, but UDT is exclusively the feature that allows cooperation on that final step.

comment by shminux · 2014-10-23T02:30:29.419Z · LW(p) · GW(p)

A UDT+ agent would clearly communicate her resistance to blackmail and cause the blackmailer to pick an easier target.

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2014-10-23T09:16:44.070Z · LW(p) · GW(p)

In the case above - no. Without definite ways of distinguishing UDT+ agents from other agents, they would still get blackmailed, lest other agents try and pretend to be UDT+ agents.

comment by ChristianKl · 2014-10-23T21:38:08.126Z · LW(p) · GW(p)

Politics is the mindkiller.

Can you find an example that's less political to make the same point?

Replies from: Nornagest, hyporational, Stuart_Armstrong
comment by Nornagest · 2014-10-23T22:11:22.090Z · LW(p) · GW(p)

I don't think prosecutorial charge inflation is a substantially politicized issue in the US; at least, it's never come up in an election I remember. Is it in Germany?

Replies from: ChristianKl
comment by ChristianKl · 2014-10-23T23:11:03.796Z · LW(p) · GW(p)

In Germany it's not the job of a prosecutor to get the maximum possible sentence, so no, the problem doesn't exist in the same way in Germany. If someone in Germany commits a crime that gets 3 years and another that gets 4 years, that doesn't simply add up to 7 years. Our system is much better designed.

Being tough on crime happens to be a politicized topic in the US, and there are many people who do hold the opinion that the US incarcerates a percentage of its population that's significantly too high. It's not a Republican vs. Democrat issue, but that doesn't mean it's not political in nature.

In this case I'm not sure whether blackmail is really the right term. Wikipedia defines blackmail as "an act, often a crime, involving unjustified threats to make a gain or cause loss to another unless a demand is met." Is using a valid law for charge inflation an unjustified threat? That depends a lot on your political beliefs.

comment by hyporational · 2014-10-24T08:49:19.947Z · LW(p) · GW(p)

Maybe a less political example.

Replies from: VAuroch
comment by VAuroch · 2014-10-24T09:38:17.236Z · LW(p) · GW(p)

That example seems significantly more politicized. It definitely is, in a US context; the "Death Panels" political meme grew out of attempts to deal with that problem.

Replies from: hyporational
comment by hyporational · 2014-10-24T11:49:42.647Z · LW(p) · GW(p)

Perhaps you're right. Dealing with all the nonsense just doesn't feel like politics from the inside :)

The death panel myth seems to be a separate issue. I'm talking about discharging patients who don't need further medical care or nursing home care, because they're healthy enough to go home. The same rules would apply to younger patients. The concept of futile care is another problem entirely, and definitely a more political issue, especially in religious countries like the US.

Tests and treatments on the other hand can be pointless for many other reasons than patients being so sick that everything is futile.

comment by Stuart_Armstrong · 2014-10-24T09:17:19.366Z · LW(p) · GW(p)

Can you find an example that's less political to make the same point?

Why? Are you arguing the example is wrong? Are you saying that you disagree with it personally? Because "don't talk about this general fact because someone else might think it has (weak) political implications" seems a heuristic to be avoided.

Replies from: ChristianKl
comment by ChristianKl · 2014-10-24T11:34:29.451Z · LW(p) · GW(p)

political implications

No. We do have research on how people get mindkilled. It's not about implications. Certain classes of claims, for example, lead most people to stop being able to use Bayes' Rule when you ask them to analyse a problem. I'm claiming that the example is in that class.

Are you saying that you disagree with it personally?

A core question in this case is: "What do we gain from defining blackmail in a way that this example is covered? What do we gain from defining it in a way that this isn't covered?"

Given that most people here disapprove politically of the actions of those prosecutors, they feel the desire to punish them by using a negative label. That makes it harder to have a discussion based on the merits.

Because "don't talk about this general fact because someone else might think it has (weak) political implications" seems a heuristic to be avoided.

That's not the heuristic brought forward in "Politics is the mindkiller". The heuristic is: you have a set A of examples (X_1, Y_1, X_2, Y_2, Y_3, ...). The X_i examples are political and fire up a bunch of mental biases in your reader. Most of your readers will suddenly start to flunk Bayesian calculation if you use one of the X_i examples. The Y_i examples, on the other hand, allow your readers to reason normally. If you want to choose an example to talk about A, don't choose one of the X_i but one of the Y_i.

Replies from: Jiro, Jiro
comment by Jiro · 2014-10-24T14:57:42.186Z · LW(p) · GW(p)

A core question in this case is: "What do we gain from defining blackmail in a way that this example is covered? What do we gain from defining it in a way that this isn't covered?"

We're trying to formalize our intuitions.

Replies from: ChristianKl
comment by ChristianKl · 2014-10-25T17:27:24.366Z · LW(p) · GW(p)

We're trying to formalize our intuitions.

The way the opening post is written, it doesn't ask the question "Should we consider this behavior blackmail?" but takes it for granted that the answer to that question is "Yes".

Shutting off questions like that is quite typical of how political mindkilling works. "Boo, evil prosecutors."

Replies from: Jiro
comment by Jiro · 2014-10-25T20:53:38.669Z · LW(p) · GW(p)

That's the point of formalizing intuitions. He has a preexisting category, and he's trying to find the rule which formally describes what goes in the category. In order to do that, you have to take for granted that certain things are and aren't in the category. If you didn't have a preexisting category, there would be no reason to do it.

comment by Jiro · 2014-10-26T03:50:56.625Z · LW(p) · GW(p)

Certain classes of claims, for example, lead most people to stop being able to use Bayes' Rule when you ask them to analyse a problem.

Most people are unable to use Bayes' rule anyway, regardless of the class of claims.