Privacy

post by Zvi · 2019-03-15T20:20:00.269Z · LW · GW · 78 comments


Follow-up to: Blackmail

[Note on Compass Rose response: This is not a response to the recent Compass Rose response, it was written before that, but with my post on Hacker News I need to get this out now. It has been edited in light of what was said. His first section is a new counter-argument against a particular point that I made – it is interesting, and I have a response but it is beyond scope here. It does not fall into either main category, because it is addressing a particular argument of mine rather than being a general argument for blackmail. The second counter-argument is a form of #1 below, combined with #2, #3 and #4 (they do tend to go together) so it is addressed somewhat below, especially the difference between ‘information tends to be good’ and ‘information chosen, engineered and shared so as to be maximally harmful tends to be bad.’ My model and Ben’s of practical results also greatly differ. We intend to hash all this out in detail in conversations, and I hope to have a write-up at some point. Anyway, on to the post at hand.]

There are two main categories of objection to my explicit thesis that blackmail should remain illegal.

Today we will not address what I consider the more challenging category. Claims that while blackmail is bad, making it illegal does not improve matters. Mainly because we can’t or won’t enforce laws, so it is unclear what the point is. Or costs of enforcement exceed benefits.

The category I address here claims blackmail is good. We want more.

Key arguments in this category:

  1. Information is good.*
  2. Blackmail reveals bad behavior.
  3. Blackmail provides incentive to uncover bad behavior.
  4. Blackmail provides a disincentive to bad behavior.
  5. Only bad, rich or elite people are vulnerable to blackmail.
  6. We should strongly enforce all norms on everyone, without context dependence not explicitly written into the norm, and fix or discard any norms we don’t want to enforce in this way.

A key assumption is that blackmail mostly targets existing true bad behavior. I do not think this is true. For true or bad or for existing. For details, see the previous post.

Such arguments also centrally argue against privacy. Blackmail advocates often claim privacy is unnecessary or even toxic.

It’s one thing to give up on privacy in practice, for yourself, in the age of Facebook. I get that. It’s another to argue that privacy is bad. That it is bad to not reveal all the information you know. Including about yourself.

This radical universal transparency position, perhaps even assumption, comes up quite a lot recently. Those advocating it act as if those opposed carry the burden of proof.

No. Privacy is good.

A reasonable life, a good life, requires privacy.

I

We need a realm shielded from signaling and judgment. A place where what we do does not change what everyone thinks about us, or get us rewarded and punished. Where others don’t judge what we do based on the assumption that we are choosing what we do knowing that others will judge us based on what we do. Where we are free from others’ Bayesian updates and those of computers, from what is correlated with what, with how things look. A place to play. A place to experiment. To unwind. To celebrate. To learn. To vent. To be afraid. To mourn. To worry. To be yourself. To be real. 

We need people there with us who won’t judge us. Who won’t use information against us. 

We need such trust to not risk our ruin. We need to minimize how much we wonder whether someone’s goal is to get information to use against us. Or what price would tempt them to do that.

Friends. We desperately need real friends.

II

Norms are not laws. 

Life is full of trade-offs and necessary unpleasant actions that violate norms. This is not a fixable bug. Context is important for both enforcement and intelligent or useful action.

Even if we could fully enforce norms in principle, different groups have different such norms and each group’s/person’s norms are self-contradictory. Hard decisions mean violating norms and are common in the best of times.

A complete transformation of our norms and norm principles, beyond anything I can think of in a healthy historical society, would be required to even attempt full non-contextual strong enforcement of all remaining norms. It is unclear how one would avoid a total loss of freedom, or a total loss of reasonable action, productivity and survival, in such a context. Police states and cults and thought police and similar ideas have been tried and have definitely not improved this outlook.

What we do for fun. What we do to make money. What we do to stay sane. What we do for our friends and our families. What maintains order and civilization. What must be done. 

Necessary actions are often the very things others wouldn’t like, or couldn’t handle… if revealed in full, with context simplified to what gut reactions can handle.

Or worse, with context chosen to have the maximally negative gut reactions. 

There are also known dilemmas where any action taken would be a norm violation of a sacred value. And lots of values that claim to be sacred, because every value wants to be sacred, but which we know we must treat as not sacred when making real decisions with real consequences.

Or in many contexts, justifying our actions would require revealing massive amounts of private information that would then cause further harm (and which people very much do not have the time to properly absorb and consider). Meanwhile, you’re talking about the bad-sounding thing, which digs your hole deeper.

We all must do these necessary things. These often violate both norms and formal laws. Explaining them often requires sharing other things we dare not share.

I wish everyone a past and future Happy Petrov Day [LW · GW].

Part of the job of making sausage is to allow others not to see it. We still get reliably disgusted when we see it.

We constantly must claim ‘everything is going to be all right’ or ‘everything is OK.’ That’s never true. Ever.

In these, and in many other ways, we live in an unusually hypocritical time. A time when people need be far more afraid both to not be hypocritical, and of their hypocrisy being revealed.

We are a nation of men, not of laws.

But these problems, while improved, wouldn’t go away in a better or less hypocritical time. Norms are not a system that can have full well-specified context dependence and be universally enforced. That’s not how norms work.

III

Life requires privacy so we can not reveal the exact extent of our resources.

If others know exactly what resources we have, they can and will take all of them. The tax man who knows what you can pay, what you would pay, already knows what you will pay. For government taxes, and for other types of taxes.

This is not only about payments in money. It is also about time, and emotion, and creativity, and everything else.

Many things in life claim to be sacred. Each claims all known available resources. Each claims we are blameworthy for any resources we hold back. If we hold nothing back, we have nothing.

That which is fully observed cannot be one’s slack. Once all constraints are known, they bind.

Slack requires privacy. Life requires slack.

This includes our decision-making process.

If it is known how we respond to any given action, others find best responses. They will respond to incentives. They exploit exactly the amount we won’t retaliate against. They feel safe.

We seethe and despair. We have no choices. No agency. No slack.

It is a key protection that one might fight back, perhaps massively out of proportion, if others went after us. To any extent.

It is a key protection that one might do something good, if others helped you. Rather than others knowing exactly what things will cause you to do good things, and which will not.

It is central that one react when others are gaming the system. 

Sometimes that system is you.

World peace, and doing anything at all that interacts with others, depends upon both strategic confidence in some places, and strategic ambiguity in others. We need to choose carefully where to use which.

Having all your actions fully predictable and all your information known isn’t Playing in Hard Mode. That’s Impossible [LW · GW] Mode.

I now give specific responses to the six claims above. This mostly summarizes from the previous post.

  1. Information, by default, is probably good. But this is a tendency, not a law of physics. As discussed last time, information engineered to be locally harmful probably is net harmful. Keep this distinct from incentive effects on bad behavior, which is argument number 4.
  2. Most ‘bad’ behavior will be a justification for scapegoating, involving levels of bad behavior that are common. Since such bad behavior is rarely made common knowledge, and allowing it to become common knowledge is often considered far worse behavior than the original action, making it common knowledge forces oversize reaction and punishment. What people are punishing is that you are the type of person who lets this type of information become common knowledge about you. Thus you are not a good ally. In a world like ours, where all are anticipating future reactions by others anticipating future reactions, this can be devastating.
  3. Blackmail does provide incentive to investigate to find bad behavior. But if found, it also provides incentive to make sure it is never discovered. And what is extracted from the target is often further bad behavior, largely because…
  4. Blackmail also provides an incentive to engineer or provoke bad behavior, and to maximize the damage that would result from revelation of that behavior. The incentives promoting more bad behavior likely are stronger than the ones discouraging it. I argue in the last piece that it is common even now for people to engineer blackmail material against others and often also against themselves, to allow it to be used as collateral and leverage. That a large part of job interviews is proving that you are vulnerable in these ways. That much bonding is about creating mutual blackmail material. And so on. This seems quite bad.
  5. If any money one has can be extracted, then one will permanently be broke. This is a lot of my model of poverty traps – there are enough claiming-to-be-sacred things demanding resources that any resources get extracted, so no one tries to acquire resources or hold them for long. Consider what happens if people in such situations are allowed to borrow money. Even if you are (for any reason) sufficiently broke that you cannot pay money, you have much that you could be forced to say or do. Often this involves deep compromises of sacred values, of ethics and morals and truth and loyalty and friendship. It often involves being an ally of those you despise, and reinforcing that which is making your life a living hell, to get the pain to let up a little. Privacy, and the freedom from blackmail, are the only ways out.
  6. A full exploration is beyond scope but section two above is a sketch.

* – I want to be very clear that yes, information in general is good. But that is a far cry from the radical claim that all and any information is good and sharing more of it is everywhere and always good.


Comments sorted by top scores.

comment by Wei Dai (Wei_Dai) · 2019-03-17T05:10:30.128Z · LW(p) · GW(p)

We need a realm shielded from signaling and judgment.

To support this, there are results from economics / game theory showing that signaling equilibria can be worse than non-signaling equilibria (in the sense of Pareto inefficiency). Quoting one example from http://faculty.econ.ucdavis.edu/faculty/bonanno/teaching/200C/Signaling.pdf

So the benchmark is represented by the situation where no signaling takes place and employers -- not being able to distinguish between more productive and less productive applicants and not having any elements on which to base a guess -- offer the same wage to every applicant, equal to the average productivity. Call this the non-signaling equilibrium. In a signaling equilibrium (where employers’ beliefs are confirmed, since less productive people do not invest in education, while the more productive do) everybody may be worse off than in the non-signaling equilibrium. This occurs if the wage offered to the non-educated is lower than the average productivity (= wage offered to everybody in the non-signaling equilibrium) and that offered to the educated people is higher, but becomes lower (than the average productivity) once the costs of acquiring education are subtracted. The possible Pareto inefficiency of signaling equilibria is a strong result and a worrying one: it means that society is wasting resources in the production of education. However, it is not per se enough to conclude that education (i.e. the signaling activity) should be eliminated. The result is not that, in general, elimination of the signaling activity leads to a Pareto improvement: Spence simply pointed out that this is a possibility.

So in theory it seems quite possible that privacy is a sort of coordination mechanism for avoiding bad signaling equilibria. Whether or not it actually is, I'm not sure. That seems to require empirical investigation and I'm not aware of such research.
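The quoted Pareto comparison can be checked with a small numeric sketch. All payoff numbers here are hypothetical (not taken from the paper); they are simply chosen so that the separating equilibrium's incentive constraints hold while both types end up below the pooled wage:

```python
# A minimal Spence-style signaling game. Numbers are assumed for
# illustration: two worker types in equal shares, education is a pure
# signal (it does not raise productivity), and it costs the
# high-productivity type less than the low-productivity type.

prod_low, prod_high = 1.0, 2.0
share_high = 0.5
cost_low, cost_high = 1.25, 0.75  # assumed education costs

# Non-signaling equilibrium: everyone is paid average productivity.
wage_pooled = share_high * prod_high + (1 - share_high) * prod_low  # 1.5

# Candidate separating (signaling) equilibrium: educated workers are
# paid prod_high, uneducated workers prod_low. Check that each type
# prefers its assigned action (incentive compatibility):
high_gets_educated = prod_high - cost_high >= prod_low   # 1.25 >= 1.0
low_stays_uneducated = prod_high - cost_low <= prod_low  # 0.75 <= 1.0
assert high_gets_educated and low_stays_uneducated

# Net payoffs in the separating equilibrium:
payoff_high = prod_high - cost_high  # 1.25
payoff_low = prod_low                # 1.0

# Both types are strictly worse off than under pooling: Pareto-inferior.
assert payoff_high < wage_pooled and payoff_low < wage_pooled
print(wage_pooled, payoff_high, payoff_low)  # 1.5 1.25 1.0
```

The education spending here is pure deadweight loss: it moves wages around between types but destroys value in aggregate, which is exactly the "society is wasting resources" worry in the quote.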

Replies from: jessica.liu.taylor
comment by jessicata (jessica.liu.taylor) · 2019-03-17T05:15:25.447Z · LW(p) · GW(p)

I get a 404 for the paper. The part you quoted says "maybe this might happen" but doesn't give an economic argument that it could happen, it just says "maybe employers don't pay people enough for it to be worth it". Is there somewhere where the argument is actually made?

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2019-03-17T05:43:29.038Z · LW(p) · GW(p)

It looks like the code that turns a URL into a link made the colon into part of the link. I removed it so the link should work now. The argument should be in the PDF. Basically you just solve the game assuming the ability to signal and compare that to the game where signaling isn't possible, and see that the signaling equilibrium makes everyone worse off (in that particular game).

Replies from: jessica.liu.taylor
comment by jessicata (jessica.liu.taylor) · 2019-03-17T05:59:37.374Z · LW(p) · GW(p)

OK, looking at the argument, I think it makes sense that signalling equilibria can potentially be Pareto-worse than non-signalling equilibria, as they can have more of a "market for lemons" problem. Worth noting that not all equilibria in the game-with-signalling are worse than non-signalling equilibria (I think "no one gets education, everyone gets paid average productivity" is still a Nash equilibrium), it's just that signalling enables additional equilibria, some of which are bad.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2019-03-17T06:55:59.036Z · LW(p) · GW(p)

OK, looking at the argument, I think it makes sense that signalling equilibria can potentially be Pareto-worse than non-signalling equilibria, as they can have more of a “market for lemons” problem.

Not sure what the connection to “market for lemons” is. Can you explain more (if it seems important)?

(I think “no one gets education, everyone gets paid average productivity” is still a Nash equilibrium)

I agree that is still a Nash equilibrium and I think even a Perfect Bayesian Equilibrium, but there may be a stronger formal equilibrium concept that rules it out? (It's been a while since I studied all those equilibrium refinements so I can't tell you which off the top of my head.)

I think under Perfect Bayesian Equilibrium, off-the-play-path nodes formally happen with probability 0 and the players are allowed to update in an arbitrary way on those nodes, including not update at all. But intuitively if someone does deviate from the proposed equilibrium strategy and get some education, it seems implausible that employers don't update towards them being type H and therefore offer them a higher salary.

Replies from: jessica.liu.taylor
comment by jessicata (jessica.liu.taylor) · 2019-03-17T07:24:27.175Z · LW(p) · GW(p)

Not sure what the connection to “market for lemons” is.

People who haven't gotten an education are, on average, unproductive, since productive people have a better alternative to not getting an education (namely, getting an education). Similarly, in a market for lemons, cars on the market are, on average, low-quality, since people with high-quality cars have a better alternative to putting them on an open market (namely, continuing to use the car, or selling it in a higher-trust market).

I agree that is still a Nash equilibrium and I think even a Perfect Bayesian Equilibrium, but there may be a stronger formal equilibrium concept that rules it out?

It's possible, I don't know the formal stronger equilibrium concepts though.

Now that I think about it, there are even simpler cases of more-available information making Nash equilibria worse. In any finite iterated prisoner's dilemma with known horizon, the only Nash equilibrium is to always defect. But, in a finite iterated prisoner's dilemma with unknown geometrically-distributed horizon (sufficiently far away in expectation), there are Nash equilibria that generate mutual cooperation (due to folk theorems).
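The finite-vs-geometric horizon contrast above can be made concrete with the standard folk-theorem arithmetic (payoff values assumed, not from the comment): with a known horizon, backward induction unravels cooperation in every Nash equilibrium, while with a geometric continuation probability, grim trigger sustains cooperation once that probability is high enough:

```python
# Standard condition for grim trigger to be an equilibrium of the
# iterated prisoner's dilemma with geometric horizon. Payoffs are
# assumed example values satisfying T > R > P > S.

T, R, P, S = 5.0, 3.0, 1.0, 0.0  # temptation, reward, punishment, sucker

def grim_trigger_is_equilibrium(delta):
    """delta = probability the game continues after each round.
    Cooperating forever yields an expected R / (1 - delta).
    Best deviation yields T once, then P forever under punishment."""
    cooperate_value = R / (1 - delta)
    deviate_value = T + delta * P / (1 - delta)
    return cooperate_value >= deviate_value

# Solving R/(1-d) >= T + d*P/(1-d) gives d >= (T - R) / (T - P).
threshold = (T - R) / (T - P)  # 0.5 for these payoffs

assert not grim_trigger_is_equilibrium(0.4)  # horizon too short in expectation
assert grim_trigger_is_equilibrium(0.6)      # long enough: cooperation holds
print(threshold)  # 0.5
```

This is the sense in which more information (common knowledge of the exact last round) destroys the good equilibria rather than improving them.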

comment by jessicata (jessica.liu.taylor) · 2019-03-15T22:32:22.084Z · LW(p) · GW(p)

In a scapegoating environment, having privacy yourself is obviously pretty important. However, you seem to be making a stronger point, which is that privacy in general is good (e.g. we shouldn't have things like blackmail and surveillance which generally reduce privacy, not just our own privacy). I'm going to respond assuming you are arguing in favor of the stronger point.

This post rests on several background assumptions about how the world works, which are worth making explicit. I think many of these are empirically true but are, importantly, not necessarily true, and not all of them are true.

We need a realm shielded from signaling and judgment. A place where what we do does not change what everyone thinks about us, or get us rewarded and punished.

Implication: it's bad for people to have much more information about other people (generally), because they would reward/punish them based on that info, and such rewarding/punishing would be unjust. We currently have scapegoating, not justice. (Note that a just system for rewarding/punishing people will do no worse by having more information, and in particular will do no worse than the null strategy of not rewarding/punishing behavior based on certain subsets of information)

We need people there with us who won’t judge us. Who won’t use information against us.

Implication: "judge" means to use information against someone. Linguistic norms related to the word "judgment" are thoroughly corrupt enough that it's worth ceding to these, linguistically, and using "judge" to mean (usually unjustly!) using information against people.

A complete transformation of our norms and norm principles, beyond anything I can think of in a healthy historical society, would be required to even attempt full non-contextual strong enforcement of all remaining norms.

Implication (in the context of the overall argument): a general reduction in privacy wouldn't lead to norms changing or being enforced less strongly, it would lead to the same norms being enforced strongly. Whatever or whoever decides which norms to enforce and how to enforce them is reflexive rather than responsive to information. We live in a reflex-based control system.

There are also known dilemmas where any action taken would be a norm violation of a sacred value.

Implication: the system of norms is so corrupt that they will regularly put people in situations where they are guaranteed to be blamed, regardless of their actions. They won't adjust even when this is obvious.

Part of the job of making sausage is to allow others not to see it. We still get reliably disgusted when we see it.

Implication: people expect to lose value by knowing some things. Probably, it is because they would expect to be punished due to it being revealed they know these things (as in 1984). It is all an act, and it's better not to know that in concrete detail.

We constantly must claim ‘everything is going to be all right’ or ‘everything is OK.’ That’s never true. Ever.

Implication: the control system demands optimistic stories regardless of the facts. There is something or someone forcing everyone to call the deer a horse under threat of punishment, to maintain a lie about how good things are, probably to prop up an unjust regime.

But these problems, while improved, wouldn’t go away in a better or less hypocritical time. Norms are not a system that can have full well-specified context dependence and be universally enforced. That’s not how norms work.

Implication: even in the most just possible system of norms, it would be good to sometimes violate those norms and hide the fact that you violated them. (This seems incorrect to me!)

If others know exactly what resources we have, they can and will take all of them.

Implication: the bad guys won; we have rule by gangsters, who aren't concerned with sustainable production, and just take as much stuff as possible in the short term. (This seems on the right track but partially false; the top marginal tax rate isn't 100% [EDIT: see Ben's comment, the actual rate of extraction is higher than the marginal tax rate])

If it is known how we respond to any given action, others find best responses. They will respond to incentives. They exploit exactly the amount we won’t retaliate against. They feel safe.

Implication: more generally available information about what strategies people are using helps "our" enemies more than it helps "us". (This seems false to me, for notions of "us" that I usually use in strategy)

World peace, and doing anything at all that interacts with others, depends upon both strategic confidence in some places, and strategic ambiguity in others. We need to choose carefully where to use which.

Implication (in context): strategic ambiguity isn't just necessary for us given our circumstances, it's necessary in general, even if we lived in a surveillance state. (Huh?)

To conclude: if you think the arguments in this post are sound (with the conclusion being that we shouldn't drastically reduce privacy in general), you also believe the implications I just listed, unless I (or you) misinterpreted something.

Replies from: Viliam, Benquo, Zvi, cousin_it, Raemon, BurntVictory, ChristianKl
comment by Viliam · 2019-03-17T00:59:36.942Z · LW(p) · GW(p)
> If others know exactly what resources we have, they can and will take all of them.
Implication: the bad guys won; we have rule by gangsters, who aren't concerned with sustainable production, and just take as much stuff as possible in the short term.

To me this feels like Zvi is talking about some impersonal universal law of economics (whether such law really exists or not, we may debate), and you are making it about people ("the bad guys", "gangsters") and their intentions, like we could get a better outcome instead by simply replacing the government or something.

I see it as something similar to Moloch. If you have resources, it creates a temptation for others to try taking them. Nice people will resist the temptation... but in a prisoners' dilemma with a sufficient number of players, sooner or later someone will choose to defect, and it only takes one such person for you to get hurt. You can defend against an attempt to steal your resources, but the defense also costs you some resources. And perhaps... in the hypothetical state of perfect information... the only stable equilibrium is when you spend so much on defense that there is almost nothing left to steal from you.

And there is nothing special about the "bad guys" other than the fact that, statistically, they exist. Actually, if the hypothesis is correct, then... in the hypothetical state of perfect information... the bad guys would themselves end up in the very same situation, having to spend almost all successfully stolen resources to defend themselves against theft by other bad guys.

To defend yourself from the ordinary thieves, you need police. The police needs some money to be able to do their job. But what prevents them from abusing their power to take more from you? So you have the government to protect you from the police, but the government also needs money to do their job, and it is also tempted to take more. In the democratic government, politicians compete against each other... and the good guy who doesn't want to take more of your money than he actually needs to do his job, may be outcompeted by a bad guy who takes more of your resources and uses the surplus to defeat the good guy. Also, different countries expend resources on defending against each other. And you have corruption inside all organizations, including the government, the police, the army. The corruption costs resources, and so does fighting against it. It is a fractal of burning resources.

So... perhaps there is an economical law saying that this process continues until the available resources are exhausted (because otherwise, someone would be tempted to take some of the remaining resources, and then more resources would have to be spent to stop them). Unless there is some kind of "friction", such as people not knowing exactly how much money you have, or how exactly would you react if pushed further (where exactly is your "now I have nothing to lose anymore" point, when instead of providing the requested resources you start doing something undesired, even if doing so is likely to hurt you more); or when it becomes too difficult for the government to coordinate to take each available penny (because their oversight and money extraction also have a cost). And making the situation more transparent reduces this "friction".

In this model, the difference between the "good guy" and the "bad guy" becomes smaller than you might expect, simply because the good guy still needs (your) resources to fight against the bad guy, so he can't leave you alone either.

comment by Benquo · 2019-03-16T00:20:27.517Z · LW(p) · GW(p)

I don’t think the 100% tax rate argument works, for several reasons:

  • 100% is not the short-run maximum extraction rate (Cf “Laffer Curve,” which is explicitly short-term).
  • USGOVT is not really an agent here; some extractors taking all they can are subject to the top marginal tax rate & reallocate to themselves using subtler mechanisms like monetary policy and financial regulation (and deregulation, cyclically), boondoggles, other regulatory capture...
  • If you count other extraction points such as credentialism + high college tuition + need-based financial aid (mostly involving loans), hospital bills, lifetime extraction rate may be a lot higher.
Replies from: jessica.liu.taylor
comment by jessicata (jessica.liu.taylor) · 2019-03-16T00:28:23.508Z · LW(p) · GW(p)

Good point, I updated towards the extraction rate being higher than I thought (will edit my comment). Rich people do end up existing but they're rare and are often under additional constraints.

comment by Zvi · 2019-03-17T17:20:12.604Z · LW(p) · GW(p)

I will attempt to clarify which of these things I actually believe, as best I can, but do not expect to be able to engage deeper into the thread.

Implication: it's bad for people to have much more information about other people (generally), because they would reward/punish them based on that info, and such rewarding/punishing would be unjust. We currently have scapegoating, not justice. (Note that a just system for rewarding/punishing people will do no worse by having more information, and in particular will do no worse than the null strategy of not rewarding/punishing behavior based on certain subsets of information)

>> What I'm primarily thinking about here is that if one is going to be rewarded/punished for what one does and thinks, one chooses what one does and thinks largely based upon that - you have a signaling equilibrium, as Wei Dai notes in his top-level comment. I believe that this in many situations is much worse, and will lead to massive warping of behavior in various ways, even if those rewarding/punishing were attempting to be just (or even if they actually were just, if there wasn't both common knowledge of this and agreement on what is and isn't just). The primary concern isn't whether someone can expect to be on-net punished or rewarded, but how behaviors are changed.

We need people there with us who won’t judge us. Who won’t use information against us.

Implication: "judge" means to use information against someone. Linguistic norms related to the word "judgment" are thoroughly corrupt enough that it's worth ceding to these, linguistically, and using "judge" to mean (usually unjustly!) using information against people.

>> Judge here means to react to information about someone or their actions or thoughts largely by updating their view of the person - to not have to worry (as much, at least) about how things make you seem. The second sentence is a second claim, that we also need them not to use the information against us. I did not intend for the second to seem to be part of the first.

A complete transformation of our norms and norm principles, beyond anything I can think of in a healthy historical society, would be required to even attempt full non-contextual strong enforcement of all remaining norms.

Implication (in the context of the overall argument): a general reduction in privacy wouldn't lead to norms changing or being enforced less strongly, it would lead to the same norms being enforced strongly. Whatever or whoever decides which norms to enforce and how to enforce them is reflexive rather than responsive to information. We live in a reflex-based control system.

>> That doesn't follow at all, and I'm confused why you think that it does. I'm saying that when I try to design a norm system from scratch in order to be compatible with full non-contextual strong enforcement, I don't see a way to do that. Not that things wouldn't change - I'm sure they would.

There are also known dilemmas where any action taken would be a norm violation of a sacred value.

Implication: the system of norms is so corrupt that they will regularly put people in situations where they are guaranteed to be blamed, regardless of their actions. They won't adjust even when this is obvious.

>> The system of norms is messy, which is different than corrupt. Different norms conflict. Yes, the system is corrupt, but that's not required for this to be a problem. Concrete example, chosen to hopefully be not controversial: Either turn away the expensive sick child patient, or risk bankrupting the hospital.

Part of the job of making sausage is to allow others not to see it. We still get reliably disgusted when we see it.

Implication: people expect to lose value by knowing some things. Probably, it is because they would expect to be punished due to it being revealed they know these things (as in 1984). It is all an act, and it's better not to know that in concrete detail.

>> Consider the literal example of sausage being made. The central problem is not that people are afraid the sausage makers will strike back at them. The problem is knowing reduces one's ability to enjoy sausage. Alternatively, it might force one to stop enjoying sausage.

>> Another important dynamic is that we want to enforce a norm that X is bad and should be minimized. But sometimes X is necessary. So we'd rather not be too reminded of the X that is necessary in some situations where we know X must occur, to avoid weakening the norm against X elsewhere, and because we don't want to penalize those doing X where it is necessary as we would instinctively do if we learned too much detail.

We constantly must claim ‘everything is going to be all right’ or ‘everything is OK.’ That’s never true. Ever.

Implication: the control system demands optimistic stories regardless of the facts. There is something or someone forcing everyone to call the deer a horse under threat of punishment, to maintain a lie about how good things are, probably to prop up an unjust regime.

>> OK, this one's just straight up correct if you remove the unjust regime part. Also, I am married with children.

But these problems, while improved, wouldn’t go away in a better or less hypocritical time. Norms are not a system that can have full well-specified context dependence and be universally enforced. That’s not how norms work.

Implication: even in the most just possible system of norms, it would be good to sometimes violate those norms and hide the fact that you violated them. (This seems incorrect to me!)

>> As I noted above, my model of norms is that they are even at their best messy ways of steering behavior, and generally just norms will in some circumstances push towards incorrect action in ways the norm system would cause people to instinctively punish. In such cases it is sometimes correct to violate the norm system, even if it is as just a system as one could hope for. And yes, in some of those cases, it would be good to hide that this was done, to avoid weakening norms (including by allowing such cases to go unpunished, thus enabling otherwise stronger punishment).

If others know exactly what resources we have, they can and will take all of them.

Implication: the bad guys won; we have rule by gangsters, who aren't concerned with sustainable production, and just take as much stuff as possible in the short term. (This seems on the right track but partially false; the top marginal tax rate isn't 100% [EDIT: see Ben's comment, the actual rate of extraction is higher than the marginal tax rate])

>> This is not primarily a statement about The Powers That Be or any particular bad guys. I think this is inherent in how people and politics operate, and what happens when one has many conflicting would-be sacred values. Of course, it is also a statement that when gangsters do go after you, it is important that they not know, and there is always worry about potential gangsters on many levels whether or not they have won. Often the thing taking all your resources is not a bad guy - e.g. expensive medical treatments, or in-need family members, etc etc.

If it is known how we respond to any given action, others find best responses. They will respond to incentives. They exploit exactly the amount we won’t retaliate against. They feel safe.

Implication: more generally available information about what strategies people are using helps "our" enemies more than it helps "us". (This seems false to me, for notions of "us" that I usually use in strategy)

>> Often on the margin more information is helpful. But complete information is highly dangerous. And in my experience, most systems in an interesting equilibrium where good things happen sustain that partly with fuzziness and uncertainty - the idea that obeying the spirit of the rules and working towards the goals and good things gets rewarded, other action gets punished, in uncertain ways. There need to be unknowns in the system. Competitions where every action by other agents is known are one-player games about optimization and exploitation.

World peace, and doing anything at all that interacts with others, depends upon both strategic confidence in some places, and strategic ambiguity in others. We need to choose carefully where to use which.

Implication (in context): strategic ambiguity isn't just necessary for us given our circumstances, it's necessary in general, even if we lived in a surveillance state. (Huh?)

>> Strategic ambiguity is necessary for the surveillance state so that people can't do everything the state didn't explicitly punish/forbid. It is necessary for those living in the state, because the risk of revolution, the we're-not-going-to-take-it-anymore moment, helps keep such places relatively livable versus places where there is no such fear. It is important that you don't know exactly what will cause the people to rise up, or you'll treat them exactly as badly as you can without provoking it. And of course I was also talking explicitly about things like 'if you cross that border we will be at war' - there are times when you want to be 100% clear that there will be war (e.g. NATO) and others where you want to be 100% unclear (e.g. Taiwan).

To conclude: if you think the arguments in this post are sound (with the conclusion being that we shouldn't drastically reduce privacy in general), you also believe the implications I just listed, unless I (or you) misinterpreted something.

>> I hope this cleared things up. And of course, you can disagree with many, most or even all my arguments and still not think we should radically reduce privacy. Radical changes don't default to being a good idea if someone gives invalid arguments against them!

comment by cousin_it · 2019-03-17T07:38:20.800Z · LW(p) · GW(p)

I agree that privacy would be less necessary in a hypothetical world of angels. But I don't find it convincing that removing privacy would bring about such a world, and arguments of this type (let's discard a human right like property / free speech / privacy, and a world of angels will result) have a very poor track record.

Replies from: jessica.liu.taylor
comment by jessicata (jessica.liu.taylor) · 2019-03-17T07:44:46.156Z · LW(p) · GW(p)

Why do you think I'm arguing against privacy in my comment (the one you replied to)? I don't think I've been taking a strong stance on it.

Replies from: cousin_it
comment by cousin_it · 2019-03-17T08:00:01.007Z · LW(p) · GW(p)

I think you have been. In every comment you try to cast doubt on justifications for privacy.

Replies from: jessica.liu.taylor
comment by jessicata (jessica.liu.taylor) · 2019-03-17T08:45:51.848Z · LW(p) · GW(p)

That isn't the same as arguing against privacy. If someone says "I think X because Y" and I say "Y is false for this reason" that isn't (necessarily) arguing against X. People can have wrong reasons for correct beliefs.

It's epistemically harmful to frame efforts towards increasing local validity [LW · GW] as attempts to control the outcome of a discussion process; they're good independent of whether they push one way or the other in expectation.

In other words, you're treating arguments as soldiers here.

(Additionally, in the original comment, I was mostly not saying that Zvi's arguments were unsound (although I did say that for a few), but that they reflected a certain background understanding of how the world works)

Replies from: cousin_it
comment by cousin_it · 2019-03-17T09:09:34.859Z · LW(p) · GW(p)

Let's get back to the world of angels problem. You do seem to be saying that removing privacy would get us closer to a world of angels. Why?

Replies from: jessica.liu.taylor
comment by jessicata (jessica.liu.taylor) · 2019-03-17T09:13:35.308Z · LW(p) · GW(p)

You do seem to be saying that removing privacy would get us closer to a world of angels.

Where? (I actually think I am uncertain about this)

Replies from: cousin_it
comment by cousin_it · 2019-03-17T13:57:12.956Z · LW(p) · GW(p)

Implication: more generally available information about what strategies people are using helps “our” enemies more than it helps “us”. (This seems false to me, for notions of “us” that I usually use in strategy)

Maybe I'm misreading and you're arguing that it will help us and enemies equally? But even that seems impossible. If Big Bad Wolf can run faster than Little Red Hood, mutual visibility ensures that Little Red Hood gets eaten.

Replies from: jessica.liu.taylor
comment by jessicata (jessica.liu.taylor) · 2019-03-17T17:44:48.433Z · LW(p) · GW(p)

OK, I can defend this claim, which seems different from the "less privacy means we get closer to a world of angels" claim; it's about asymmetric advantages in conflict situations.

In the example you gave, more generally available information about people's locations helps Big Bad Wolf more than Little Red Hood. If I'm strategically identifying with Big Bad Wolf then I want more information available, and if I'm strategically identifying with Little Red Hood then I want less information available. I haven't seen a good argument that my strategic position is more like Little Red Hood's than Big Bad Wolf's (yes, the names here are producing moral connotations that I think are off).

So, why would info help us more than our enemies? I think efforts to do big, important things (e.g. solve AI safety or aging) really often get derailed by predatory patterns (see Geeks, Mops, Sociopaths), which usually aren't obvious to the people cooperative with the original goal for a while. These patterns derail the group and cause it to stop actually targeting its original mission. It seems like having more information about strategies would help solve this problem.

Of course, it also gives the predators more information. But I think it helps defense more than offense, since there are more non-predators to start with than predators, and non-predators are (presently) at a more severe information disadvantage than the predators are, with respect to this conflict.

Anyway, I'm not that confident in the overall judgment, but I currently think more available info about strategies is good in expectation with respect to conflict situations.

Replies from: cousin_it
comment by cousin_it · 2019-03-18T08:08:44.087Z · LW(p) · GW(p)

Yes, less privacy leads to more conformity. But I don't think that will disproportionately help small projects that you like. Mostly it will help big projects that feed on conformity - ideologies and religions.

Replies from: jessica.liu.taylor
comment by jessicata (jessica.liu.taylor) · 2019-03-18T20:42:30.351Z · LW(p) · GW(p)

OK, you're right that less privacy gives significant advantage to non-generative conformity-based strategies, which seems like a problem. Hmm.

Replies from: Benquo
comment by Benquo · 2019-05-02T01:51:18.190Z · LW(p) · GW(p)

Only ones that don't structurally depend on huge levels of hypocrisy. People can lie. It's currently cheap and effective in a wide variety of circumstances. This does not make the lies true.

Replies from: Raemon
comment by Raemon · 2019-05-02T03:37:18.746Z · LW(p) · GW(p)

[edit: actually, I'm just generally confused about what the parent comment is claiming]

Replies from: Benquo
comment by Benquo · 2019-05-02T23:46:46.674Z · LW(p) · GW(p)

Conformity-based strategies only benefit from reductions in privacy, when they're based on actual conformity. If they're based on pretend/outer conformity, then they get exposed with less privacy.

Replies from: Raemon, clone of saturn
comment by Raemon · 2019-05-03T00:08:42.120Z · LW(p) · GW(p)

Ah, gotcha. Yeah that makes sense, although it in turn depends a lot on what you think happens when lack-of-privacy forces the strategy to adapt.

(note: following comment didn't end up engaging with a strong version of the claim, and I ran out of time to think through other scenarios.)

If you have a workplace (with a low generativity strategy) in which people are supposed to work 8 hours, but they actually only work 2 (and goof off the rest of the time), and then suddenly everyone has access to exactly how much people work, I'd expect one of a few things to happen:

1. People actually start working harder

2. People actually end up getting 2 hour work days (and then go home)

3. People continue working for 2 hours and then goofing off (with or without maintaining some kind of plausible fiction – i.e. I could easily imagine that even with full information, people still maintain the polite fiction that people work 8 hours a day, and people only go to the efforts of directing attention to those who goof off when they are a political enemy. "Polite" society often seems to not just be about concealing information but actively choosing to look away)

4. People start finding things to do with their extra 6 hours that look enough like work (but are low effort / fun) that even though people could theoretically check on them and expose them, there'd still be enough plausible deniability that it'd require effort to expose them and punish them.

These options range in how good they are – hopefully you get 1 or 2 depending on how much more valuable the extra 6 hours are.

But none of them actually change the underlying fact that this business is pursuing a simple, collectivist strategy.

(this line of options doesn't really interface with the original claim that simple collective strategies are easier under a privacy-less regime, I think I'd have to look at several plausible examples to build up a better model and ran out of time to write this comment before, um, returning to work. [hi habryka])

Replies from: Raemon
comment by Raemon · 2019-05-03T00:41:53.592Z · LW(p) · GW(p)

I think the main thing is I can't think of many examples where it seems like the active-ingredient in the strategy is the conformity-that-would-be-ruined-by-information.

The most common sort of strategy I'm imagining is "we are a community that requires costly signals for group membership" (i.e. strict sexual norms, subscribing to and professing the latest dogma, giving to the poor), but costly signals are, well, costly, so there's incentive for people to pretend to meet them without actually doing so.

If it became common knowledge that nobody or very few people were "really" doing the work, one thing that might happen is that the community's bonds would weaken or disintegrate. But I think these sorts of social norms would mostly just adapt to the new environment, in one of a few ways:

  • come up with new norms that are more complicated, such that it's harder to check (even given perfect information) whether someone is meeting them. I think this is what often happened in academia. (See jokes about postmodernism, where people can review each other's work, but the work is sort of deliberately inscrutable so it's hard to see if it says anything meaningful)
  • people just develop a norm of not checking in on each other (cooperating for the sake of preserving the fiction), and scrutiny is only actually deployed against political opponents.

(The latter one at least creates an interesting mutually assured destruction thing that probably makes people less willing to attack each other openly, but humans also just seem pretty good at taking social games into whatever domain seems most plausibly deniable)

comment by clone of saturn · 2019-05-03T00:05:23.651Z · LW(p) · GW(p)

Only if you assume everyone loses an equal amount of privacy.

comment by Raemon · 2019-03-15T22:53:52.943Z · LW(p) · GW(p)

I think you're pointing in an important direction, but your phrasing sounds off to me.

(In particular, 'scapegoating' feels like a very different frame than the one I'd use here)

If I think out loud, especially about something I'm uncertain about, that other people have opinions on, a few things can happen to me:

  • Someone who overhears part of my thought process might think (correctly, even!) that my thought process reveals that I am not very smart. Therefore, they will be less likely to hire me. This is punishment, but it's very much not "scapegoating" style punishment.
  • Someone who overhears my private thought process might (correctly, or incorrectly! either) come to think that I am smart, and be more likely to hire me. This can be just as dangerous. In a world where all information is public, I have to attend to how the process by which I act and think looks. I am incentivized to think in ways that are legibly good.
  • "Judgment" is dangerous to me (epistemically) even if the judgment is positive, because it incentives me against exploring paths that look bad, or are good for incomprehensible reasons.

    Replies from: Benquo, jessica.liu.taylor, jessica.liu.taylor
    comment by Benquo · 2019-03-16T00:08:18.264Z · LW(p) · GW(p)

    This seems like a general argument that providing evidence without trying to control the conclusions others draw is bad because it leads to errors. It doesn’t seem to take into account the cost of reduced info flow or the possibility that the gatekeeper might also introduce errors. That’s before we even consider self-serving bias!

    Related: http://benjaminrosshoffman.com/humility-argument-honesty/

    TLDR: I literally do not understand how to interpret your comment as NOT a general endorsement of fraud and implicit declaration of intent to engage in it.

    Replies from: Raemon
    comment by Raemon · 2019-03-16T00:11:40.212Z · LW(p) · GW(p)

    My intent was not that it's "bad", just, if you do not attempt to control the conclusions of others, they will predictably form conclusions of particular types, and this will have effects. (It so happens that I think most people won't like those effects, and therefore will attempt to control the conclusions of others.)

    Replies from: Raemon
    comment by Raemon · 2019-03-16T00:14:21.776Z · LW(p) · GW(p)

    (I feel somewhat confused by the above comment, actually. Can you taboo "bad" and try saying it in different words?)

    Replies from: Benquo
    comment by Benquo · 2019-03-16T00:22:53.320Z · LW(p) · GW(p)

    Ah, if you literally just mean it increases variance & risk, that’s true in the very short term. In context it sounded to me like a policy argument against doing so, but on reflection it’s easy to read you as meaning the more reasonable thing. Thank you for explaining.

    Replies from: Raemon
    comment by Raemon · 2019-03-16T00:35:58.712Z · LW(p) · GW(p)

    Hmm. I think I meant something more like your second interpretation than your first interpretation but I think I actually meant a third thing and am not confident we aren't still misunderstanding each other.

    An intended implication, (which comes with an if-then suggestion, which was not an essential part of my original claim but I think is relevant) is:

    If you value being able to think freely and have epistemologically sound thoughts, it is important to be able to think thoughts that you will neither be rewarded nor punished for... [edit: or be extremely confident than you have accounted for your biases towards reward gradients]. And the rewards are only somewhat less bad than the punishments.

    A followup implication is that this is not possible to maintain humanity-wide if thought-privacy is removed (which legalizing blackmail would contribute somewhat towards). And that this isn't just a fact about our current equilibria, it's intrinsic to human biology.

    It seems plausible (although I am quite skeptical) that a small group of humans might be able to construct an epistemically sound world that includes lack-of-intellectual-privacy, but they'd have to have correctly accounted for a wide variety of subtle errors.

    [edit: all of this assumes you are running on human wetware. If you remove that as a constraint other things may be possible]

    Replies from: Raemon, DanielFilan, Raemon
    comment by Raemon · 2019-03-16T20:17:11.368Z · LW(p) · GW(p)

    further update: I do think rewards are something like 10x less problematic than punishments, because humans are risk averse and fear punishment more than they desire reward. ("10x" is a stand-in for "whatever the psychological research says on how big the difference is between human response to rewards and punishments")

    Replies from: Dagon
    comment by Dagon · 2019-03-18T21:53:16.750Z · LW(p) · GW(p)

    [note: this subthread is far afield from the article - LW is about publication, not private thoughts (unless there's a section I don't know about where only specifically invited people can see things) . And LW karma is far from the sanctions under discussion in the rest of the post.]

    Have you considered things to reduce the asymmetric impact of up- and down-votes? Cap karma value at -5? Use downvotes as a divisor for upvotes (say, score is upvotes / (1 + 0.25 * downvotes)) rather than simple subtraction?
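    A minimal sketch of the scoring alternatives proposed above (hypothetical functions for illustration, not LW's actual karma implementation; the `0.25` weight and `-5` floor are the values suggested in the comment):

    ```python
    def score_subtraction(up: int, down: int) -> float:
        """Baseline: simple subtraction, where each downvote fully cancels an upvote."""
        return up - down

    def score_capped(up: int, down: int, floor: int = -5) -> float:
        """Alternative 1: subtraction, but the score can never fall below a floor,
        limiting how punishing a pile-on of downvotes can feel."""
        return max(up - down, floor)

    def score_divisor(up: int, down: int, weight: float = 0.25) -> float:
        """Alternative 2: downvotes divide rather than subtract, so they dampen
        a positive score instead of driving it arbitrarily negative."""
        return up / (1 + weight * down)
    ```

    Under the divisor scheme a heavily downvoted post approaches zero rather than going deeply negative, which is one way to make downvotes sting less than upvotes reward.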

    Replies from: Raemon
    comment by Raemon · 2019-03-19T20:17:11.107Z · LW(p) · GW(p)

    We've thought about things in that space, although any of the ideas would be a fairly major change, and we haven't come up with anything we feel good enough about to commit to.

    (We have done some subtle things to avoid making downvotes feel worse than they need to, such as not including the explicit number of downvotes)

    comment by DanielFilan · 2019-03-17T01:23:48.202Z · LW(p) · GW(p)

    Do you think that thoughts are too incentivised or not incentivised enough on the margin, for the purpose of epistemically sound thinking? If they're too incentivised, have you considered dampening LW's karma system? If they're not incentivised enough, what makes you believe that legalising blackmail will worsen the epistemic quality of thoughts?

    Replies from: Viliam, Raemon
    comment by Viliam · 2019-03-17T16:33:11.067Z · LW(p) · GW(p)

    The LW karma obviously has its flaws, per Goodhart's law. It is used anyway, because the alternative is having other problems, and for the moment this seems like a reasonable trade-off.

    The punishment for "heresies" is actually very mild. As long as one posts respected content in general, posting a "heretical" comment every now and then does not ruin their karma. (Compare to people having their lives changed dramatically because of one tweet.) The punishment accumulates mostly for people whose only purpose here is to post "heresies". Also, LW karma does not prevent anyone from posting "heresies" on a different website. Thus, people can keep positive LW karma even if their main topic is talking how LW is fundamentally wrong as long as they can avoid being annoying (for example by posting hundred LW-critical posts on their personal website, posting a short summary with hyperlinks on LW, and afterwards using LW mostly to debate other topics).

    Blackmail typically attacks you in real life, i.e. you can't limit the scope of impact. If losing an online account on a website X would be the worst possible outcome of one's behavior at the website X, life would be easy. (You would only need to keep your accounts on different websites separated from each other.) It was already mentioned somewhere in this debate that blackmail often uses the difference between norms in different communities, i.e. that your local-norm-following behavior in one context can be local-norm-breaking in another context. This is quite unlike LW karma.

    comment by Raemon · 2019-03-18T20:32:31.534Z · LW(p) · GW(p)

    I'd say thoughts aren't incentivized enough on the margin, but:

    1. A major bottleneck is how fine-tuned and useful the incentives are. (i.e. I'd want to make LW karma more closely track "reward good epistemic processes" before I made the signal stronger. I think it currently tracks that well enough that I prefer it over no-karma).

    2. It's important that people can still have private thoughts separate from the LW karma system. LW is where you come when you have thoughts that seem good enough to either contribute to the commons, or to get feedback on so you can improve your thought process... after having had time to mull things over privately without worrying about what anyone will think of you.

    (But, I also think, on the margin, people should be much less scared about sharing their private thoughts than they currently are. Many people seem to be scared about sharing unfinished thoughts at all, and my actual model of what is "threatening" says that there's a much narrower domain where you need to be worried in the current environment)

    3. One conscious decision we made was to not display "number of downvotes" on a post (we tried it out privately for admins for awhile). Instead we just included "total number of votes". Explicitly knowing how much one's post got downvoted felt much worse than having a vague sense of how good it was overall + a rough sense of how many people *may* have downvoted it. This created a stronger punishment signal than seemed actually appropriate.

    comment by Raemon · 2019-03-16T00:42:39.641Z · LW(p) · GW(p)

    (Separately, I am right now making arguments in terms that I'm fairly confident both of us value, but I also think there are reasons to want private thoughts that are more like "having a Raemon_healthy soul", than like being able to contribute usefully to the intellectual commons.

    (I noticed while writing this that the latter might be most of what a Benquo finds important for having a healthy soul, but unsure. In any case healthy souls are more complicated and I'm avoiding making claims about them for now)

    comment by jessicata (jessica.liu.taylor) · 2019-03-15T23:00:18.301Z · LW(p) · GW(p)

    If privacy in general is reduced, then they get to see others' thoughts too [EDIT: this sentence isn't critical, the rest works even if they can only see your thoughts]. If they're acting justly, then they will take into account that others might modify their thoughts to look smarter, and make basically well-calibrated (if not always accurate) judgments about how smart different people are. (People who are trying can detect posers a lot of the time, even without mind-reading). So, them having more information means they are more likely to make a correct judgment, hiring the smarter person (or, generally, whoever can do the job better). At worst, even if they are very bad at detecting posers, they can see everyone's thoughts and choose to ignore them, making the judgment they would make without having this information (But, they were probably already vulnerable to posers, it's just that seeing people's thoughts doesn't have to make them more vulnerable).

    Replies from: Raemon
    comment by Raemon · 2019-03-15T23:12:58.315Z · LW(p) · GW(p)
    If privacy in general is reduced, then they get to see others' thoughts too.

    This response seems mostly orthogonal to what I was worried about. It is quite plausible that most hiring decisions would become better in fully transparent (and also just?) world. But, fully-and-justly-transparent-world can still mean that fewer people think original or interesting thoughts because doing so is too risky.

    And I might think this is bad, not only because of fewer-objectively-useful thoughts get thunk, but also because... it just kinda sucks and I don't get to be myself?

    (As well as, fully-transparent-and-just-world might still be a more stressful world to live in, and/or involve more cognitive overhead because I need to model how others will think about me all the time. Hypothetically we could come to an equilibrium wherein we *don't* put extra effort into signaling legibly good thought processes. This is plausible, but it is indeed a background assumption of mine that this is not possible to run on human wetware)

    Replies from: jessica.liu.taylor
    comment by jessicata (jessica.liu.taylor) · 2019-03-15T23:17:44.771Z · LW(p) · GW(p)

    Regarding that sentence, I edited my comment at about the same time you posted this.

    But, fully-and-justly-transparent-world can still mean that fewer people think original or interesting thoughts because doing so is too risky.

    If someone taking a risk is good with respect to the social good, then the justice process should be able to see that they did that and reward them (or at least not punish them) for it, right? This gets easier the more information is available to the justice process.

    Replies from: Raemon
    comment by Raemon · 2019-03-15T23:29:53.791Z · LW(p) · GW(p)

    So, much of my thread was respond to this sentence:

    Implication: "judge" means to use information against someone.

    The point being, you can have entirely positive judgment, and have it still produce distortions. All that has to be true is that some forms of thought are more legibly good and get more rewarded, for a fully transparent system to start producing warped incentives on what sort of thoughts get thought.

    i.e. say I have four options of what to think about today:

  • some random innocuous status quo thought (neither gets me rewarded nor punished)
  • some weird thought that seems kind of dumb, which most of the time is evidence about being dumb, which occasionally pays off with something creative and neat. (I'm not sure what kind of world we're stipulating here. In some "just"-worlds, this sort of thought gets punished (because it's usually dumb). In some "just worlds" it gets rewarded (because everyone has cooperated on some kind of long term strategy). In some just-worlds it's hit or miss because there's a collection of people trying different strategies with their rewards.)
  • some heretical thought that seems actively dangerous, and only occasionally produces novel usefulness if I turn out to be real good at being contrarian.
  • a thought that is clearly, legibly good, almost certainly net positive, either by following well worn paths, or being "creatively out of the box" in a set of ways that are known to have pretty good returns.
    Even in one of the possible-just-worlds, it seems like you're going to incentivize the last one much more than the 2nd or 3rd.

    This isn't that different from the status quo – it's a hard problem that VC funders have an easier time investing in people doing something that seems obviously good, than someone with a genuinely weird, new idea. But I think this would crank that problem up to 11, even if we stipulate a just-world.

    ...

    Most importantly: the key implication I believe in, is that humans are not nearly smart enough at present to coordinate on anything like a just world, even if everyone were incredibly well intentioned. This whole conversation is in fact probably not possible for the average person to follow. (And this implication in this sentence right here right now is something that could get me punished in many circles, even by people trying hard to do the right thing. For reasons related to Overconfident talking down, humble or hostile talking up [LW · GW])

    Replies from: jessica.liu.taylor
    comment by jessicata (jessica.liu.taylor) · 2019-03-15T23:55:49.126Z · LW(p) · GW(p)

    Even in one of the possible-just-worlds, it seems like you’re going to incentivize the last one much more than the 2nd or 3rd.

    This is not responsive to what I said! If you can see (or infer) the process by which someone decided to have one thought or another, you can reward them for doing things that have higher expected returns, e.g. having heretical thoughts when heresy is net positive in expectation. If you can't implement a process that complicated, you can just stop punishing people for heresy, entirely ignoring their thoughts if necessary.

    the key implication I believe in, is that humans are not nearly smart enough at present to coordinate on anything like a just world, even if everyone were incredibly well intentioned. This whole conversation is in fact probably not possible for the average person to follow.

    Average people don't need to do it, someone needs to do it. The first target isn't "make the whole world just", it's "make some local context just". Actually, before that, it's "produce common knowledge in some local context that the world is unjust but that justice is desirable", which might actually be accomplished in this very thread, I'm not sure.

    And this implication in this sentence right here right now is something that could get me punished in many circles, even by people trying hard to do the right thing.

    Thanks for adding this information. I appreciate that you're making these parts of your worldview clear.

    Replies from: Raemon
    comment by Raemon · 2019-03-16T00:46:22.872Z · LW(p) · GW(p)
    This is not responsive to what I said! If you can see (or infer) the process by which someone decided to have one thought or another, you can reward them for doing things that have higher expected returns, e.g. having heretical thoughts when heresy is net positive in expectation.

    This was most of what I meant to imply. I am mostly talking about rewards, not punishments.

    I am claiming that rewards distort thoughts similarly to punishments, although somewhat more weakly because humans seem to respond more strongly to punishment than reward.

    Replies from: jessica.liu.taylor
    comment by jessicata (jessica.liu.taylor) · 2019-03-16T22:28:47.739Z · LW(p) · GW(p)

    You're continuing to miss the completely obvious point that a just process does no worse (in expectation) by having more information potentially available to it, which it can decide what to do with. Like, either you are missing really basic decision theory stuff covered in the Sequences or you are trolling.

    (Agree that rewards affect thoughts too, and that these can cause distortions when done unjustly)
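
(As an illustration of the decision-theory claim at issue, here is a toy model I am adding, with made-up numbers, not taken from the Sequences or this thread. It shows that an ideal expected-utility maximizer that can freely condition on, or ignore, a signal never does worse in expectation than one without the signal. Note the sketch assumes conditioning is computationally free, which is exactly the assumption the bounded-agent objection targets.)

```python
# Value-of-information sketch: a two-state, two-action decision problem.
# An ideal agent that can freely condition on (or ignore) a signal
# never does worse in expectation than an agent without the signal.

PRIOR = {0: 0.4, 1: 0.6}   # P(state)
ACCURACY = 0.8              # P(signal == state)

def utility(action, state):
    # Payoff 1 for matching the state, 0 otherwise.
    return 1.0 if action == state else 0.0

def eu_without_signal():
    # The best uninformed action maximizes prior expected utility.
    return max(sum(PRIOR[s] * utility(a, s) for s in PRIOR) for a in (0, 1))

def eu_with_signal():
    total = 0.0
    for sig in (0, 1):
        # Joint P(state, signal), then the posterior by Bayes' rule.
        joint = {s: PRIOR[s] * (ACCURACY if sig == s else 1 - ACCURACY)
                 for s in PRIOR}
        p_sig = sum(joint.values())
        posterior = {s: joint[s] / p_sig for s in joint}
        # The informed agent picks the best action per signal; at worst it
        # could ignore the signal and recover the uninformed value.
        best = max(sum(posterior[s] * utility(a, s) for s in posterior)
                   for a in (0, 1))
        total += p_sig * best
    return total

print(eu_without_signal())  # 0.6
print(eu_with_signal())     # ~0.8, never below the uninformed value
```

With these numbers the signal raises expected utility from 0.6 to 0.8; degrading the signal to pure noise (accuracy 0.5) brings it back to exactly 0.6, never below. The bounded-agent objection is that real humans pay attention and computation costs that this model sets to zero.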

    Replies from: Raemon
    comment by Raemon · 2019-03-16T23:06:18.553Z · LW(p) · GW(p)

    Yes, I disagree with that point, and I feel like you've been missing the completely obvious point that bounded agents have limited capabilities.

    Choices are costly [LW(p) · GW(p)].

    Choices are really costly.

    Your comments don't seem to be acknowledging that, so from my perspective you seem to be describing an Impossible Utopia (capitalized because I intend to write a post that encapsulates the concept of Which Utopias Are Possible), and so it doesn't seem very relevant.

    (I recall claims on LessWrong that a decision process can do no worse with more information, but I don't recall a compelling case that this was true for bounded human agents. Though I am interested if you have a post that responds to Zvi's claims in the Choices are Bad series, and/or a post that articulates what exactly you mean by "just", since it sounds like you're using it as a jargon term that's meant to encapsulate more information than I'm receiving right now.)

    I've periodically mentioned that my arguments are about "just worlds implemented on humans". "Just worlds implemented on non-humans or augmented humans" might be quite different, and I think it's worth talking about too.

    But the topic here is legalizing blackmail in a human world. So it matters how this will be implemented by median humans, who are responsible for most actions.

    Notice that in this conversation, where you and I are both smarter than average, it is not obvious to both of us what the correct answer is here, and we have spent some time arguing about it. When I imagine the average human town, or company, or community, attempting to implement a just world that includes blackmail and full transparency, I am imagining either a) lots more time being spent trying to figure out the right answer, or b) people getting wrong answers all the time.

    Replies from: jessica.liu.taylor
    comment by jessicata (jessica.liu.taylor) · 2019-03-16T23:25:04.979Z · LW(p) · GW(p)

    The two posts you linked are not even a little relevant to the question of whether, in general, bounded agents do better or worse by having more information. (Yes, choice paralysis might make some information about what choices you have costly, but more info also reduces choice paralysis by increasing certainty about how good the different options are, and the posts make no claim about the overall direction of info being good or bad for bounded agents.) To avoid feeding the trolls, I'm going to stop responding here.

    Replies from: Raemon
    comment by Raemon · 2019-03-18T20:23:27.461Z · LW(p) · GW(p)

    I'm not trolling. I have some probability on me being the confused one here. But given the downvote record above, it seems like the claims you're making are at least less obvious than you think they are.

    If you value those claims being treated as obvious-things-to-build-off-of by the LW commentariat, you may want to expand on the details or address confusions about them at some point.

    But, I do think it is generally important for people to be able to tap out of conversations whenever the conversation is seeming low value, and seems reasonable for this thread to terminate.

    Replies from: Vladimir_Nesov
    comment by Vladimir_Nesov · 2019-03-19T03:59:52.945Z · LW(p) · GW(p)

    I have some probability on me being the confused one here.

    In conversations like this, both sides are confused, that is, neither understands the other's point, so "who is the confused one" is already an incorrect framing. One of you may be factually correct, but that doesn't really matter for making a conversation work; understanding each other is more relevant.

    (In this particular case, I think both of you are correct and fail to see what the other means, but Jessica's point is harder to follow and pattern-matches misleading things, hence the balance of votes.)

    Replies from: habryka4, Raemon
    comment by habryka (habryka4) · 2019-03-19T04:35:23.623Z · LW(p) · GW(p)

    (I downvoted some of Jessica's comments, mostly only in the cases where I thought she was not putting in a good faith effort to try to understand what her interlocutor is trying to say, like her comment upstream in the thread. Saying that talking to someone is equivalent to feeding trolls is rarely a good move, and seems particularly bad in situations where you are talking about highly subjective and fuzzy concepts. I upvoted all of her comments that actually made points without dismissing other people's perspectives, so in my case, I don't really think that the voting patterns are a result of her ideas being harder to follow, and more the result of me perceiving her to be violating certain conversational norms)

    comment by Raemon · 2019-03-19T04:35:13.652Z · LW(p) · GW(p)
    In conversations like this, both sides are confused,

    Nod. I did actually consider a more accurate version of the comment that said something like "at least one of us is at least somewhat confused about something", but by the time we got to this comment I was just trying to disengage while saying the things that seemed most important to wrap up with.

    Replies from: Vladimir_Nesov
    comment by Vladimir_Nesov · 2019-03-19T04:54:30.696Z · LW(p) · GW(p)

    Nod. I did actually consider a more accurate version of the comment that said something like "at least one of us is at least somewhat confused about something" [...]

    The clarification doesn't address what I was talking about, or else disagrees with my point, so I don't see how that can be characterised with a "Nod". The confusion I refer to is about what the other means, with the question of whether anyone is correct about the world irrelevant. And this confusion is significant on both sides, otherwise a conversation doesn't go off the rails in this way. Paying attention to truth is counterproductive when intended meaning is not yet established, and you seem to be talking about truth, while I was commenting about meaning.

    Replies from: Raemon
    comment by Raemon · 2019-03-19T07:37:49.995Z · LW(p) · GW(p)

    Hmm. Well I am now somewhat confused what you mean. Say more? (My intention was for ‘at least one of us is confused’ to be casting a fairly broad net that included ‘confused about the world’, or ‘confused about what each other meant by our words’, or ‘confused... on some other level that I couldn’t predict easily.’)

    comment by jessicata (jessica.liu.taylor) · 2019-03-15T23:22:02.033Z · LW(p) · GW(p)

    (In particular, ‘scapegoating’ feels like a very different frame than the one I’d use here)

    Having read Zvi's post and my comment, do you think the norm-enforcement process is just, or even not very unjust? If not, what makes it not scapegoating?

    Replies from: Raemon
    comment by Raemon · 2019-03-15T23:36:05.768Z · LW(p) · GW(p)

    I think scapegoating has a particular definition – blaming someone for something that they didn't do because your social environment demands someone get blamed. And that this isn't relevant to most of my concerns here. You can get unjustly punished for things that have nothing to do with scapegoating.

    Replies from: jessica.liu.taylor
    comment by jessicata (jessica.liu.taylor) · 2019-03-15T23:44:15.256Z · LW(p) · GW(p)

    Good point. I think there is a lot of scapegoating (in the sense you mean here) but that's a further claim than that it's unjust punishment, and I don't believe this strongly enough to argue it right now.

    comment by BurntVictory · 2019-03-15T23:30:41.540Z · LW(p) · GW(p)

    I found this pretty useful--Zvi's definitely reflecting a particular, pretty negative view of society and strategy here. But I disagree with some of your inferences, and I think you're somewhat exaggerating the level of gloom-and-doom implicit in the post.

    >Implication: "judge" means to use information against someone. Linguistic norms related to the word "judgment" are thoroughly corrupt enough that it's worth ceding to these, linguistically, and using "judge" to mean (usually unjustly!) using information against people.

    No, this isn't bare repetition. I agree with Raemon that "judge" here means something closer to one of its standard usages, "to make inferences about". Though it also fits with the colloquial "deem unworthy for baring [understandable] flaws", which is also a thing that would happen with blackmail and could be bad.

    >Implication: more generally available information about what strategies people are using helps "our" enemies more than it helps "us". (This seems false to me, for notions of "us" that I usually use in strategy)

    I can imagine a couple things going on here? One, if the world is a place where many more vulnerabilities are known, this incentivizes more people to specialize in exploiting those vulnerabilities. Two, as a flawed human there are probably some stressors against which you can't credibly play the "won't negotiate with terrorists" card.


    >Implication: even in the most just possible system of norms, it would be good to sometimes violate those norms and hide the fact that you violated them. (This seems incorrect to me!)

    I think the assumption is these are ~baseline humans we're talking about, and most human brains can't hold norms of sufficient sophistication to capture true ethical law, and are also biased in ways that will sometimes strain against reflectively-endorsed ethics (e.g. they're prone to using constrained circles of moral concern rather than universality).


    >Implication: the bad guys won; we have rule by gangsters, who aren't concerned with sustainable production, and just take as much stuff as possible in the short term. (This seems on the right track but partially false; the top marginal tax rate isn't 100%)

    This part of the post reminded me of (the SSC review of) Seeing Like a State, which makes a similar point; surveying and 'rationalizing' farmland, taking a census, etc. = legibility = taxability. "all of them" does seem like hyperbole here. I guess you can imagine the maximally inconvenient case where motivated people with low cost of time and few compunctions know your resources and full utility function, and can proceed to extract ~all liquid value from you.

    Replies from: jessica.liu.taylor
    comment by jessicata (jessica.liu.taylor) · 2019-03-15T23:39:55.203Z · LW(p) · GW(p)

    I agree with Raemon that “judge” here means something closer to one of its standard usages, “to make inferences about”.

    The post implies it is bad to be judged. I could have misinterpreted why, but that implication is there. If judge just meant "make inferences about" why would it be bad?

    One, if the world is a place where may more vulnerabilities are more known, this incentivizes more people to specialize in exploiting those vulnerabilities.

    But it also helps in knowing who's exploiting them! Why does it give more advantages to the "bad" side?

    Two, as a flawed human there are probably some stressors against which you can’t credibly play the “won’t negotiate with terrorists” card.

    Why would you expect the terrorists to be miscalibrated about this before the reduction in privacy, to the point where they think people won't negotiate with them when they actually will, and less privacy predictably changes this opinion?

    I think the assumption is these are ~baseline humans we’re talking about, and most human brains can’t hold norms of sufficient sophistication to capture true ethical law

    Perhaps the optimal set of norms for these people is "there are no rules, do what you want". If you can improve on that, then that would constitute a norm-set that is more just than normlessness. Capturing true ethical law in the norms most people follow isn't necessary.

    I guess you can imagine the maximally inconvenient case where motivated people with low cost of time and few compunctions know your resources and full utility function, and can proceed to extract ~all liquid value from you.

    Sure, but doesn't it help me against them too?

    Replies from: BurntVictory
    comment by BurntVictory · 2019-03-16T01:34:55.594Z · LW(p) · GW(p)
    The post implies it is bad to be judged. I could have misinterpreted why, but that implication is there. If judge just meant "make inferences about" why would it be bad?

    As Raemon says, knowing that others are making correct inferences about your behavior means you can't relax. No, idk, watching soap operas, because that's an indicator of being less likely to repay your loans, and your premia go up. There's an ethos of slack, decisionmaking-has-costs, strategizing-has-costs that Zvi's explored in his previous posts, and that's part of how I'm interpreting what he's saying here.

    But it also helps in knowing who's exploiting them! Why does it give more advantages to the "bad" side?

    Sure, but doesn't it help me against them too?

    You don't want to spend your precious time on blackmailing random jerks, probably. So at best, now some of your income goes toward paying a white-hat blackmailer to fend off the black-hats. (Unclear what the market for that looks like. Also, black-hatters can afford to specialize in unblackmailability; it comes up much more often for them than the average person.) You're right, though, that it's possible to have an equilibrium where deterrence dominates and the black-hatting incentives are low, in which case maybe the white-hat fees are low and now you have a white-hat deterrent. So this isn't strictly bad, though my instinct is that it's bad in most plausible cases.

    Why would you expect the terrorists to be miscalibrated about this before the reduction in privacy, to the point where they think people won't negotiate with them when they actually will, and less privacy predictably changes this opinion?

    That's a fair point! A couple of counterpoints: I think risk-aversion of 'terrorists' helps. There's also a point about second-order effects again; the easier it is to blackmail/extort/etc., the more people can afford to specialize in it and reap economies of scale.

    Perhaps the optimal set of norms for these people is "there are no rules, do what you want". If you can improve on that, then that would constitute a norm-set that is more just than normlessness. Capturing true ethical law in the norms most people follow isn't necessary.

    Eh, sure. My guess is that Zvi is making a statement about norms as they are likely to exist in human societies with some level of intuitive-similarity to our own. I think the useful question here is like "is it possible to instantiate norms s.t. norm-violations are ~all ethical-violations". (we're still discussing the value of less privacy/more blackmail, right?) No-rule or few-rule communities could work for this, but I expect it to be pretty hard to instantiate them at large scale. So sure, this does mean you could maybe build a small local community where blackmail is easy. That's even kind of just what social groups are, as Zvi notes; places where you can share sensitive info because you won't be judged much, nor attacked as a norm-violator. Having that work at super-Dunbar level seems tough.

    Replies from: jessica.liu.taylor
    comment by jessicata (jessica.liu.taylor) · 2019-03-16T22:00:48.062Z · LW(p) · GW(p)

    As Raemon says, knowing that others are making correct inferences about your behavior means you can’t relax. No, idk, watching soap operas, because that’s an indicator of being less likely to repay your loans, and your premia go up.

    This is really, really clearly false!

    1. This assumes that, upon more facts being revealed, insurance companies will think I am less (not more) likely to repay my loans, by default (e.g. if I don't change my TV viewing behavior).
    2. More egregiously, this assumes that I have to keep putting in effort into reducing my insurance premiums until I have no slack left, because these premiums really, really, really matter. (I don't even spend that much on insurance premiums!)

    If you meant this more generally, and insurance was just a bad example, why is the situation worse in terms of slack than it was before? (I already have the ability to spend leisure time on gaining more money, signalling, etc.)

    Replies from: Benquo, BurntVictory
    comment by BurntVictory · 2019-03-18T01:28:51.849Z · LW(p) · GW(p)

    It's true the net effect is low to first order, but you're neglecting second-order effects. If premia are important enough, people will feel compelled to Goodhart the proxies used for them until those proxies have less meaning.

    Given the linked siderea post, maybe this is not very true for insurance in particular. I agree that wasn't a great example.

    Slack-wise, uh, choices are bad. really bad. Keep the sabbath. These are some intuitions I suspect are at play here. I'm not interested in a detailed argument hashing out whether we should believe that these outweigh other factors in practice across whatever range of scenarios, because it seems like it would take a lot of time/effort for me to actually build good models here, and opportunity costs are a thing. I just want to point out that these ideas seem relevant for correctly interpreting Zvi's position.

    comment by ChristianKl · 2019-03-16T09:59:28.833Z · LW(p) · GW(p)
    Implication: it's bad for people to have much more information about other people (generally), because they would reward/punish them based on that info, and such rewarding/punishing would be unjust.

    I don't think that's a necessary implication. In a world where people live in fear of being punished, they will be able to act in ways that avoid unjust punishment. That world is still one where people suffer from living in fear.

    Replies from: jessica.liu.taylor
    comment by jessicata (jessica.liu.taylor) · 2019-03-16T20:19:23.332Z · LW(p) · GW(p)

    Whence fear of unjust punishment if there is no unjust punishment? Hypothetically there could be (justified) fear of a counterfactual that never happens, but this isn't a stable arrangement (in practice, some people will not work as hard to avoid the unjust punishment, and so will get punished).

    Replies from: ChristianKl
    comment by ChristianKl · 2019-03-17T00:13:35.781Z · LW(p) · GW(p)

    Most people who have fear of heights don't often fall in a way that hurts them.

    comment by Benquo · 2019-03-16T00:09:45.763Z · LW(p) · GW(p)

    Your notion of trust seems like it's a conflation of two opposite things meant by the word.

    The first relates to coordination towards clarity, a norm of using info to improve the commons. The second is about covering for each other in an environment where information is mainly used to extract things from others.

    Related: http://benjaminrosshoffman.com/humility-argument-honesty/ http://benjaminrosshoffman.com/against-neglectedness/ http://benjaminrosshoffman.com/model-building-and-scapegoating/

    Replies from: Zvi
    comment by Zvi · 2019-03-17T16:48:57.124Z · LW(p) · GW(p)

    I replied to this comment on my blog (https://thezvi.wordpress.com/2019/03/15/privacy/#comment-3827)

    comment by sapphire (deluks917) · 2019-03-16T17:16:41.639Z · LW(p) · GW(p)

    Ben Hoffman's views on privacy are downstream of a very extreme world model. On http://benjaminrosshoffman.com/blackmailers-are-privateers-in-the-war-on-hypocrisy a person comments under the name 'declaration of war' and Ben says:

    I was a little surprised to see someone else express opinions so similar to my true feelings here (which are stronger than my endorsed opinions), but they’re not me.

    Here are two relevant quotes:

    It's not surprising if privacy has value for the person preserving it. It's very surprising if it has social value.

    Trivially, information puts people in better positions to make decisions. If it doesn't, it logically has to be due to their perverse behaviors.

    It seems self-evident that we are all MASSIVELY worse off because sexuality is somewhat shrouded in secrecy. If we don't agree on that point, not regarding what happens on the margins, but regarding global policy, I simply consider you to be part of rape culture and possibly it would be immoral to blackmail you rather than simply exposing you unconditionally.

    Another (in the context of sexuality and privacy)

    Coordinated concealing information is always about perpetuating patterns of abuse.

    Ben says his endorsed views are not this extreme but he certainly seems to have some extreme views about whether sharing more information is almost always good. His position on this is presumably downstream of how 'perverse' he thinks human society is. I personally think that it is pretty obvious that, in currently existing society, sharing more information is not almost always good for society. And that privacy is not primarily a way to prevent abuse.

    A society with no privacy is essentially a society of perfect norm and law enforcement. I do not think that would be a good society. Ben and others presumably agree many current norms and laws are quite bad. But they also seem to think that in a world without privacy all norms and laws would become just. Perhaps the central crux is 'in a world without privacy would laws and norms automatically become just?'.

    Replies from: Benquo
    comment by Benquo · 2019-03-17T06:59:16.532Z · LW(p) · GW(p)

    It seems to me like you changed the subject halfway through your comment, from systemic to marginal effects. I’m on the record as having very different opinions about the two.

    Your description of the crux seems too extreme to me, but I do think it’s pretty likely - and certainly not obviously false as Zvi seems to think - that in a world without privacy, nasty power structures would pay a heavy price.

    comment by Douglas_Knight · 2019-03-16T21:44:24.933Z · LW(p) · GW(p)
    I argue in the last piece that it is common even now for people to engineer blackmail material against others and often also against themselves, to allow it to be used as collateral and leverage. That a large part of job interviews is proving that you are vulnerable in these ways.

    I don't see anything about existing practices for job interviews in the previous piece.

    comment by Dagon · 2019-03-15T20:42:34.868Z · LW(p) · GW(p)

    There's another largely-unaddressed element to the debate: underlying freedoms of transaction and of information-handling. All of the arguments about blackmail are about it as an incentive for something - why are we not debating the things themselves? Arguments against gossip and investigation are not necessarily arguments against blackmail.

    Before addressing the incentives, you should seek clarity/agreement on what behaviors you're trying to encourage and prevent. I still have heard very few examples of things that are acceptable without money involved (investigating and publishing someone out of spite or for social one-upmanship) and that become unacceptable only because of the blackmail.

    Replies from: TAG
    comment by TAG · 2019-03-15T22:30:11.395Z · LW(p) · GW(p)

    Some things are acceptable in small quantities but unacceptable in large ones. You don't want to incentivise those things.

    Replies from: Dagon
    comment by Dagon · 2019-03-16T16:12:14.676Z · LW(p) · GW(p)
    Some things are acceptable in small quantities but unacceptable in large ones. You don't want to incentivise those things.

    This takes some unpacking. For things that are acceptable on small scales and not large ones, should we prohibit the scale rather than the act? The status quo is that blackmail is frowned upon, but not prosecuted unless particularly noteworthy. That bugs me from a rule-implementation standpoint, but may be ideal in a practical sense.

    Replies from: TheAncientGeek
    comment by TheAncientGeek · 2019-03-17T10:32:49.541Z · LW(p) · GW(p)

    We do have some laws that are explicit about scale, for instance speed limits and blood alcohol levels. However, not everything is easily quantified. Money changing hands can be a proxy for something reaching too large a scale.

    Replies from: Dagon
    comment by Dagon · 2019-03-17T17:19:33.887Z · LW(p) · GW(p)

    Many laws incorporate scaling in terms of damage threshold or magnitude of single incident. We have very few laws that are explicit about scale in terms of overall frequency or number of participants in multiple incidents. City zoning may be one example of success in this area - only allowing so many residents in an area, without specifying who.

    There are very few criminal laws such that something is legal only when a few people are doing it, and becomes illegal if it's too popular. Much more common to just outlaw it and allow prosecutors/judges leeway in enforcing it. I'd argue that this choice gets exercised in ways that are harmful, but it does get the job (permitting low-level incidence while preventing large-scale infractions) done.

    comment by TAG · 2019-03-16T15:45:15.117Z · LW(p) · GW(p)

    s/tenancy/tendency