Permissions in Governance

post by sarahconstantin · 2019-08-02T19:50:00.592Z

Compliance Costs

The burden of a rule can be separated into (at least) two components.

First, there’s the direct opportunity cost of not being allowed to do the things the rule forbids. (We can include here the penalties for violating the rule.)

Second, there’s the “cost of compliance”, the effort spent on finding out what is permitted vs. forbidden and demonstrating that you are only doing the permitted things.

Separating these is useful. You can, at least in principle, aim to reduce the compliance costs of a rule without making it less stringent.

For instance, you could aim to simplify the documentation requirements for environmental impact assessments, without relaxing standards for pollution or safety.  “Streamlining” or “simplifying” regulations aims to reduce compliance costs, without necessarily lowering standards or softening penalties.

If your goal in making a rule is to avoid or reduce some unwanted behavior — for instance, to reduce the amount of toxic pollution people and animals are exposed to — then shifting up or down your pollution standards is a zero-sum tradeoff between your environmental goals and the convenience of polluters.

Reducing the costs of compliance, on the other hand, is positive-sum: it saves money for developers without increasing pollution levels. Everybody wins. You’d intuitively expect rulemakers to want to do this wherever possible.
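To make the two levers concrete, here’s a minimal toy model (all numbers and functional forms are invented for illustration, not claims about real regulation):

```python
# Toy model with made-up numbers: the burden of a rule splits into a
# direct cost (which trades off against the rule's goal) and a
# compliance cost (which doesn't).

def polluter_burden(stringency, compliance_cost):
    direct_cost = 10 * stringency          # forgone profit from forbidden activity
    return direct_cost + compliance_cost

def pollution(stringency):
    return 100 - 20 * stringency           # stricter standard => less pollution

# Lever 1: relax the standard. Polluters save money, pollution rises: zero-sum.
print(polluter_burden(2.0, 30), pollution(2.0))   # 50.0 60.0
print(polluter_burden(1.5, 30), pollution(1.5))   # 45.0 70.0

# Lever 2: streamline the paperwork. Polluters save money, pollution is
# unchanged: positive-sum.
print(polluter_burden(2.0, 10), pollution(2.0))   # 30.0 60.0
```

Moving `stringency` shifts money and pollution in opposite directions; cutting `compliance_cost` is the only move that helps one side without hurting the other.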

Of course, this assumes an idealized world where the only goal of a prohibition is to reduce the total amount of prohibited behavior.

You might want compliance costs to be high if you’re using the rule not to reduce the incidence of the forbidden behavior, but to produce distinctions between people — i.e. to separate the extremely committed from the casual, so you can reward the former relative to the latter.  Costly signals are good if you’re playing a competitive zero-sum game; they induce variance, because not everyone is able or willing to pay the cost.

For instance, some theories of sexual selection (such as the handicap principle) argue that we evolved traits which are not beneficial in themselves but are sensitive indicators of whether or not we have other fitness-enhancing traits. E.g. a peacock’s tail is so heavy and showy that only the strongest and healthiest and best-fed birds can afford to maintain it. The tail magnifies variance, making it easier for peahens to distinguish otherwise small variations in the health of potential mates.

Such “magnifying glasses for small flaws” are useful in situations where you need to pick “winners” and can inherently only choose a few. Sexual selection is an example of such a situation: females have biological limits on how many children they can bear per lifetime, so there is a fixed number of males they can reproduce with.  It’s a zero-sum situation, with males competing for a fixed number of breeding slots.  Other competitions for fixed prizes are similar in structure, and likewise tend to evolve expensive signals of commitment or quality.  A test so easy that anyone can pass it is useless for identifying the top 1%.
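Here’s a quick simulation sketch of that point (the quality distribution, the noise level, and the cost values are all invented): an expensive signal passes few candidates, but those it passes come from the top tail, while a trivially easy one passes nearly everyone and discriminates nothing.

```python
import random

random.seed(0)
qualities = [random.gauss(0, 1) for _ in range(100_000)]

def passes(quality, cost):
    # A candidate displays the signal if its quality covers the signal's
    # cost, up to some noise in the display itself.
    return quality + random.gauss(0, 0.5) > cost

for cost in (-2.0, 0.0, 2.5):   # trivially easy, moderate, peacock-tail expensive
    passers = [q for q in qualities if passes(q, cost)]
    rate = len(passers) / len(qualities)
    mean_q = sum(passers) / len(passers)
    print(f"cost={cost:+.1f}  pass rate={rate:6.1%}  mean quality of passers={mean_q:+.2f}")
```

The near-free signal passes roughly 96% of candidates and tells you almost nothing; the expensive one passes on the order of 1%, drawn almost entirely from the top of the distribution.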

On a regulatory-capture or spoils-based account of politics, where politics (including regulation) is seen as a negotiation to divide up a fixed pool of resources, and loyalty/trust is important in repeated negotiations, high compliance costs are easy to explain. They prevent diluting the spoils among too many people, and create variance in people’s ability to comply, which allows you to be selective along whatever dimension you care about.

Competitive (selective, zero-sum) processes work better when there’s wide variance among people. A rule (or boundary, or incentive) that’s meant to minimize an undesired behavior is, by contrast, looking at aggregate outcomes. If you can make it easier for people to do the desired behavior and refrain from the undesired, you’ll get better aggregate behavior, all else being equal.  These goals are, in a sense, “democratic” or “anti-elitist”; if you just care about total aggregate outcomes, then you want good behavior to be broadly accessible.

Requiring Permission Raises Compliance Costs 

A straightforward way of avoiding undesired behavior is to require people to ask an authority’s permission before acting.

This has advantages: sometimes “undesired behavior” is a complex, situational thing that’s hard to codify into a rule, and the discretionary judgment of a human can do better than a rigid rule.

One disadvantage that I think people underestimate, however, is the chilling effect it has on desired behavior.

For instance, the inhibition against asking for permission is going to be strongest for shy people who “don’t want to be a bother” — i.e. those who are most conscious of the effects of their actions on others, and perhaps those whom you’d most want to encourage to act.  Those who don’t care about bothering you will be undaunted, and will flood you with unreasonable requests.  A system where you have to ask a human’s permission before doing anything is an asshole filter, in Siderea’s terminology: it empowers assholes and disadvantages everyone else.
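As a toy illustration (the population shares and request-quality rates below are made up), you can simulate how raising the social barrier to asking shifts the mix of requests that actually reach you:

```python
import random

random.seed(0)

def unreasonable_share(ask_barrier, trials=100_000):
    """Fraction of *received* requests that are unreasonable.

    Invented assumptions: 70% of would-be requesters are shy and are
    deterred by the barrier with probability ask_barrier; pushy
    requesters ignore the barrier and make worse requests.
    """
    received = unreasonable = 0
    for _ in range(trials):
        shy = random.random() < 0.7
        if shy and random.random() < ask_barrier:
            continue                       # deterred: the request is never made
        received += 1
        if random.random() < (0.1 if shy else 0.6):
            unreasonable += 1
    return unreasonable / received

print(f"{unreasonable_share(0.0):.0%}")   # ~25%: frictionless baseline
print(f"{unreasonable_share(0.9):.0%}")   # ~51%: the shy have been filtered out
```

The barrier doesn’t just reduce traffic; it selectively removes the conscientious requests, so the remaining pool is enriched in unreasonable ones.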

The direct costs of a rule fall only on those who violate it (or wish they could); the compliance costs fall on everyone.  A system of enforcement that preferentially inhibits desired behavior (while not being that reliable in restricting undesired behavior) is even worse from an efficiency perspective than a high compliance cost on everyone.

Impersonal Boundaries

An alternative is to instantiate your boundaries in an inanimate object — something that can’t intimidate shy people or cave to pressure from entitled jerks: a locked door, a paywall, a software permission setting.

The key element here isn’t information-theoretic simplicity, as in the debate over simple rules vs. discretion.  Inanimate boundaries can be complex and opaque.  They can be a black box to the user.

The key elements are that, unlike humans, inanimate boundaries do not punish requests that are refused (even socially, by wearing a disappointed facial expression), and they do not give in to repeated or more forceful requests.

An inanimate boundary is, rather, like the ideal version of a human maintaining a boundary in an “assertive” fashion; it enforces the boundary reliably and patiently and without emotion.

This way, it produces less inhibition in shy or empathetic people (who hate to make requests that could make someone unhappy) and is less vulnerable to pushy people (who browbeat others into compromising on boundaries).

In fact, you can get some of the benefits of an inanimate boundary without actually taking a human out of the loop, just by reducing the bandwidth for social signals — by using email instead of in-person communication, for instance, or by using formalized scripts and impersonal terminology.  Distancing tactics make it easier to refuse requests and easier to make requests; if these effects are roughly the same in magnitude, you get a system that selects more effectively for enabling desired behavior and preventing undesired behavior. (Of course, when you have one permission-granter and many permission-seekers, the effects are not the same in aggregate magnitude; the permission-granter can get spammed by tons of unreasonable requests.)

Of course, if you’re trying to select for transgressiveness — if you want to reward people who are too savvy to follow the official rules and too stubborn to take no for an answer — you’d want to do the opposite; have an automated, impersonal filter to block or intimidate the dutiful, and an extremely personal, intimate, psychologically grueling test for the exceptional. But in this case, what you’ve set up is a competitive test to differentiate between people, not a rule or boundary which you’d like followed as widely as possible.

Consensus and Do-Ocracy

So far, the systems we’ve talked about are centralized, and described from the perspective of an authority figure. Given that you, the authority, want to achieve some goal, how should you most effectively enforce or incentivize desired activity?

But, of course, that’s not the only perspective one might take. You could instead take the perspective that everybody has goals, with no a priori reason to prefer one person’s goals to anyone else’s (without knowing what the goals are), and model the situation as a group deliberating on how to make decisions.

Consensus represents the egalitarian-group version of permission-asking. Before an action is taken, the group must discuss it, and must agree (by majority vote, or unanimous consent, or some other aggregation mechanism) that it’s sufficiently widely accepted.

This has all of the typical flaws of asking permission from an authority figure, with the added problem that groups can take longer to come to consensus than a single authority takes to make a go/no-go decision. Consensus decision processes inhibit action.

(Of course, sometimes that’s exactly what you want. We have jury trials to prevent giving criminal penalties lightly or without deliberation.)

An alternative, equally egalitarian structure is what some hackerspaces call do-ocracy.

In a do-ocracy, everyone has authority to act, unilaterally. If you think something should be done, like rearranging the tables in a shared space, you do it. No need to ask for permission.

There might be disputes when someone objects to your actions, which have to be resolved in some way.  But this is basically the only situation where governance enters into a do-ocracy. Consensus decisionmaking is an informal version of a legislative or executive body; do-ocracy is an informal version of a judicial system.  Instead of needing governance every time someone acts, in a judicial-only system you only need governance every time someone acts (or states an intention to act) AND someone else objects.
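A back-of-envelope sketch (the numbers, especially the objection rate, are assumptions) shows how much less often the judicial-only system has to convene:

```python
# Consensus: every action passes through governance before it happens.
# Do-ocracy: governance is invoked only when an action draws an objection.

n_actions = 1_000      # actions members want to take in some period
p_objection = 0.03     # assumed chance that any given action is contested

consensus_decisions = n_actions                  # deliberate on everything
do_ocracy_disputes = n_actions * p_objection     # adjudicate only disputes

print(consensus_decisions)    # 1000
print(do_ocracy_disputes)     # 30.0
```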

The primary advantage of do-ocracy is that it doesn’t slow down actions in the majority of cases where nobody minds.  There’s no friction, no barrier to taking initiative.  You don’t have tasks lying undone because nobody knows “whose job” they are.  Additionally, it grants the most power to the most active participants, which intuitively has a kind of fairness to it, especially in voluntary clubs that have a lot of passive members who barely engage at all.

The disadvantages of do-ocracy are exactly the same as its advantages.  First of all, any action which is potentially harmful and hard to reverse (including, of course, dangerous accidents and violence) can be unilaterally initiated, and do-ocracy cannot prevent it, only remediate it after the fact (or penalize the agent).  Do-ocracies don’t deal well with very severe, irreversible risks. When they have to, they evolve permission-based functions; for instance, the rules firms or insurance companies institute to prevent risky activities that could lead to lawsuits.

Secondly, do-ocracies grant the most power to the most active participants, which often means those who have the most time on their hands, or who are closest to the action, at the expense of absent stakeholders. This means, for instance, it favors a firm’s executives (who engage in day-to-day activity) over investors or donors or the general public; in volunteer and political organizations it favors those who have more free time to participate (retirees, students, the unemployed, the independently wealthy) over those who have less (working adults, parents).  The general phenomenon here is principal-agent problems — theft, self-dealing, negligence, all cases where the people who are physically there and acting take unfair advantage of the people who are absent and not in the loop, but depend on things remaining okay.

A judicial system doesn’t help those who don’t know they’ve been wronged.

Consensus systems, in fact, are designed to force governance to include or represent all the stakeholders — even those who would, by default, not take the initiative to participate.

Consumer-product companies mostly have do-ocratic power over their users. It’s possible to quit Facebook with the touch of a button. Facebook changes its algorithms, often in ways users don’t like — but, in most cases, people don’t hate the changes enough to quit.  Facebook makes use of personal data — after putting up a dialog box requesting permission to use it. Yet some people are dissatisfied and feel that Facebook is too powerful, that it’s hacking into their baser instincts, that this wasn’t what they’d wanted. But Facebook hasn’t harmed them in any way they didn’t, in a sense, consent to. The issue is that Facebook was doing things they didn’t reflectively approve of while they weren’t paying attention. Not secretly — none of this was secret, it just wasn’t on their minds, until suddenly a big media firestorm put it there.

You can get a lot of power to shape human behavior just by showing up, knowing what you want, and enacting it before anyone else has thought about it enough to object. That’s the side of do-ocracy that freaks people out.  Wherever in your life you’re running on autopilot, an adversarial innovator can take a bite out of you and get away with it long before you notice something’s wrong.  

This is another part of the appeal of permission-based systems, whether egalitarian or authoritarian; if you have to make a high-touch, human connection with me and get my permission before acting, I’m more likely to notice changes that are bad in ways I didn’t have any prior model of. If I’m sufficiently cautious or pessimistic, I might even be OK with the cost of a chilling effect on harmless actions, so long as I make sure I’m sensitive to new kinds of shenanigans that can’t be captured in pre-existing rules.  If I don’t know exactly what I want, but I expect change to be bad, I’m going to be much more drawn to permission-based systems than if I know exactly what I want, or if I expect typical actions to be improvements.

12 comments

Comments sorted by top scores.

comment by Raemon · 2019-08-02T22:42:30.439Z

Sort of meta:

I notice that this is the sort of blogpost I want to think about seriously, and generate comments or thoughts that actually apply the knowledge in domains I care about. But, I don't have time to do that right now, which means I probably won't get around to it. I sort of wish I had an easy button to remind me to maybe think about this seriously for 10 minutes later.

For now, saying that publicly partly because it sort of sucks that this sort of writing doesn't necessarily provoke comments and wanted to give some kind of feedback. Meanwhile, I guess if I haven't written any followup here in a few days, someone feel free to ping me about it?

Replies from: Kenny
comment by Kenny · 2019-08-21T18:37:56.590Z

I've used calendar reminders for exactly this.

(And 'ping' by-the-way.)

comment by Spiracular · 2019-08-05T23:36:49.293Z

Some rules seem to have an element of "cost of compliance" that relies on being able to predict the future, in a way that even specialists have little hope of doing. Sometimes, this leads to a risk-taker-enriched (aka asshole-filtered) gray zone surrounding a well of value which may-or-may-not have been poisoned (something of a Schrödinger's Well?).

If the gray zone is valuable, then for a while, the market might heavily favor gray-area violators of this type. But occasionally, one of these zones suddenly transforms into a trap for gray-area violators at all levels of competence. At least in theory, I could see some law-makers deploying this variety of illegible rules on purpose, to use that as a wasp-trap for sneaky, competent boundary-pushers. I have very little idea of whether this actually happens on-purpose much, and a lot of things that might initially look like this probably turn out to be "you gotta know a guy" in retrospect.

Some examples I can think of that might fit this pattern...

The question of exactly which financial regulations would be deemed "applicable to Bitcoin" was always going to be settled largely in retrospect, and I think even specialists largely couldn't make reliable predictions about this. On a related note, I know a story where a pre-blockchain gold-backed online currency tried to get a permit and was refused, due to regulators deciding that it didn't fall into the relevant reference class. The company was later penalized into bankruptcy for operating without that permit, when a later court ruling decided that it did fall into that reference class.

More broadly, the question of whether rules around patents will be leveraged against you seems to sometimes fall near this gray zone. That one's dampening profile is a bit weirder and more complicated, though. The dampening effect there seems to be disproportionately borne by the medium- to well-funded. It seems to reward obscurity, because small corporations are usually not worth going after over patent violations. Medium-sized ones might be, and often settle out of court to avoid a lawsuit, making it profitable to go after them; I would guess that they're the ones penalized the worst by this, but I'm not certain. Top corporations probably fall somewhere in between: on the one hand, they tend to have good lawyers (repelling frivolous lawsuits), but on the other hand, they might stand to lose a very large sum in court.

Possibly anything where court rulings on the same case seem to see-saw back and forth as it moves up through the levels of appeal.

comment by Dagon · 2019-08-02T20:21:13.101Z

An important consideration here is trust. Governance is _easy_ for small groups of people who are generally aligned on goals and expectations (and on scope of interaction: my boss can't tell me where to go on vacation, only negotiate when I take it). I have to ask my boss for any major purchase or travel, but I have a very high degree of confidence that he will approve anything I request (or suggest a better alternative), and this is because he has a high degree of trust that I won't try to sneak a wasteful or harmful purchase past him.

The adversarial cases are _much_ harder. Most attempts to minimize cost of compliance _also_ expose opportunities for un-enforced non-compliance. And as the number and diversity of agents being governed (or claiming to self-govern) increases, the likelihood that there are adversarial in addition to cooperative agents increases very quickly. Other things that increase the probability of adversarial governance (or semi-adversarial; any time the rule is being followed only based on rule-enforcing consequences, not because of actual agreement that it's best) are weirdness of topic (rules about things that differ from that group's norms), intrusiveness (on both dimensions: how often the rule applies and how difficult it is to know whether the rule applies), and opposing incentives (which can include stated or unstated rules from a different authority).


comment by limerott · 2019-08-04T16:05:11.458Z

You claim that, in politics, rules with high cost of compliance are introduced to keep the fixed pool of resources from being divided between too many people. Is there an example of that? I think that this is mostly done not through laws, but through social affiliations. Those with the best connections get the job or the resources.

I like the idea of a do-ocracy. It's like saying "the only rule is that you don't need a rule to do what needs doing". But the crux is that this seeming anti-rule is actually a rule that needs to be followed. If there were absolutely no rule, and everyone were allowed to do what they wanted, nothing would get done. So, for this to work, first, a consensus has to be established that no consensus needs to be established!

Replies from: Dagon, kithpendragon, Kenny
comment by Dagon · 2019-08-05T19:33:18.294Z

"If there were absolutely no rule, and everyone were allowed to do what they wanted, nothing would get done."

Wait. If everyone does what they want, and nothing gets done, that implies that everyone wants nothing done, doesn't it? What if doing what they want actually is DOING what they want? In that case, what they want gets done.

Replies from: limerott
comment by limerott · 2019-08-06T06:45:18.112Z

Let me clarify. If a group decides that it wants X, this does not imply that the individual members of that group want X. What they usually want is to avoid work and let others do it, or to be told what to do. But if they agree upon the strategy "To achieve X, we agree that every member has to want X and, if he is capable, do X" (rather than "To achieve X, one leader tells everyone what to do"), then things would get done!

Replies from: Dagon, Kenny
comment by Dagon · 2019-08-06T13:33:57.902Z

Ah, I think we need a more detailed model of what it means to want something. What a person says they want, what they think they want, and what they actually want at any given moment may differ. As verbal manipulators, humans tend to focus on what is said, but it's hard to see how that's actually the correct one.

If a group decided that it wants X, and the individual member doesn't want X enough to actually do it, the definition of "decided" seems to be in question. Maybe some members want X more than others.

(Yes, I'm being a bit intentionally obtuse. I do want to be explicit when we're talking about coercion of others in order to meet your goals, as opposed to examining our own goals and beliefs.)

comment by Kenny · 2019-08-21T19:21:05.420Z

In do-ocracies, generally the 'revealed preferences' of the group members are pretty obvious. The things the 'group wants' are readily revealed to be those things that the group members actually act to achieve or acquire.

And, as a matter of how do-ocracies form initially, they typically 'accrete' around a single person or a small group of people that are already actively working on something. Think of a small open-source programming project. Usually the project is started by a single person, and whatever they actually work on is what they 'want' to work on. Often, when other people suggest changes, the initial person (who is likely still the 'project leader') will respond along the lines of "Pull requests welcome!", which is basically equivalent to "Feel free to work on the changes yourself and send them to me to review". And, sometimes, a new contributor will work on the changes first, before even discussing the possibility. And then, after submitting the changes for review, the project leader or other participants might object to the changes, but, by default, anyone is free to make changes themselves (tho typically not anyone can actually make changes directly to the 'authoritative version').

comment by kithpendragon · 2019-08-05T14:56:02.943Z

...rules with high cost of compliance are introduced to keep the fixed pool of resources from being divided between too many people. Is there an example of that?

I think tax codes fall under this category. You can keep the money you earned if you are already part of the economic elite -- you already have enough money to have things like offshore bank accounts (worth it only if you can afford to squirrel away large sums) and high-yield investments (which have a good deal of risk attached to them, so are a potentially very costly way of investing; if you can't afford to lose the cash you shouldn't buy these, but they can be very lucrative for those who can afford to lose on occasion), or to hire an expert who can help you manage large swaths of your cash flow. Without that initial capital, you are unable to take advantage of tax laws (and other economic systems) in the same way as those who have more to work with in the first place. This kind of system tends to encourage economic resources to accumulate with those few who already control a lot.

Another example may be found in business law. I don't own a business, so I can't get very specific I'm afraid, but I gather that licensing and payroll and (again) tax concerns (among other issues) are often legally tuned in such a way that larger corporations have an easier time achieving compliance than smaller businesses. Laws designed, for example, to protect the environment from the waste output of a large factory could easily be written to exempt the local shop engaging in a similar process but at many orders of magnitude less scale. Instead, I routinely encounter news articles (publication bias alert) highlighting the plight of local businesses as they struggle to keep financially afloat and stay legal. This kind of system tends to encourage business resources to accumulate with those businesses that already control a lot.

Replies from: Kenny
comment by Kenny · 2019-08-21T18:57:48.662Z

It's usually not so much that payroll or taxes are "legally tuned" to the benefit of larger corporations, but that complying with all of the relevant laws and regulations is a relatively large 'fixed cost' that can be more easily borne by a larger organization. Even something like initially selecting a payroll company, or monitoring (and potentially switching to another) payroll company, is something more easily, and less expensively, performed by a dedicated HR professional, let alone a group of professionals in an HR department, whereas lots of small businesses don't even have a full-time, dedicated HR person.

comment by Kenny · 2019-08-21T18:51:51.408Z

An example of a (relatively) high cost of compliance is the recentish EU GDPR. Large companies will be able to comply (relatively) more easily than small companies, so the effect of the regulation is to privilege large companies over small (or smaller) ones, i.e. to "keep the fixed pool of resources from being divided between too many people", where the pool of resources in this case is the potential customers of online businesses (or even just users of online sites or services).

And more generally, for almost every law and regulation, it's easier for larger companies or organizations to 'pay compliance costs' so every new law or regulation effectively penalizes smaller companies or organizations.
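A quick arithmetic sketch of that fixed-cost effect (the dollar figures are made up):

```python
# Illustrative only: a fixed compliance cost is regressive across firm
# sizes, because the same absolute cost is a far larger share of a small
# firm's revenue.
fixed_compliance_cost = 500_000   # assumed annual cost of GDPR-style compliance

for revenue in (1_000_000, 10_000_000, 1_000_000_000):
    share = fixed_compliance_cost / revenue
    print(f"revenue ${revenue:>13,}: compliance eats {share:.2%} of revenue")
```

The same absolute cost is a rounding error for the large firm and half of gross revenue for the small one.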

Note that this is basically never considered, let alone advertised, as a deliberate effect of any law or regulation.

You're right that social affiliation is often used, in effect anyways, to mediate access to resources, but I've never encountered anyone describing the initiation or maintenance of affiliation as being a 'compliance cost', tho it's not an inapt analogy and might operate pretty similarly. I think it's relatively uncommon for social affiliation to involve explicit rules tho, which distinguishes it from what is typically described as 'compliance'.