Beyond Blame Minimization: Thoughts from the comments
post by physicaleconomics · 2022-03-29T22:28:38.493Z · LW · GW · 8 comments
There were a lot of fantastic comments on this [LW · GW] post. I want to break down some common themes and offer my thoughts.
Incentives
Unsurprisingly, there was a lot of focus on the role that incentives—or a lack thereof—play in a bureaucracy.
Dumbledore's Army [LW · GW]:
I’ve been asking myself the same question about bureaucracies, and the depressing conclusion I came up with is that bureaucracies are often so lacking incentives that their actions are either based on inertia or simply unpredictable. I’m working from a UK perspective but I think it generalises. In a typical civil service job, once hired, you get your salary. You don’t get performance pay or any particular incentive to outperform.[1] You also don’t get fired for anything less than the most egregious misconduct. (I think the US has strong enough public sector unions that the typical civil servant also can’t be fired, despite your different employment laws.) So basically the individual has no incentive to do anything.
As far as I can see, the default state is to continue half-assing your job indefinitely, putting in the minimum effort to stay employed, possibly plus some moral-maze stuff doing office politics if you want promotion. (I’m assuming promotion is not based on accomplishment of object-level metrics.) The moral maze stuff probably accounts for tendencies toward blame minimisation.
Some individuals may care altruistically about doing the bureaucracy’s mission better, eg getting medicines approved faster, but unless they are the boss of the whole organisation, they need to persuade other people to cooperate in order to achieve that. And most of the other people will be enjoying their comfortable low-effort existence and will just get annoyed at that weirdo who’s trying to make them do extra work in order to achieve a change that doesn’t benefit them. So the end result is strong inertia where the bureaucracy keeps doing whatever it was doing already.
tailcalled [LW · GW] quotes John Wentworth, who talks about incentives in terms of degrees of freedom:
Responding to Chris: if you go look at real bureaucracies, it is not really the case that "at each level the bosses tell the subordinates what to do and they just have to do it". At every bureaucracy I've worked in/around, lower-level decision makers had many de facto degrees of freedom. You can think of this as a generalization of one of the central problems of jurisprudence: in practice, human "bosses" (or legislatures, in the jurisprudence case) are not able to give instructions which unambiguously specify what to do in all the crazy situations which come up in practice. Nor do people at the top have anywhere near the bandwidth needed to decide every ambiguous case themselves; there is far too much ambiguity in the world. So, in practice, lower-level people (i.e. judges at various levels) necessarily make many many judgement calls in the course of their work.
And Dagon [LW · GW] on weak or missing feedback:
I think a reasonable model for it is "mission motive" - somewhat like any other motive, but with a very weak or missing feedback mechanism. Without being able to track results, and with no market discipline (failure -> bankruptcy when the motive is aligned with existence), you get weird behaviors based on individual humans with unrefined models.
Other comments alluded to these ideas as well—I can't quote everyone, sadly. But let me try to pull these ideas together and make use of them.
In economics, the most obvious way to think of an incentive is that it is a tendency for things to go one way rather than another way. If people are incentivized by money, for example, and there are two paths in front of them, and one of them has a $20 bill at the end and the other doesn't, then we expect people to go down the path of the $20 bill more than randomness predicts. We don't have to talk about human psychology or the profit motive or anything. Instead, if we observe that a complex system consistently tends towards certain parameters, then we can say that it is incentivized to do so. So if we see plants consistently grow in the direction of the sun, then we can say plants are incentivized by sunlight without ever claiming to know what goes on inside of a plant.
(Not that there's anything wrong with learning about what goes on inside of a plant.)
An incentive, in this sense, is simply a tendency of a physical system, not a psychological factor per se. Therefore, we could think of an incentive in a few different ways. One is certainly in terms of feedback mechanisms. In order for a plant to be incentivized by sunlight, the plant needs a way of detecting the sunlight, repeatedly and consistently registering where it is, and tracking how its own movement relates to the light. Similarly, to be incentivized by the $20 bill, people need to be able to see it and to have some way of determining that their motion is taking them closer to rather than farther from the money.
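Here is a minimal sketch of that reduction (the setup and numbers are my own, purely illustrative): a walker that gets even a weak feedback signal about which way the "$20 bill" lies ends up near it far more often than chance predicts, and that statistical tendency is all the word "incentive" needs to mean here.

```python
import random

def walk(steps=100, feedback=0.0, target=10):
    """One walker on a line. `feedback` is how much likelier it is to step
    toward the target than away from it on any given step."""
    pos = 0
    for _ in range(steps):
        toward = 1 if target > pos else -1
        if random.random() < 0.5 + feedback:
            pos += toward  # step toward the target
        else:
            pos -= toward  # step away from it
    return pos

def near_target_rate(feedback, trials=2000):
    """Fraction of walkers that finish within 5 units of the target."""
    return sum(abs(walk(feedback=feedback) - 10) <= 5 for _ in range(trials)) / trials

print("no feedback:  ", near_target_rate(0.0))  # a minority end up near the target
print("weak feedback:", near_target_rate(0.2))  # nearly all of them do
```

Nothing here says anything about the walker's psychology; the feedback term is just a physical structure that produces a tendency.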
One such feedback mechanism, as Dumbledore's Army notes, is getting fired. If you work for a business, and your actions are consistently inconsistent with its goal of profit maximization, it will fire you. In terms of pathfinding, the employee might be thought of like a blood cell in an organism: the system is designed so that the narrow path the blood cell is incentivized, or tends, to go down is also the path that suits the overall system's needs. Eliminating unhelpful parts is a strict but powerful way of solving problems.
If bureaucracies are poorly incentivized, then there are a few ways we can understand this proposition:
- Bureaucracies are tightly incentivized to do bad things. This is a system that is well-designed in an objective engineering sense, but what it is designed to do is undesirable. A killing squad a la certain cliche depictions of Nazis may be such a system. But this isn't what anyone seems to be talking about, and it certainly isn't what I'm trying to understand either.
- Bureaucracies are not tightly incentivized. By this I mean that bureaucracies do not provide narrow paths for their subunits: there are too many degrees of freedom. The paths that employees follow, the activities that they engage in, are consequently not always optimal from the system's perspective even though the employees are behaving reasonably within their individual circumstances. This relates to the issue of being able to fire someone: firing someone is a very sharp way of cutting off paths.
- Bureaucracies have no clear goal, such that a tightly incentivized system may nevertheless contradict itself. Imagine a body that does an excellent job of "incentivizing" blood cells to flow down the arteries of the left leg and does an excellent job of "incentivizing" blood cells to not flow down the arteries of the right leg. This is a well-incentivized system that also is arguably not a coherent system. The bureaucracy's "mission motive" is unclear, or self-contradictory as a whole.
Ignoring possibility 1, possibilities 2 and 3 are what result in the "Bwuh?!" systems we are trying to understand. At an extreme, someone who literally cannot be fired or disciplined in any way does not have to do their job. As a consequence, a bureaucracy may be unable to initiate some step in a process meant to achieve its goals. From the outside, such a system will be inexplicably slow and cumbersome, frequently forced to route around itself for no apparent reason.
Similarly, a bureaucracy might be tasked with incompatible goals. For example, a school system might be meant to educate children, but it might be incentivized to have them perform well on tests to such a degree that achieving the latter comes at the expense of the former. Such a system might talk earnestly and sincerely about the value of education while also predictably failing to educate, creating perpetual confusion among outside observers.
So maybe we could think about incentives as the physical structure that defines a clear motive, be it a tendency to move toward money, sunlight, or whatever else. What a system "wants" to do is what it consistently tends to do even when entropy would otherwise dictate that the outcome is highly unlikely. A consistent state that is consistently achieved despite being a priori unlikely to occur is a preferred state, and a system that tends to a preferred state is incentivized by it, meaning there is some physical structure causing said tendency.
With this reduction of the concept of an incentive, the question is, how can we better incentivize bureaucracies?
And so I ask you: is there a way to incentivize bureaucracies as strongly, clearly, and self-consistently as a textbook firm is without turning a bureaucracy into a textbook firm?
Selection Effects
The next most common analysis was on the role of selection effects. AllAmericanBreakfast [LW · GW] says,
I favor selection-based arguments in this area. Businesses that happen to be profit-maximizing tend to survive, along with their leadership. This doesn't mean that leaders always believe that every decision they make is a profit-maximizing decision; the important thing is the overall trend. Many mistakes are made, and there's a lot of random noise in the system that can defeat even the wisest of profit-maximizing strategies.
To understand the behavior of bureaucracies, we need to understand what causes them to survive. I think that blame avoidance is a stronger argument than you're making it out to be.
Short-term budget (or power) maximization can fail to explain their behavior, because a swollen bureaucracy that's mis-managing its money or power is a ripe target for politicians. For survival, bureaucracies should aim to please the electorate, or at least be seen as less blameworthy than some other organization.
Your argument about the CDC and the rental market conflates responsibility-minimization with blame-minimization. A bureaucracy that reduces its responsibility to zero is dead. Having responsibilities is central to bureaucratic survival. And bureaucracies don't have perfect control over what responsibilities are allocated to them. The CDC couldn't necessarily control the amount of responsibility thrust upon them in the pandemic. They were trying to avoid blame for excessive COVID deaths, and in order to do that they assumed temporary responsibility over the rental market (and, predictably, rid themselves of it when the negative consequences manifested).
I think the point that survival necessitates a certain degree of being able to demonstrate value is a good one. Perhaps an "ideal" bureaucracy would do nothing and simply soak up some salaries for its employees, but it will be shut down if it has nothing to offer politicians or interest groups who can influence politicians.
Matthew Barnett [LW · GW]:
Here's some background first. Firms are well-modeled as profit maximizers because, although they employ internal bureaucracies to achieve their ends, bureaucracies that are bad at the task of making investors money are either selected out over time due to competition, or are pruned due to higher-up managers having relatively strong incentives to fire people who are not making investors money. This model relies on an assumption that investors themselves are usually profit-maximizers, which seems uncontroversial.
By contrast, government bureaucracies lack the pressures of competition, though they can (though less commonly) be subject to pruning, especially at the higher levels. I can think of two big forces shaping the motivations of government bureaucracies: the first being internal pressures on workers to "do work that looks good" to get promoted, and the second being a pressure to conform to the desires of the current president's political agenda (for people at the top of the bureaucracy).
Selection effects are like the other side of the coin to incentives. If incentives are structures that cause a system to exhibit tendencies, then selection effects are structures that limit which tendencies can be observed. So in a profit-maximizing firm, for example, maybe there is a tendency to open early and close late because you make more money that way. Minimum-wage workers and high-level executives alike would mostly prefer to come in at noon and leave at 3pm. But such firms would be selected out.
If bureaucracies generally do not get shut down, and individuals generally do not lose their jobs, then they can have inconvenient hours at offices in inconvenient locations. They can make lots of rules and forms that make life difficult for the very people that they serve. Even if no bureaucrat maliciously wants to make things difficult for anyone, in the absence of forces that weed out such inconveniences, they will only ever increase in prevalence.
Similarly, in the absence of positive selection effects, a good idea at one bureaucracy will not spread to others. Whereas every firm has had to adapt to the Internet or face extinction, for example, bureaucracies may often tend to be slower to adopt principles of good web design or paperless service.
Selection effects offer a clear explanation for why bureaucracies are often confusing. If a confusing system—a system that does not do a good job of tightly constraining itself to follow a path toward a set of mutually consistent goals—is much easier to create than a non-confusing system, for basic entropy reasons if nothing else, then confusing systems will tend to proliferate over non-confusing ones in the absence of selection effects against the former.
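To make the entropy point concrete, here is a toy sketch (the names and thresholds are my own invention): a randomly assembled system is overwhelmingly likely to have subunits pulling in different directions, and it takes a survival filter to make the observed population look coherent.

```python
import random

def random_org(n_units=10):
    """An 'organization' is just the directions its subunits push in: -1 or +1."""
    return [random.choice([-1, 1]) for _ in range(n_units)]

def coherence(org):
    """1.0 if every subunit pushes the same way; near 0 if they mostly cancel out."""
    return abs(sum(org)) / len(org)

population = [random_org() for _ in range(10_000)]

# Without selection, the typical randomly assembled system is "confusing".
print(sum(coherence(o) for o in population) / len(population))  # around 0.25

# A survival filter (say, "organizations below this coherence get shut down")
# leaves only a small, highly coherent population for us to observe.
survivors = [o for o in population if coherence(o) >= 0.8]
print(len(survivors), min(coherence(s) for s in survivors))  # a few hundred at most, all >= 0.8
```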
Structure preservation
Let me try to bring the twin concepts of incentives and selection effects together into a third concept: the idea of a structure-preserving system. To preserve the structure of something is to behave in such a way that your behavior is a model of the other thing, allowing us to deduce things about it by studying you.
In economics, a familiar example of a structure-preserving system is a utility-maximizer. For an entity to be rational, its utility function, which determines its behavior, must preserve the structure of its preference order, so that an apple higher in preference than an orange must also be higher in utility, meaning that the system—a human body, in this case—shows a greater tendency to eat apples than oranges, all else held equal. Conceivably, therefore, it is possible to watch a rational economic agent's behavior and deduce the structure of its preferences, because said structure is preserved by the behavior of the agent.
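A small sketch of what "preserving the structure of a preference order" means (the items and numbers are made up): any utility assignment that respects the ranking predicts the same choices, and an outside observer who only sees the choices can recover the ranking.

```python
from itertools import combinations

preference_order = ["apple", "orange", "banana"]  # most preferred first

# Any order-preserving numbers will do as a utility function.
utility = {item: -rank for rank, item in enumerate(preference_order)}

def choose(a, b):
    """The agent picks whichever option has higher utility."""
    return a if utility[a] > utility[b] else b

# An observer who only watches choices can reconstruct the preference order.
wins = {item: 0 for item in preference_order}
for a, b in combinations(preference_order, 2):
    wins[choose(a, b)] += 1

recovered = sorted(wins, key=wins.get, reverse=True)
print(recovered)  # ['apple', 'orange', 'banana']: the behavior preserved the preference structure
```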
Relatedly, for an incentive to have actual physical meaning—for it to cause a measurable tendency in a system—it must interact with the system so as to preserve some structure of the thing-providing-the-incentive. Direction with respect to the system is a structure frequently preserved. For example, plants tend to grow in the direction of the sun, and people tend to spray bug spray in the direction of bugs. Even though we act so as to destroy bugs, we do so by taking action that is a function of facts about bugs, such as where they are relative to us. Thus, it is possible for an outside observer to whom the bugs are invisible to watch our behavior and deduce, at the very least, the direction of the bugs relative to us. The ability for an outside observer to watch us and deduce at least some properties of something else is what it means for our behavior to preserve some of that other thing's structure.
Incentives only make sense in terms of structure-preservation. To be incentivized by money is to preserve structure about which jobs pay the most, what kinds of educational choices tend to allow entry into those jobs, etc. One also needs to behave in such a way that captures the idea that $100 is twice that of $50, and so on. Someone whose behavior fails to preserve the structure of where money is and how much more money is over here relative to over there is someone who is not really incentivized by money.
Selection effects, similarly, limit observable systems to those which preserve a narrow range of structures. Anything that evolves in the ocean, for example, must behave in such a way as to preserve the structure of water. As a result of such pressures, we can expect systems in a given environment to consistently preserve a consistent set of structures.
The economic systems that we understand with relative clarity are systems that preserve a particular structure clearly and consistently. We can easily tell that individual humans are following their preferences, firms are following money, and politicians are following votes. Even if I can't see profit opportunities myself—I have no specific knowledge about why, e.g., raspberry Pop-Tarts, made the way they are, sold at the price they're sold at, are profitable—I can nevertheless watch Kellogg's move in a particular direction and deduce the presence of profit in that direction, just like I can see a human spray bug spray in a particular direction and deduce the presence of bugs in that direction. Similarly, if a politician suddenly starts showing off many pictures of themselves hugging a bald eagle, I don't have to understand why this earns them votes to deduce that this behavior preserves some of the structure of voters.
The tendencies that people, firms and politicians exhibit reflect their incentives, and the reason that we only ever see such strongly incentivized systems is because of the selection effects at play. Firms that fail to make money go out of business; politicians that fail to win votes lose office; people who ignore the structure of their environment get killed by their environment, or at the very least fail to consistently acquire food and water. Selection effects themselves are a kind of physical structure that determine which tendencies tend to be exhibited. A firm that maximizes office space is a well-incentivized system, but it will nevertheless be destroyed by the selection effects: that particular tendency does not tend to exist.
Some commenters observed that bureaucracies do not tend to exhibit only a narrow set of tendencies. Here is rur [LW · GW]:
Assuming the bureaucracy is hierarchical, the maximizer may vary depending on the level. At the lower levels, a process-maximizer may best model behavior. Map versus territory. Akin to a mis-aligned AI paperclip-maximizer, reward is based on adherence to process, results do not matter. Mid-hierarchical levels are budget-maximizers. Body-count may be a surrogate. The bureaucratic topology that emerges and morphs at these mid-levels is where things become chaotic for the higher levels. Perhaps entrenchment, power, consensus, and hubris-maximizers join the dance. Predicting behavior at these higher levels may be more a matter of profiling than modeling. Regardless, the bureaucracy as a whole is more like an oil tanker than a jet ski. Its behavior in the near term is rather obvious.
Phil Scadden [LW · GW]:
I worked in and with a few bureaucracies in NZ and I very much doubt there is a single model to explain or predict behavior, because multiple utilities and motivations are present. They are plagued (as are private companies) by the levels problem where information between levels of management can get twisted by differing motivations and skill level. As other commentators have pointed out, upper levels of management can be extremely risk averse because they are crucified for mistakes and unrewarded for success. While "blame-minimization" might seem appropriate, there are other factors at play. Large among them would be motivation. Some bureaucrats are empire-builders and their utility function is ever-increasing areas of control (career administrators in middle-management roles), but others got into the game in the first place because they wanted to change the world, and the tools of government seemed like a good place to find power. With that kind of motivation, they tend to rise quickly and I see a fair no. of them in high positions, especially in education, health, welfare. They feel the forces of blame, but are individually motivated to make change. Good luck predicting outcomes there.
The other prediction problem would relate to where in the organization a decision is made. The more technical the decision, the more likely it is being made at a low level in the organization, among the technocrats. The decision may still have to percolate up the levels, during which it may be misunderstood or subtly reframed to make a middle manager look good (another predictability problem), but mostly I would expect such decisions to reflect perceived technical utility (eg best timing for a booster vaccination).
michaelkeenan [LW · GW] quotes from a lengthy post by Dominic Cummings as to why structure-preservation in a bureaucracy is so unlikely, which is certainly worth reading if you want to peer into the nuts and bolts of the system. And shminux [LW · GW] points out what this means for a potential success condition:
A bureaucracy works well when every person has a vested interest in the shared success more than in whatever Goodhart incentives tend to emerge in the bureaucratic process. An essential (but by no means sufficient) part of it is the right amount of slack [LW · GW]. With too little slack the Goodhart optimization pressures defeat all other incentives.
This mirrors my own experiences that the quality of parts of a bureaucracy can depend strongly on the personal characteristics of the members. People who want to make things work can overcome great adversity, and people who don't can fail to hit a bullseye the size of a football field.
So let's try to make this clear. For a person to be utility-maximizing means that they act so as to preserve their own preferences. So if we see them reach for a box of Pop-Tarts at the grocery store, we predict that we can go back and see that this reaching-behavior preserves the preference-structure within the brain, meaning that, if we had some sort of appropriate measuring apparatus, we would expect to see that the arm's reach was an instruction from the brain; the arm functions so as to fulfill the brain's orderly, self-consistent predictions. Finding a signal-passing connection between the brain and the arm, like nerves extending between them, would bolster this theory.
A firm is profit-maximizing, meaning that it sells what its customers spend money on. So if it sells Pop-Tarts, we predict that customers tend to buy Pop-Tarts, a prediction we can confirm by standing in the store and noticing that customers do indeed come in and buy Pop-Tarts far more often than people who have no such tendency could ever be expected to do by random chance.
And while the median voter is a statistical construct and not a precisely identifiable figure, nevertheless we can see a politician do something and suppose that it preserves the structure of what the middle of their voters tend to vote for, which we can test for by figuring out approximately who that is and finding out what they tend to vote for.
What does the system do, why does it behave the way that it does? The answer, the general structure of economic explanation, is to say, "The system behaves so as to consistently preserve some consistent structure." It does so because it is incentivized to do so—there are physical structures in place that cause the tendency to be achieved very consistently—and because of selection effects—there are physical structures in place that cause only a narrow set of tendencies to be achieved consistently.
When things tend to achieve the same narrow, self-consistent set of things over and over, we can eventually model them as if they "want" those things. The rare states consistently achieved are "preferred". But psychologizing aside, what we're really observing is merely a highly consistent set of mutually consistent outcomes over time. Hence people talk about evolution "wanting" us to reproduce, even though we know that evolution is a statistical tendency, not a psychology.
(But what, then, is a psychology?)
We may not observe such tendencies in a bureaucracy, in which case we will not be able to model them as consistently achieving a consistent and self-consistent set of goals. In other words, they will make us go "Bwuh?!" a lot. This will happen even when they strongly declare certain goals and often do seem to be trying to achieve them. I am pretty sure, for example, that most everyone working at the CDC really do want to minimize the harms of COVID-19, and many of the things done by the CDC are probably best explained in terms of attempting to achieve said minimization. Nevertheless, I do not feel like I can model the CDC as a harm-minimizer or anything else in particular.
Moral Mazes
A few commenters pointed out that bureaucracies seem almost designed to obfuscate. Viliam [LW · GW]:
I would also expect some combination of: putting in the minimum effort, playing it safe, and optionally moral-maze behavior, and some form of rent seeking (e.g. taking bribes).
Pure blame minimization would motivate bureaucracies to reduce their jurisdiction, but expanding the jurisdiction provides more opportunities for rent seeking... if there is a standard way to make decisions about many things and yet carry no responsibility for their failure, I would expect bureaucracy to optimize for this.
Something like: Someone else is responsible for the success, but at every step they need to ask the bureaucracy for a permission; if the project fails because they didn't get the permission, the person responsible is fully blamed regardless, because they should have found another solution.
And davidestevens [LW · GW] links to a series of essays on The Office, one of the major themes of which is that workplaces depend on somewhat ambiguous hierarchies, as if we are actually predictably better off without some information and clarity on certain things.
This suggests a potential theory of bureaucracy as a system that we deliberately do not tightly incentivize to preserve any particular narrow structure, because we are sometimes better off not building clearly motivated systems.
Let me go back to the idea of degrees of freedom. Under normal circumstances, you'd think that economists prefer workers to be highly incentivized to produce value. But in some markets, it's hard for customers to declare ahead of time what a valuable product is. One example is research: how do you know who to pay to do research, and how do you know what results you're buying? Research, by definition, is finding out things we don't already know. Why would someone ever buy an I-don't-know-what?
There isn't necessarily a great solution to this problem. But we can nevertheless be confident that some skilled researchers exist, and if we give them space and time and money to research, they will produce socially valuable things. So a second-best solution may be to create an entity called a tenured professor. This thing, like a bureaucrat, cannot be fired and has no extrinsic motivation to do much of anything. Yet, if they are intrinsically motivated, which we can possibly select for by forcing them to go through graduate school and produce lots of papers to get tenure, then they might produce very valuable research anyway, a predictable tendency in the absence of any structures outside of themselves that would seem to cause said tendency.
Maybe you don't like tenure; I'm not saying it's necessarily a great system or one without problematic tendencies. But I am saying there's a sense in which we can perceive the potential value of such systems, of systems that don't do what we want but may help us nevertheless.
Another example is the police. Theoretically, a police officer should behave so as to preserve the structure of the law, punishing any lawbreakers they observe. If a society has some bad laws on its books, however, we may not want to tightly incentivize police officers. For example, we may prefer that officers look the other way when someone is clearly hiding marijuana or drinking alcohol from a brown paper bag. Tightly incentivized police officers only work well when the lawmaking process is itself tightly incentivized to make and keep only good laws.
But you can't tell the police not to enforce the law. There must be plausible deniability at every level, which requires a system whose motives cannot be determined even when closely examined.
Again, I'm not saying that this system doesn't have obvious drawbacks. But as per the general theory of second best, when your system is imperfect in multiple ways, moving any one part of it toward "perfection" while leaving the other parts unchanged may make you substantially worse off. If we have a tendency to ask for bad things to be preserved, or if there are some good things that we don't know how to ask for, then we might not want a system that does a good job of preserving our structure, even if this requires that the system be very confusing to interact with.
So perhaps bureaucracies are selected for by forces that are trying to regulate some set of variables in an obfuscatory manner, with some obvious benefits and drawbacks accordingly. Similarly, bureaucracies are incentivized to achieve a lack of internal clarity: anyone who does too good a job of giving clear directions gets fired, or at least transferred and sidelined, if firing someone really is too difficult.
Conclusion: What does an alien bureaucracy look like?
Bureaucracies are an important part of our society, but my interest in them is due to my interest in economic structure more generally. So when thinking about bureaucracy, I might encourage you to ask the question: what does an alien bureaucracy look like?
Here's what I mean. If alien life does exist, although their tastes and preferences may be very different from ours, I nevertheless expect to be able to model them as rational utility-maximizers, for basically the same reasons that a physicist would expect to model them according to our physics. I would expect them to have an economy, and I would expect them to have something broadly analogous to profit-maximizing firms in the sense that they would have created resource-management systems that maximize some mutually commensurable quantity, such that the systems can "talk" to each other and coordinate easily, allowing the alien society as a whole to trade off between various allocations of scarce resources in optimal ways.
And while it's less obvious that aliens would use a voting system per se to make collective decisions, if they did have a politician-based democracy, I would expect their politicians to be incentivized to get votes and to be selected on the basis of their success at doing so.
So these economic systems, although traditionally understood in human terms, may be thought of more generally as ways that complex systems behave under certain conditions, allowing us to take the human part out and focus on the abstract general relationships that really define the systems in question. If we can do so, then we might have found a "natural" economic structure, which consequently may be relatively simple to design and apply in many contexts.
But it is extremely non-obvious to me that aliens would have bureaucracies. Bureaucracies, it seems to me, really are a function of human psychology, such as the hypocritical way that we support laws that we do not want to be enforced. So obfuscatory systems may in fact be selected for in human societies, and the systems in question may be incentivized to avoid having a clear incentive structure.
And while this doesn't clear up the "Bwuh?!", it does clear up the "Bwuh?!" of "Bwuh?!"—I am no longer confused about why it is that economics doesn't have a clear, obvious, natural way of modeling the behavior of bureaucracy. And that itself seems like some kind of a step forward, for me at least.
So, if you're not already sick of the subject, I'm curious to know what you think of that idea—or anything related that's occurred to you on this subject.
8 comments
comment by matto · 2022-03-31T12:08:52.239Z · LW(p) · GW(p)
I'm confused by what we mean by bureaucracy. Is it a government-run agency, like the DMV for example? Or is it a low-feedback cost center inside of a for-profit company?
To me it seems that a bureaucracy is any organization, including a suborganization, where incentives and feedback loops weaken or become unaligned, making the whole thing more extractive toward those people it was supposed to benefit. Eg. the DMV still provides a useful service for citizens, but it does so inefficiently at a high cost. Or an IT department might not be run well, leading to blame-shifting behavior among its leadership, which will slow down any IT-related projects the larger company is demanding from it--this can go on for months or years at a big enough place because of how illegible such behavior can be made to be.
In this sense, I would absolutely expect aliens to have bureaucracies. At least those aliens that need coordination mechanisms (Zerg don't), which introduce the possibility of individuals being bad at coordinating, leading to the creation of maze-like behaviors.
P.S. Thanks for writing this. I see value in studying bureaucracies in order to make them better, so I find discussions like this very interesting.
comment by Bucky · 2022-03-30T13:34:15.666Z · LW(p) · GW(p)
To a first order approximation I think of bureaucracies as status maximisers. I'll admit that status can be a bit nebulous and could be used to explain almost anything but I think a common sense understanding of status gives a good prediction in most cases.
- Will a bureaucracy usually perform its tasks kinda adequately? Yes
- Will a bureaucracy excel at its tasks? Not unless excellence comes with status, so almost never
- Will a bureaucracy look to expand its remit? Yes
- Will a bureaucracy often look like a blame minimiser? Yes (due to asymmetric justice [LW · GW])
For second order effects I would probably say bureaucracies are effort minimisers. If a bureaucracy's status isn't going to change much between 2 actions, just do whatever is easiest.
comment by DirectedEvolution (AllAmericanBreakfast) · 2022-03-30T07:10:20.391Z · LW(p) · GW(p)
What is a bureaucracy? What is "Bwuh?"
I am much more often surprised by what businesses do than by what the DMV does, and yet the DMV is one of the classic examples of a bureaucracy. At least for the limited purposes I've used them for, they've been extremely predictable.
Nor am I surprised by the IRS's behavior, although I am "surprised" (in the sense of finding the decisions cynically nonsensical) about the way the United States IRS is funded, its tax code structured, and the fact that in the USA, everybody has to do their own taxes despite the fact that the IRS is also doing them. Yet these "surprising" features don't originate within the IRS.
When we speak of the surprising "Bwuh?" behavior of bureaucracies, how do we distinguish this from bureaucracies that simply make mistakes, as all organizations, including businesses, do routinely? How do we distinguish "Bwuh?" from policy choices that we happen to disagree with? From incapacity? From failure?
Alternatively, if businesses don't give you that "Bwuh?" reaction, and the key distinction between a business and a bureaucracy is that businesses have an overriding profit motive, then maybe what's going on is that businesses are uniform and uncomplicated, so you've gotten used to them. Whereas with non-businesses, the heterogeneous mixture of motives is just harder to parse.
I think that defining what precisely you mean by "bureaucracy" and what precisely you mean by this "Bwuh?" reaction would be helpful if you're trying to pursue a more serious analysis going forward. We need general definitions, not just examples.
comment by gbear605 · 2022-03-30T04:10:27.110Z · LW(p) · GW(p)
I suspect that bureaucracy would be present in alien worlds, so long as the aliens aren’t able to perfectly communicate with each other.
When one person is working on their own, they can just do whatever they think accomplishes their goal best. (Modulo how humans are not rational on an individual level.) Once you have two people working together though, they’ll start to have disagreements. Perhaps the two can work out their disagreements directly, but once you start to have more, the disagreements need mediators to resolve. Even in small companies, huge amounts of bureaucracy are introduced to guide people or to insulate people.
A bureaucracy can’t be tightly coupled because the primary purpose of it is to decrease coupling between the people subjected to it. Imagine a group of chefs are designing the next Poptart flavor. One chef thinks that it should be grape while another thinks it should be watermelon and a third thinks that peanut butter is best. Like in many situations, the three options are about equal and each has their pros and cons. The watermelon chef has a child with a peanut allergy. The grape chef doesn’t think that watermelons are tasty. The peanut butter chef thinks that grapes will taste too much like cough syrup. All three would probably affect the bottom line of the company equally, but they still need to choose one. Bureaucracy gives them each plausible deniability by introducing arbitrary rules that allow them to decide on a flavor without making it personal. Perhaps the company has a rule to not sell any products with allergens and all products have to meet a minimum bar of flavor to get made, so they go with grape. Maybe the grape poptart will be really unpopular though because of its cough syrup flavor and next time they’ll use a rule of “no foods that taste like medicine.”
Once you reach a group size of millions or billions of people, coordinating it without causing strife between individuals is difficult (or likely impossible), but we still want to do the best we can. So the FDA says “we need to take our time studying this drug” because that’s what they’ve said in the past, and it’s done a good-enough job in the past at keeping all the different parties happy and not fighting each other.
One downside to this is that it breaks in situations that are new or intense. The CDC did a good job with past epidemics but when they applied the same rules to a new pandemic, many of the rules no longer worked. Another downside is that the rules don’t inherently care about fairness, only about satisficing the relevant parties. The ancien regime worked well for France (or at least, for the people the French bureaucracy cared about) until the bourgeoisie and the peasantry became relevant.
That is all to say, bureaucracy exists as a satisficer, primarily guarding unknown Chesterton’s Fences and preventing individuals from making unilateral decisions that affect everyone else.
comment by ryan_b · 2022-03-31T17:24:35.741Z · LW(p) · GW(p)
I have looked into this same series of questions from time to time, and have these three books on my (future) reading list. They all have a good reputation, and all seem to hit different elements of the problem; the first is about individual bureaucrats; the second about whole agencies; the third mostly about culture-level. I have not read any of them yet, however.
Street-Level Bureaucracy, 30th Anniversary Edition: Dilemmas of the Individual in Public Service
by Michael Lipsky (Kindle Edition)
Bureaucracy: What Government Agencies Do And Why They Do It
by James Q. Wilson (Paperback)
The Utopia of Rules: On Technology, Stupidity, and the Secret Joys of Bureaucracy
by David Graeber (Kindle Edition)
comment by tailcalled · 2022-03-30T10:14:03.212Z · LW(p) · GW(p)
Is there any reason you cut away the core part of the comment by John Wentworth that I linked? Do you disagree with it, or something?
So why do bureaucracies (and large organizations more generally) fail so badly?
My main model for this is that interfaces are a scarce resource [? · GW]. Or, to phrase it in a way more obviously relevant to factorization: it is empirically hard for humans to find good factorizations of problems which have not already been found. Interfaces which neatly split problems are not an abundant resource (at least relative to humans' abilities to find/build such interfaces). If you can solve that problem well, robustly and at scale, then there's an awful lot of money to be made.
Also, one major sub-bottleneck (though not the only sub-bottleneck) of interface scarcity is that it's hard to tell [? · GW] who has done a good job on a domain-specific problem/question without already having some domain-specific background knowledge. This also applies at a more "micro" level: it's hard to tell whose answers are best without knowing lots of context oneself.
comment by crl826 · 2022-04-23T22:51:11.685Z · LW(p) · GW(p)
I believe you've said it.
If bureaucracies generally do not get shut down, and individuals generally do not lose their jobs, then they can have inconvenient hours at offices in inconvenient locations. They can make lots of rules and forms that make life difficult for the very people that they serve. Even if no bureaucrat maliciously wants to make things difficult for anyone, in the absence of forces that weed out such inconveniences, they will only ever increase in prevalence.
I'll pull from my comment on your original article (written after you published both of these).
Politicians certainly rail against bureaucracies, but off the top of my head, I'm not aware of any bureaucracy that had its budget or its power cut.
Even in the places where "defund the police" got some traction, it was generally accounting tricks. In many cases they ended up having funding restored shortly after, or funding simply came from other sources.
My point being, it's not at all obvious to me that there are actually repercussions for swollen, mis-managed bureaucracies. But I would very much love to be wrong.
If you model bureaucracies as ROI-maximizers (getting the max reward for least effort) that can never be shut down....that seems to explain everything to me.
comment by TLW · 2022-03-30T04:15:34.025Z · LW(p) · GW(p)
And while the median voter is a statistical construct and not a precisely identifiable figure, nevertheless we can see a politician do something and suppose that it preserves the structure of what the middle of their voters tend to vote for,
Nit: the optimum for a politician to target is often not the median/middle person who voted for them.
Consider a toy example: how much to spend on widgets. $0-$4,000. Every person has an opinion, uniformly distributed across the entire range. Assume that voters vote for the candidate who is closest to their target.
If you have two candidates, the optimums are not $1,000 and $3,000. It's for one candidate to be $2,000-, and the other to be $2,000+.
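A quick numerical check of this toy example (my own sketch, assuming a uniform electorate and voters who simply pick the closest candidate): whichever candidate moves toward $2,000 steals votes, so the middle is where both end up.

```python
def vote_share(a, b, lo=0, hi=4000):
    """Share of a uniform [lo, hi] electorate that is closer to candidate a than to b."""
    if a == b:
        return 0.5
    cutoff = (a + b) / 2  # voters below the midpoint prefer the lower-positioned candidate
    share_below = (cutoff - lo) / (hi - lo)
    return share_below if a < b else 1 - share_below

print(vote_share(1000, 3000))  # 0.50: a tie, but not an equilibrium
print(vote_share(1999, 3000))  # ~0.62: moving toward $2,000 steals votes
print(vote_share(2000, 2001))  # ~0.50: both candidates get pushed to the middle
```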