Steelmanning Inefficiency
post by Stuart_Armstrong · 2014-07-03
When considering writing a hypothetical apostasy or steelmanning an opinion I disagreed with, I looked around for something worthwhile, both for me to write and others to read. Yvain/Scott has already steelmanned Time Cube, which cannot be beaten as an intellectual challenge, but probably didn't teach us much of general use (except in interesting dinner parties). I wanted something hard, but potentially instructive.
So I decided to steelman one of the anti-sacred cows (sacred anti-cows?) of this community, namely inefficiency. It was interesting to find that it was a little easier than I thought; there are a lot of arguments already out there (though they generally don't come out explicitly in favour of "inefficiency"), it was a question of collecting them, stretching them beyond their domains of validity, and adding a few rhetorical tricks.
The strongest argument
Let's start strong: efficiency is the single most dangerous thing in the entire universe. Then we can work down from that:
A superintelligent AI could go out of control and optimise the universe in ways that are contrary to human survival. Some people are very worried about this; you may have encountered them at some point. One big problem seems to be that there is no such thing as a "reduced impact AI": if we give a superintelligent AI a seemingly innocuous goal such as "create more paperclips", then it would turn the entire universe into paperclips. Even if it had a more limited goal such as "create X paperclips", then it would turn the entire universe into redundant paperclips, methods for counting the paperclips it has, or methods for defending the paperclips it has - all because these massive transformations allow it to squeeze just a little bit more expected utility from the universe.
The problem is one of efficiency: of always choosing the maximal outcome. The problem would go away if the AI could be content with almost accomplishing its goal, or with being almost certain that its goal was accomplished. Under those circumstances, "create more paperclips" could be a viable goal. It's only because a self-modifying AI drives towards efficiency that we have the problem in the first place. If the AI accepted being inefficient in its actions, even a little bit, the world would be much safer.
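The maximiser/satisficer contrast can be sketched in a few lines of code. This is a toy illustration only: the utility function, the action list, and the "good enough" threshold are all invented for the example, not taken from any actual AI design.

```python
# Toy contrast between a maximiser and a satisficer.
# All quantities are invented for illustration.

def utility(paperclips_made, resources_consumed):
    """Invented toy utility: more paperclips is always slightly better."""
    return paperclips_made - 0.001 * resources_consumed

def maximiser(actions):
    """Always picks the action with the highest utility, however extreme."""
    return max(actions, key=lambda a: utility(*a))

def satisficer(actions, threshold):
    """Stops at the first cheap action that is 'good enough'."""
    for a in sorted(actions, key=lambda a: a[1]):  # try low-resource actions first
        if utility(*a) >= threshold:
            return a
    return maximiser(actions)  # fall back if nothing clears the bar

# (paperclips made, resources consumed); the last option
# "turns the universe into paperclips"
actions = [(10, 5), (100, 80), (10**9, 10**8)]

print(maximiser(actions))                # the extreme, universe-consuming option
print(satisficer(actions, threshold=9))  # the modest option that suffices
```

The point is structural: the maximiser always ends up at the most extreme option available, because it squeezes out the last drop of utility, while the satisficer stops at the first modest option that clears the bar.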
So the first strike against efficiency is that it's the most likely thing to destroy the world, humanity, and everything of worth and value in the universe. This could possibly give us some pause.
The measurement problem
The principal problem with efficiency is the measurement problem. In order to properly maximise efficiency, we have to measure how well we're doing. So we have to construct some system of measurement, and then we maximise that.
And the problem with formal measurement systems is that they're always imperfect. They're almost never exactly what we really want to maximise. First of all, they're constructed from the map, not the territory, so they depend on us having a perfect model of reality (little-known fact: we do not, in fact, have a perfect model of reality). This can have dramatic consequences - see, for instance, the various failures of central planners (in governments and in corporations) when their chosen measurement scale turned out not to correspond with what they truly wanted.
Such failures can happen if we mix up correlation and causation - sadness cannot be prevented by banning frowns. But they also happen when a true causation stops being true in new circumstances - exercise can prevent sadness, but only up to a point. Each component of the measurement scale has a "domain of validity", a set of circumstances in which it corresponds truly to something desirable. Except that we don't know the domain of validity ahead of time, we don't know how badly it fails outside that domain, and we have only a very hazy and approximate impression of what "desirable" is in the first place.
On that last point, there's often a mixup between instrumental and terminal goals. Many things that are seen as "intrinsically valuable" also have great instrumental advantages (eg freedom of speech, democracy, freedom of religion). As we learn, we may realise that we've overestimated the intrinsic value of that goal, and that we'd be satisfied with the instrumental advantages. This can be best illustrated by looking at the past: there were periods when "honour", "reputation", or "being a man of one's word" were incredibly important and valuable goals. With the advent of modern policing, contract law, and regulations, this is far less important, and a once-critical terminal goal has been reduced to a slightly desirable human feature.
That was just a particular example of the general point that moral learning and moral progress become impossible once a measurement system has been fixed. So we had better get it perfect the first time, or we're going in the wrong direction. And - I hope I'm not stretching your credulity too far here - we won't get it perfect the first time. Even if we allow a scale to be updated as we go along, note that this updating is not happening according to efficiency criteria (we don't have a meta-scale that provides the efficient way of updating value scales). So the most important part of safe efficiency comes from non-efficient approaches.
The proof of the imperfection of measurement systems can be found by looking through the history of philosophy: many philosophers have come up with scales of value that they thought were perfect. Then these were subjected to philosophical critiques that pointed out certain pathologies (the repugnant conclusion! the levelling-down objection! 10^100 variants of the trolley problem!). The systems' creators could choose to accept these pathologies into their systems, but they generally hadn't thought of them beforehand. Thus any formal measurement system will contain unplanned-for pathologies.
Most critically, what cannot be measured (or what can only be measured badly) gets shunted aside. GDP, for instance, is well known to correspond poorly with anything of value, yet it's often targeted because it can be measured much better than things we do care about, such as the happiness and preference satisfaction of individual humans. So the process of building a scale introduces uncountable distortions.
So efficiency relies on maximising a formal measurement system, while we know that maximising every single past formal system would have been a disaster. But don't worry - we've certainly got it right, this time.
Inefficient efficiency implementation
Once the imperfect, simplified, and pathology-filled measurement system has been decided upon, then comes the question of efficiently maximising it. We can't always measure exactly each component of the system, so we'll often have to approximate or estimate the inputs - adding yet another layer of distortion.
More critically, if the task is hard, it's unlikely that one person can implement it on their own. So the system of measurement must pass out of the hands of those who designed it, those who are aware of (some of) its limitations, to those who have nothing but the system to go on. They'll no doubt misinterpret some of it (adding more distortions), but, more critically, they're likely to implement it blindly, without understanding what it's for. This might be because they don't understand it, but the most likely reason is that the incentives are misaligned: they are rewarded for efficiently maximising the measurement system, not the underlying principle. The purpose of the initial measurement system has been lost.
And it's not just that institutions tend to have bad incentives (which is a given), it's that any formal measurement system is exceptionally likely to produce bad incentives. This is because it offers a seemingly objective measure of what must be optimised, so the temptation is exceptionally strong to just use the measure, and forget about its subtleties. This reduces performance to an exercise in box-ticking, "teaching to the test", and other equivalents. There's no use protesting that this was not intended: it's a general trend for all formal measurement systems, when actually implemented in an organisation staffed by actual humans.
Indeed, Campbell's law (or Goodhart's law) revolves around this issue: when a measure becomes a target, it ceases to be a good measure. A formal standard of efficiency will not succeed in its goals, as it will become corrupted in the process of implementation. If it were easy to implement efficiency in a way that offered genuine gains, Campbell's law would not exist. This correlates strongly with experience as well: how often have efficiency improvements achieved their stated goals without causing unexpected losses? Almost never, whether they are implemented by governments or companies, individuals or institutions. Efficiency gains are never as strong as estimated ahead of time.
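Goodhart's law is easy to reproduce in a toy simulation. Both functions below are invented for illustration: a proxy metric (think test scores) that rises without limit, and a true value (think actual teaching quality) that tracks the proxy within a small domain of validity and then collapses.

```python
# A minimal numerical sketch of Goodhart's law; both functions are invented.

def true_value(effort):
    """Invented: real benefit rises at first, then declines
    (e.g. teaching quality once everything is teaching-to-the-test)."""
    return effort - 0.1 * effort ** 2

def proxy_metric(effort):
    """Invented: the measured score just keeps going up (e.g. test results)."""
    return effort

efforts = range(0, 21)

# An optimiser targeting the proxy pushes effort as high as allowed...
best_by_proxy = max(efforts, key=proxy_metric)
# ...while the true optimum sits much lower.
best_by_value = max(efforts, key=true_value)

print(best_by_proxy, true_value(best_by_proxy))  # maximal effort, negative true value
print(best_by_value, true_value(best_by_value))  # modest effort, the actual optimum
```

Within the proxy's domain of validity (low effort), pushing the proxy up really does push the true value up; the divergence only appears when the optimiser leaves that domain, which is exactly where an optimiser ends up.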
A further problem is that once the measurement system has been around for some time, it starts to become the standard. Rather than GDP/unemployment/equality being a proxy for human utility, it becomes a target in its own right, with many people coming to see it as a goal worth maximising/minimising in its own right. Not only has the implementation been mangled, but that mangling has ended up changing future values in pernicious directions.
A fully general counterargument
Fortunately, those espousing efficiency have a fully general counterargument. If efficiency doesn't work, the answer is... more efficiency! If efficiency falls short, then we must estimate the amount by which it falls short, analyse the implementation, improve incentives, etc... Do you see what's going on there? The solution to a badly implemented system of measurement is to add extra complications to the system, to measure even more things, to add more constraints, more boxes to tick.
The beauty of the argument is that it cannot be wrong. If anything fails, then you weren't efficient enough! Plug the whole thing into a new probability distribution, go up one level of meta if you need to, estimate the new parameters, and you're off again. Efficiency can never fail, it can only be failed. It's an infinite regress that never leads to questioning its foundational assumptions.
Efficiency, management, and undermining things that work
Another sneaky trick that efficiency proponents use is to sneak in any improvement under the banner of efficiency. Did some measure fail to improve outcomes? Then bring in some competent manager to oversee its implementation, with powers to put things right. If this fails, then more efficiency is needed (see above); maybe we should start estimating the efficiency of management? If this succeeds, then this is a triumph of efficiency.
But it isn't. It's likely a triumph of management. Most likely, there was no complicated cost-benefit estimate showing that good management would improve things; that it does is simply a generally known fact. There are many sensible procedures that can bring great good to organisations, or improve implementations; generally speaking, the effects of these procedures can't be properly measured, but we do them anyway. This is a triumph of anti-efficiency, not of efficiency.
In fact, efficiency often worsens things in organisations, by undermining the un-measured advantages that were causing them to function smoothly (see also the Burkean critique, below). If an organisational culture is destroyed by adherence to rigid objectives, then that culture is lost, no matter how many disasters the objectives end up causing in practice. Consider, for instance, recognition-primed decision making, used successfully by naval ship commanders, tank platoon leaders, fire commanders, design engineers, offshore oil installation managers, infantry officers, commercial aviation pilots, and chess players. By its nature, it is inefficient (it doesn't have a proper measure to maximise, it doesn't compare enough options, etc...). So we have great performance, through inefficient means.
Yet if we insisted on efficiency (by, for instance, getting each of those professionals to fill out detailed paperwork justifying their decisions, or giving them more training in classical decision theory), we would dramatically reduce performance. As more and more experts got trapped in the new way of thinking (or of accounting for their thinking), the old expertise would wither away from disuse, and the performance of the whole field would degrade.
Everything else being equal...
Efficiency advocates have a few paradigmatic examples of efficiency. For instance, they set up a situation in which you can save one child for $100, or two for $50 each, conclude you should do the second, and then pat themselves on the back for being rational and kind. Fair enough.
But where in the world are these people who are standing in front of rows of children with $100 or $50 cures in their hands, seriously considering going for the first option? They don't exist; instead the problem is built by assuming "everything else being equal". But everything else is not equal; if it were, there wouldn't be a debate. It's precisely because so many things are not equal that we can argue that, say, curing AIDS in a Ugandan ten-month-old whose mother was raped is not directly comparable to curing malaria in two Brazilian teenagers who picked it up on a trip abroad. This is a particularly egregious type of measurement problem: only one aspect of the situation (maybe the number of lives saved, maybe the years of life gained, maybe the quality-adjusted years of life gained... notice how the measure is continually getting more complex?) is deemed worthy of consideration. And all other aspects of the problem are deemed unworthy of measurement, and thus ignored. And the judgement of those closest to the problem - those with the best appreciation of the whole issue - is suspect, overruled by the abstract statistics decided upon by those far away.
Efficiency for evul!
Now, we might want efficiency in our own pet cause, but we're probably pretty indifferent to efficiency gains for causes we don't care about, and we'd be opposed to efficiency gains to causes that are antithetical to our own. Or let's be honest, and replace "antithetical" with "evil". There are, for instance, many groups dedicated to building AGIs with (in the view of many on this list) a dramatic lack of safeguards. We certainly wouldn't want them to increase their efficiency! Especially since it's quite likely that they would be far more successful at increasing their "build an AGI" efficiency than their safety efficiency.
Thus, even if efficiency worked well, it is very debatable as to whether we want it generally spread. Just like in a prisoner's dilemma, we might want increased efficiency for us, but not for others; and the best equilibrium might be that we don't increase our own efficiency, and instead accept the status quo. If opponents suddenly start breaking out the efficiency guns, we can always follow suit and retaliate.
At this point, people might argue that efficiency, like science and knowledge itself, is a neutral force, that can be used for good or evil, and that how it is used is a separate problem. But I hope that people on this list have a slightly smarter understanding of the situation than that. There are such things as information hazards. If someone publishes detailed plans for building atomic weapons or weaponising anthrax or bird flu, we don't buy the defence that "they're just providing information; it's up to others to decide how it is used". Similarly, we can't go around promoting a culture of efficiency without a clear view of the entire consequences of such a culture on the world.
In practice it seems that a general lack of efficiency culture could be of benefit for everyone. This was the part of the essay where I was going to break out the alienation argument, and start bringing out the Marxist critiques. But that proved to be unnecessary. We can stick with Adam Smith:
The man whose whole life is spent in performing a few simple operations, of which the effects are perhaps always the same, or very nearly the same, has no occasion to exert his understanding or to exercise his invention in finding out expedients for removing difficulties which never occur. He naturally loses, therefore, the habit of such exertion, and generally becomes as stupid and ignorant as it is possible for a human creature to become. The torpor of his mind renders him not only incapable of relishing or bearing a part in any rational conversation, but of conceiving any generous, noble, or tender sentiment, and consequently of forming any just judgement concerning many even of the ordinary duties of private life... But in every improved and civilized society this is the state into which the labouring poor, that is, the great body of the people, must necessarily fall, unless government takes some pains to prevent it.
An Inquiry into the Nature and Causes of the Wealth of Nations (1776), Adam Smith
This is not unexpected. There are some ways of increasing efficiency that also improve the experience of the employees (Google seems to manage that). But generally speaking, efficiency is targeted at some measure that is not employee satisfaction (most likely the target is profit). When you change something without optimising for feature X, it is likely that feature X will do worse. This is partially because feature X is generally carefully constructed and low entropy, so any random change is pernicious, and partially because of limited resources: effort that goes away from X will reduce X. At the lower-to-mid level of the income scale, it seems that this pattern has been followed exactly: more and more jobs are becoming lousy, even as economic efficiency is rising. Indeed, I would argue that they are becoming lousy precisely because economic efficiency is rising. The number of low-income jobs with dignity is in sharp decline.
The contrast can be seen in the difference between GDP (easy to measure and optimise for) and happiness (hard to measure and optimise for). The modern economy has been transformed by efficiency drives, doubling every 35 years or so. But it's clear that human happiness has not been doubling every 35 years or so. The cult of efficiency has resulted in a lot of effort being put in inefficient directions in terms of what we truly value, with perverse results on the lower incomes. A little less efficiency, or at least a halt to the drive for ever greater efficiency, is certainly called for.
The proof can be seen in the status of different jobs. High status employees are much more likely to have flexible schedules or work patterns, to work without micromanaging superiors, and so on. Thus, as soon as they get enough power, people move away from imposed efficiency and fight to defend their independence and their right to not constantly be measured and directed. Often, this turns out to be better for their employers or clients as well. Reducing the drive towards efficiency results in outcomes that are better for everyone.
Self-improvement, fun, and games
Let's turn for a moment from the large scale to the small. Would you want more efficiency in your life? Many people on this list have made (or claimed) great improvements through improved efficiency. But did they implement these after a careful cost-benefit analysis, keeping careful track of their effects, and only making the changes that could be strictly justified? Of course not: most of the details of the implementation were worked out through personal judgement, honed through years of (inefficient) experience (no one tried hammering rusty nails into their hands to improve concentration - and not because of a "Rusty Nail self-hammering and concentration: a placebo controlled randomised trial" publication).
How do we know most of the process wasn't efficiency based? For the same reason that it's so hard to teach a computer to do anything subtle: most of what we do is implemented by complicated systems we do not consciously control. "Efficiency" is a tiny conscious tweak that we add to a process that relies on massive unconscious processes, as well as skills and judgements that we developed in inefficient ways. Those who have tried to do more than that - for instance, those who have tried to use an explicit utility function as their decision criterion - have generally failed.
For example, imagine you were playing a game, and wanted to optimise this. One solution is to program a complicated algorithm to spit out the perfect play, guaranteeing your victory. But this would be boring; the game is no longer a challenge. Actually, what we wanted to optimise is fun. We could try to measure this (see the fully general counterargument above), but the measure would certainly fail, as we forgot to include challenge, or camaraderie, or long term re-playability, or whatever. It's the usual problem with efficiency - we just can't list all the important factors. And nobody really tries. Instead, we can do what we always do: try different things (different games, different ways of playing, etc...), get a feel for what works for us, and gradually improve our playing experience, without needing efficiency criteria. Once again, this demonstrates that great improvements are possible, without them being "efficiency gains".
In many people, there is a certain vicarious pleasure to see some project fail, if it was over-efficient, over-marketed, perfectly-designed-to-exploit-common-features. Bland and targeted Hollywood movies are like this, as are some sports teams; the triumph of the quirky, of the spontaneous, of the unexpected underdog, is something we value and enjoy. By definition, a complicated system that measures the "allowable spontaneity and underdog triumph" is not going to give us this enjoyment. Efficiency can never capture our many anti-efficiency desires, meaning it can never capture our desires, and optimising it would lose us much of what we value.
Burke, society, and co-evolution
Let's get Burkean. One of Burke's key insights was that the power and effectiveness of a given society are not found only in its explicit rules and resources. A lot of the strength is in the implicit organisation of society - in institutional knowledge and traditions. Constitutional rules about freedom of expression are of limited use without a strong civil society that appreciates freedom of expression and pushes back at attempts to quash or undermine it. The legal system can't work efficiently without a culture of law-abiding among most citizens. People have set up their lives, their debts, their interactions, in ways that best benefit themselves, given the social circumstances they find themselves in. Thus we should suspect that there is a certain "wisdom" in the way that society has organised itself; a certain resilience and adaptation. Countries like the US have so many laws we don't know how to count them; nevertheless, the country continues to function because we have reached an understanding as to which laws are enforced in which circumstances (so that the police investigate murder with more assiduity than suspected jaywalking, for instance). Without this understanding, neither the population nor the police could do anything, paralysed by uncertainty as to what was allowed and what wasn't. And this understanding is an implicit, decentralised object: it's written down nowhere, but is contained in people's knowledge and expectations across the country.
Sweep away all these social structures in the name of efficient change, and a lot of value is destroyed - perhaps permanently. Transform the teaching profession into a chase for box ticking and test results, and the culture of good teaching is slowly eradicated, never to return even if the changes are reversed. Consider bankers, for instance. There have been legal changes in the last decades, but the most important ones were cultural, transforming banking from a staid and dull profession into a high risk casino (and this change was often justified in the name of economic efficiency).
The social capital of societies is being drained by change, and the faster the change (thus, the more strict we are at pursuing efficiency), the less time it has to reconstitute itself. Changing absolutely everything in the name of higher ideals (as happened in early communist Russia) is a recipe for disaster.
Having been Marxist/Adam Smithist before, let's also be social conservative for a moment. Drives for efficiency, whether direct or indirect through capitalistic competition, tend to undermine the standard structures of society. Even without the Burkean argument above, these structures provide some value to many people. Some people appreciate being in certain hierarchies, in having society organised a particular way, in the stability of relationships within it. When you create change, some of these structures are destroyed, and the new structures almost never provide equal value - at least at first. Even if you disagree with the social conservative values here, they are genuine values held by genuine people, who genuinely suffer when these structures are destroyed. And we all share these values to some extent: humans are risk averse, so that if you exchanged the positions of the average billionaire and the average beggar, the lost value for the billionaire would dwarf the gain for the beggar. A proposition to randomise the position of people in society would never pass by majority vote.
Humans are complicated beings, with complicated desires shaped by the society we find ourselves in. Our desires, our capital (of all kinds), our habits, all these have co-evolved with the social circumstances we find ourselves in. Similarly, our formal and informal institutions have co-evolved with the technological, social and legal facts of our society. As has often been demonstrated, if you take co-evolved traits and "improve" one of them, the result can often be disastrous. But efficiency seeks to do just that. You can best make change by making it less efficient, by slowing it down, and letting society and institutions catch up and adapt to the transformations.
The case for increasing inefficiency
So far, we have seen strong arguments for avoiding an increase in efficiency; but this does not translate into a case for increased inefficiency.
But it seems that there is such a case. First of all, we must avoid a misleading status quo bias. It is extraordinarily unlikely that we are currently at the "optimum level of efficiency". Thus, if efficiency is suspect, it's just as likely that we would need to decrease it as to increase it.
But we can make five positive points in favour of increased inefficiency. The first is that increased inefficiency gives more scope for developing the cultural and social structures that Burke valued and that blunt the sharp edge of change. Such structures can never evolve if everything one does is weighed and measured.
Secondly, there is the efficiency-resilience tradeoff. Efficient systems tend to be brittle, with every effort bent towards optimising, and none left in reserve (as it is a cardinal sin to leave any resource under-utilised). Thus when disaster strikes, there is little left over to cope, and the carefully optimised, intricate machinery is at risk of collapsing all at once. A more inefficient system, on the other hand, has more reserves, more extras to draw upon, more room to adapt.
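The tradeoff can be made concrete with a toy capacity model (all the numbers are invented): a system sized exactly to normal demand looks "efficient" in ordinary times, but has nothing in reserve when a shock arrives.

```python
# Toy efficiency-resilience model; every number here is invented.

def shortfall(capacity, demands):
    """Total unmet demand across a sequence of periods."""
    return sum(max(0, d - capacity) for d in demands)

normal_demand = [100] * 9
shock = normal_demand + [150]   # one disaster period at the end

efficient_capacity = 100        # sized exactly to normal demand, zero slack
resilient_capacity = 130        # 30% slack, carried as apparent waste

print(shortfall(efficient_capacity, shock))   # large unmet demand in the crisis
print(shortfall(resilient_capacity, shock))   # much smaller unmet demand
```

The 30% slack looks like pure waste during the nine normal periods, and only pays for itself in the tenth - which is exactly why a measurement system focused on utilisation will always recommend cutting it.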
Thirdly, increased inefficiency can allow a greater scope for moral compromises. Different systems of morality can differ strongly on what the best course of action is; that means that in an "efficient" society, the standard by which efficiency is measured is the target of an all out war. Gain control of that measure of efficiency, and you have gained control of the entire moral framework. Less efficient societies allow more compromise, by leaving aside many issues around which there is no consensus: since we know that the status quo has a large inertia, the fight to control the direction of change is less critical. We generally see it as a positive thing that political parties lack the power to completely reorganise society every time they win an election. Similarly, a less efficient society might be a less unequal society, since it seems that gains in strongly efficient societies are distributed much more unevenly than in less.
Fourthly, inefficiency adds more friction to the system, and hence more stability. People value that stability, and a bit more friction in many domains - such as financial trades - is widely seen as desirable.
Finally, inefficiency allows more exploration, more focus on speculative ideas. In a world where everything must reach the same rate of return, and do so quickly, there is much less tolerance of variety or difference in approaches. Long term R&D investments, for one, are made principally by governments and by monopolies, secure in their positions. Blue sky thinking and tinkering are luxuries that efficiency seldom tolerates.
I hope you've taken the time to read this. Enjoyed it. Maybe while taking a bath, or listening to soft music. Savoured it, or savoured the many mistakes within it. That it has added something to your day, to your wisdom and understanding. That you started it at one point, then grew bored, then returned later, or not. But, above all else, that you haven't zoomed through it, seeking key ideas, analysing them and correcting them in a spirit of... you know. That thing. That "e"-word.
I learnt quite a few things in the course of writing this apostasy, which was the point. Most valuable insight: the worth of "efficiency" is critically dependent on what improvements get counted under that heading - and it's not always clear, at all. We do have a general tendency to label far too many improvements as efficiency gains. If someone smart applies efficiency and gets better, was the smartness or the efficiency the key? I also think the "exploration vs exploitation" point and the various problems with strict models and blind implementation are very valid, including the effect measurement can have on expertise.
I won't critique my own apostasy; I think others will learn more from the challenge of taking it apart themselves. As to whether I believe this argument - it's an apostasy, so... Of course I do. In some ways: I just found the bits in me that agreed with what I was writing, and gave them free rein for once. Though I had to dig very deep to find some of those bits (eg social conservatism).
EDIT: What, no-one has taken the essay apart yet? Please go ahead!