Status Regulation and Anxious Underconfidence
post by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2017-11-16T19:35:00.533Z
Follow-up to: Against Modest Epistemology
I’ve now given my critique of modesty as a set of explicit doctrines. I’ve tried to give the background theory, which I believe is nothing more than conventional cynical economics, that explains why so many aspects of the world are not optimized to the limits of human intelligence in the manner of financial prices. I have argued that the essence of rationality is to adapt to whatever world you find yourself in, rather than to be “humble” or “arrogant” a priori. I’ve tried to give some preliminary examples of how we really, really don’t live in the Adequate World where constant self-questioning would be appropriate, the way it is appropriate when second-guessing equity prices. I’ve tried to systematize modest epistemology into a semiformal rule, and I’ve argued that the rule yields absurd consequences.
I was careful to say all this first, because there’s a strict order to debate. If you’re going to argue against an idea, it’s bad form to start off by arguing that the idea was generated by a flawed thought process, before you’ve explained why you think the idea itself is wrong. Even if we’re refuting geocentrism, we should first say how we know that the Sun does not orbit the Earth, and only then pontificate about what cognitive biases might have afflicted geocentrists. As a rule, an idea should initially be discussed as though it had descended from the heavens on a USB stick spontaneously generated by an evaporating black hole, before any word is said psychoanalyzing the people who believe it. Otherwise I’d be guilty of poisoning the well, also known as Bulverism.
But I’ve now said quite a few words about modest epistemology as a pure idea. I feel comfortable at this stage saying that I think modest epistemology’s popularity owes something to its emotional appeal, as opposed to being strictly derived from epistemic considerations. In particular: emotions related to social status and self-doubt.
Even if I thought modesty were the correct normative epistemology, I would caution people not to confuse the correct reasoning principle with those particular emotional impulses. You’ll observe that I’ve written one or two things above about how not to analyze inadequacy, and mistakes not to make. If we’re going to take modest epistemology seriously as a basic reasoning mode, technique, or principle, then we hear far too little from its advocates about its potential misuses and distortions.
And I’ll now try to describe the kinds of feelings that I think modesty’s appeal rests on. Because I’ve come to appreciate increasingly that human beings are really genuinely different from one another, you shouldn’t be surprised if it seems to you like this is not how you work. I claim nonetheless that many people do work like this.
i.
Let’s start with the emotion—not restricted to cases of modesty, just what I suspect to be a common human emotion—of “anxious underconfidence.”
As I started my current writing session, I had returned just ten minutes earlier from the following conversation with someone looking for a job in the Bay Area that would give them relevant experience for running their own startup later:
eliezer: Are you a programmer?
aspiring founder: That’s what everyone asks. I’ve programmed at all of my previous jobs, but I wouldn’t call myself a programmer.
eliezer: I think you should try asking (person) if they know of any startups that could use non-super programmers, and look for a non-doomed startup that’s still early-stage enough that you can be assigned some business jobs and get a chance to try your hand at that without needing to manage it yourself. That might get you the startup experience you want.
aspiring founder: I know how to program, but I don’t know if I can display that well enough. I don’t have a GitHub account. I think I’d have to spend three months boning up on programming problems before I could do anything like the Google interview—or maybe I could do one of the bootcamps for programmers—
eliezer: I’m not sure if they’re aimed at your current skill level. Why don’t you try just one interview and see how that goes before you make any complicated further plans about how to prove your skills?
This fits into a very common pattern of advice I’ve found myself giving, along the lines of, “Don’t assume you can’t do something when it’s very cheap to try testing your ability to do it,” or, “Don’t assume other people will give you a low evaluation when it’s cheap to test that belief.”
I try to be careful to distinguish the virtue of avoiding overconfidence, which I sometimes call “humility,” from the phenomenon I’m calling “modest epistemology.” But even so, when overconfidence is such a terrible scourge according to the cognitive bias literature, can it ever be wise to caution people against underconfidence?
Yes. First of all, overcompensation after being warned about a cognitive bias is also a recognized problem in the literature; and the literature on that talks about how bad people often are at determining whether they’re undercorrecting or overcorrecting.1 Second, my own experience has been that while, yes, commenters on the Internet are often overconfident, it’s very different when I’m talking to people in person. My more recent experience seems more like 90% telling people to be less underconfident, to reach higher, to be more ambitious, to test themselves, and maybe 10% cautioning people against overconfidence. And yes, this ratio applies to men as well as women and nonbinary people, and to people considered high-status as well as people considered low-status.
Several people have now told me that the most important thing I have ever said to them is: “If you never fail, you’re only trying things that are too easy and playing far below your level.” Or, phrased as a standard Umeshism: “If you can’t remember any time in the last six months when you failed, you aren’t trying to do difficult enough things.” I first said it to someone who had set themselves on a career track to becoming a nurse instead of a physicist, even though they liked physics, because they were sure they could succeed at becoming a nurse.
I call this “anxious underconfidence,” and it seems to me to share a common thread with social anxiety. We might define “social anxiety” as “experiencing fear far in excess of what a third party would say are the reasonably predictable exterior consequences, with respect to other people possibly thinking poorly of you, or wanting things from you that you can’t provide them.” If someone is terrified of being present at a large social event because someone there might talk to them and they might be confused and stutter out an answer—when, realistically, this at worst makes a transient poor impression that is soon forgotten because you are not at the center of the other person’s life—then this is an excess fear of that event.
Similarly, many people’s emotional makeup is such that they experience what I would consider an excess fear—a fear disproportionate to the non-emotional consequences—of trying something and failing. A fear so strong that you become a nurse instead of a physicist because that is something you are certain you can do. Anything you might not be able to do is crossed off the list instantly. In fact, it was probably never generated as a policy option in the first place. Even when the correct course is obviously to just try the job interview and see what happens, the test will be put off indefinitely if failure feels possible.
If you’ve never wasted an effort, you’re filtering on far too high a required probability of success. Trying to avoid wasting effort—yes, that’s a good idea. Feeling bad when you realize you’ve wasted effort—yes, I do that too. But some people slice off the entire realm of uncertain projects because the prospect of having wasted effort, of having been publicly wrong, seems so horrible that projects in this class are not to be considered.
This is one of the emotions that I think might be at work in recommendations to take an outside view on your chances of success in some endeavor. If you only try the things that are allowed for your “reference class,” you’re supposed to be safe—in a certain social sense. You may fail, but you can justify the attempt to others by noting that many others have succeeded on similar tasks. On the other hand, if you try something more ambitious, you could fail and have everyone think you were stupid to try.
The mark of this vulnerability, and the proof that it is indeed a fallacy, would be not testing the predictions that the modest point of view makes about your inevitable failures—even when they would be cheap to test, and even when failure doesn’t lead to anything that a non-phobic third party would rate as terrible.
ii.
The other emotions I have in mind are perhaps easiest to understand in the context of efficient markets.
In humanity’s environment of evolutionary adaptedness, an offer of fifty carrots for a roasted antelope leg reflects a judgment about roles, relationships, and status. This idea of “price” is easier to grasp than the economist’s notion; and if somebody doesn’t have the economist’s very specific notion in mind when you speak of “efficient markets,” they can end up making what I would consider an extremely understandable mistake.
You tried to explain to them that even if they thought AAPL stock was underpriced, they ought to question themselves. You claimed that they couldn’t manage to be systematically right on the occasions where the market price swung drastically. Not unless they had access to insider information on single stocks—which is to say, they just couldn’t do it.
But “I can’t do that. And you can’t either!” is a suspicious statement in everyday life. Suppose I try to juggle two balls and succeed, and then I try to juggle three balls and drop them. I could conclude that I’m bad at juggling and that other people could do better than me, which comes with a loss of status. Alternatively, I could heave a sad sigh as I come to realize that juggling more than two balls is just not possible. Whereupon my social standing in comparison to others is preserved. I even get to give instruction to others about this hard-won life lesson, and smile with sage superiority at any young fools who are still trying to figure out how to juggle three balls at a time.
I grew up with this fallacy, in the form of my Orthodox Jewish parents smiling at me and explaining how when they were young, they had asked a lot of religious questions too; but then they grew out of it, coming to recognize that some things were just beyond our ken.
At the time, I was flabbergasted at my parents’ arrogance in assuming that because they couldn’t solve a problem as teenagers, nobody else could possibly solve it going forward. Today, I understand this viewpoint not as arrogance, but as a simple flinch away from a painful thought and toward a pleasurable one. You can admit that you failed where success was possible, or you can smile with gently forgiving superiority at the youthful enthusiasm of those who are still naive enough to attempt to do better.
Of course, some things are impossible. But if one’s flinch response to failure is to perform a mental search for reasons one couldn’t have succeeded, it can be tempting to slide into false despair.
In the book Superforecasting, Philip Tetlock describes the number one characteristic of top forecasters, who show the ability to persistently outperform professional analysts and even small prediction markets: they believe that outperformance in forecasting is possible, and work to improve their performance.2
I would expect this to come as a shock to people who grew up steeped in academic studies of overconfidence and took away the lesson that epistemic excellence is mostly about accepting your own limitations.3 But I read that chapter of Superforecasting and laughed, because I was pretty sure from my own experience that I could guess what had happened to Tetlock: he had run into large numbers of respondents who smiled condescendingly at the naive enthusiasm of those who thought that anyone can get good at predicting future events.4
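For concreteness about what “outperformance” means here: Tetlock’s forecasting tournaments scored predictions with proper scoring rules such as the Brier score, which rewards assigning high probabilities to things that happen and low probabilities to things that don’t. Below is a minimal sketch in Python; the forecasts and outcomes are invented for illustration, not taken from Tetlock’s data.

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between probabilistic forecasts and 0/1 outcomes.
    Lower is better; always answering 50% scores 0.25."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical example: a calibrated-but-bold forecaster versus a timid one,
# scored on the same five events (1 = the event happened, 0 = it didn't).
outcomes = [1, 0, 1, 1, 0]
bold = [0.9, 0.1, 0.8, 0.7, 0.2]   # commits to strong probabilities
timid = [0.6, 0.4, 0.6, 0.6, 0.4]  # hedges everything toward 50%

print(brier_score(bold, outcomes))   # ~0.038: justified confidence is rewarded
print(brier_score(timid, outcomes))  # ~0.160: reflexive hedging costs accuracy
```

On a proper scoring rule, retreating toward 50% on everything is a real, measurable loss; underconfidence is penalized just as surely as overconfidence.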
Now, imagine you’re somebody who didn’t read Superforecasting, but did at least grow up with parents telling you that if they’re not smart enough to be a lawyer, then neither are you. (As happened to a certain childhood friend of mine who is now a lawyer.)
And then you run across somebody who tries to tell you, not just that they can’t outguess the stock market, but that you’re not allowed to become good at it either. They claim that nobody is allowed to master the task at which they failed. Your uncle tripled his savings when he bet it all on GOOG, and this person tries to wave it off as luck. Isn’t that like somebody condescendingly explaining why juggling three balls is impossible, after you’ve seen with your own eyes that your uncle can juggle four?
This isn’t a naive question. Somebody who has seen the condescension of despair in action is right to treat this kind of claim as suspicious. It ought to take a massive economics literature examining the idea in theory and in practice, and responding to various apparent counterexamples, before we accept that a new kind of near-impossibility has been established in a case where the laws of physics seem to leave the possibility open.
Perhaps what you said to the efficiency skeptic was something like:
If it’s obvious that AAPL stock should be worth more because iPhones are so great, then a hedge fund manager should be able to see this logic too. This means that this information will already be baked into the market price. If what you’re saying is true, the market already knows it—and what the market knows beyond that, neither you nor I can guess.
But what they heard you saying was:
O thou, who burns with tears for those who burn,
In Hell, whose fires will find thee in thy turn
Hope not the Lord thy God to mercy teach
For who art thou to teach, or He to learn?5
This again is an obvious fallacy for them to suspect you of committing. They’re suggesting that something might be wrong with Y’s judgment of X (here, the market’s judgment of AAPL’s price), and you’re telling them to shut up because Y knows far better than them, even though you can’t point to any flaws in the skeptic’s suggestion, can’t say anything about the kinds of reasons Y has in mind for believing X, and can’t point them to the information sources Y might be drawing from. And it just so happens that Y is big and powerful and impressive.
If we could look back at the ages before liquid financial markets existed, and record all of the human conversations that went on at the time, then practically every instance in history of anything that sounded like what you said about efficient markets—that some mysterious powerful being is always unquestionably right, though the reason be impossible to understand—would have been a mistake or a lie. So it’s hard to blame the skeptic for being suspicious, if they don’t yet understand how market efficiency works.
What you said to the skeptic about AAPL stock is justified for extremely liquid markets on short-term time horizons, but—at least I would claim—very rarely justified anywhere else. The claim is, “If you think you know the price of AAPL better than the stock market, then no matter how good the evidence you think you’ve found is, your reasoning just has some hidden mistake, or is neglecting some unspecified key consideration.” And no matter how valiantly they argue, no matter how carefully they construct their reasoning, we just smile and say, “Sorry, kid.” It is a final and absolute slapdown that is meant to be inescapable by any mundane means within a common person’s grasp.
Indeed, this supposedly inescapable and crushing rejoinder looks surprisingly similar to a particular social phenomenon I’ll call “status regulation.”
iii.
Status is an extremely valuable resource, and was valuable in the ancestral environment.
Status is also a somewhat conserved quantity. Not everyone can be sole dictator.
Even if a hunter-gatherer tribe or a startup contains more average status per person than a medieval society full of downtrodden peasants, there’s still a sense in which status is a limited resource and letting someone walk off with lots of status is like letting them walk off with your bag of carrots. So it shouldn’t be surprising if acting like you have more status than I assign to you triggers a negative emotion, a slapdown response.
If slapdowns exist to limit access to an important scarce resource, we should expect them to be cheater-resistant in the face of intense competition for that resource.6 If just anyone could find some easy sentences to say that let them get higher status than God, then your system for allocating status would be too easy to game. Escaping slapdowns should be hard, generally requiring more than mere abstract argumentation.
Except that people are different. So not everyone feels the same way about this, any more than we all feel the same way about sex.
As I’ve increasingly noticed of late, and contrary to beliefs earlier in my career about the psychological unity of humankind, not all human beings have all the human emotions. The logic of sexual reproduction makes it unlikely that anyone will have a new complex piece of mental machinery that nobody else has… but absences of complex machinery aren’t just possible; they’re amazingly common.
And we tend to underestimate how different other people are from ourselves. Once upon a time, there used to be a great and acrimonious debate in philosophy about whether people had “mental imagery” (whether or not people actually see a little picture of an elephant when they think about an elephant). It later turned out that some people see a little picture of an elephant, some people don’t, and both sides thought that the way they personally worked was so fundamental to cognition that they couldn’t imagine that other people worked differently. So both sides of the philosophical debate thought the other side was just full of crazy philosophers who were willfully denying the obvious. The typical mind fallacy is the bias whereby we assume most other people are much more like us than they actually are.
If you’re fully asexual, then you haven’t felt the emotion others call “sexual desire”… but you can feel friendship, the warmth of cuddling, and in most cases you can experience orgasm. If you’re not around people who talk explicitly about the possibility of asexuality, you might not even realize you’re asexual and that there is a distinct “sexual attraction” emotion you are missing, just like some people with congenital anosmia never realize that they don’t have a sense of smell.
Many people seem to be the equivalent of asexual with respect to the emotion of status regulation—myself among them. If you’re blind to status regulation (or even status itself) then you might still see that people with status get respect, and hunger for that respect. You might see someone with a nice car and envy the car. You might see a horrible person with a big house and think that their behavior ought not to be rewarded with a big house, and feel bitter about the smaller house you earned by being good. I can feel all of those things, but people’s overall place in the pecking order isn’t a fast, perceptual, pre-deliberative thing for me in its own right.
For many people, I gather that the social order is a reified emotional thing separate from respect, separate from the goods that status can obtain, separate from any deliberative reasoning about who ought to have those goods, and separate from any belief about who consented to be part of an implicit community agreement. There’s just a felt sense that some people are lower in various status hierarchies, while others are higher; and overreaching by trying to claim significantly more status than you currently have is an offense against the reified social order, which has an immediate emotional impact, separate from any beliefs about the further consequences that a social order causes. One may also have explicit beliefs about possible benefits or harms that could be caused by disruptions to the status hierarchy, but the status regulation feeling is more basic than that and doesn’t depend on high-level theories or cost-benefit calculations.
Consider, in this context, the efficiency skeptic’s perspective:
skeptic: I have to say, I'm baffled at your insistence that hedge fund managers are the summit of worldly wisdom. Many hedge fund managers—possibly most—are nothing but charlatans who convince pension managers to invest money that ought to have gone into index funds.
cecie: Markets are a mechanism that allows and incentivizes a single smart participant to spot a bit of free energy and eat it, in a way that aggregates to produce a global equilibrium with no free energy. We don’t need to suppose that most hedge fund managers are wise; we only need to suppose that a tiny handful of market actors are smart enough in each case to have already seen what you saw.
skeptic: I’m not sure I understand. It sounds like what you’re saying, though, is that your faith is not in mere humans, but in some mysterious higher force, the “Market.”
You consider this Market incredibly impressive and powerful. You consider it folly for anyone to think that they can know better than the Market. And you just happen to have on hand a fully general method for slapping down anyone who dares challenge the Market, without needing to actually defend this or that particular belief of the Market.
cecie: A market’s efficiency doesn’t derive from its social status. True efficiency is very rare in human experience. There’s a very good reason that we had to coin a term for the concept of “efficient markets,” and not “efficient medicine” or “efficient physics”: because in those ecologies, not just anyone can come along and consume a morsel of free energy.
If you personally know better than the doctors in a hospital, you can’t walk in off the street tomorrow and make millions of dollars saving more patients’ lives. If you personally know better than an academic field, you can't walk in off the street tomorrow and make millions of dollars filling the arXiv with more accurate papers.
skeptic: I don’t know. The parallels between efficiency and human status relations seem awfully strong, and this “Market moves in mysterious ways” rejoinder seems like an awfully convenient trick.
Indeed, I would be surprised if there weren’t at least some believers in “efficient markets” who assigned them extremely high status and were tempted to exaggerate their efficiency, perhaps feeling a sense of indignation at those who dared to do better. Perhaps there are people who feel an urge to slap down anyone who starts questioning the efficiency of Boomville’s residential housing market.
So be it; Deepak Chopra can’t falsify quantum mechanics by being enthusiastic about a distorted version of it. The efficiency skeptic should jettison their skepticism, and should take care to avoid the fallacy fallacy—the fallacy of taking for granted that some conclusion is false just because a fallacious argument for that conclusion exists.7
I once summarized my epistemology like so: “Try to make sure you’d arrive at different beliefs in different worlds.” You don’t want to think in such a way that you wouldn’t believe in a conclusion in a world where it were true, just because a fallacious argument could support it. Emotionally appealing mistakes are not invincible cognitive traps that nobody can ever escape from. Sometimes they’re not even that hard to escape.
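To unpack “different beliefs in different worlds” in standard notation (this is just the odds form of Bayes’s theorem, nothing beyond the text’s own claim): evidence E can shift you toward hypothesis H only if E is more probable in worlds where H is true than in worlds where it is false.

$$\frac{P(H \mid E)}{P(\neg H \mid E)} = \frac{P(E \mid H)}{P(E \mid \neg H)} \cdot \frac{P(H)}{P(\neg H)}$$

If P(E | H) = P(E | ¬H), the likelihood ratio is 1 and observing E moves nothing; a procedure that would output the same belief no matter which world generated your observations is not tracking the world at all.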
The remedy, as usual, is technical understanding. If you know in detail when a phenomenon switches on and off, and when the “inescapable” slapdown is escapable, you probably won’t map it onto God.
iv.
I actually can’t recall seeing anyone make the mistake of treating efficient markets like high-status authorities in a social pecking order.8 The more general phenomenon seems quite common, though: heavily weighting relative status in determining odds of success; responding to overly ambitious plans as though they were not merely imprudent but impudent; and privileging the hypothesis that authoritative individuals and institutions have mysterious unspecified good reasons for their actions, even when these reasons stubbornly resist elicitation and the actions are sufficiently explained by misaligned incentives.
From what I can tell, status regulation is a second factor accounting for modesty’s appeal, distinct from anxious underconfidence. The impulse is to construct “cheater-resistant” slapdowns that can (for example) prevent dilettantes who are low on the relevant status hierarchy from proposing new treatments for seasonal affective disorder (SAD). Because if dilettantes can exploit an inefficiency in a respected scientific field, then this makes it easier to “steal” status and upset the current order.
In the past, I didn’t understand that an important part of status regulation, as most people experience it, is that one needs to already possess a certain amount of status before it’s seen as acceptable to reach up for a given higher level of status. What could be wrong (I previously thought) with trying to bestow unusually large benefits upon your tribe? I could understand why it would be bad to claim that you had already accomplished more than you had—to claim more respect than was due the good you’d already done. But what could be wrong with trying to do more good for the tribe, in the future, than you already had in the present?
It took me a long time to understand that trying to do interesting things in the future is a status violation, because your current status determines what kinds of images you are allowed to associate with yourself. If your status is low, then many people will intuitively perceive an unpleasant violation of the social order should you associate an image of possible future success above some level with yourself. Only people who already have something like an aura of pre-importance are allowed to try to do important things. Publicly setting out to do valuable and important things eventually is above the status you already have now, and will generate an immediate system-1 slapdown reaction.
I recognize now that this is a common lens through which people see the world, though I still don’t know how it feels to feel that.
Regardless, when I see a supposed piece of epistemology that looks to me an awful lot like my model of status regulation, but which doesn’t seem to cohere with the patterns of correct reasoning described by theorists like E. T. Jaynes, I get suspicious. When people cite the “outside view” to argue that one should stick to projects whose ambition and impressiveness befit one’s “reference class,” and announce that any effort to significantly outperform the “reference class” is epistemically suspect “overconfidence,” and insist that moving to take into account local extenuating factors, causal accounts, and justifications constitutes an illicit appeal to the “inside view” and we should rely on more obvious, visible, publicly demonstrable signs of overall auspiciousness or inauspiciousness… you know, I’m not sure this is strictly inspired by the experimental work done on people estimating their Christmas shopping completion times.
I become suspicious as well when this model is deployed in practice by people who talk in the same tone of voice that I’ve come to associate with status regulation, and when an awful lot of what they say sounds to me like an elaborate rationalization of, “Who are you to act like some kind of big shot?”
I observe that many of the same people worry a lot about “What do you say to the Republican?” or the possibility that crackpots might try to cheat—like they’re trying above all to guard some valuable social resource from the possibility of theft. I observe that the notion of somebody being able to steal that resource and get away with it seems to inspire a special degree of horror, rather than just being one more case of somebody making a mistaken probability estimate.
I observe that attempts to do much better than is the norm elicit many heated accusations of overconfidence. I observe that failures to even try to live up to your track record or to do as well as a typical member of some suggested reference class mysteriously fail to elicit many heated accusations of underconfidence. Underconfidence and overconfidence are symmetrical mistakes epistemically, and yet somehow I never see generalizations of the outside view even-handedly applied to correct both biases.
And so I’m skeptical that this reflects normative probability theory, pure epistemic rules such as aliens would also invent and use. Sort of like how an asexual decision theorist might be skeptical of an argument saying that the pure structure of decision theory implies that arbitrary decision agents with arbitrary biologies ought to value sex.
This kind of modesty often looks like the condescension of despair, or bears the “God works in mysterious ways” property of attributing vague good reasons to authorities on vague grounds. It’s the kind of reasoning that makes sense in the context of an efficient market, but it doesn’t seem to be coming from a model of the structure or incentives of relevant communities, such as the research community studying mood disorders.
No-free-energy equilibria do generalize beyond asset prices; markets are not the only ecologies full of motivated agents. But sometimes those agents aren’t sufficiently motivated and incentivized to do certain things, or the agents aren’t all individually free to do them. In this case, I think that many people are doing the equivalent of humbly accepting that they can’t possibly know whether a single house in Boomville is overpriced. In fact, I think this form of status-oriented modesty is extremely common, and is having hugely detrimental effects on the epistemic standards and the basic emotional health of the people who fall into it.
v.
Modesty can take the form of an explicit epistemological norm, or it can manifest in more quiet and implicit ways, as small flinches away from painful thoughts and towards more comfortable ones. It’s the latter that I think is causing most of the problem. I’ve spent a significant amount of time critiquing the explicit norms, because I think these serve an important role as canaries piling up in the coalmine, and because they are bad epistemology in their own right. But my chief hope is to illuminate that smaller and more quiet problem.
I think that anxious underconfidence and status regulation are the main forces motivating modesty, while concerns about overconfidence, disagreement, and theoreticism serve a secondary role in justifying and propagating these patterns of thought. Nor are anxious underconfidence and status regulation entirely separate problems; bucking the status quo is particularly painful when public failure is a possibility, and shooting low can be particularly attractive when it protects against accusations of hubris.
Consider the outside view as a heuristic for minimizing the risk of social transgression and failure. Relying on an outside view instead of an inside view will generally mean making fewer knowledge claims, and the knowledge claims will generally rest on surface impressions (which are easier to share), rather than on privileged insights and background knowledge (which imply more status).
Or consider the social utility of playing the fox's part. The fox can say that they rely only on humble data sets, disclaiming the hedgehog’s lofty theories, and disclaiming any special knowledge or special powers of discernment implied thereby. And by sticking to relatively local claims, or only endorsing global theories once they command authorities’ universal assent, the fox can avoid endorsing the kinds of generalizations that might encroach on someone else’s turf or otherwise disrupt a status hierarchy.
Finally, consider appeals to agreement. As a matter of probability theory, perfect rationality plus mutual understanding often entails perfect agreement. Yet it doesn’t follow from this that the way for human beings to become more rational is to try their best to minimize disagreement. An all-knowing agent will assign probabilities approaching 0 and 1 to all or most of its beliefs, but this doesn’t imply that the best way to become more knowledgeable is to manually adjust one’s beliefs to be as extreme as possible.
The behavior of ideal Bayesian reasoners is important evidence about how to become more rational. What this usually involves, however, is understanding how Bayesian reasoning works internally and trying to implement a causally similar procedure, not looking at the end product and trying to pantomime particular surface-level indicators or side-effects of good Bayesian inference. And a psychological drive toward automatic deference or self-skepticism isn’t the mechanism by which Bayesians end up agreeing to agree.
Bayes-optimal reasoners don’t Aumann-agree because they’re following some exotic meta-level heuristic. I don’t know of any general-purpose rule like that for quickly and cheaply leapfrogging to consensus, except ones that do so by sacrificing some amount of expected belief accuracy. To the best of my knowledge, the outlandish and ingenious trick that really lets flawed reasoners inch nearer to Aumann’s ideal is just the old-fashioned one where you go out and think about yourself and about the world, and do what you can to correct for this or that bias in a case-by-case fashion.
Whether applied selectively or consistently, the temptation of modesty is to “fake” Aumann agreement—to rush the process, rather than waiting until you and others can actually rationally converge upon the same views. The temptation is to call an early halt to risky lines of inquiry, to not claim to know too much, and to not claim to aspire to too much; all while wielding a fully general argument against anyone who doesn’t do the same.
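As a toy sketch of the difference, with invented numbers (a hypothetical illustration, not anything from the text): two Bayesians who pool their actual evidence converge on a posterior that weights each side’s information appropriately, while agents who merely split the difference between their stated beliefs reach agreement without using any new information.

```python
import random

random.seed(0)
true_bias = 0.7  # the coin's actual bias; unknown to both agents

def posterior_mean(heads, flips):
    """Posterior mean of the coin's bias under a uniform Beta(1,1) prior."""
    return (heads + 1) / (flips + 2)

# Agent A has seen a lot of evidence; agent B has seen very little.
a_flips = [random.random() < true_bias for _ in range(100)]
b_flips = [random.random() < true_bias for _ in range(4)]

a_belief = posterior_mean(sum(a_flips), len(a_flips))
b_belief = posterior_mean(sum(b_flips), len(b_flips))

# Genuine route to agreement: share the evidence and condition on all of it.
# The better-informed agent's data automatically carries more weight.
pooled = posterior_mean(sum(a_flips) + sum(b_flips),
                        len(a_flips) + len(b_flips))

# "Fake" route: split the difference without exchanging any evidence.
# This buys agreement while discarding the information about who knows more.
averaged = (a_belief + b_belief) / 2

print(f"A: {a_belief:.3f}, B: {b_belief:.3f}")  # initial disagreement
print(f"pooled evidence:  {pooled:.3f}")        # agreement with accuracy
print(f"belief averaging: {averaged:.3f}")      # agreement without it
```

The pooled posterior is dominated by the hundred flips, while the naive average gives the four-flip agent equal say. Both procedures end in agreement, but only the first does the kind of work that makes Aumann’s theorem hold.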
And now that I’ve given my warning about these risks and wrong turns, I hope to return to other matters.
My friend John thought that there were hidden good reasons behind Japan’s decision not to print money. Was this because he thought that the Bank of Japan was big and powerful, and therefore higher status than a non-professional-economist like me?
I literally had a bad taste in my mouth as I wrote that paragraph.9 This kind of psychologizing is not what people epistemically virtuous enough to bet on their beliefs should spend most of their time saying to one another. They should just be winning hundreds of dollars off of me by betting on whether some AI benchmark will be met by a certain time, as my friend later proceeded to do. And then later he and I both lost money to other friends, betting against Trump’s election victory. The journey goes on.
I’m not scheming to taint all humility forever with the mere suspicion of secretly fallacious reasoning. That would convict me of the fallacy fallacy. Yes, subconscious influences and emotional temptations are a problem, but you can often beat those if your explicit verbal reasoning is good.
I’ve critiqued the fruits of modesty, and noted my concerns about the tree on which they grow. I’ve said why, though my understanding of the mental motions behind modesty is very imperfect and incomplete, I do not expect these motions to yield good and true fruits. But cognitive fallacies are not invincible traps; and if I spent most of my time thinking about meta-rationality and cognitive bias, I'd be taking my eye off the ball.10
Inadequate Equilibria is now available in electronic and print form on equilibriabook.com.
Conclusion: Against Shooting Yourself in the Foot.
1. From Bodenhausen, Macrae, and Hugenberg (2003):
[I]f correctional mechanisms are to result in a less biased judgment, the perceiver must have a generally accurate lay theory about the direction and extent of the bias. Otherwise, corrections could go in the wrong direction, they could go insufficiently in the right direction, or they could go too far in the right direction, leading to overcorrection. Indeed, many examples of overcorrection have been documented (see Wegener & Perry, 1997, for a review), indicating that even when a bias is detected and capacity and motivation are present, controlled processes are not necessarily effective in accurately counteracting automatic biases. ↩
2. From Superforecasting: “The strongest predictor of rising into the ranks of superforecasters is perpetual beta, the degree to which one is committed to belief updating and self-improvement. It is roughly three times as powerful a predictor as its closest rival, intelligence.” ↩
3. E.g., Alpert and Raiffa (1982), “A Progress Report on the Training of Probability Assessors.” ↩
4. Or rather, get better at predicting future events than intelligence agencies, company executives, and the wisdom of crowds. ↩
5. From Edward FitzGerald’s Rubaiyat of Omar Khayyám. ↩
6. The existence of specialized cognitive modules for detecting cheating can be seen, e.g., in the Wason selection task. Test subjects perform poorly when asked to perform a version of this task introduced in socially neutral terms (e.g., rules governing numbers and colors), but perform well when given an isomorphic version of the task that is framed in terms of social rules and methods for spotting violators of those rules. See Cosmides and Tooby, “Cognitive Adaptations for Social Exchange.” ↩
7. Give me any other major and widely discussed belief from any other field of science, and I shall paint a picture of how it resembles some other fallacy—maybe even find somebody who actually misinterpreted it that way. It doesn’t mean much. There’s just such a vast array of mistakes human minds can make that if you rejected every argument that looks like it could maybe be guilty of some fallacy, you’d be left with nothing at all.
It often just doesn’t mean very much when we find that a line of argument can be made to look “suspiciously like” some fallacious argument. Or rather: being suspicious is one thing, and being so suspicious that relevant evidence cannot realistically overcome a suspicion is another. ↩
8. It’s a mistake that somebody could make, though, and people promoting ideas that are susceptible to fallacious misinterpretation do have an obligation to post warning signs. Sometimes it feels like I’ve spent my whole life doing nothing else. ↩
9. Well, my breakfast might also have had something to do with it, but I noticed the bad taste while writing those sentences. ↩
10. There’s more I can say about how I think modest epistemology and status dynamics work in practice, based on past conversations; but it would require me to digress into talking about my work and fiction-writing. For a supplemental chapter taking a more concrete look at these concepts, see Hero Licensing. ↩
18 comments, sorted by top scores.
comment by Ben Pace (Benito) · 2017-11-17T02:04:45.280Z · LW(p) · GW(p)
Woop! The final chapter (bar the conclusion)! I would promote this to Featured just for that, to let folks know it's concluded and that you can read each excellent post in a single sitting.
However, there's a bunch of other great things: the discussion of the significance of anxious underconfidence - the damage it does to our ability to stop humanity going extinct and create a world we care about - is really valuable, and making sure I'm trying hard enough to fail regularly is important.
I found the motivating discussion regarding status regulation interesting, but I do not feel confident in the model. I especially appreciated the final remarks on it:
I’ve critiqued the fruits of modesty, and noted my concerns about the tree on which they grow. I’ve said why, though my understanding of the mental motions behind modesty is very imperfect and incomplete, I do not expect these motions to yield good and true fruits [emphasis added]. But cognitive fallacies are not invincible traps; and if I spent most of my time thinking about meta-rationality and cognitive bias, I'd be taking my eye off the ball.
For these reasons, I've promoted it to Featured.
comment by ChristianKl · 2017-11-17T20:35:43.660Z · LW(p) · GW(p)
If you personally know better than the doctors in a hospital, you can’t walk in off the street tomorrow and make millions of dollars saving more patients’ lives.
Even though there's a risk of annoying through repetition: if we had Prediction-based Medicine, we would have a market that would make this possible as soon as a new person has done enough treatments to get calibrated. EY's framework would also allow this, but my proposed framework is actually doable via a startup run by good people.
↑ comment by waveman · 2017-11-18T09:55:04.613Z · LW(p) · GW(p)
Would you mind expanding on this point?
How would a prediction market get accurate access to test results? Treatment details? Outcomes?
Getting access to anonymized data even from government sponsored studies is hard enough. Let alone confidential patient data. Even if the medicos wanted you to have it, the IT systems are such a mess you may not want it.
Anyone who thinks getting this data is easy, please send me the data from the PROTECT trial in the UK. https://pubpeer.com/publications/B74A9A4D097C05E2C9B38888696044
comment by CronoDAS · 2017-11-17T01:26:04.359Z · LW(p) · GW(p)
I feel this is solving the wrong problem.
There are four general categories of people professing beliefs that I might run into:
1) Laypeople: people who know they aren’t experts and have the baseline level of knowledge in their society
2) Crackpots: people who disagree with experts, and are wrong, but not in a way that a layperson can prove for himself
3) Experts: people who have studied a subject matter, are familiar with and understand the relevant evidence, and can be justified in rejecting crackpot claims based on the evidence
4) “Visionaries”: people who disagree with the majority of experts, and are right
Since most people are laypeople with respect to any given field, the problem is: as a layperson, how can you tell the difference between a crackpot, an expert, and a visionary?
Relying on the object level is a terrible idea. It’s hard for a layperson to refute a crackpot. Anyone can create a reasonable sounding argument in favor of a position - all they have to do is say a bunch of lies and half-truths from which their conclusion logically follows. It’s impossible for the ancient Egyptians to have built the Pyramids, because they’re so massive we couldn’t duplicate them today with modern technology, so aliens must have done it! Except that we most definitely could build an exact replica of the Great Pyramid today if we wanted to spend a couple billion dollars on the project. Crackpots have zillions of arguments that a layperson can’t refute on the object level by themselves.
It is really incredibly easy to be bamboozled by a crackpot argument, and looking at the object level is rather futile when you haven’t achieved expertise yourself, which usually takes the form of graduate school or direct experience working in a relevant field.
As far as I can tell, some form of modesty is the only reasonable way to avoid being suckered by clever crackpots...
↑ comment by dxu · 2017-11-17T04:20:14.465Z · LW(p) · GW(p)
This is true, but also doesn't seem to engage with the point of the book, which is largely about when to trust yourself over others, as opposed to some random (person who may or may not be a) crackpot. (In the latter case, you can't trust that you're not being presented with deliberately filtered evidence.)
Moreover, even in the latter case, it's possible to be skeptical of someone's claims without making the further assertion that they cannot possibly know what they claim to know. It's one thing to say, "What you say appears to make sense, but I don't know enough about the subject to be able to tell if that's because it actually makes sense, or because I just can't see where the flaw is," and quite another to say, "No, I unilaterally reject the argument you're making because you don't have the credentials to back it up."
EDIT: For some reason I can't get the site to stop mangling the second hyperlink. Although I kept said hyperlink for reference, here is the actual page address: http://squid314.livejournal.com/350090.html
↑ comment by countingtoten · 2017-11-17T10:02:34.989Z · LW(p) · GW(p)
Um, you just refuted a crackpot claim on the object level, using the kind of common-sense argument that I (a layman) heard from a physics teacher in high school. ETA: This may illustrate a problem with the neat, bright-line categories you're assuming.
On a similar note: I remember a speech given by a young-Earth creationist that I think differs from lesser crankdom mainly in being more developed. As the lie aged it needed to birth more lies in response to the real world of entangled truths. And while I couldn't refute everything the guy said - that's the point of a Gish Gallop - I knew a cat probably couldn't be a vegetarian.
↑ comment by CronoDAS · 2021-12-13T23:29:12.136Z · LW(p) · GW(p)
No, I contradicted a crackpot claim by stating that the opposite was true. I didn't refute it; that would have required providing evidence (in this case, by explaining how someone without budget constraints actually could go about making a replica of the Great Pyramid using modern technology).
↑ comment by countingtoten · 2021-12-14T04:20:10.718Z · LW(p) · GW(p)
Not sure what you just said, but according to the aforementioned physics teacher people have absolutely brought beer money, recruited a bunch of guys, and had them move giant rocks around in a manner consistent with the non-crazy theory of pyramid construction. (I guess the brand of beer used might count as "modern technology," and perhaps the quarry tools, but I doubt the rest of it did.) You don't, in fact, need to build a full pyramid to refute crackpot claims.
comment by tlhonmey · 2018-07-20T07:03:24.441Z · LW(p) · GW(p)
Another potential reason for the disparity in social reaction to overconfidence vs underconfidence may be that, for primitive people, overconfidence would likely get one killed immediately when taking on too large a challenge while underconfidence would merely result in being hungry but usually living to find another opportunity later.
In the modern world very few of our challenges are of a nature where failure results in immediate death, but our brains are still wired as though we're debating the wisdom of leaping onto a mammoth's back. Being pack animals we are naturally inclined to curb the exuberance of others to avoid incurring fatality rates that would jeopardize the survival of the tribe, but most of us have a mis-calibrated scale due to never having had to take on significant, life-or-death decisions.
comment by romeostevensit · 2017-11-17T23:20:50.574Z · LW(p) · GW(p)
This is good for highlighting the idiosyncrasy of our own flawed reasoning: we're each overloading different subsystems with additional tasks based on our history of comparative advantage (I'm really good at X! Better start funnelling everything through the X subsystem!), and, vice-versa, developing coping mechanisms for weak subsystems.
comment by Emiya (andrea-mulazzani) · 2021-05-30T14:25:10.632Z · LW(p) · GW(p)
I actually can’t recall seeing anyone make the mistake of treating efficient markets like high-status authorities in a social pecking order.
I've seen often enough, or at least I think I've seen often enough, people treating efficient markets or just "free, deregulated market" as some kind of benevolent godly being that is able to fix just any problem.
I admit that I came from the opposite corner and that I flinched at the first paragraphs of the explanation of efficient markets, but I still feel that a lot of bright people aren't asking the questions:
"Is it more profit-efficient to fix the problem or to just cheat?"
"Can actors get more profit by causing damages worse than the benefits they provide?"
"Is the share of actors that, seeing that the cheaters niche of the market is already filled when they get there, would go on to do okayish profits by trying to genuinely fix the problem able to produce more public value than the damage cheaters produce?"
Asking an unregulated free market to fix a problem in exchange for rewards is like asking an unaligned human intelligence with thousands of brains to do it.
I have seen more blatant examples of this toward the concept of the free market, but a lot of people still seem to interpret the notion of "efficient market" as "and given the wisdom of the efficient market, the economy would improve and produce more value for everyone", and I feel the two views are related, though I might be wrong about how many people have a clear distinction between the two concepts in their heads.
"If these investments really are bogus and will horribly crush the economy when they collapse, surely someone in the efficient market would have seen it coming" is the mindset I'm trying to describe, though this mindset seem to have a blurry idea of what an efficient market is about.
↑ comment by CronoDAS · 2021-12-13T23:32:32.689Z · LW(p) · GW(p)
Some people did see the mortgage-backed securities crash of 2008 coming and made money on it!
↑ comment by Emiya (andrea-mulazzani) · 2022-01-27T14:54:35.133Z · LW(p) · GW(p)
Indeed, including the people who willingly caused it. But profiting from a problem is not the same as fixing it.
↑ comment by TAG · 2021-05-30T20:39:21.839Z · LW(p) · GW(p)
Yes. In the same sense that there's no such thing as being optimal without optimising anything in particular, or optimising everything in general, there is no sense in which a market that is unspecifically "efficient" will solve a problem that has never been fed into it.
There is also a constant confusion between unregulated markets and free markets. Unregulated markets can be captured by monopolies, and thereby cease to be free in important senses.
What is the utility function of a market, absent regulation?
↑ comment by Emiya (andrea-mulazzani) · 2021-05-31T14:48:12.058Z · LW(p) · GW(p)
I'm not 100% sure I understood the first paragraph, could you clarify it for me if I got it wrong?
Essentially, the "efficient-markets-as-high-status-authorities" mindset I was trying to describe seems to me to work like this:
Given a problem A (let's say providing life-saving medicine to the maximum number of people), it assumes that letting agents motivated by profit act freely, unrestricted by regulations or policies even when those are aimed at fixing problem A, would provide said medicine to more people than an intentional policy of a government that's trying to provide said medicine to the maximum number of people.
The market doesn't seem to have a utility function in this model, but any agent in this market (that is able to survive in it) is motivated by a utility function that just wants to maximise profit.
Part of the reason for the assumption that a "free market of agents motivated by profit" should be so good at producing solutions for problem A (saving lives with medicine) is that the "free market" is awesomely good at pricing actions and at finding ways to get profits, because a lot of agents are trying different things at their best to get profit, and everything that works gets copied. (If anyone has a roughly related theory and feels I butchered or got wrong the reasoning involved, you are welcome to state it correctly; I'm genuinely interested.)
My main objection to this is that I fail to see how this is different from asking an unaligned AI that's not superintelligent, but still a lot smarter than you, to get your mother out of a burning building so you'd press the reward button the AI wants you to press.
If I understood your first paragraph correctly, we are both generally skeptical that a market of agents set about maximising profit would be, on average across many different possible cases, good at generating value that's different from maximising profit.
Thank you for clarifying the distinction between unregulated and free.
I was aware of how one wouldn't lead to the other, but I'm now unsure how many of the people I talked to about this had the distinction in mind.
I saw a lot of arguments for deregulation in the political press that made appeals to the idea of the "free market", so I think I usually assumed that someone arguing for one of these positions would take a free market to be an unregulated one, without foreseeing this obvious problem.
comment by Crazy philosopher (commissar Yarrick) · 2024-07-11T09:34:53.445Z · LW(p) · GW(p)
So it shouldn’t be surprising if acting like you have more status than I assign to you triggers a negative emotion, a slapdown response.
I think there's a different mechanism here. I don't like it if Mr. A can't do X but doesn't know it, publicly announces that he's going to do X, and gets a lot of prestige upfront. At the same time, I understand that he will not succeed, and that he should not get prestige. And then A fails, and it makes me feel worse about those who claim that they can do X when they have no experience.
Imagine that some philosopher announces that he is going to create an aligned AGI in a month, after which everyone begins to admire him. That's exactly the feeling.
In other words, the problem is not that Mr. A doesn't have enough prestige, but that he doesn't have enough chances to succeed.
... but even if Mr. A decides to create an aligned AGI in a month without announcing it publicly, then you will wisely say: "This is impossible. Once I also thought that I could do it in a month, but it's not like that." Wait - this is the "juggling three balls is impossible" reaction!
What did I understand: most of the exclamations "you don't have enough experience / look at yourself from the outside / it's not possible" from experts in the domain are true. I mean, if you decide to do X, but all the experts in the domain say that you will not succeed, this is quite strong Bayesian evidence that you will not succeed. You can't dismiss it by deciding that they're just afraid to share their status.
But otherwise I agree with Eliezer.