Inadequacy and Modesty

post by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2017-10-28T21:51:01.339Z · LW · GW · 79 comments

Contents

  i.
  ii.
  iii.
  iv.

The following is the beginning of Inadequate Equilibria, a new sequence/book on a generalization of the notion of efficient markets, and on this notion's implications for practical decision-making and epistemic rationality.

This is a book about two incompatible views on the age-old question: “When should I think that I may be able to do something unusually well?”

These two viewpoints tend to give wildly different, nearly cognitively nonoverlapping analyses of questions like that one.

When I think about problems like these, I use what feels to me like a natural generalization of the economic idea of efficient markets. The goal is to predict what kinds of efficiency we should expect to exist in realms beyond the marketplace, and what we can deduce from simple observations. For lack of a better term, I will call this kind of thinking inadequacy analysis.

Toward the end of this book, I’ll try to refute an alternative viewpoint that is increasingly popular among some of my friends, one that I think is ill-founded. This viewpoint is the one I’ve previously termed “modesty,” and the message of modesty tends to be: “You can’t expect to be able to do X that isn’t usually done, since you could just be deluding yourself into thinking you’re better than other people.”

I’ll open with a cherry-picked example that I think helps highlight the difference between these two viewpoints.

 

i.

I once wrote a report, “Intelligence Explosion Microeconomics,” that called for an estimate of the economic growth rate in a fully developed country—that is, a country that is no longer able to improve productivity just by importing well-tested innovations. A footnote of the paper remarked that even though Japan was the country with the most advanced technology—e.g., their cellphones and virtual reality technology were five years ahead of the rest of the world’s—I wasn’t going to use Japan as my estimator for developed economic growth, because, as I saw it, Japan’s monetary policy was utterly deranged.

Roughly, Japan’s central bank wasn’t creating enough money. I won’t go into details here.

A friend of mine, and one of the most careful thinkers I know—let’s call him “John”—made a comment on my draft to this effect:

How do you claim to know this? I can think of plenty of other reasons why Japan could be in a slump: the country’s shrinking and aging population, its low female workplace participation, its high levels of product market regulation, etc. It looks like you’re venturing outside of your area of expertise to no good end.

“How do you claim to know this?” is a very reasonable question here. As John later elaborated, macroeconomics is an area where data sets tend to be thin and predictive performance tends to be poor. And John had previously observed me making contrarian claims where I’d turned out to be badly wrong, like endorsing Gary Taubes’ theories about the causes of the obesity epidemic. More recently, John won money off of me by betting that AI performance on certain metrics would improve faster than I expected; John has a good track record when it comes to spotting my mistakes.

It’s also easy to imagine reasons an observer might have been skeptical. I hadn’t come up with my critique of Japan on my own; I was reading other economists and deciding that I trusted the ones who were saying that the Bank of Japan was doing it wrong… Yet one would expect the governing board of the Bank of Japan to be composed of experienced economists with specialized monetary expertise. How likely is it that any outsider would be able to spot an obvious flaw in their policy? How likely is it that someone who isn’t a professional economist (e.g., me) would be able to judge which economic critiques of the Bank of Japan were correct, or which critics were wise?

How likely is it that an entire country—one of the world’s most advanced countries—would forgo trillions of dollars of real economic growth because their monetary controllers—not politicians, but appointees from the professional elite—were doing something so wrong that even a non-professional could tell? How likely is it that a non-professional could not just suspect that the Bank of Japan was doing something badly wrong, but be confident in that assessment?

Surely it would be more realistic to search for possible reasons why the Bank of Japan might not be as stupid as it seemed, as stupid as some econbloggers were claiming. Possibly Japan’s aging population made growth impossible. Possibly Japan’s massive outstanding government debt made even the slightest inflation too dangerous. Possibly we just aren’t thinking of the complicated reasoning going into the Bank of Japan’s decision.

Surely some humility is appropriate when criticizing the elite decision-makers governing the Bank of Japan. What if it’s you, and not the professional economists making these decisions, who have failed to grasp the relevant economic considerations?

I’ll refer to this genre of arguments as “modest epistemology.”

In conversation, John clarified to me that he rejects this genre of arguments; but I hear these kinds of arguments fairly often. The head of an effective altruism organization once gave voice to what I would consider a good example of this mode of thinking:

I find it helpful to admit to unpleasant facts that will necessarily be true in the abstract, in order to be more willing to acknowledge them in specific cases. For instance, I should expect a priori to be below average at half of things, and be 50% likely to be of below average talent overall; to know many people who I regard as better than me according to my values; to regularly make decisions that look silly ex post, and also ex ante; to be mistaken about issues on which there is expert disagreement about half of the time; to perform badly at many things I attempt for the first time; and so on.

The Dunning-Kruger effect shows that unskilled individuals often rate their own skill very highly. Specifically, although there does tend to be a correlation between how competent a person is and how competent they guess they are, this correlation is weaker than one might suppose. In the original study, people in the bottom two quartiles of actual test performance tended to think they did better than about 60% of test-takers, while people in the top two quartiles tended to think they did better than 70% of test-takers.

This suggests that a typical person’s guesses about how they did on a test are evidence, but not particularly powerful evidence: the top quartile is underconfident in how well they did, and the bottom quartiles are highly overconfident.

Given all that, how can we gain much evidence from our belief that we are skilled? Wouldn’t it be more prudent to remind ourselves of the base rate—the prior probability of 50% that we are below average?

Reasoning along similar lines, software developer Hal Finney has endorsed “abandoning personal judgment on most matters in favor of the majority view.” Finney notes that the average person’s opinions would be more accurate (on average) if they simply deferred to the most popular position on as many issues as they could. For this reason:

I choose to adopt the view that in general, on most issues, the average opinion of humanity will be a better and less biased guide to the truth than my own judgment.

[…] I would suggest that although one might not always want to defer to the majority opinion, it should be the default position. Rather than starting with the assumption that one’s own opinion is right, and then looking to see if the majority has good reasons for holding some other view, one should instead start off by following the majority opinion; and then only adopt a different view for good and convincing reasons. On most issues, the default of deferring to the majority will be the best approach. If we accept the principle that “extraordinary claims require extraordinary evidence”, we should demand a high degree of justification for departing from the majority view. The mere fact that our own opinion seems sound would not be enough.1

In this way, Finney hopes to correct for overconfidence and egocentric biases.

Finney’s view is an extreme case, but helps illustrate a pattern that I believe can be found in some more moderate and widely endorsed views. When I speak of “modesty,” I have in mind a fairly diverse set of positions that rest on a similar set of arguments and motivations.

I once heard an Oxford effective altruism proponent crisply summarize what I take to be the central argument for this perspective: “You see that someone says X, which seems wrong, so you conclude their epistemic standards are bad. But they could just see that you say Y, which sounds wrong to them, and conclude your epistemic standards are bad.”2 On this line of thinking, you don’t get any information about who has better epistemic standards merely by observing that someone disagrees with you. After all, the other side observes just the same fact of disagreement.

Applying this argument form to the Bank of Japan example: I receive little or no evidence just from observing that the Bank of Japan says “X” when I believe “not X.” I also can’t be getting strong evidence from any object-level impression I might have that I am unusually competent. So did my priors imply that I and I alone ought to have been born with awesome powers of discernment? (Modest people have posed this exact question to me on more than one occasion.)

It should go without saying that this isn’t how I would explain my own reasoning. But if I reject arguments of the form, “We disagree, therefore I’m right and you’re wrong,” how can I claim to be correct on an economic question where I disagree with an institution as reputable as the Bank of Japan?

The other viewpoint, opposed to modesty—the view that I think is prescribed by normative epistemology (and also by more or less mainstream microeconomics)—requires a somewhat longer introduction.

 

ii.

By ancient tradition, every explanation of the Efficient Markets Hypothesis must open with the following joke:

Two economists are walking along the street, and one says, “Hey, someone dropped a $20 bill!” and the other says, “Well, it can’t be a real $20 bill because someone would have picked it up already.”

Also by ancient tradition, the next step of the explanation is to remark that while it may make sense to pick up a $20 bill you see on a relatively deserted street, if you think you have spotted a $20 bill lying on the floor of Grand Central Station (the main railway terminal of New York City), and it has stayed there for several hours, then it probably is a fake $20 bill, or it has been glued to the ground.

In real life, when I asked a group of twenty relatively young people how many of them had ever found a $20 bill on the street, five raised their hands, and only one person had found a $20 bill on the street on two separate occasions. So the empirical truth about the joke is that while $20 bills on the street do exist, they’re rare.

On the other hand, the implied policy is that if you do find a $20 bill on the street, you should go ahead and pick it up, because that does happen. It’s not that rare. You certainly shouldn’t start agonizing over whether it’s too arrogant to believe that you have better eyesight than everyone else who has recently walked down the street.

On the other other hand, you should start agonizing about whether to trust your own mental processes if you think you’ve seen a $20 bill stay put for several hours on the floor of Grand Central Station. Especially if your explanation is that nobody else is eager for money.

Is there any other domain such that if we think we see an exploitable possibility, we should sooner doubt our own mental competence than trust the conclusion we reasoned our way to?

If I had to name the single epistemic feat at which modern human civilization is most adequate, the peak of all human power of estimation, I would unhesitatingly reply, “Short-term relative pricing of liquid financial assets, like the price of S&P 500 stocks relative to other S&P 500 stocks3 over the next three months.”4 This is something into which human civilization puts an actual effort.

This is certainly not perfect, but it is literally as good as it gets on modern-day Earth.

I don’t think I can beat the estimates produced by that process. I have no significant help to contribute to it. Theoretically, a liquid market should be just exploitable enough to pay competent professionals the same hourly rate as their next-best opportunity; with study and effort, I could potentially become one of those professionals and earn standard hedge-fundie returns, but that’s not the same as significantly improving on the market’s efficiency. I’m not sure I expect a huge humanly accessible opportunity of that kind to exist, not in the thickly traded centers of the market. Somebody really would have taken it already! Our civilization cares about whether Microsoft stock will be priced at $37.70 or $37.75 tomorrow afternoon.

I can’t predict a 5% move in Microsoft stock in the next two months, and neither can you. If your uncle tells an anecdote about how he tripled his investment in NetBet.com last year and he attributes this to his skill rather than luck, we know immediately and out of hand that he is wrong. Warren Buffett at the peak of his form couldn’t reliably triple his money every year. If there is a strategy so simple that your uncle can understand it, which has apparently made him money—then we guess that there were just hidden risks built into the strategy, and that in another year or with less favorable events he would have lost half as much as he gained. Any other possibility would be the equivalent of a $20 bill staying on the floor of Grand Central Station for ten years while a horde of physics PhDs searched for it using naked eyes, microscopes, and machine learning.
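
A quick back-of-the-envelope check shows why the skill interpretation fails on its face: if tripling one’s money each year were a repeatable skill rather than luck, compounding would produce impossible wealth within a couple of decades. A minimal sketch of the arithmetic (the $1,000 stake is hypothetical):

```python
# Sanity check: compound a hypothetical $1,000 stake at 3x per year.
stake = 1_000
for year in range(20):
    stake *= 3
print(f"${stake:,}")  # $3,486,784,401,000 -- about $3.5 trillion after 20 years
```

No individual investor has ever accumulated that much, so the uncle’s apparent returns must have been luck or hidden risk.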

In the thickly traded parts of the stock market, where the collective power of human civilization is truly at its strongest, I doff my hat, I put aside my pride and kneel in true humility to accept the market’s beliefs as though they were my own, knowing that any impulse I feel to second-guess and every independent thought I have to argue otherwise is nothing but my own folly. If my perceptions suggest an exploitable opportunity, then my perceptions are far more likely to be mistaken than the market is. That is what it feels like to look upon a civilization doing something adequately.

The converse side of the efficient-markets perspective would have said this about the Bank of Japan:


conventional cynical economist:  So, Eliezer, you think you know better than the Bank of Japan and many other central banks around the world, do you?

eliezer:  Yep. Or rather, by reading econblogs, I believe myself to have identified which econbloggers know better, like Scott Sumner.

c.c.e.:  Even though literally trillions of dollars of real value are at stake?

eliezer:  Yep.

c.c.e.:  How do you make money off this special knowledge of yours?

eliezer:  I can’t. The market also collectively knows that the Bank of Japan is pursuing a bad monetary policy and has priced Japanese equities accordingly. So even though I know the Bank of Japan’s policy will make Japanese equities perform badly, that fact is already priced in; I can’t expect to make money by short-selling Japanese equities.

c.c.e.:  I see. So exactly who is it, on this theory of yours, that is being stupid and passing up a predictable payout?

eliezer:  Nobody, of course! Only the Bank of Japan is allowed to control the trend line of the Japanese money supply, and the Bank of Japan’s governors are not paid any bonuses when the Japanese economy does better. They don’t get a million dollars in personal bonuses if the Japanese economy grows by a trillion dollars.

c.c.e.:  So you can’t make any money off knowing better individually, and nobody who has the actual power and authority to fix the problem would gain a personal financial benefit from fixing it? Then we’re done! No anomalies here; this sounds like a perfectly normal state of affairs.


We don’t usually expect to find $20 bills lying on the street, because even though people sometimes drop $20 bills, someone else will usually have a chance to pick up that $20 bill before we do.

We don’t think we can predict 5% price changes in S&P 500 company stock prices over the next month, because we’re competing against dozens of hedge fund managers with enormous supercomputers and physics PhDs, any one of whom could make millions or billions on the pricing error—and in doing so, correct that error.

We can expect it to be hard to come up with a truly good startup idea, and for even the best ideas to involve sweat and risk, because lots of other people are trying to think up good startup ideas. Though in this case we do have the advantage that we can pick our own battles, seek out one good idea that we think hasn’t been done yet.

But the Bank of Japan is just one committee, and it’s not possible for anyone else to step up and make a billion dollars in the course of correcting their error. Even if you think you know exactly what the Bank of Japan is doing wrong, you can’t make a profit on that. At least some hedge-fund managers also know what the Bank of Japan is doing wrong, and the expected consequences are already priced into the market. Nor does this price movement fix the Bank of Japan’s mistaken behavior. So to the extent the Bank of Japan has poor incentives or some other systematic dysfunction, their mistake can persist. As a consequence, when I read some econbloggers who I’d seen being right about empirical predictions before saying that Japan was being grotesquely silly, and the economic logic seemed to me to check out, as best I could follow it, I wasn’t particularly reluctant to believe them. Standard economic theory, generalized beyond the markets to other facets of society, did not seem to me to predict that the Bank of Japan must act wisely for the good of Japan. It would be no surprise if they were competent, but also not much of a surprise if they were incompetent. And knowing this didn’t help me either—I couldn’t exploit the knowledge to make an excess profit myself—and this too wasn’t a coincidence.

This kind of thinking can get quite a bit more complicated than the foregoing paragraphs might suggest. We have to ask why the government of Japan didn’t put pressure on the Bank of Japan (answer: they did, but the Bank of Japan refused), and many other questions. You would need to consider a much larger model of the world, and bring in a lot more background theory, to be confident that you understood the overall situation with the Bank of Japan.

But even without that detailed analysis, in the epistemological background we have a completely different picture from the modest one. We have a picture of the world where it is perfectly plausible for an econblogger to write up a good analysis of what the Bank of Japan is doing wrong, and for a sophisticated reader to reasonably agree that the analysis seems decisive, without a deep agonizing episode of Dunning-Kruger-inspired self-doubt playing any important role in the analysis.

 

iii.

When we critique a government, we don’t usually get to see what would actually happen if the government took our advice. But in this one case, less than a month after my exchange with John, the Bank of Japan—under the new leadership of Haruhiko Kuroda, and under unprecedented pressure from recently elected Prime Minister Shinzo Abe, who included monetary policy in his campaign platform—embarked on an attempt to print huge amounts of money, with a stated goal of doubling the Japanese money supply.5

Immediately after, Japan experienced real GDP growth of 2.3%, where the previous trend was for falling RGDP. Their economy was operating that far under capacity due to lack of money.6

Now, on the modest view, this was the unfairest test imaginable. Out of all the times that I’ve ever suggested that a government’s policy is suboptimal, the rare time a government tries my preferred alternative will select the most mainstream, highest-conventional-prestige policies I happen to advocate, and those are the very policy proposals that modesty is least likely to disapprove of.

Indeed, if John had looked further into the issue, he would have found (as I found while writing this) that Nobel laureates had also criticized Japan’s monetary policy. He would have found that previous Japanese governments had also hinted to the Bank of Japan that they should print more money. The view from modesty looks at this state of affairs and says, “Hold up! You aren’t so specially blessed as your priors would have you believe; other academics already know what you know! Civilization isn’t so inadequate after all! This is how reasonable dissent from established institutions and experts operates in the real world: via opposition by other mainstream experts and institutions, not via the heroic effort of a lone economics blogger.”

However helpful or unhelpful such remarks may be for guarding against inflated pride, however, they don’t seem to refute (or even address) the central thesis of civilizational inadequacy, as I will define that term later. Roughly, the civilizational inadequacy thesis states that in situations where the central bank of a major developed democracy is carrying out a policy, and a number of highly regarded economists like Ben Bernanke have written papers about what that central bank is doing wrong, and there are widely accepted macroeconomic theories for understanding what that central bank is doing wrong, and the government of the country has tried to put pressure on the central bank to stop doing it wrong, and literally trillions of dollars in real wealth are at stake, then the overall competence of human civilization is such that we shouldn’t be surprised to find the professional economists at the Bank of Japan doing it wrong.

We shouldn’t even be surprised to find that a decision theorist without all that much background in economics can identify which econbloggers have correctly stated what the Bank of Japan is doing wrong, or which simple improvements to their current policies would improve the situation.

 

iv.

It doesn’t make much difference to my life whether I understand monetary policy better than, say, the European Central Bank, which as of late 2015 was repeating the same textbook mistake as the Bank of Japan and causing trillions of euros of damage to the European economy. Insofar as I have European friends in countries like Italy, it might be important to them to know that Europe’s economy is probably not going to get any better soon; and the question bears on predicting AI progress timelines, since it tells us whether Japan ran out of low-hanging technological fruit or just had bad monetary policy. But that’s a rather distant relevance, and for most of my readers I would expect this issue to be even less relevant to their lives.

But you run into the same implicit background questions of inadequacy analysis when, for example, you’re making health care decisions. Cherry-picking another anecdote: My wife, Brienne, has a severe case of Seasonal Affective Disorder. As of 2014, she’d tried sitting in front of a little lightbox for an hour per day, and it hadn’t worked. SAD’s effects were crippling enough for it to be worth our time to consider extreme options, like her spending time in South America during the winter months. And indeed, vacationing in Chile and receiving more exposure to actual sunlight did work, where lightboxes failed.

From my perspective, the obvious next thought was: “Empirically, dinky little lightboxes don’t work. Empirically, the Sun does work. Next step: more light. Fill our house with more lumens than lightboxes provide.” In short order, I had strung up sixty-five 60W-equivalent LED bulbs in the living room, and another sixty-five in her bedroom.
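
As a rough sanity check on the “more lumens” plan, here is the arithmetic, with assumed numbers (a typical 60W-equivalent LED emits about 800 lumens; the room size is my guess, and losses to fixtures and absorption are ignored):

```python
# Rough illuminance from 65 LED bulbs in one room.
bulbs = 65
lumens_per_bulb = 800      # typical for a 60W-equivalent LED (assumed)
room_area_m2 = 20          # assumed room size

total_lumens = bulbs * lumens_per_bulb   # 52,000 lumens
avg_lux = total_lumens / room_area_m2    # lux = lumens per square meter
print(total_lumens, avg_lux)             # 52000 2600.0
```

For comparison, a standard SAD lightbox is rated at 10,000 lux only at close range, over a small patch of the visual field, for an hour or so; on these assumptions, roughly 2,600 lux everywhere in the room, all day, is a far larger total light dose.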

Ah, but should I assume that my civilization is being opportunistic about seeking out ways to cure SAD, and that if putting up 130 LED light bulbs often worked when lightboxes failed, doctors would already know about that? Should the fact that putting up 130 light bulbs isn’t a well-known next step after lightboxes convince me that my bright idea is probably not a good idea, because if it were, everyone would already be doing it? Should I conclude from my inability to find any published studies on the Internet testing this question that there is some fatal flaw in my plan that I’m just not seeing?

We might call this argument “Chesterton’s Absence of a Fence.” The thought being: I shouldn’t build a fence here, because if it were a good idea to have a fence here, someone would already have built it. The underlying question here is: How strongly should I expect that this extremely common medical problem has been thoroughly considered by my civilization, and that there’s nothing new, effective, and unconventional that I can personally improvise?

Eyeballing this question, my off-the-cuff answer—based mostly on the impressions related to me by every friend of mine who has ever dealt with medicine on a research level—is that I wouldn’t necessarily expect any medical researcher ever to have done a formal experiment on the first thought that popped into my mind for treating this extremely common depressive syndrome. Nor would I strongly expect the intervention, if initial tests found it to be effective, to have received enough attention that I could Google it.

But this is just my personal take on the adequacy of 21st-century medical research. Should I be nervous that this line of thinking is just an excuse? Should I fret about the apparently high estimate of my own competence implied by my thinking that I could find an obvious-seeming way to remedy SAD when trained doctors aren’t talking about it and I’m not a medical researcher? Am I going too far outside my own area of expertise and starting to think that I’m good at everything?

In practice, I didn’t bother going through an agonizing fit of self-doubt along those lines. The systematic competence of human civilization with respect to treating mood disorders wasn’t so apparent to me that I considered it a better use of resources to quietly drop the issue than to just lay down the ~$600 needed to test my suspicion. So I went ahead and ran the experiment. And as of early 2017, with two winters come and gone, Brienne seems to no longer have crippling SAD—though it took a lot of light bulbs, including light bulbs in her bedroom that had to be timed to go on at 7:30am before she woke up, to sustain the apparent cure.7

If you want to outperform—if you want to do anything not usually done—then you’ll need to conceptually divide our civilization into areas of lower and greater competency. My view is that this is best done from a framework of incentives and the equilibria of those incentives—which is to say, from the standpoint of microeconomics. This is the main topic I’ll cover here.

In the process, I will also make the case that modesty—the part of this process where you go into an agonizing fit of self-doubt—isn’t actually helpful for figuring out when you might outperform some aspect of the equilibrium.

But one should initially present a positive agenda in discussions like these—saying first what you think is the correct epistemology, before inveighing against a position you think is wrong.

So without further ado, in the next chapter I shall present a very simple framework for inadequate equilibria.

Next chapter: An Equilibrium of No Free Energy.

The full book will be available November 16th. You can go to equilibriabook.com to pre-order the book, or sign up for notifications about new chapters and other developments.

  1. See Finney, “Philosophical Majoritarianism.” 

  2. Note: They later said that I’d misunderstood their intent, so take this example with some grains of salt. 

  3. This is why I specified relative prices: stock-trading professionals are usually graded on how well they do compared to the stock market, not compared to bonds. It’s much less obvious that bonds in general are priced reasonably relative to stocks in general, though this is still being debated by economists. 

  4. This is why I specified near-term pricing of liquid assets. 

  5. That is, the Bank of Japan purchased huge numbers of bonds with newly created electronic money. 

  6. See “How Japan Proved Printing Money Can Be A Great Idea” for a more recent update.

    For readers who are wondering, “Wait, how the heck can printing money possibly lead to real goods and services being created?” I suggest Googling “sticky wages” and possibly consulting Scott Sumner’s history of the Great Depression, The Midas Paradox.

  7. Specifically, Brienne’s symptoms were mostly cured in the winter of 2015, and partially cured in the winter of 2016, when she spent most of her time under fewer lights. Brienne reports that she suffered a lot less even in the more recent winter, and experienced no suicidal ideation, unlike in years prior to the light therapy.

    I’ll be moderately surprised if this treatment works reliably, just because most things don’t where depression is concerned; but I would predict that it works often enough to be worth trying for other people experiencing severe treatment-resistant SAD. 

79 comments

Comments sorted by top scores.

comment by Zvi · 2017-10-29T14:17:42.932Z · LW(p) · GW(p)

The intervention of using a larger quantity of the thing that has some effect is mind-bogglingly likely to not be tried at all, at least by anyone reporting the results and doing studies.

The first case I did work on for MetaMed had exactly this phenomenon: there was a well-known treatment that helped but didn't cure the condition in question. It improved both symptoms and biomarkers (and the biomarkers correlated very well with symptoms), and it seemed to have a linear dose-response relationship, but we could find no evidence that anyone had ever tried using enough of it to get biomarkers down to where they are in healthy people. We also couldn't find any sign that doing so would pose any risks.
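
Under a linear dose-response relationship like the one described here, extrapolating a sufficient dose is simple arithmetic. A sketch with hypothetical numbers (the actual case details aren't given):

```python
# Extrapolate the dose needed to bring a biomarker to a healthy level,
# assuming a linear dose-response. All numbers are hypothetical.
def dose_to_reach_target(dose_tried, baseline, observed, target):
    effect_per_unit = (baseline - observed) / dose_tried
    return (baseline - target) / effect_per_unit

# A standard dose of 10 units moved the marker from 80 to 65;
# the healthy level is 30, suggesting roughly 33 units.
print(dose_to_reach_target(10, baseline=80, observed=65, target=30))
```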

So if there's a situation in which it seems like brute force would solve your problem if only you'd use enough, but no one ever seems to think of or try using enough, it's probably because no one's thought about or tried using enough.

I do think this generalizes, but even the object-level point is valuable. Simply doing more of the thing that's working is an often-neglected option, despite seeming like the most obvious possible thing.

Replies from: James6, SatvikBeri, multiarmedmindset
comment by James6 · 2017-10-31T14:45:55.291Z · LW(p) · GW(p)

I was skeptical when I read this yesterday that a medical system with so much money and so many lives on the line could miss something so obvious.

Then today I ran across a JAMA article from FDA researchers saying the same thing:

"Failure to determine the most appropriate dose for clinical use was a major reason for nonapproval. Dosing is frequently decided early in drug development, and optimization of doses to maximize efficacy and minimize toxicity is seldom formally explored in phase 3 studies. Adaptive trial designs and other strategies (such as treating phase 3 trial participants with a randomized sequence of different doses) may help to optimize doses."

Basically, 'stop making us reject your drugs for stupid reasons like not trying to optimize the dose'.

Replies from: brambleboy
comment by brambleboy · 2024-04-15T03:43:15.435Z · LW(p) · GW(p)

I encountered this while reading about an obscure estradiol ester, estradiol undecylate, used for hormone replacement therapy and treating prostate cancer. It's very useful because it has a super long half-life, but it was discontinued. I had to reread the article to be sure I understood it: the standard dose, chosen arbitrarily in the first trials, was hundreds of times larger than necessary, leading to massive estrogen overdoses and severe side effects that killed many people through cardiovascular complications. Yet these insane doses remained typical for decades, and might've caused the drug's discontinuation.

comment by SatvikBeri · 2017-10-30T13:43:50.602Z · LW(p) · GW(p)

I think the main cause is that people who view themselves as solving a problem are often using the procedure "look at the current pattern and try to find issues with it." A process that complements this well is "look at what's worked historically, and do more of it."

Some examples I wrote about a while back: lesswrong.com/lw/iro/systematic_lucky_breaks/

comment by multiarmedmindset · 2017-10-30T03:17:56.248Z · LW(p) · GW(p)

Can you give the condition and treatment?

comment by Chris_Leong · 2017-10-29T04:57:04.766Z · LW(p) · GW(p)

I'm really happy that you are writing a book on this topic. I mean, the Sequences and the other discussions on Less Wrong have given us a lot of tools with which to form our own opinions, but then we need to figure out how to balance these against the opinions of experts with more domain-specific knowledge. There's a sense in which all the other knowledge isn't of any use unless we know when to actually use it.

"Now, on the modest view, this was the unfairest test imaginable. Out of all the times that I’ve ever suggested that a government’s policy is suboptimal, the rare time a government tries my preferred alternative will select the most mainstream, highest-conventional-prestige policies I happen to advocate, and those are the very policy proposals that modesty is least likely to disapprove of."

This is a pretty big deal, so I wanted to emphasise it. Let's suppose you come up with 50 policies you think the government should implement. 10 get implemented and 8 work out well. Pretty good, right? But what if 30 of your policies would have been utterly stupid, and this is obvious to any of the experts? This effect could completely destroy your attempts at calibration.

Replies from: Zvi, evand
comment by Zvi · 2017-10-29T17:42:09.167Z · LW(p) · GW(p)

That is indeed pretty good! If the experts would only call things utterly stupid that are in fact utterly stupid, you've got it made. All you have to do is run your policies by the experts and have them explain why those 30 are utterly stupid, and then advocate for the remaining 20. If they'd call 40 of them stupid when only 30 actually are, now we have the problem that we are discarding 10 good ideas, but that still leaves 10 good ideas, 8 of which worked out. Sweet!

It's certainly possible to think you're better than you are, this way, but this is far from inevitable. But it's pretty immodest to claim "I generate ideas that, conditional on being implemented, have an 80% chance of working." Provided that far less than 80% of similar implemented policies work, at least.

Advocating strongly for a policy that would work in the worlds in which it could possibly get implemented is a good idea even if most of your policies would be disastrous. I can't think of a source of good ideas that doesn't mostly generate bad ideas until it encounters criticism, but the process working at all seems like a hugely immodest claim.

Replies from: TheWakalix
comment by TheWakalix · 2017-11-29T23:15:21.866Z · LW(p) · GW(p)

I don't understand what you mean by "conditional on being implemented." Do you mean that each policy is implemented regardless of feasibility, and then out of these worlds we find the number that have gotten better relative to their controls? Or do you mean that we find the possible worlds in which the policy is implemented, compare each to a similar possible world in which it is not, and determine whether there is a positive correlation between "has Policy X implemented" and "is a world with Y utilons"? The former doesn't seem right, but in context the latter doesn't seem to fit.

comment by evand · 2017-10-30T03:18:34.924Z · LW(p) · GW(p)

The advanced answer to this is to create conditional prediction markets. For example: a market for whether or not the Bank of Japan implements a policy, a market for the future GDP or inflation rate of Japan (or whatever your preferred metric is), and a conditional market for (GDP given policy) and (GDP given no policy).

Then people can make conditional bets as desired, and you can report your track record, and so on. Without a prediction market you can't, in general, solve the problem of "how good is this prediction track record really" except by looking at it in detail and making judgment calls.
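
To make the structure concrete, here is a minimal sketch of how a conditional bet could settle; the payout rule and numbers are hypothetical, not any real market's mechanics:

```python
# A conditional bet is void (stake refunded) unless its condition
# occurred; otherwise it settles against the realized metric.
def settle(stake, threshold, realized, condition_occurred):
    if not condition_occurred:
        return stake                      # bet void; refund the stake
    return 2 * stake if realized >= threshold else 0

# Bet 100 that GDP growth >= 1.5%, conditional on the policy passing:
print(settle(100, 1.5, 2.3, condition_occurred=True))   # 200: won
print(settle(100, 1.5, 2.3, condition_occurred=False))  # 100: voided
```

Comparing prices across the two conditional markets (metric given policy vs. metric given no policy) then yields the market's estimate of the policy's effect.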

comment by ESRogs · 2017-10-28T23:52:24.110Z · LW(p) · GW(p)
In the process, I will also make the case that modesty—the part of this process where you go into an agonizing fit of self-doubt—isn’t actually helpful for figuring out when you might outperform some aspect of the equilibrium.

I suspect that for Hal Finney and other advocates of modesty, it doesn't usually feel like an agonizing fit of self doubt. And for any modest epistemology that does agonize, we can imagine another that just matter-of-factly adjusts their priors to match the outside view.

So if "agonizing fit of self-doubt" is more than just a figure of speech, I'm worried that this analysis may miss some of what's going on.

Nevertheless, I'm very excited about this sequence and eagerly look forward to reading the rest of the argument.

comment by evolution-is-just-a-theorem · 2017-10-28T23:01:43.861Z · LW(p) · GW(p)

So why didn't the Bank of Japan print more money? If they didn't have an incentive one way or another I would expect them to cave to the political pressure, so what was the counter-incentive? Did they genuinely disagree and think that printing money was a bad idea? Were they reluctant to change policies because then they would look stupid?

Also I propose the "I can't do anything because doing something would be arrogant" attitude be termed modesty-paralysis.

Replies from: jsalvatier, PeterMcCluskey, RobbBB
comment by jsalvatier · 2017-10-29T00:07:35.416Z · LW(p) · GW(p)

Genuinely held austerity-type ideologies are popular among people who care a lot about central banking (possibly due to the Great Depression?), and I'm guessing that's what happened at the BoJ. It seems to be what happened in the US, which made similar mistakes, though less badly.

comment by PeterMcCluskey · 2017-10-29T03:05:51.461Z · LW(p) · GW(p)

Scott Sumner suggests here that central banks worry about small risks that they'll need to be bailed out if their balance sheets get too large. (http://econlog.econlib.org/archives/2017/04/what_were_the_c.html)

Replies from: Chris_Leong
comment by Chris_Leong · 2017-10-29T04:50:16.506Z · LW(p) · GW(p)

Actually that link is quite interesting. Forward guidance (signalling that you intend to create inflation) can create inflation if people feel it is credible. Unfortunately, in the face of uncertainty committees tend to produce compromise decisions rather than decisive action; and beyond that, it is harder for a committee to credibly commit, since members don't know whether other members will keep up their support over time. So forward guidance is actually really hard for a committee to implement.

comment by Rob Bensinger (RobbBB) · 2017-11-03T02:40:44.360Z · LW(p) · GW(p)

Eliezer's account seems to be the one in his inflation target FAQ. In the above post, though, Eliezer isn't claiming to know of any strong reason for the Bank of Japan to be behaving this way; he's saying that the obvious incentives favoring competence aren't so strong that they allow us to make confident predictions one way or the other. He's sufficiently uncertain at the outset about how good their decision-making will be that he can take local evidence of poor decision-making more or less at face value, even without knowing the specifics of why they're behaving in this particular way.

The key decisionmakers at the Bank of Japan might have been vulnerable to any number of ideological commitments or epistemic missteps. They might have had respected colleagues or friends whose beliefs or esteem they especially valued, and who happened to skew toward bad views of monetary policy. It's surprising if someone is willing to forgo large financial incentives in order to look good to a handful of friends at dinner parties, but it's not so surprising if someone is willing to forgo looking good to one group of people in order to look good to another group.

comment by Ben Pace (Benito) · 2017-10-29T02:42:17.284Z · LW(p) · GW(p)

I've promoted this to Featured because it's a great piece of writing, and also because this book seems to be about some of the fundamental epistemological disagreements in our broad communities. I really hope this is the place we can communicate clearly and successfully together on these topics.

Replies from: Lucretius
comment by Lucretius · 2017-10-29T13:24:48.560Z · LW(p) · GW(p)

What are the grounds for relying on individual judgment to promote posts to Featured? Isn't it more reasonable to rely on community opinion, as expressed in the epistocratic karma system?

(I'm aware that this is a tangent, but the point seemed germane given that the reasoning behind this decision may itself reflect the "fundamental epistemological disagreements" which you refer to.)

Replies from: Benito, DragonGod
comment by Ben Pace (Benito) · 2017-10-29T16:27:21.152Z · LW(p) · GW(p)

[Edit: I've turned this comment into a thread in Meta.]

comment by DragonGod · 2017-10-29T15:26:15.636Z · LW(p) · GW(p)

My support for this.

comment by sarahconstantin · 2017-10-30T18:16:13.967Z · LW(p) · GW(p)

My belief on medical interventions (based on many, many examples) is that if you want to claim to be literally the first to try something like "brighter lightboxes", there's a good chance you're wrong and somebody has tried it before. But it's totally unsurprising for an intervention that would work to have failed to reach as far as your family doctor, if it's a DIY or intrinsically unprofitable intervention (brighter lightboxes, yes, but also things like vitamins/supplements which nobody could patent). Mild, partly-psychological, or poorly understood medical issues, like SAD or chronic back pain, often do respond to some kind of DIY trick that you don't get from a doctor.

It would be surprising if you could cure cancer with items purchaseable at a drugstore, though; for major and heavily-studied diseases, usually the only way you could improve over the medical establishment is by being willing to take the risk on an experimental treatment, or sometimes by refusing an unnecessary treatment that's being prescribed for "defensive medicine" reasons.

comment by tristanm · 2017-10-29T05:23:42.423Z · LW(p) · GW(p)

I think the central reason it’s possible for an individual to know something better than the solution currently prescribed by mainstream collective wisdom is that the vastness in the number of degrees of freedom in optimizing civilization guarantees that there will always be some potential solutions to problems that simply haven’t received any attention yet. The problem space is simply way, way too large to expect that even relatively easy solutions to certain problems are known yet.

While modesty may be appropriate in situations regarding a problem that is widely visible and considered urgent by society, I think even within this class of problems, there are still so many inefficiencies and non-optimalities that, if you go out looking for one, it’s likely you’ll be able to actually find one. The existence of people who actively go looking for these types of problems, like within Effective Altruism, may demonstrate this.

The stock market is a good example of a problem that is relatively narrow in scope and also receiving a huge amount of society’s collective brainpower. But I just don’t think there’s nearly enough brainpower to expect even most of the visible and urgent problems to already have adequate solutions, or to even have solutions proposed.

There may also be dynamics involving trade-offs between how much energy and effort society must spend to implement a specific solution, and how much that would subtract from the effort and energy currently needed to support the other mechanisms of civilization. This dynamic may result in problems that are easy to notice, perhaps even easy to define a solution for, but in practice immensely complex to implement.

For example, it’s within the power of individual non-experts to understand the basic causes of the Great Recession, and there may have been individuals who predicted it in advance. But it could still be the case that it simply was not feasible for society to recognize this and change course quickly enough to avert the disaster.

But rather than for society to simply say, once a disaster becomes predictable, “yes we all know this is a problem, but we really don’t know what to do about it, or if it’s even possible to do anything about it”, the incentive structures are such that it’s easier to spend brainpower to come up with reasons why it’s not really that bad and perhaps the problem doesn’t even exist in the first place. Therefore the correct answer gets hidden away and the commonly accepted answer is incorrect.

In other words, modesty is most reasonable when the systems that support knowledge accumulation don’t filter out any correct answers.

comment by Rossin · 2017-10-29T02:31:36.725Z · LW(p) · GW(p)

I think the example with the lightbulbs and SAD is very important because it illustrates well that in areas that humanity is not prioritizing especially, one is much more justified in expecting civilizational inadequacy.

I think a large portion of the judgment of whether one should expect that inadequacy should be a function of how much work and money is being spent on a particular subject.

Replies from: jacobjacob
comment by jacobjacob · 2017-11-09T21:18:47.735Z · LW(p) · GW(p)
inadequacy should be a function of how much work and money is being spent on a particular subject.

I strongly disagree. Society seems to have no problem squandering money on e.g. irreproducible subfields of psychology or ineffective charity.

comment by Jiro · 2017-11-02T18:09:09.241Z · LW(p) · GW(p)

"It is unlikely that there's a 20 dollar bill in the street" doesn't imply "if you see a 20 dollar bill in the street, it's probably fake". Whether it's fake depends on the relative likelihood of fake and real 20 dollar bills. The relative proportion of failed to successful ideas isn't anywhere near as favorable as the proportion of fake to real $20 bills.

comment by Larks · 2017-10-29T23:58:10.382Z · LW(p) · GW(p)

Great article, and I'm glad to see you've returned to Less(er)wrong.

One very very small question: speaking as one of the hedge fund guys you mention who happened to be long MSFT into a very successful quarter on Friday, why did your Microsoft example use a share price of $37.70? We're at $83.81 now!

Replies from: ESRogs
comment by ESRogs · 2017-10-30T00:10:11.647Z · LW(p) · GW(p)

I believe the first draft of this book was written in 2015.

(Though that wouldn't quite explain it -- looks like MSFT was >40 all that year -- so maybe it was 2014?)

Replies from: lahwran, Larks
comment by the gears to ascension (lahwran) · 2017-10-30T01:43:57.705Z · LW(p) · GW(p)

The last day it was $37.70 was 2014-03-14

comment by Larks · 2017-11-01T02:00:03.086Z · LW(p) · GW(p)

Yup, just logged back in to make that guess. Would also explain the Japan commentary.

comment by Pasha Kamyshev (pasha-kamyshev) · 2017-10-29T21:02:16.711Z · LW(p) · GW(p)

The interesting question here is to what extent the "Efficient Market Hypothesis" applies to rationality techniques. My guess is that it doesn't invalidate rationality techniques the same way it invalidates stock-picking techniques, but it still constrains what we can reasonably expect to be functional. More thoughts here: http://lifeinafreemarket.tumblr.com/post/152482166093/efficient-technique-hypothesis-and-its-limits

Replies from: Elo
comment by Elo · 2017-10-30T02:33:45.938Z · LW(p) · GW(p)

How does the Efficient Technique Hypothesis stand up to the classic problems of the failure of the efficient market hypothesis? That some inefficiencies still stick around for various reasons...

comment by avturchin · 2017-10-29T16:49:37.510Z · LW(p) · GW(p)

I think the information that Japan needed to increase its money supply was not unique, but the Japanese regulator was unable to act on it until recently, for several reasons:

  1. Japan had an agreement with the US to stop artificially lowering its exchange rate. The Japanese economic miracle was based on an artificially low exchange rate, and increasing the money supply is one way to lower it. After Japan agreed in the 80s (under threat of sanctions) to stop lowering its exchange rate, its economy stopped growing. https://en.wikipedia.org/wiki/Plaza_Accord
  2. Japan has the largest public debt (200 percent of GDP) at almost zero interest. Increasing the money supply might raise interest rates and trigger a runaway collapse of the debt. https://en.wikipedia.org/wiki/National_debt_of_Japan
  3. The "yen carry trade". The carry trade is one of the most fucked-up things you can do to your own economy: borrowing money in yen to invest it in dollar assets. Many Japanese housewives have taken on debt this way, and they earn from it because of the low interest rate and the expectation of a lower yen exchange rate in the future. The result, however, is that the yen behaves counter-intuitively: it falls on good news and rises on bad. If increasing the money supply is good news, it is bad news for the carry trade, and hence bad news for Japan. https://www.cnbc.com/2016/02/11/yen-jumps-against-dollar-as-carry-trade-wanes-despite-bojs-negative-rates-policy.html

It's all probably much more complex than I remember from the time when I was interested in the Japanese economy. However, it looks like the situation has changed recently:

1) Japanese economic growth and exports are not the concern now; China is. So Japan has probably gained the right to lower the yen.

2) After 2008 the US increased its money supply several times over without damaging its inflation or interest rates, because it used the Goldilocks principle. https://en.wikipedia.org/wiki/Goldilocks_economy Basically, all the new money exactly covered bad debts on banks' balance sheets and didn't spill much into the markets. The European Central Bank did the same, and it is not surprising that Japan did it too.

For one of these reasons, Japan's central bank probably decided that increasing the money supply was worth the risk.

This is how I understand it, and I would be glad to be corrected by someone who knows more about the Japanese economy.

comment by Ben Pace (Benito) · 2017-10-29T03:30:51.144Z · LW(p) · GW(p)

There's been a number of good comments here regarding the question of how the Bank of Japan (BoJ) was able to make such a macroeconomic blunder - what incentive structure allowed for this? I've been ruminating on this essay for a while, and I want to add a note that while that is an interesting question, it is in some ways tangential to the central point.

The key claim is that it's perfectly feasible for a lay-person to confidently know a better monetary policy than the BoJ. This generalises such that when you read someone making such a passing claim that BoJ's monetary policy is insane, this is basically zero evidence regarding whether the speaker is overconfident or not, because given the world we live in they could totally be able to know that.

That, I believe, is the empirical disagreement between modesty-epistemology and its opponents. The degree to which finding out that someone disagrees with an academic field / governmental institution / major company should be counted as overconfidence, versus needing to know more to judge who is right because our civilization is so inadequate that some random decision theorist can make a better macroeconomic decision than the Bank of Japan.

This post doesn't provide the evidence to persuade me of the view it holds, but it makes me think that modesty epistemology is much less an abstract claim than an empirical one.

Replies from: Zvi, tommsittler
comment by Zvi · 2017-10-29T14:00:15.213Z · LW(p) · GW(p)

Scott Sumner often points out that the market responds instantly to changes in monetary policy, and it responds based on its expectations of the future path of monetary policy. During the relevant period for the Bank of Japan, if you look at market reactions to its actions and announcements, it is utterly obvious that the market expects Japan to do better when the bank prints more money (or people think it will print more), and to do worse when the bank prints less (or people think it will print less).

The modesty argument should (and by Scott frequently has been) actually be made against the Bank of Japan. The market is screaming that the bank should print more money, so what right does a committee have to decide it knows better? The counter-argument is that you could read the market as saying that given BoJ has made a decision, decisions to print more money are good news, but that BoJ has other considerations slash hidden information, so it can be wrong to print more money but right for the decision to not print to be interpreted as bad news. This in turn requires BoJ to have hidden information, which could be political rather than economic. I would second Eliezer in recommending The Midas Paradox if you want to know more about such things.

How did the Bank make such a huge mistake? I can think of a number of good reasons, all of which come down to politics, perception and the interests of the individuals making the decision, including their intellectual commitments. Their utility function is not the RGDP of Japan, or human flourishing, or anything remotely like either of these things. There's also the bad reason of they thought they were right. A mix of both seems plausible - some members thought they were right, others weren't sure, and their incentives were bad.

I assume it is a major point of the full book that in many or most situations, the academic field / governmental institution / major company is not optimizing (or at least, only partially optimizing) for the right answer, so there shouldn't be much presumption that they will get the right answer.

Replies from: Benito
comment by Ben Pace (Benito) · 2017-10-29T17:23:10.308Z · LW(p) · GW(p)
The modesty argument should (and by Scott frequently has been) actually be made against the Bank of Japan.

This is true, but I was actually trying to zoom into the particular encounter between Jack and Eliezer, where Jack didn't know the other evidence, and the question was one of whether it was accurate to go "Given my current state of knowledge, I know that Eliezer is being overconfident" or whether it was accurate to go "Given my current state of knowledge, I don't know that Eliezer is being overconfident", and that the disagreement is an empirical one about the background state of institutions in the world, as opposed to one of discussing ideal bayesian agents.

I am talking about group rationality a bit, but I'm realising more and more how much modesty is a strong empirical claim as opposed to a theoretical one.

Replies from: Zvi, ESRogs
comment by Zvi · 2017-10-29T18:50:13.158Z · LW(p) · GW(p)

I think this comes down to many things, including Eliezer's history of calibration in such situations, and the state of such institutions in general. In this case I think Jack was being quite fair to think that given what he knew about Eliezer, and his knowledge of Japan at the time, on average Eliezer was being overconfident here.

But that's a calibration question as opposed to a question of whether it is generally reasonable for a person such as Eliezer to think that BoJ could be doing something insane, or what evidence Eliezer would need before being able to claim that.

To that, I would strongly answer that it is very reasonable to think BoJ could be doing something insane in this spot, it's just a question of how much evidence you have and how confident you should be, even before we learn that actually it is consensus reality among economists and traders not in BoJ that BoJ is acting insane.

I'd also make the argument that the very fact that everyone thinks BoJ was doing something insane, and it turned out to have done something insane, is evidence that institutions like the BoJ do insane things. Not only do they do insane things, they often don't fix them even when everyone is telling them their actions are insane.

Obviously this is one non-random example, so it isn't that strong as evidence on its own, but the class of thing "seemingly insane thing done by a major institution, which people told them was insane, which they eventually fixed, and which then turned out to have been insane" is reasonably large.

At a minimum, such institutions strongly disagree with the argument from modesty, in the sense that they quite often seem to believe things that the outside world tells them are wrong. Using the argument from modesty to believe the decisions of groups and institutions that are themselves disrespecting the argument from modesty is at least highly weird. Why trust a group to make better decisions than you, when they're using a worse rule to make their decisions?

comment by ESRogs · 2017-10-30T00:06:39.378Z · LW(p) · GW(p)
the particular encounter between Jack and Eliezer

John?

comment by [deleted] (tommsittler) · 2017-10-29T10:42:20.678Z · LW(p) · GW(p)

Your comment is a little bit confusing.

The key claim is that it's perfectly feasible for a lay-person to confidently know a better monetary policy than the BoJ. This generalises such that

I assume you mean "the key claim" to apply to the second sentence as well. As in:

The key claim is that this generalises such that when you read someone making such a passing claim that BoJ's monetary policy is insane, this is basically zero evidence regarding whether the speaker is overconfident or not, because given the world we live in they could totally be able to know that.

Later you say:

our civilization is so inadequate that some random decision theorist can make a better macroeconomic decision than the Bank of Japan.

Now I take you to be endorsing, not describing, the key claim. Which is it?

At the end you say:

This post doesn't provide the evidence to persuade me of the view it holds, but it makes me think that modesty epistemology is much less an abstract claim than an empirical one.

Call M the view: "I don't expect a layperson can make a better macroeconomic decision than the Bank of Japan. Until further evidence, I'll call such claims overconfident". Are you making the very weak** claim that M is an empirical statement? Or the much stronger claim that not-M?

[ ** In a way it's trivially true that if you're a Bayesian, there are no epistemological questions left (modulo anthropics), only empirical ones.]

Replies from: Benito
comment by Ben Pace (Benito) · 2017-10-29T17:29:40.048Z · LW(p) · GW(p)
Your comment is a little bit confusing.

This seems correct, oops.

Now I take you to be endorsing, not describing, the key claim. Which is it?

I was intending to describe it.

Regarding M - I am making that 'weak' claim, but I'm trying to emphasise that I was surprised by that. Often when I talk about modesty epistemology, people come to me with arguments of the sort "You should model yourself as a black box outputting claims and others as the same, and average based on your weighting over expertise", and I respond with questions about this theoretical claim. But I now want to say "But let's talk about whether the BoJ is insane". What I thought was a theoretical disagreement is in fact largely empirical.
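To make the procedure people pitch to me concrete, here is a minimal sketch of that black-box averaging rule (the credences and weights are hypothetical, purely for illustration):

```python
def modest_credence(credences, weights):
    """Pool everyone's probability for the same claim, weighted by how
    much expertise you assign each source (yourself included)."""
    return sum(p * w for p, w in zip(credences, weights)) / sum(weights)

# Hypothetical numbers: my credence that the BoJ is acting insanely (0.9),
# pooled with the board's implied credence (0.1), under an assumed 5:1
# expertise weighting in the board's favor.
print(modest_credence([0.9, 0.1], [1.0, 5.0]))  # about 0.23
```

The formula itself is trivial; everything interesting is hidden in the weights, and what weights real institutions like the BoJ actually deserve is exactly the empirical question.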

comment by Lucretius · 2017-10-29T00:19:59.227Z · LW(p) · GW(p)

It's unclear what point the Bank of Japan example is meant to illustrate. You acknowledge that Nobel laureates and other prestigious economists agreed that the macroeconomic policy of the Bank of Japan would have very bad consequences. So I'd expect Hal Finney, and other epistemically modest folk, to be skeptical of the Bank of Japan on the basis of what these experts believe.

Replies from: ESRogs
comment by ESRogs · 2017-10-29T00:27:07.770Z · LW(p) · GW(p)

But should he have distrusted his own view before learning that fact?

Replies from: Lucretius
comment by Lucretius · 2017-10-29T00:50:45.163Z · LW(p) · GW(p)

I agree that's the relevant question.

Note, however, that Eliezer didn't take the inside view here: he was deferring to bloggers ("by reading econblogs, I believe myself to have identified which econbloggers know better"). So, again, it seems that this example doesn't illustrate a clear thesis. If we agree modest folk could criticize the Bank of Japan out of deference to Nobel laureates, why should we take Eliezer's deference to econbloggers as illustrative of epistemic immodesty?

Replies from: Benito, Unnamed, dxu
comment by Ben Pace (Benito) · 2017-10-29T01:17:59.893Z · LW(p) · GW(p)

One difference in my model of a modest epistemologist versus a person who isn't one: when they both read Eliezer's passing remark in the macroeconomics paper (saying that Japan's economic policy is deranged), the modest person says "I believe you are wrong, because the Bank of Japan are experts" and the other person says "I do not have a strong belief here". I feel the modest epistemologist is quicker to make judgements about what you can't know, whereas the other person thinks you may indeed know better than the Bank of Japan, because this isn't that far out of sync with how the world generally is.

Replies from: Lucretius
comment by Lucretius · 2017-10-29T12:46:38.719Z · LW(p) · GW(p)

I think the reasonable reaction upon reading the statement that the Bank of Japan's macroeconomic policy is deranged, made by someone who lacks formal credentials in the area and who appears to have a less-than-stellar track-record of defying expert consensus, is indeed to be skeptical. By contrast, that position becomes much more reasonable if one learns that Nobel laureates, prestigious economists and smart econbloggers agree with it.

So the claim "I believe you are wrong because the Bank of Japan are experts" seems to be a caricature of modest epistemology, based on an equivocation between

  1. "In the absence of other information, I would rather trust the Bank of Japan than Eliezer Yudkowsky here" and
  2. "It's reasonable to side with Nobel laureates, prestigious economists and smart econbloggers if they disagree with the Bank of Japan."
comment by Unnamed · 2017-10-29T05:11:02.618Z · LW(p) · GW(p)

The clearest description that Eliezer gave of this part of his process looks to be:

when I read some econbloggers who I’d seen being right about empirical predictions before saying that Japan was being grotesquely silly, and the economic logic seemed to me to check out, as best I could follow it, I wasn’t particularly reluctant to believe them.

So it wasn't just that he trusted Scott Sumner's judgment; it was the combination of "I think Sumner has good judgment about things like this" (outside view) and "the parts of his argument that I can evaluate all look solid" (inside view). I wouldn't call that "deferring."

comment by dxu · 2017-10-29T03:05:22.765Z · LW(p) · GW(p)
If we agree modest folk could criticize the Bank of Japan out of deference to Nobel laureates

Point of clarification: Eliezer was not, in fact, deferring to Nobel laureates who were critical of Japan's monetary policy, or even aware that such laureates existed at the time. He was specifically deferring to econ bloggers who he happened to follow. Nor should we consider it an act of modesty ("not taking the inside view") to side with one set of experts over another; to do so is to call the opposing side wrong, after all.

Replies from: Lucretius
comment by Lucretius · 2017-10-29T09:51:33.572Z · LW(p) · GW(p)
Eliezer was not, in fact, deferring to Nobel laureates who were critical of Japan's monetary policy, or even aware that such laureates existed at the time. He was specifically deferring to econ bloggers who he happened to follow.

I completely agree—this is what I was claiming in my previous comment. My point was that Finney's (hypothetical) decision to defer to Nobel laureates and Eliezer's decision to defer to econ bloggers are similar in the relevant respects, so it's unclear why one decision would be an instance of epistemic modesty while the other an instance of epistemic immodesty.

comment by ChristianKl · 2017-11-02T19:09:05.403Z · LW(p) · GW(p)

I agree that the incentives in our medical system are horrible. A while ago I wrote https://www.lesserwrong.com/posts/TYA2nsPypoNaLsczd/prediction-based-medicine-pbm to suggest how to build a startup that would fix the incentives.

comment by AndHisHorse · 2017-10-30T14:49:08.804Z · LW(p) · GW(p)

I think that there is a potentially dangerous implication in the comparison between the BoJ and the stock market: that the real essence of the difference between them is incentives. (At least, the way that I read it allowed for that interpretation; I'm not sure if this reading is sufficiently universal.)

I think that the general class of thing which is present in a stock market but not a central bank is an error-correction mechanism. In this case, that mechanism takes the form of very clear and direct monetary incentives. But we should expect other mechanisms to achieve the same purpose (though many may be, or be connected to, non-central examples of incentives as well).

Experimentation is one; I believe physicists, for example, because they have data to back up their predictions and the field tends to check theories against data. Peer review (formal or informal - namely, the ability of a field to call out and reject bad ideas) is another; my trust in this mechanism for correcting errors is the basis of my trust in a great deal of science (basically the idea that many qualified persons are keeping watch and could effectively raise an alarm if something important went wrong).

It seems reasonable to allow for disagreement with a field or institution if you can determine that its conclusions seem to have been reached in the absence of such a mechanism. In particular, if a field lacks, say, expert consensus, or an institution is going against that consensus, it seems reasonable to assume that there is an opportunity for a layperson to do reasonably well at interpreting expert-generated evidence from the rest of the field.

The requirements are even more lax, I believe, for errors of omission, which Eliezer mentions in his description of Brienne's light issues. I think this could reasonably be called a different category of problem.

comment by skybrian · 2017-10-30T06:01:14.772Z · LW(p) · GW(p)

Maybe compare with epistemic learned helplessness?

http://squid314.livejournal.com/350090.html

comment by Ziz · 2017-10-30T01:28:21.465Z · LW(p) · GW(p)

I once found a $20 bill on a sidewalk at a marina. I searched the surrounding sidewalk, grass, and bushes and found 3 more. It reminded me of the talk from Eliezer mentioned above, and I was thinking, "this event would make really good propaganda for the boat startup I'm in, too bad I don't believe in omens no wait believing in them doesn't make them real."

comment by dlr · 2020-03-22T14:53:58.457Z · LW(p) · GW(p)

So we should expect Dunning-Kruger to not replicate if the subjects were offered a nontrivial reward for how well they predicted their test scores.

comment by Shmi (shminux) · 2017-10-31T07:11:00.270Z · LW(p) · GW(p)

How well replicated is the lightbox cure?

comment by habryka (habryka4) · 2017-10-29T22:58:41.242Z · LW(p) · GW(p)

Meta note: We noticed a bug where some users were able to vote multiple times on this post, so its current karma score is probably a bit inflated. I am investigating the issue right now, and will try to restore it to an accurate state as soon as possible.

Replies from: habryka4, ESRogs
comment by habryka (habryka4) · 2017-10-31T03:14:40.227Z · LW(p) · GW(p)

Ok, the current score should reflect the correct amount. Sorry for things being inflated for a bit.

comment by ESRogs · 2017-10-30T00:04:15.364Z · LW(p) · GW(p)

I noticed a few times that my vote seemed to have disappeared, and so I re-voted. Maybe each of those votes counted?

Replies from: habryka4
comment by habryka (habryka4) · 2017-10-30T00:27:31.948Z · LW(p) · GW(p)

Yeah, all of those counted and increased the total karma. I am currently waiting on a response from our DB provider to send us a full copy of our logs, and then I can reset the karma score to what it's intended to be.

comment by Andrew Me (andrew-me) · 2017-11-04T14:31:11.505Z · LW(p) · GW(p)

Is this book really getting a print version before Rationality: From A to Z?

Replies from: malo
comment by Malo (malo) · 2017-11-06T08:54:37.578Z · LW(p) · GW(p)

Yes.

Replies from: andrew-me
comment by Andrew Me (andrew-me) · 2017-11-08T12:24:38.801Z · LW(p) · GW(p)

wtf happened to rationality's print version?

Replies from: malo
comment by Malo (malo) · 2017-11-09T04:34:21.084Z · LW(p) · GW(p)

It's just a really big project. It's almost an order of magnitude longer than In Eq, and it was written in a way that makes it much more challenging to turn into a paper book. E.g., links are pretty important when reading the Sequences. Said another way: the task of getting a physical book up for sale on Amazon is pretty trivial; the process of transforming the actual content of the Sequences into something that works in book form is significantly harder. In Eq doesn't have this issue.

The sheer size of the task, combined with other competing priorities at MIRI, is the reason it's not out yet.

Replies from: andrew-me
comment by Andrew Me (andrew-me) · 2017-11-11T05:23:38.345Z · LW(p) · GW(p)

I don't understand. It's already in book form, just only available as an e-book. Wasn't the plan to turn the ebook into a physical book? (not create an entirely new book?)

Also, links are great, but they aren't preventing an audio book. And a goal of R:AZ was that "You can simply read the book as a book."

MIRI themselves stated in 2015 that "Paper versions should be available later this year." I guess they were just demonstrating this: https://www.readthesequences.com/Planning-Fallacy

We should start a pool on whether this will be out before Winds of Winter!

comment by John_Maxwell (John_Maxwell_IV) · 2017-10-31T17:36:58.046Z · LW(p) · GW(p)

Why does it make sense to talk in terms of the "overall competence of human civilization" instead of just naming a specific flaw? (E.g. "national banks don't face a good set of incentives", "medical researchers are reluctant to try extreme interventions".) Naming a specific flaw provides more information and, at least to my ears, does not sound nearly as obnoxious as decrying the "overall competence of human civilization".

comment by arthole · 2017-10-30T21:31:15.743Z · LW(p) · GW(p)

Re: SAD. You may want to consider that SAD is a side-effect of the decay or change of molecules in the body that are produced by sunlight. Skin exposed to sunlight produces D3, via a UVB reaction in skin cells. The lack of available D3 in her bloodstream, and thus her brain, may be the physical component producing the psychological phenomena of SAD.

Replies from: ChristianKl
comment by ChristianKl · 2017-11-02T17:49:59.681Z · LW(p) · GW(p)

This is obvious advice. In this case I think it's relatively unlikely that Brienne doesn't take D3 supplements or hasn't tested herself, but I don't think there's a reason to downvote people for writing obvious advice.

comment by m0ltz · 2017-10-30T13:39:06.985Z · LW(p) · GW(p)

What is wrong with Gary Taubes’ theories? I legitimately want to know, because I am also a follower of GT.

Replies from: arundelo
comment by arundelo · 2017-10-30T16:50:17.389Z · LW(p) · GW(p)

Scott Alexander writes about Taubes here (and elsewhere).

(Edit 3: Got rid of some stuff about the in-browser editor. For the record, the "maybe also allow users on desktop to switch to the markdown mode" link is this.)

Replies from: habryka4
comment by habryka (habryka4) · 2017-10-30T19:42:10.234Z · LW(p) · GW(p)

Nah, the only thing you need to do is press space after you insert a link, or any other markdown syntax. (Edited it for you)

Replies from: arundelo
comment by arundelo · 2017-10-30T21:43:08.605Z · LW(p) · GW(p)

Thanks, but you accidentally removed the href attributes from my links. I added them back ... never mind, they're still dead. Can't get it to work.

They are:

http://slatestarcodex.com/2015/08/04/contra-hallquist-on-scientific-rationality/

https://www.google.com/search?q=taubes+site%3Aslatestarcodex.com

https://github.com/Discordius/Lesswrong2/issues/226

Replies from: habryka4, habryka4
comment by habryka (habryka4) · 2017-10-30T21:50:04.370Z · LW(p) · GW(p)

Huh, weird. Sorry for that. I will figure out what caused that.

comment by habryka (habryka4) · 2017-11-02T21:52:31.442Z · LW(p) · GW(p)

Btw, the relevant bug is now fixed and markdown links should work properly again.

comment by [deleted] (tommsittler) · 2017-10-28T23:14:53.832Z · LW(p) · GW(p)

Even if we believed that central bankers are purely selfish, and don't care at all about the mandate they have nominally taken on, they still have some incentive to produce higher employment (inflation being equal). Politicians encourage them to do so, and they gain prestige among macroeconomists (e.g. "wow, Fed chairperson X presided over the longest period of peacetime growth since 1900"). To paraphrase evolution-is-just-a-theorem: what incentive do central bankers have not to pursue adequately loose monetary policy?

comment by Martin Randall (martin-randall) · 2024-12-15T03:25:54.444Z · LW(p) · GW(p)

I'm an epistemically modest person, I guess. My main criticism is one that is already quoted in the text, albeit with more exclamation points than I would use:

You aren’t so specially blessed as your priors would have you believe; other academics already know what you know! Civilization isn’t so inadequate after all!

It's not just academics. I recall having a similar opinion to Yudkowsky-2013. This wasn't a question of careful analysis of econobloggers, I just read The Economist, the most mainstream magazine to cover this type of question, and I deferred to their judgment. I started reading The Economist because my school and university had subscriptions. The reporting is paywalled but I'll cite Revolution in the Air (2013-04-13) and Odd men in (1999-05-13) for anyone with a subscription, or just search for Haruhiko Kuroda's name.

Japan 2013 monetary policy is a win for epistemic modesty. Instead of reading econblogs and identifying which ones make the most sense, or deciding which Nobel laureates and prestigious economists have the best assessment of the situation, you can just upload conventional economic wisdom into your brain as an impressionable teenager and come to good conclusions.

Disclaimer: Yudkowsky argues this doesn't impact his thesis about civilizational adequacy, defined later in this sequence. I'm not arguing that thesis here, better to take that up where it is defined and more robustly defended.

comment by Calorion · 2022-12-01T15:29:23.566Z · LW(p) · GW(p)

>John had previously observed me making contrarian claims where I’d turned out to be badly wrong, like endorsing Gary Taubes’ theories about the causes of the obesity epidemic.

Um…what? This might not be the *only* cause, but surely emphasizing sugar over fat has been a *major* one. What am I missing here?

comment by nonwasp · 2017-11-07T12:21:43.838Z · LW(p) · GW(p)

Actually, more than half of the people you meet are smarter than average, because you won't meet the people who are insufficiently intelligent to be verbal (as opposed to mute from other causes).

comment by m0ltz · 2017-10-30T13:43:09.916Z · LW(p) · GW(p)
Yet one would expect the governing board of the Bank of Japan to be composed of experienced economists with specialized monetary expertise. How likely is it that any outsider would be able to spot an obvious flaw in their policy?

I think there is a significant chance. Oftentimes, experts may know the truth, but outside forces influence their decisions. An example of that is lobbyists. They often lobby for their own agenda, which may be quite the opposite of the scientific consensus. E.g., think of the tobacco industry lobbying the government, and of doctors being suppressed for speaking up. This happens in the food and health industries all the time too. Scientists who go against the "common knowledge" get shamed, and their grants are reduced or removed.

comment by Nate_Rausch · 2017-10-30T05:15:14.363Z · LW(p) · GW(p)

Another key difference between startup ideas / medical research and financial markets is that the latter is a bounded problem.

It is possible to know every relevant supply and demand factor regarding a currently priced asset. However, the space of possible startup ideas, now or in the future, is probably infinite, or at least many orders of magnitude larger than the space of possible information to know about the pricing of a current asset.

And so, we should and likely do have much less modesty concerning medical research and startup ideas, than we have concerning asset prices.

comment by Jeff Rose · 2017-10-29T18:54:31.820Z · LW(p) · GW(p)

"If you want to outperform—if you want to do anything not usually done—then you’ll need to conceptually divide our civilization into areas of lower and greater competency. "

The idea quoted above seems wrong in practice. You don't need to conceptually divide our civilization into areas of competency; you need to see what is actually being done in the area in which you want to outperform: in particular, (i) whether your proposed activity/solution has already been tried or assessed; and (ii) the degree to which existing evidence says it won't or will work.

Also, if civilizational competence is intended to cover something beyond an efficient market, it would make sense to use a different example.

Replies from: ESRogs
comment by ESRogs · 2017-10-30T00:59:57.690Z · LW(p) · GW(p)
Also, if civilizational competence is intended to cover something beyond an efficient market, it would make sense to use a different example.

Why do you say that? The efficient market seems like a helpful metaphor (for example, as used to describe the landscape of charity here: https://blog.givewell.org/2013/05/02/broad-market-efficiency/).

Are there specific cases you have in mind where you'd want to talk about civilizational competence, but the efficient market metaphor seems like a stretch?