Posts

The Motivated Reasoning Critique of Effective Altruism 2021-09-15T01:43:59.518Z
Linch's Shortform 2020-10-23T18:07:04.235Z
What are some low-information priors that you find practically useful for thinking about the world? 2020-08-07T04:37:04.127Z

Comments

Comment by Linch on Redwood Research’s current project · 2021-09-22T01:31:04.965Z · LW · GW

I get no visual feedback after clicking the "report" button in Talk to Filtered Transformer, so I have no idea whether the reported snippets got through.

For what it's worth, I got some violent stuff with a low score in my first few minutes of playing around with variations of the prompt below, but was unable to replicate it afterwards.

Joker: "Do you want to see a magic trick?" 

Comment by Linch on I read “White Fragility” so you don’t have to (but maybe you should) · 2021-09-12T02:21:41.389Z · LW · GW

There are >7 billion people on the planet, and likely >100 active threads on LessWrong. Your prior should strongly be against interaction with any specific person on any specific topic being the best use of your time, not for it. 

Comment by Linch on I read “White Fragility” so you don’t have to (but maybe you should) · 2021-09-12T00:18:15.765Z · LW · GW

Or the prediction that training cops to avoid shooting blacks could make a difference to the average lifespan of blacks.  This is impossible -- out of 42 million blacks in the U.S., a little over 200 per year are shot to death by cops.  For context that's more than the number that die from lightning strikes, but less than the number that die from drowning.

Concretely:

(200 deaths/year) * (75 years/lifetime) / (42 million lifetimes) * (40 years lost/death) * (365 days/year) ≈ 5.2 days/lifetime. So ~5 days is the average lifetime lost for black people, compared to a world where all police shootings are eliminated and there are no other secondary effects.

Getting rid of 100% of police shootings is unrealistic, but preventing 20%-50% of them (extending average black lifetimes by 1-3 days) doesn't seem crazy to me.

I multiplied these numbers out because the dimensional analysis relating a yearly death rate to the total number of people alive is pretty confusing unless you have an intuition for this stuff (which I, at least, don't have enough of). I can imagine people walking away from just the raw numbers thinking the expected per capita loss is closer to hours, or closer to weeks.
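
For readers who want to check the arithmetic, here's a quick back-of-the-envelope sketch in Python (the inputs are the rough figures quoted above, not precise statistics):

```python
deaths_per_year = 200         # rough count of black Americans shot to death by police annually
population = 42_000_000       # approximate black population of the US
lifetime_years = 75           # approximate life expectancy
years_lost_per_death = 40     # rough years of life lost per shooting death

# Lifetime probability of dying this way:
p_lifetime = deaths_per_year * lifetime_years / population

# Expected days of life lost per person:
days_lost = p_lifetime * years_lost_per_death * 365
print(f"{days_lost:.1f} days/lifetime")  # ~5.2
```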

Comment by Linch on Handicapping competitive games · 2021-07-25T02:55:22.362Z · LW · GW

"can six bronze players beat three grandmasters?"

Well, can they? 

It surprises me that this is remotely in question. Three GMs will almost certainly smoke 6 bronze players in StarCraft (I've seen far more impressive feats), and naively, shooter games would be even more asymmetric (if the GM player has much better aim, they can beat ~infinite bronze players).

Comment by Linch on Handicapping competitive games · 2021-07-22T07:03:03.481Z · LW · GW

Modifying the number of players seems promising as a handicap. E.g., if there are 3 players who want to play Go, you can pair the strongest player with the weakest player against the medium player, with the strongest and weakest players alternating moves for their side (no communication between them).

I've also seen versions of this for StarCraft, where, e.g., the professional player is in charge of microstrategy and the weaker player is in charge of macrostrategy, or vice versa.

Comment by Linch on MIRI location optimization (and related topics) discussion · 2021-05-09T00:54:15.160Z · LW · GW

(I was unimpressed by S'Pore's handling of covid, but it was still so much better than the US that it's not really comparable).

Comment by Linch on MIRI location optimization (and related topics) discussion · 2021-05-09T00:49:25.763Z · LW · GW

Although we’ve been focusing heavily on the US in our search, we’re also still interested in country suggestions

Any strong reason for preferring US locations? For example, Singapore has many advantages: a broadly competent government/bureaucracy, local politics different enough from typical Anglo-American issues that staff will be disinclined to wade in, lots of smart and mathematically competent people from local universities, English as a native language, a thriving expat community, tropical weather, etc.

(Btw the text editor is very annoying for quotes).

Comment by Linch on Strong Evidence is Common · 2021-03-15T23:25:47.613Z · LW · GW

Linking my comment from the Forum:

I think in the real world there are many situations where (if we were to put explicit Bayesian probabilities on such beliefs, which we almost never do) beliefs with ex ante ~0 credence quickly get extraordinary updates. My favorite example is sense perception. If I woke up after sleeping on a bus and were to put explicit Bayesian probabilities on what I anticipate seeing the next time I open my eyes, then the credence I'd assign to the true outcome (ignoring practical constraints like computation and my near inability to form visual imagery) is ~0. Yet it's easy to get strong Bayesian updates: I just open my eyes. In most cases, this should be a large enough update, and I go on my merry way.

But suppose I open my eyes and instead see people who are approximate lookalikes of dead US presidents sitting around the bus. Then at that point (even though the ex ante probability of this outcome isn't much different from that of any specific other thing I could have seen), I will correctly be surprised, and have some reason to doubt my sense perception.

Likewise, if instead of saying your name is Mark Xu, you said "Lee Kuan Yew," I at least would be pretty suspicious that your actual name is Lee Kuan Yew.

I think a lot of this confusion in intuitions can be resolved by looking at what MacAskill calls the difference between unlikelihood and fishiness:

Lots of things are a priori extremely unlikely yet we should have high credence in them: for example, the chance that you just dealt this particular (random-seeming) sequence of cards from a well-shuffled deck of 52 cards is 1 in 52! ≈ 1 in 10^68, yet you should often have high credence in claims of that form.  But the claim that we’re at an extremely special time is also fishy. That is, it’s more like the claim that you just dealt a deck of cards in perfect order (2 to Ace of clubs, then 2 to Ace of diamonds, etc) from a well-shuffled deck of cards. 

Being fishy is different than just being unlikely. The difference between unlikelihood and fishiness is the availability of alternative, not wildly improbable, hypotheses on which the outcome or evidence is reasonably likely. If I deal the random-seeming sequence of cards, I don't have reason to question my assumption that the deck was shuffled, because there's no alternative background assumption on which the random-seeming sequence is a likely occurrence. If, however, I deal the deck of cards in perfect order, I do have reason to significantly update that the deck was not in fact shuffled, because the probability of getting cards in perfect order if the cards were not shuffled is reasonably high. That is: P(cards not shuffled)P(cards in perfect order | cards not shuffled) >> P(cards shuffled)P(cards in perfect order | cards shuffled), even if my prior credence was that P(cards shuffled) > P(cards not shuffled), so I should update towards the cards having not been shuffled.

Put another way, we can dissolve this by looking explicitly at Bayes' theorem:

P(hypothesis | evidence) = P(evidence | hypothesis) * P(hypothesis) / P(evidence)

and in turn,

P(evidence) = P(evidence | hypothesis) * P(hypothesis) + P(evidence | alternatives) * P(alternatives)

P(evidence | hypothesis) is high in both the "fishy" and "non-fishy" regimes. However, P(evidence | alternatives) * P(alternatives) is much higher for fishy hypotheses than for non-fishy hypotheses, even if the surface-level evidence looks similar!
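
A minimal numerical sketch of the card example (all numbers here are made up for illustration):

```python
# Prior: the deck was almost certainly shuffled.
p_shuffled = 0.99
p_not_shuffled = 1 - p_shuffled          # alternative: someone arranged the deck

# Likelihoods of dealing a perfect-order deck:
p_order_given_shuffled = 1 / 8.07e67     # 1/52!
p_order_given_not_shuffled = 0.5         # arranged decks are plausibly in order

# Posterior odds that the deck was NOT shuffled, after seeing perfect order:
posterior_odds = (p_not_shuffled * p_order_given_not_shuffled) / (
    p_shuffled * p_order_given_shuffled
)
print(f"{posterior_odds:.2e} : 1")       # astronomically in favor of "not shuffled"
```

The "fishiness" lives entirely in p_order_given_not_shuffled being non-negligible; for a random-seeming sequence, no alternative hypothesis assigns it comparable probability, so the same unlikelihood produces no update.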

Comment by Linch on Strong Evidence is Common · 2021-03-15T02:03:33.638Z · LW · GW

Also, many people with East Asian birth names go by an Anglicized given name informally, enough that I'm fairly sure randomly selected "Mark Xu"s in the US will be well below 95% on "has a driver's license that says 'Mark Xu'."

Comment by Linch on Linch's Shortform · 2021-01-01T06:41:09.981Z · LW · GW

Crossposted from an EA Forum comment.

There are a number of practical issues with most attempts at epistemic modesty/deference that theoretical approaches do not adequately account for.

1) Misunderstanding of what experts actually mean. It is often easier to defer to a stereotype in your head than to fully understand an expert's views, or a simple approximation thereof. 

Dan Luu gives the example of SV investors who "defer" to economists on the issue of discrimination in competitive markets without actually understanding (or perhaps reading) the relevant papers. 

In some of those cases, it's plausible that you'd do better trusting the evidence of your own eyes/intuition over your attempts to understand experts.

2) Misidentifying the right experts. In the US, it seems like the educated public roughly believes that "anybody with a medical doctorate" is approximately the relevant expert class on questions as diverse as nutrition, the fluid dynamics of indoor air flow (if the airflow happens to carry viruses), and the optimal allocation of limited (medical) resources.

More generally, people often default to the closest high-status group/expert to them, without accounting for whether that group/expert is epistemically superior to other experts slightly further away in space or time. 

2a) Immodest modesty.* As a specific case/extension of this, when someone identifies an apparent expert or community of experts to defer to, they risk (incorrectly) believing that they have deference (on this particular topic) "figured out," and thus choose not to update on either object- or meta-level evidence that they did not correctly identify the relevant experts. The issue may be exacerbated beyond "normal" cases of immodesty if there's a sufficiently high conviction that you are being epistemically modest!

3) Information lag. Obviously any information you receive is to some degree from the past, and risks being outdated. Of course, this lag happens for all evidence you have; at the most trivial level, even sensory experience isn't really in real time. But I think it's reasonable to assume that attempts to read expert claims/consensus are disproportionately likely to have a significant lag problem, compared to your own present evaluations of the object-level arguments.

4) Computational complexity in understanding the consensus. Trying to understand the academic consensus (or lack thereof) from the outside might be very difficult, to the point where establishing your own understanding from a different vantage point might be less time-consuming. Unlike 1), this presupposes that you are able to correctly understand/infer what the experts mean; it just might not be worth the time to do so.

5) Community issues with groupthink/difficulty in separating beliefs from action. In an ideal world, we make our independent assessments of a situation and report them to the community, in what Kant calls the "public (scholarly) use of reason," and then defer to an all-things-considered, epistemically modest view when we act on our beliefs in our private role as citizens.

However, in practice I think it's plausibly difficult to separate out what you personally believe from what you feel compelled to act on. One potential issue with this is that a community that's overly epistemically deferential will plausibly have less variation, and lower affordance for making mistakes.
 

--

*As a special case of that, people may be unusually bad at identifying the right experts when said experts happen to agree with their initial biases, either on the object-level or for meta-level reasons uncorrelated with truth (eg use similar diction, have similar cultural backgrounds, etc)

Comment by Linch on Pain is not the unit of Effort · 2020-12-11T04:16:18.625Z · LW · GW

Fun fact: Hillary Clinton's autobiography quoted Clarkson quoting Nietzsche.

Comment by Linch on Covid-19 IFR from a Bayesian point of view · 2020-11-05T05:39:43.645Z · LW · GW

Apologies if you've already thought of this, but some quick points:

  1. I think it's probably wrong to assume that covid-19 IFR is a static quantity.
  2. It seems very plausible to me that (esp in the US) empirical covid-19 IFR dropped a lot over time, through a combination of better treatment and self-selection in who gets infected.
  3. In addition, IFR varies a lot from location to location due to demographic differences.
  4. Finally, one issue with using antibody testing as ground truth for "once infected" is that it's plausible that people lose antibodies over time.

Comment by Linch on Linch's Shortform · 2020-11-02T10:46:41.231Z · LW · GW

There should maybe be an introductory guide for new LessWrong users coming in from the EA Forum, and vice versa.

I feel like my writing style (designed for the EA Forum) is almost the same as that of LW-style rationalists, but not quite identical, and this difference is enough to make my writing substantially less useful for the average audience member here.

For example, this identical question is a lot less popular on LessWrong than on the EA Forum, despite naively appearing to appeal to both audiences (and indeed, if I were to guess at the purview of LW, being closer to the mission of this site than to that of the EA Forum).

Comment by Linch on The Treacherous Path to Rationality · 2020-11-02T05:37:29.724Z · LW · GW

The Rationality community was never particularly focused on medicine or epidemiology. And yet, we basically got everything about COVID-19 right and did so months ahead of the majority of government officials, journalists, and supposed experts.

Based on anecdotal reports, I'm not convinced that rationalist social media early on was substantially better than educated Chinese social media. I'm also not convinced that I would rather have had rationalists in charge of the South Korean or Taiwanese responses than the actual people on the ground.

It's probable that this group did better than many Western authorities, but the bar of Kalia Beach, Palestine, is not very high.

I think it is true that in important ways the rationalist community did substantially better than plausible "peer" social groups, but nonetheless, ~2 million people still died, and the world is probably worse off for it. 

And yet, we basically got everything about COVID-19 right

This specifically is quite surprising to me. I have a list of >30 mistakes I've made about covid*, and my impression is that I'm somewhat above average at getting things right. Certainly my impression is that some individuals seem to be noticeably more accurate than me (Divia Eden, Rob Wiblin, Lukas Gloor, and several others come to mind), but I would guess that a reasonably high fraction of people in this community are off by at least as much as I am, were they to venture concrete predictions. 

(I have not read most of the post so I apologize if my points have already been covered elsewhere).

* I have not updated the list much since late May. If I were to do so, I suspect the list would at least double in size.

Comment by Linch on The Treacherous Path to Rationality · 2020-11-02T05:18:45.373Z · LW · GW

I know someone who ~5x'd. 

Comment by Linch on DanielFilan's Shortform Feed · 2020-10-29T00:13:40.153Z · LW · GW

Brazil is another interesting place. In addition to the large population and GDP, anecdotally (based on online courses I've taken, philosophy meme groups, etc.), Brazilians seem more interested in Anglo-American academic ethics than people from China or India, despite the presumably large language barrier.

Comment by Linch on Top Time Travel Interventions? · 2020-10-28T03:00:41.061Z · LW · GW

Couldn't that have the effect of dramatically accelerating human technological progress, without sufficiently increasing the quality of government or the state of AI safety?

You aren't bringing democracy or other significantly improved governmental forms to the world. In the end it's just another empire. It might last a few thousand years if you're really lucky.

Hmm I don't share this intuition. I think a possible crux is answering the following question:

Relative to possible historical trajectories, is our current trajectory unusually likely or unlikely to navigate existential risk well?

I claim that unless you have good outside view or inside view reasons to believe otherwise, you should basically assume our current trajectory is ~50th percentile of possible worlds. (One possible reason to think we're better than average is anthropic survivorship bias, but I don't find it plausible since I'm not aware of any extinction-level near misses). 

With the 50th percentile baseline in mind, I think that a culture that is broadly 

  • consequentialist
  • longtermist
  • one-world government (so lower potential for race dynamics)
  • permissive of privacy violations for the greater good
  • prone to long reflection and careful tradeoffs
  • has ancient texts that a) explicitly warn of the dangers of apocalypse and b) ingrain a strong belief that the end of the world is, in fact, bad.
  • Specific scenarios (from the ancient texts) warning of specific anticipated anthropogenic risks (dangers of intelligent golems, widespread disease, etc)

seems to just have a significantly better shot at avoiding accidental existential catastrophe than our current timeline. For example, you can imagine them spending percentage points of their economy on mitigating existential risks, the best scholars of their generation taking differential technological progress seriously, bureaucracies willing to delay dangerous technologies, etc.

Does this seem right to you? If not, at approximately what percentile would you place our current trajectory?

___

In that case I think what you've done is essentially risk 2 thousand years of time for humans to live life on Earth, balancing this against the gamble that a Mohist empire offers a somewhat more sane and stable environment in which to navigate technological risks.

This seems like a bad bargain to me.

Moral uncertainty aside, sacrificing 2000 years of near-subsistence-level existence for billions of humans seems like a fair price for even a percentage point higher chance of achieving utopia for many orders of magnitude more sentient beings over billions of years (or avoiding S-risks, etc.). And right now I think that (conditional upon success large enough to change the technological curve) this plan would increase the odds of an existential win by multiple percentage points.

Comment by Linch on Top Time Travel Interventions? · 2020-10-28T02:38:45.662Z · LW · GW

I'm also generally excited about many different stories involving Mohism and alternative history. I'd also like to see somebody explore the following premises (for different stories):

1) A young Mohist disciple thinks about things for a long time, discovers longtermism, and realizes (after some calculations with simplified assumptions) that the most important Mohist thing to do is to guarantee a good future hundreds or thousands of years out. He slowly convinces the others. The Mohists try to execute on thousand-year plans (like Asimov's Foundation, minus the availability of computers and advanced math).

2) An emperor converts to Mohism. 

3) The Mohists go underground after the establishment of the Qin dynasty and its alleged extreme suppression of dissenting thought. They develop into a secret society (akin to the Freemasons) dedicated to safeguarding the longterm trajectory of the empire while secretly spreading consequentialist ideas.

4) Near-schism within the now-Mohist China due to the introduction of a compelling religion. Dissent about whether to believe in the supernatural, burdens of proof, concerns with infinite ethics, etc.

Comment by Linch on Top Time Travel Interventions? · 2020-10-28T02:29:27.725Z · LW · GW

Personally, I feel a lot of spiritual kinship with the Mohists (imo much cooler, by my modern/Westernized tastes, than the Legalists, Daoists, Confucians, and other philosophies popular at the time).

(the story below is somewhat stylized. Don't take it too literally).

The Mohists' main shtick was traveling the land teaching their ways during the Warring States period, particularly to weaker states at risk of being crushed by larger/more powerful ones. Their reputation was great enough that kings would call off invasions based only on the knowledge that Mohist disciples were defending the targeted cities.

One (somewhat anachronistic) analogy I like is thinking of Mohists as nerdy Jedi. They are organized in semi-monastic orders. They live ascetic lifestyles, denying themselves worldly pleasures for the greater good. They are exquisitely trained in the relevant crafts (diplomacy and lightsaber combat for Jedi; logic, philosophy, and siege engineering for Mohists).

Even their most critical flaws are similar to those of the Jedi. In particular, their rejection of partiality and emotion feels reminiscent of what led to the fall of the Jedi (though I have no direct evidence it was actually bad for Mohist goals). More critically, their short-term moral goals did not align with a long-term stable strategy. In hindsight, we know that preserving "balance" between the various kingdoms was not a stable strategy, since "empire" was an attractor state.

In the Mohists' case, they fought on the side of losing states. Unfortunately, eventually one state won, and the ruling empire was not a fan of philosophies that espoused defending the weak.

Comment by Linch on Top Time Travel Interventions? · 2020-10-27T23:29:39.047Z · LW · GW

I'd love to see a short story written with this premise.

I'd love to see this. I've considered doing it myself but decided that I'm not a good enough fiction writer (yet).

Comment by Linch on Top Time Travel Interventions? · 2020-10-27T23:28:38.711Z · LW · GW

Darn. Hmm, I guess another possibility is to see whether ~300 years of advances in propaganda and social technology would make someone from our timeline much more persuasive than people from the 1700s, and, after some pre-time-travel reading and marketing/rhetoric classes, try to write polemical newsletters directly (I'm unfortunately handicapped by being the wrong ethnicity, so I'd need someone else to be my mouthpiece if I do this).

Preventing specific pivotal moments (like assassinations or the Boston 'massacre') seems to rely on a very narrow theory of change, though maybe it's enough?

Comment by Linch on Top Time Travel Interventions? · 2020-10-27T23:20:43.223Z · LW · GW

Added some links! I love how Gwern has "American Revolution" under his "My Mistakes" list. 

Comment by Linch on Top Time Travel Interventions? · 2020-10-27T02:09:35.354Z · LW · GW

Broadly, I think I'm fairly optimistic about "increasing the power, wisdom, and maybe morality of good actors, particularly during times pivotal to humanity's history."

(Baseline: I'm bringing myself. I'm also bringing 100-300 pages of the best philosophy available in the 21st century, focused on grounding people in the best cross-cultural arguments for values/paradigms/worldviews I consider the most important). 

Scenario 0: Mohist revolution in China

When: Warring States Period (~400BC)

Who: The Mohists, an early school of proto-consequentialists in China, focused on engineering, logic, and large population sizes.

How to achieve power: Before traveling back in time, learn old Chinese languages and a lot of history and ancient Chinese philosophy. Bring technological designs from the future, particularly things expected to provide decisive strategic advantages to even small states (e.g., gunpowder, Ming-era giant repeating crossbows, etc. Might need some organizational theory/logistics advances to help maintain the empire later, but it's possible the Mohists are smart enough to figure this out on their own. Maybe some agricultural advances too). Find the local Mohists and teach them the relevant technologies and worldviews. Help them identify a state willing to listen to Mohists in order to avoid getting crushed, and slowly change the government from within while winning more and more wars.

Desired outcome: Broadly consequentialist one-world government, expanding outwards from Mohist China. Aware of all the classical arguments for utilitarianism, longtermism, existential risks, long reflection, etc.

Other possible pivotal points:

  1. Give power to leaders of whichever world religion we think is most conducive for longterm prosperity (maybe Buddhism? High impartiality, scientific-ish, vegetarian, less of a caste system than close contender Hinduism)
    1. Eg, a) give cool toys to Ashoka and b) convince Ashoka of the "right" flavors of Buddhism
  2. Increase power to old-school English utilitarians.
    1. One possible way to do this is by stopping the American revolution. If we believe Bentham and Gwern, the American revolution was a big mistake.
      1. Talking to Ben Franklin and other reasonable people at the time might do this
      2. Might be useful in general to talk to people like Bentham and other intellectual predecessors to make them seem even more farsighted than they actually were
    2. It's possible you can increase their power through useful empirical/engineering demonstrations that help people think they're knowledgeable.
  3. Achieve personal power
    1. Standard thing where you go back in time by <50 years and invest in early Microsoft, Google, Domino's Pizza, bitcoin, etc.
    2. Useful if we believe now is at or near the hinge of history
  4. Increase power and wisdom to early transhumanists, etc.
    1. "Hello SL4. My name is John Titor. I am from the future, and here's what I know..."
    2. Useful in most of the same worlds #3 is useful.
  5. Long-haul AI Safety research
    1. Bring up current alignment/safety concerns to early pioneers like Turing, make it clear you expect AGI to be a long time away (so AGI fears aren't dismissed after the next AI winter).
    2. May need to get some renown first by casually proving/stealing a few important theorems from the present.

In general I suspect I might not be creative enough. I wouldn't be surprised if there are many other pivotal points around, eg, the birth of Communism, Christianity, the Scientific Revolution, etc.
 

Comment by Linch on Top Time Travel Interventions? · 2020-10-27T01:40:54.638Z · LW · GW

The ability to go back in time and rectify old mistakes is one thing I fantasize about from time to time, so this will be a fun exercise for me! Might think about more detailed answers later.

Comment by Linch on Linch's Shortform · 2020-10-23T18:07:06.160Z · LW · GW

What are the limitations of using Bayesian agents as an idealized formal model of superhuman predictors?

I'm aware of 2 major flaws:

1. Bayesian agents don't have logical uncertainty. However, anything implemented on bounded computation necessarily has this.

2. Bayesian agents don't have a concept of causality. 

Curious what other flaws are out there.

Comment by Linch on Against Victimhood · 2020-09-23T06:13:26.454Z · LW · GW

Right now it's very hard to determine whether I agree or disagree with the article.

I think there are a lot of verbal claims here, and it feels almost entirely like a question of mood affiliation to determine how aligned I am with the central thesis, which directions of claims I agree with, and how much I agree with them.

Not telling you how to live your life, but I'd personally benefit from more numerical claims/quantified uncertainty.

Comment by Linch on Why haven't we celebrated any major achievements lately? · 2020-09-13T19:06:11.808Z · LW · GW

Competitive programming, maybe? Though perhaps the skill ceiling is lower than in professional sports.

Comment by Linch on Multitudinous outside views · 2020-08-19T19:37:31.616Z · LW · GW

Another in-the-field example of differing reference class intuitions here, on the Metaculus question:

Will Ghislaine Maxwell be alive on 1 January 2021?

The other commentator started with a prior based on actuarial tables for the death rates of 58-year-old women in the USA, and argued that going from a base rate of 0.3% to 10% is a ~33x increase in probability, which is implausibly large given the evidence entailed.

I thought actuarial tables were not a plausibly good base rate to start from, since most of the Ghislaine Maxwell-relevant bits come not from the possibility of natural death.

Hopefully the discussion there is helpful for some LessWrong readers in understanding how different forecasters' intuitions clash "in practice."
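
As a side note, it's worth being careful about units when describing the size of such an update. A quick sketch using the same numbers from the discussion above:

```python
import math

def odds(p):
    return p / (1 - p)

base, forecast = 0.003, 0.10
print(f"probability ratio: {forecast / base:.1f}x")        # ~33x
print(f"odds ratio: {odds(forecast) / odds(base):.1f}x")   # ~37x
print(f"log-odds shift: {math.log(odds(forecast) / odds(base)):.2f} nats "
      f"({math.log2(odds(forecast) / odds(base)):.2f} bits)")
```

A ~33x probability ratio is only ~5 bits of evidence, which is not an outlandish amount if the actuarial base rate was the wrong starting point to begin with.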

Comment by Linch on Multitudinous outside views · 2020-08-18T20:43:43.068Z · LW · GW

Some scattered thoughts:

1. I think it's very good to consider many different outside views for a problem. This is why I found section 2.1 of Yudkowsky's Intelligence Explosion Microeconomics frustrating/a weak man: I think it's plausibly much better to ensemble a bunch of weak outside views than to use a single brittle outside view.

"Beware the man of one reference class" as they say.

2. One interesting (obvious?) note on base rates that I haven't seen anybody else point out: across time, you can think of "base rate forecasting" as just taking the zeroth derivative (while linear regression is a first derivative, etc).
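
A toy illustration of that point (the data here is made up):

```python
import numpy as np

history = np.array([3.0, 4.0, 5.0, 7.0, 9.0])   # hypothetical yearly counts

# Zeroth-order ("base rate") forecast: assume no trend, predict the average.
zeroth_order = history.mean()

# First-order forecast: fit a line and extrapolate one step ahead.
slope, intercept = np.polyfit(np.arange(len(history)), history, 1)
first_order = slope * len(history) + intercept

print(f"base-rate forecast: {zeroth_order:.1f}")  # 5.6
print(f"trend forecast: {first_order:.1f}")       # ~10.1
```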

3.

So which reference class is correct? In my (inside) view as a superforecaster, this is where we turn to a different superforecasting trick, about considering multiple models. As the saying goes, hedgehogs know one reference class, but foxes consult many hedgehogs.

I think while consulting many models is a good reminder, the hard part is choosing which model(s) to use in the end. I think your ensemble of models can often do much better than an unweighted average of all the models you've considered, since some models are a) much less applicable, b) much more brittle, c) much less intuitively plausible, or d) much too strongly correlated with other models you have.

As you've illustrated in some examples above, sometimes the final ensemble is composed of practically only one model!

4. I suspect starting with good meta-priors (in this case, good examples of reference classes to start investigating) is a substantial fraction of the battle. Often, you can have good priors even when things are very confusing.

5. One thing I'm interested in is how "complex" we should expect a reasonably good forecast to be: how many factors go into the final forecast, how complex the interactions between the parameters are, etc. I suspect final forecasts that are "good enough" are often shockingly simple, and the hard part of a forecast is building/extracting a "correct enough" simplified model of reality and getting the small amount of appropriate data that you actually need.

Once an experienced analyst has the minimum information necessary to make an informed judgment, obtaining additional information generally does not improve the accuracy of his or her estimates. Additional information does, however, lead the analyst to become more confident in the judgment, to the point of overconfidence.

Experienced analysts have an imperfect understanding of what information they actually use in making judgments. They are unaware of the extent to which their judgments are determined by a few dominant factors, rather than by the systematic integration of all available information. Analysts actually use much less of the available information than they think they do.

There is strong experimental evidence, however, that such self-insight is usually faulty. The expert perceives his or her own judgmental process, including the number of different kinds of information taken into account, as being considerably more complex than is in fact the case. Experts overestimate the importance of factors that have only a minor impact on their judgment and underestimate the extent to which their decisions are based on a few major variables. In short, people's mental models are simpler than they think, and the analyst is typically unaware not only of which variables should have the greatest influence, but also which variables actually are having the greatest influence.

From Psychology of Intelligence Analysis, as summarized in the forecasting newsletter (emphasis mine).

If this theory is correct, or broadly correct, it'd point to human judgmental forecasting being dramatically different from the dominant paradigms in statistical machine learning, where more data and more parameters usually improve accuracy.

(I think there may be some interesting analogies with the lottery ticket hypothesis that I'd love to explore more at one point)

Comment by Linch on Multitudinous outside views · 2020-08-18T20:20:15.972Z · LW · GW

And for COVID, I've written about my very early expectations - but maybe you think that a follow-up on why superforecasters mostly disagreed with my forecasts / I modeled things differently than them over the past 3-4 months would be interesting and useful

I'd be interested in this.

Comment by Linch on Are we in an AI overhang? · 2020-08-08T09:49:17.114Z · LW · GW

Re the hardware limit: flagging the implicit assumption here that network speeds are spotty/unreliable enough that you can't (or are unwilling to) safely do hybrid on-device/cloud processing for the important parts of self-driving cars.

(FWIW I think the assumption is probably correct).

Comment by Linch on What are some low-information priors that you find practically useful for thinking about the world? · 2020-08-08T09:14:55.096Z · LW · GW

I think my base rate for basic comprehension failures is at a similar level!

Comment by Linch on What are some low-information priors that you find practically useful for thinking about the world? · 2020-08-08T09:07:42.065Z · LW · GW

Wow thank you! I found this really helpful.

Comment by Linch on Can you gain weirdness points? · 2020-07-31T09:00:21.310Z · LW · GW

Many of the thinker-heroes we revere now, like Jeremy Bentham, Isaac Newton, Florence Nightingale, and Benjamin Franklin, had ideas that were considered deeply weird in their time. Yet many of them were quite popular even within their own lifetimes.

Comment by Linch on What a 20-year-lead in military tech might look like · 2020-07-30T06:55:34.678Z · LW · GW

Information request: how large have leads in military tech been in historical wars? My naive impression is that there was a >20-year lead in, e.g., US-Vietnam, USSR-Afghanistan, both the First and Second Italo-Ethiopian Wars, the Great Emu War, etc.

I'm also curious how much of a lead the US currently has over, e.g., other permanent members of the UN Security Council.

Comment by Linch on A Personal (Interim) COVID-19 Postmortem · 2020-06-28T09:43:36.548Z · LW · GW

I agree with the following points:

  • That European countries very much appear to have this under control
  • That they did much better than the US and Latin America
  • Right-wing populist leaders did worse than I expected, in a non-coincidental way (Brazil's Bolsonaro is another example to add to the list).
  • "trying to control the narrative over dealing with problems is a particularly dangerous approach with infectious diseases" very strongly agreed. I'm a big fan of this write-up by NunoSempere, and this historian's touching reflection on the Spanish flu.

I think it's likely our disagreements are more about framing than actual empirical differences. For example, "they seemed poised to have gotten it under control before it ended up everywhere, though they didn't catch it enough to prevent spread at first, which would have been the goal" is a phrase I'd use to describe South Korea and Singapore, not Western Europe, where almost every locale had community transmission. I'd use "they caught it enough to prevent spread" to describe places like Mongolia, with zero or close to zero community transmission, or places that contained community transmission to a single region.

I agree that Western European governments should get a lot of relative credit for managing to prevent more deaths, disability, and wanton economic destruction, despite being in an initially bad spot. But thousands of people nonetheless died, and those deaths appear to have been largely preventable (in a practical, humanly doable sense). So while I think we should a) emphasize the relative successes (because in these dark times it's good to both hold on to hope and be grateful for what we have), and b) be unequivocally clear that the other Western governments mostly did better than the US, I don't want to lose sight of the target, and I want to be clear that the relative failings of the US under Trump do not excuse the lesser failings of other institutions and governments.

Comment by Linch on A Personal (Interim) COVID-19 Postmortem · 2020-06-26T22:01:44.733Z · LW · GW

I know the conversation these days is (rightly) about preventing presymptomatic transmission from the wearer, but I'm personally still at ~80% that masks probably protect the wearer at least a little, though agree that the effect may not be huge.

Comment by Linch on A Personal (Interim) COVID-19 Postmortem · 2020-06-26T12:25:59.278Z · LW · GW

people find it far easier to forgive others for being wrong than being right

Harry Potter and the Half-Blood Prince

First of all, I really appreciate this postmortem. Admitting times when you were wrong couldn't have been an easy task, particularly if/when you staked a lot of your identity and reputation on being right. As EA and rationalist individuals and institutions become older and more professionalized, I'm guessing that institutional pressures will increasingly push us further and further away from wanting to admit mistakes, so I sincerely hope we get in the habit of publicly recognizing mistakes early on. (Unfinished list of my own mistakes, incidentally[1].) I hope to digest your post further and offer more insightful thoughts, but here are some initial reactions:

Addendum on masks:

Another consideration about masks is that masks turn out in practice to be very reusable, a fact we (or at least I) should have investigated a lot more in early March.

On hospital-based transmission:

I don't know how much you believed in it, but as presented, this appears to be merely (ha!) a forecasting error rather than a strategic error. In the absence of a clear counterfactual, I don't think you were obviously wrong here, since it's quite plausible that if a lot of people like you had ignored/downplayed the role of hospital-based transmission, it'd have gotten a lot worse.

On being a jerk re Jim and Elizabeth's post:

For what it's worth, I also (privately) asked them to take it down because I had similar considerations to you and thought the thing they wrote about masks was unilateralist-y and a bit of an infohazard. I think I was wrong there. But I think I mostly was object-level wrong about the relative tradeoffs and harms. To the extent I updated now, a) I updated object-level on how much I should cooperate or desire others to cooperate with specific institutions, and b) I updated broadly (but not completely) in general favor of openness and against censorship.

I continue to maintain that if I (and possibly you) had the same object-level beliefs as before, it was not incorrect to consider it an infohazard (but not all object-level infohazards are worth suppressing! Particularly if release promotes the relevant meta-level norms more than it harms), though of course not an existential one.

On superforecasting:

You said you think superforecasting is

materially worse than [you] hoped it would be at noticing rare events early.

I don't know how high your hopes were, but for what it's worth, I think this proves too much. I'm not sure about the exact aggregation algorithms that the Open Phil Good Judgement covid-19 project was running, but I feel like all I can realistically gather is that "of this specific set of part-time superforecasters on the Open Phil-funded project, more than 50% were way too optimistic."

While it's certainly some evidence against superforecasters being good at noticing rare events early, I don't think it's sufficient evidence against superforecasters being able to do this, and I definitely don't think this is a lot of evidence against superforecasting as a process.

As you weakly allude to, if you had been on the project and paying more attention, you would probably have done better. Likewise, I know other superforecasters who I think were much more pessimistic than the GJ median. I suspect superforecasters who regularly read LessWrong and the EA Forum would have done better; and if I were to design a better system for superforecasting on rare events, I'd a) prime people to pay attention to a lot of rare events first, and b) have people train and score on log loss or some other scoring system that's more punishing of overconfidence than Brier.
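
To illustrate point b), here's a minimal sketch (with made-up forecasts) of how the two scoring rules treat an overconfident forecaster when the rare event actually happens:

```python
import math

def brier(p, outcome):
    # Squared error; bounded above by 1, so overconfidence is capped.
    return (p - outcome) ** 2

def log_loss(p, outcome):
    # Unbounded as the probability assigned to the true outcome goes to 0.
    return -math.log(p if outcome == 1 else 1 - p)

# Overconfident forecasts that a rare event won't happen; the event happens.
for p in (0.20, 0.05, 0.01):
    print(f"p={p}: Brier={brier(p, 1):.3f}, log loss={log_loss(p, 1):.2f}")
# Brier saturates near 1.0 while log loss keeps growing (1.61, 3.00, 4.61),
# so training on log loss punishes extreme overconfidence much harder.
```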

(All that said, I think Metaculus did okay but not great on covid-y questions relative to what someone with high hopes for prediction aggregation algorithms might reasonably expect).

On US Gov't Institutions:

I think a bunch of the insights here are colored by your policy research experience. For example, you mention that you'd have trusted the FDA to do a lot better under Scott Gottlieb. This might be obvious to you, but it's something I didn't even really think about until you highlighted it. You also highlight a lot of useful specific uncertainties about whether the issue was political directors under Trump or nonpolitical directors of specific institutions. I think all of these things are very useful to know from the perspective of a policy researcher like yourself (and for students of US policy), since how to reform institutions is very decision-relevant to you and many other EAs.

That said, at a very coarse level, I think I'm a lot more cynical than you seem to be about how well US institutions would have handled this pre-Trump. It's possible we're not actually disagreeing, so I'm curious about your counterfactual probabilities of things being an order of magnitude better (<20,000 Americans dead of COVID-19 by now, say) in the following two worlds:

a) Clinton administration continuing all of Obama's policies?

b) Clinton administration continuing all of Obama's policies except for US CDC in China being equally understaffed as they are in our timeline.

My reasoning for why I'm generally pretty cynical (at least conditional upon this pandemic spreading at all; maybe a larger international presence could have helped contain it early) in those counterfactual worlds[2]:

1) There's sort of an existing counterfactual for the preparedness of governments with a broadly American/Western culture but as competent at governance as a typical European country. It's called Europe. And I feel like every large geographically Western country was pretty bad at preparedness? People are praising Germany's response, but when it comes down to it, Germany has 9000+ confirmed covid-19 deaths in a population of 83 million, or >100 deaths/million, despite taking a large economic hit to suppress the pandemic. Japan had <1000 confirmed deaths in a population of 126.5 million. Now, Japan was bad at testing, so maybe Japan actually had ~4000 deaths. But even at those numbers (~31 deaths/million), Japan still had <1/3 the number of deaths per capita as Germany. And at the object level, Japan seemed to have screwed up a bunch of important things, so there's a simple transitivity argument where if a high-income country did worse than Japan, its policies/institutions couldn't have been that great.

Maybe I'm harping on this too much, but I really don't want us to succumb to the tyranny of low expectations here.

Now, some culturally Western countries did fine (Australia, New Zealand). I'm not sure why they did well (maybe because they're islands, maybe seasonality is bigger than I think so the Southern Hemisphere had a huge initial advantage early on, maybe because they're around 10-15% East Asian so people had enough ties to China to start worrying earlier, maybe low population density, maybe their institutions are newer and better, maybe just luck), but regardless, I'd counterfactually bet on the response of Hillary's America looking more like a slightly less competent Europe, or maybe Canada, and less like Australia/NZ.

2) I didn't look into it that much, but at a high level, the US response to 2009 H1N1 looked more competent; ultimately, though, the response didn't seem sufficient to have achieved containment if the mortality rate had been as high as people initially thought. (Not sure of this; willing to be convinced otherwise on this one.)

3) Some inside-view reasoning about specific actors.

___

Anyway, all these gripes aside, thank you again for your thoughtful (and well-written!) post. That couldn't have been easy to write, and I really appreciate it.

[1] Your post actually got me thinking about how I should be more honest/introspective about my strategic, and not just predictive, mistakes, so thanks for that! I plan to update the list soon with some strategic mistakes as well. For example, I considered myself to be on the "right" side of the masks question epistemically, but not strategically.

[2] I'm maybe 35% on a) and 30% on b). A lot of the probability mass comes from there being enough chaos/sensitivity to initial conditions that this pandemic might not have happened at all, rather than from Obama's or Hillary's response being an order of magnitude better conditional upon there being an epidemic.

Comment by Linch on Simulacra Levels and their Interactions · 2020-06-26T09:08:42.627Z · LW · GW

First of all, I really appreciate this article! It helps me conceptualize cleanly some vocabulary that's flying around in the rationalist community that I previously didn't really understand.

To me, the most obvious missing archetype is The Truth-Giver or perhaps The Teacher.

The Teacher is concerned only with conveying truthful messages. She will usually tell the truth, but she may occasionally omit truthful things, or even (rarely) tell half-truths if she thinks it's easier to convey truthful messages via half-truth. Importantly, she's different from the Sage or the Pragmatist in that she's not concerned with other object-level consequences, only with conveying truthful messages.

Consider the claim:

There’s a pandemic headed our way from China

Suppose that The Teacher believes that the following is more correct:

There's a pandemic headed our way from Italy.

The Teacher will usually choose to clarify and give the full message; however, if she only has one bit of response, she'll say "yes" to "Is there a pandemic headed our way from China?" Importantly (unlike the Pragmatist), she'll do this even if the perceived consequences are negative, as long as the subject gets more truthful information than they otherwise would have.

Against Level 1 and Level 2 players, The Teacher will never see a need to resort to Level 3. However, against a fully Level 3 player, she will (begrudgingly) issue utterances correctly conveying the ideological faction she's on, as that's the most relevant/only bit to transfer to fully Level 3 players.

I think Teacher roles are incredibly important in practical everyday communication, since all information is lossy, inferential gaps are common, attention and text are limited, etc. Indeed, I would go so far as to argue that Teacher roles are often preferable to Oracle roles in murky situations if the purpose is to collectively seek truth.

Comment by Linch on Thomas C. Schelling's "Strategy of Conflict" · 2014-01-31T19:23:31.925Z · LW · GW

Hi! First post here. You might be interested to know that not only is the broken radio example isomorphic to "Chicken," but there's a real-life solution to the Chicken game that is very close to "destroying your receiver." That is, you can set up a "commitment" that you will, in fact, not swerve. Of course, standard game theory tells us that this is not a credible threat (since dying is bad). Thus, you must make your commitment binding, e.g., by ripping out the steering wheel.
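
A tiny sketch of why the binding commitment works, with made-up payoffs for the classic Chicken matrix:

```python
# (row action, column action) -> (row payoff, column payoff)
payoffs = {
    ("swerve", "swerve"):     (0, 0),
    ("swerve", "straight"):   (-1, 1),
    ("straight", "swerve"):   (1, -1),
    ("straight", "straight"): (-10, -10),  # crash: worst outcome for both
}

def col_best_response(row_action):
    # The column player's best reply, given the row player's action.
    return max(["swerve", "straight"], key=lambda a: payoffs[(row_action, a)][1])

# Without commitment, "I'll go straight" isn't credible. But once the steering
# wheel is gone, the row player's action really is "straight", and the column
# player's best response is to swerve:
print(col_best_response("straight"))  # -> swerve
```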