Posts

Confirmation Bias As Misfire Of Normal Bayesian Reasoning 2020-02-13T07:20:02.085Z · score: 45 (15 votes)
Map Of Effective Altruism 2020-02-03T06:20:02.200Z · score: 17 (7 votes)
Book Review: Human Compatible 2020-01-31T05:20:02.138Z · score: 77 (28 votes)
Assortative Mating And Autism 2020-01-28T18:20:02.223Z · score: 49 (10 votes)
SSC Meetups Everywhere Retrospective 2019-11-28T19:10:02.028Z · score: 36 (7 votes)
Mental Mountains 2019-11-27T05:30:02.107Z · score: 97 (34 votes)
Autism And Intelligence: Much More Than You Wanted To Know 2019-11-14T05:30:02.643Z · score: 62 (20 votes)
Building Intuitions On Non-Empirical Arguments In Science 2019-11-07T06:50:02.354Z · score: 62 (23 votes)
Book Review: Ages Of Discord 2019-09-03T06:30:01.543Z · score: 36 (11 votes)
Book Review: Secular Cycles 2019-08-13T04:10:01.201Z · score: 62 (29 votes)
Book Review: The Secret Of Our Success 2019-06-05T06:50:01.267Z · score: 137 (44 votes)
1960: The Year The Singularity Was Cancelled 2019-04-23T01:30:01.224Z · score: 67 (24 votes)
Rule Thinkers In, Not Out 2019-02-27T02:40:05.133Z · score: 103 (39 votes)
Book Review: The Structure Of Scientific Revolutions 2019-01-09T07:10:02.152Z · score: 82 (20 votes)
Bay Area SSC Meetup (special guest Steve Hsu) 2019-01-03T03:02:05.532Z · score: 30 (4 votes)
Is Science Slowing Down? 2018-11-27T03:30:01.516Z · score: 112 (46 votes)
Cognitive Enhancers: Mechanisms And Tradeoffs 2018-10-23T18:40:03.112Z · score: 44 (17 votes)
The Tails Coming Apart As Metaphor For Life 2018-09-25T19:10:02.410Z · score: 103 (41 votes)
Melatonin: Much More Than You Wanted To Know 2018-07-11T17:40:06.069Z · score: 93 (34 votes)
Varieties Of Argumentative Experience 2018-05-08T08:20:02.913Z · score: 129 (44 votes)
Recommendations vs. Guidelines 2018-04-13T04:10:01.328Z · score: 135 (38 votes)
Adult Neurogenesis – A Pointed Review 2018-04-05T04:50:03.107Z · score: 105 (32 votes)
God Help Us, Let’s Try To Understand Friston On Free Energy 2018-03-05T06:00:01.132Z · score: 96 (32 votes)
Does Age Bring Wisdom? 2017-11-08T07:20:00.376Z · score: 61 (23 votes)
SSC Meetup: Bay Area 10/14 2017-10-13T03:30:00.269Z · score: 4 (0 votes)
SSC Survey Results On Trust 2017-10-06T05:40:00.269Z · score: 13 (5 votes)
Different Worlds 2017-10-03T04:10:00.321Z · score: 92 (47 votes)
Against Individual IQ Worries 2017-09-28T17:12:19.553Z · score: 70 (39 votes)
My IRB Nightmare 2017-09-28T16:47:54.661Z · score: 27 (17 votes)
If It’s Worth Doing, It’s Worth Doing With Made-Up Statistics 2017-09-03T20:56:25.373Z · score: 33 (15 votes)
Beware Isolated Demands For Rigor 2017-09-02T19:50:00.365Z · score: 53 (37 votes)
The Case Of The Suffocating Woman 2017-09-02T19:42:31.833Z · score: 7 (5 votes)
Learning To Love Scientific Consensus 2017-09-02T08:44:12.184Z · score: 11 (9 votes)
I Can Tolerate Anything Except The Outgroup 2017-09-02T08:22:19.612Z · score: 18 (15 votes)
The Lizard People Of Alpha Draconis 1 Decided To Build An Ansible 2017-08-10T00:33:54.000Z · score: 11 (7 votes)
Where The Falling Einstein Meets The Rising Mouse 2017-08-03T00:54:28.000Z · score: 8 (5 votes)
Why Are Transgender People Immune To Optical Illusions? 2017-06-28T19:00:00.000Z · score: 15 (7 votes)
SSC Journal Club: AI Timelines 2017-06-08T19:00:00.000Z · score: 4 (4 votes)
The Atomic Bomb Considered As Hungarian High School Science Fair Project 2017-05-26T09:45:22.000Z · score: 26 (16 votes)
G.K. Chesterton On AI Risk 2017-04-01T19:00:43.865Z · score: 5 (5 votes)
Guided By The Beauty Of Our Weapons 2017-03-24T04:33:12.000Z · score: 13 (11 votes)
[REPOST] The Demiurge’s Older Brother 2017-03-22T02:03:51.000Z · score: 10 (9 votes)
Antidepressant Pharmacogenomics: Much More Than You Wanted To Know 2017-03-06T05:38:42.000Z · score: 3 (3 votes)
A Modern Myth 2017-02-27T17:29:17.000Z · score: 12 (8 votes)
Highlights From The Comments On Cost Disease 2017-02-17T07:28:52.000Z · score: 2 (2 votes)
Considerations On Cost Disease 2017-02-10T04:33:36.000Z · score: 10 (7 votes)
Albion’s Seed, Genotyped 2017-02-09T02:15:03.000Z · score: 5 (3 votes)
Discussion of LW in Ezra Klein podcast [starts 47:40] 2016-12-07T23:22:10.079Z · score: 9 (10 votes)
Expert Prediction Of Experiments 2016-11-29T02:47:47.276Z · score: 10 (11 votes)
The Pyramid And The Garden 2016-11-05T06:03:06.000Z · score: 34 (20 votes)

Comments

Comment by yvain on April Coronavirus Open Thread · 2020-04-01T05:21:10.440Z · score: 8 (4 votes) · LW · GW

No, it says:

The study design does not allow us to determine whether medical masks had efficacy or whether cloth masks were detrimental to HCWs by causing an increase in infection risk. Either possibility, or a combination of both effects, could explain our results. It is also unknown whether the rates of infection observed in the cloth mask arm are the same or higher than in HCWs who do not wear a mask, as almost all participants in the control arm used a mask. The physical properties of a cloth mask, reuse, the frequency and effectiveness of cleaning, and increased moisture retention, may potentially increase the infection risk for HCWs. The virus may survive on the surface of the facemasks,29 and modelling studies have quantified the contamination levels of masks.30 Self-contamination through repeated use and improper doffing is possible. For example, a contaminated cloth mask may transfer pathogen from the mask to the bare hands of the wearer. We also showed that filtration was extremely poor (almost 0%) for the cloth masks. Observations during SARS suggested double-masking and other practices increased the risk of infection because of moisture, liquid diffusion and pathogen retention.31 These effects may be associated with cloth masks... The study suggests medical masks may be protective, but the magnitude of difference raises the possibility that cloth masks cause an increase in infection risk in HCWs.
Comment by yvain on April Coronavirus Open Thread · 2020-04-01T01:45:02.299Z · score: 8 (4 votes) · LW · GW

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4420971/ is skeptical of cloth masks. Does anyone have any thoughts on it, or know any other studies investigating this question?

Comment by yvain on April Coronavirus Open Thread · 2020-03-31T23:47:45.532Z · score: 35 (18 votes) · LW · GW

In most major countries, daily case growth has switched from exponential to linear, an important first step towards getting the infection under control. See https://ourworldindata.org/grapher/daily-covid-cases-3-day-average for more; you can change which countries appear on the graph for more detail. The growth rate in the world as a whole has also turned linear: https://ourworldindata.org/grapher/daily-covid-cases-3-day-average?country=USA+CHN+KOR+ITA+ESP+DEU+GBR+IRN+OWID_WRL . Since the graph shows growth per day, a horizontal line represents a linear growth rate in total cases.
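
As a toy illustration of that last point (made-up numbers, not real case data): difference the cumulative series to get daily new cases, then smooth with a 3-day average. In the exponential phase the daily numbers keep rising; in the linear phase they flatten into the horizontal line you see on the chart.

    # Toy sketch with invented cumulative counts, not real data
    cumulative = [100, 200, 400, 800, 1600, 2400, 3200, 4000, 4800]
    daily = [b - a for a, b in zip(cumulative, cumulative[1:])]    # new cases per day
    avg3 = [sum(daily[i:i+3]) / 3 for i in range(len(daily) - 2)]  # 3-day average
    print(daily)  # [100, 200, 400, 800, 800, 800, 800, 800]: rising, then flat
    print(avg3)   # the flat tail is the horizontal line that signals linear growth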

If it were just one country, I would worry it was an artifact of reduced testing. Given that it's happening in almost every country at once, I say it's real.

The time course doesn't really match lockdowns, which were instituted at different times in different countries anyway. Sweden and Brazil, which are infamous for not taking any real coordinated efforts to stop the epidemic, are showing some of the same positive signs as everyone else - see https://ourworldindata.org/grapher/daily-covid-cases-3-day-average?country=BRA+SWE - though the graph is a little hard to interpret.

My guess is that this represents increased awareness of social distancing and increased taking-things-seriously starting about two weeks ago, and that this happened everywhere at once because it was more of a media phenomenon than a political one, and the media everywhere reads the media everywhere else and can coordinate on the same narrative quickly.

Comment by yvain on April Coronavirus Open Thread · 2020-03-31T22:07:26.473Z · score: 14 (8 votes) · LW · GW

Thanks for the shout-out, but I don't think the thing I proposed there is quite the same as hammer and dance. I proposed lockdown, then gradual titration of lockdown level to build herd immunity. Pueyo and others are proposing lockdown, then stopping lockdown in favor of better strategies that prevent transmission. The hammer and dance idea is better, and if I had understood it at the time of writing I would have been in favor of that instead.

(there was an ICL paper that proposed the same thing I did, and I did brag about preempting them, which might be what you saw)

Comment by yvain on SSC - Face Masks: Much More Than You Wanted To Know · 2020-03-24T17:42:36.669Z · score: 2 (1 votes) · LW · GW

Sorry, by "complete" I meant "against both types of transmission". I agree it was confusing/wrong as written, so I edited it to say "generalized".

Comment by yvain on Can crimes be discussed literally? · 2020-03-23T17:37:38.148Z · score: 8 (7 votes) · LW · GW

Agreed, it seems very similar to (maybe exactly like) the "Martin Luther King was a criminal" example from there.

Comment by yvain on March Coronavirus Open Thread · 2020-03-14T03:41:04.817Z · score: 43 (15 votes) · LW · GW

China is following a strategy of shutting down everything and getting R0 as low as possible. This works well in the short term, but they either have to keep everything shut down forever, or risk the whole thing starting over again.

The UK is following a strategy of shutting down only the highest-risk people, and letting the infection burn itself out. It's a permanent solution, but it's going to be really awful for a while as the hospitals overload and many people die from lack of hospital care.

What about a strategy in between these two? Shut everything down, then gradually unshut down a little bit at a time. Your goal is to "surf" the border of the number of cases your medical system can handle at any given time (maybe this would mean an R0 of 1?). Any more cases, and you tighten quarantine; any fewer cases, and you relax it. If you're really organized, you can say things like "This is the month for people with last names A - F to go out and get the coronavirus". That way you never get extra mortality from the medical system being overloaded, but you do eventually get herd immunity and the ability to return to normalcy.
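
A minimal simulation sketch of this "surfing" idea, as a discrete-time SIR model with a crude feedback rule (every number here is an assumption for illustration, not an estimate): quarantine tightens whenever active cases exceed hospital capacity and relaxes otherwise.

    # Discrete-time SIR with a feedback rule; all numbers are illustrative assumptions
    N = 1_000_000                  # population
    capacity = 5_000               # simultaneous cases the hospitals can handle
    gamma = 0.1                    # recovery rate (~10-day infectious period)
    S, I, R = N - 100.0, 100.0, 0.0
    for day in range(1000):
        beta = 0.05 if I > capacity else 0.4   # tighten or relax quarantine
        new_inf = beta * S * I / N
        new_rec = gamma * I
        S, I, R = S - new_inf, I + new_inf - new_rec, R + new_rec
    print(f"active cases: {I:.0f}, immune so far: {R / N:.0%}")
    # Active cases hover near capacity while immunity accumulates at roughly
    # gamma * capacity per day - so hospital capacity caps how fast you can
    # reach herd immunity, which is the "sacrificing lives vs. time" tradeoff.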

This would be sacrificing a certain number of lives, so you'd only want to do it if you were sure that you couldn't make the virus disappear entirely, and sure that there wasn't going to be a vaccine or something in a few months that would solve the problem, but it seems like more long-term thinking than anything I've heard so far.

I've never heard of anyone trying anything like this before, but maybe there's never been a relevant situation before.

Comment by yvain on The Critical COVID-19 Infections Are About To Occur: It's Time To Stay Home [crosspost] · 2020-03-12T21:36:44.975Z · score: 40 (17 votes) · LW · GW

It sounds like you've found that by March 17, the US will have the same number of cases that Italy had when things turned disastrous.

But the US has five times the population of Italy, and the epidemic in the US seems more spread out than Italy's (where it was focused in Lombardy). This makes me think we might have another ~3 doubling times (a little over a week) after we reach the case count that marked the worst phase in Italy, before the worst phase arrives here.
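
The arithmetic behind the ~3 doubling times, with the doubling time itself as an assumption (I'm guessing about three days, roughly the growth rate at the time):

    from math import log2
    extra_doublings = log2(5)    # 5x the population => ~2.32 extra doublings
    doubling_time_days = 3       # assumed doubling time
    print(f"{extra_doublings:.1f} doublings, ~{extra_doublings * doubling_time_days:.0f} days")
    # ~2.3 doublings at 3 days each is about a week; rounding up to 3 doublings gives 9 days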

I agree that it's going to get worse than most people expect sooner than most people expect, and that now is a good time to start staying inside. But (and I might be misunderstanding) I'm not sure if I would frame this as "tell people to stay inside for the next five days", because I do think it's possible that five days from now nothing has gotten obviously worse and then people will grow complacent.

Comment by yvain on When to Reverse Quarantine and Other COVID-19 Considerations · 2020-03-10T19:24:44.603Z · score: 24 (10 votes) · LW · GW

Have you looked into whether cinchona is really an acceptable substitute for chloroquine?

I'm concerned for two reasons. First, the studies I saw were on chloroquine, and I don't know if quinine is the same as chloroquine for this purpose. They have slightly different antimalarial activity - some chloroquine-resistant malaria strains are still vulnerable to quinine - and I can't find any information about whether their antiviral activity is the same. They're two pretty different molecules and I don't think it's fair to say that anything that works for one will also work for the other. Even if they do work, I don't know how to convert doses. It looks like the usual quinine dose for malaria is about three times the usual chloroquine dose, but I have no idea how that translates to antiviral properties.

Second, I don't know how much actual quinine is in cinchona. Quinine is a pretty dangerous substance, so the fact that the FDA doesn't care if people sell cinchona makes me think there isn't much in it. This paper suggests 6 mg quinine per gram of bark, though it's using literal bark and not the purified bark product they sell in supplement stores. At that rate, using this as an example cinchona preparation and naively assuming that quinine dose = chloroquine dose, the dose corresponding to the Chinese studies would be 160 cinchona pills, twice a day, for ten days - a level at which some other alkaloid in cinchona bark could potentially kill you.
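
A back-of-the-envelope version of that last calculation. The bark concentration comes from the paper above; the chloroquine dose and the pill size are my assumptions, so treat the output as an order-of-magnitude check, not a recommendation:

    # Rough dose-equivalence sketch; assumed numbers are marked
    chloroquine_dose_mg = 500            # assumed per-dose amount in the Chinese protocols
    quinine_mg_per_g_bark = 6            # from the cited paper
    bark_g_per_pill = 0.5                # assumed size of a supplement-store cinchona pill
    quinine_mg_per_pill = quinine_mg_per_g_bark * bark_g_per_pill   # 3 mg per pill
    pills_per_dose = chloroquine_dose_mg / quinine_mg_per_pill      # naive quinine = chloroquine
    print(round(pills_per_dose))         # ~167 pills per dose, twice a day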

Also, reverse-quarantining doesn't just benefit you, it also benefits the people who you might infect if you get the disease, and the person whose hospital bed you might be taking if you get the disease. I don't know what these numbers are but they should probably figure into your calculation.

Comment by yvain on Model estimating the number of infected persons in the bay area · 2020-03-09T05:52:36.842Z · score: 15 (6 votes) · LW · GW

I tried to answer the same question here and got very different numbers - somewhere between 500 and 2000 cases now.

I can't see your images or your spreadsheet, so I can't tell exactly where we diverged. One possible issue is that AFAIK most people start showing symptoms after 5 days. 14 days is the preferred quarantine period because it's almost the maximum amount of time the disease can incubate asymptomatically; the average is much lower.

Comment by yvain on REVISED: A drowning child is hard to find · 2020-02-02T20:32:47.590Z · score: 15 (7 votes) · LW · GW

I've read this. I interpret them as saying there are fundamental problems of uncertainty with saying any number, not that the number $5000 is wrong. There is a complicated and meta-uncertain probability distribution with its peak at $5000. This seems like the same thing we mean by many other estimates, like "Biden has a 40% chance of winning the Democratic primary". GiveWell is being unusually diligent in discussing the ways their number is uncertain and meta-uncertain, but it would be wrong (isolated demand for rigor) to retreat from a best estimate to total ignorance because of this.

Comment by yvain on REVISED: A drowning child is hard to find · 2020-02-02T20:28:39.113Z · score: 19 (6 votes) · LW · GW

I don't hear EAs doing this (except when quoting this post), so maybe that was the source of my confusion.

I agree Good Ventures could saturate the $5000/life tier, bringing marginal cost up to $10000 per life (or whatever). But then we'd be having this same discussion about saving money for $10000/life. So it seems like either:

1. Good Ventures donates all of its money, tomorrow, to stopping these diseases right now, and ends up driving the marginal cost of saving a life to some higher number and having no money left for other causes or the future, or

2. Good Ventures spends some of its money on stopping diseases, helps drive the marginal cost of saving a life up to some number N, but keeps money for other causes and the future, and for more complicated reasons like not wanting to take over charities, even though it could spend the remaining money on short-term disease-curing at $N/life.

(1) seems dumb. (2) seems like what it's doing now, at N = $5000 (with usual caveats).

It still seems accurate to say that you or I, if we wanted to, could currently donate $5000 (with usual caveats) and save a life. It also seems correct to say, once you've convinced people of this surprising fact, that they can probably do even better by taking that money/energy and devoting it to causes other than immediate-life-saving, the same way Good Ventures is.

I agree that if someone said "since saving one life costs $5000, and there are 10M people threatened by these diseases in the world, EA can save every life for $50B", they would be wrong. Is your concern only that someone is saying this? If so, it seems like we don't disagree, though I would be interested in seeing you link such a claim being made by anyone except the occasional confused newbie.

I'm kind of concerned about this because I feel like I've heard people reference your post as proving that EA is fraudulent and we need to throw it out and replace it with something nondeceptive (no, I hypocritically can't link this, it's mostly been in personal conversations), but I can't figure out how to interpret your argument as anything other than "if people worked really hard to misinterpret certain claims, then joined them together in an unlikely way, it's possible a few of them could end up confused in a way that doesn't really affect the bigger picture."

Comment by yvain on High-precision claims may be refuted without being replaced with other high-precision claims · 2020-02-01T02:07:47.091Z · score: 9 (4 votes) · LW · GW

An alternate response to this point is that if someone comes off their medication, then says they're going to kill their mother because she is poisoning their food, and the food poisoning claim seems definitely not true, then spending a few days assessing what is going on and treating them until it looks like they are not going to kill their mother anymore seems justifiable for reasons other than "we know exactly what biological circuit is involved with 100% confidence".

(source: this basically describes one of the two people I ever committed involuntarily)

I agree that there are a lot of difficult legal issues to be sorted out about who has the burden of proof and how many hoops people should have to jump through to make this happen, but none of them look at all like "you do not know the exact biological circuit involved with 100% confidence using a theory that has had literally zero exceptions ever".

Comment by yvain on REVISED: A drowning child is hard to find · 2020-02-01T02:01:12.755Z · score: 35 (12 votes) · LW · GW

I'm confused by your math.

You say 10M people die per year of preventable diseases, and the marginal cost of saving a life is (presumed to be) $5K.

The Gates Foundation and OpenPhil combined have about $50B. So if marginal cost = average cost, their money combined is enough to save everyone for one year.

But marginal cost certainly doesn't equal average cost; average cost is probably orders of magnitude higher. Also, Gates and OpenPhil might want to do something other than prevent all diseases for one year, then leave the world to rot after that.

I agree a "grand experiment" would be neat. But are you sure it's this easy? Suppose we want to try eliminating malaria in Madagascar (chosen because it's an island so it seems like an especially good test case). It has 6K malaria deaths yearly, so if we use the 5K per life number, that should cost $30 million. But given the marginal vs. average consideration, the real number should probably be much higher, maybe $50K per person. Now the price tag is $300M/year. But that's still an abstraction. AFAIK OpenPhil doesn't directly employ any epidemiologists, aid workers, or Africans. So who do you pay the $300M to? Is there some charity that is willing to move all their operations to Madagascar and concentrate entirely on that one island for a few years? Do the people who work at that charity speak Malagasay? Do they have families who might want to live somewhere other than Madagascar? Do they already have competent scientists who can measure their data well? If not, can you hire enough good scientists, at scale, to measure an entire country's worth of data? Are there scientists willing to switch to doing that for enough money? Do you have somebody working for you who can find them and convince them to join your cause? Is the Madagascar government going to let thousands of foreign aid workers descend on them and use them as a test case? Does OpenPhil employ someone who can talk with the Madagascar government and ask them? Does that person speak Malagasay? If the experiment goes terribly, does that mean we're bad at treating malaria, or that we were bad at transferring our entire malaria-treating apparatus to Madagascar and scaling it up by orders of magnitude on short notice? What if it went badly because there are swamps in Madagascar that the local environmental board won't let anyone clear, and there's nothing at all like that in most malarial countries? I feel like just saying "run a grand experiment" ignores all of these considerations. I agree there's *some* amount of money that lets you hire/train/bribe everyone you need to make this happen, but by that point maybe this experiment costs $1B/year, which is the kind of money that even OpenPhil and Gates need to worry about. My best guess is that they're both boggled by the amount of work it would take to make something like this happen.

(I think there was something like a grand experiment to eliminate malaria on the island of Zanzibar, and it mostly worked, with transmission rates down 94%, but it involved a lot of things other than bednets, because it turned out most of the difficulty involved bearing down on the problems that remain after you pick the low-hanging fruit. I don't know if anyone has tried to learn anything from this.)

I'm not sure it's fair to say that if these numbers are accurate then charities "are hoarding money at the price of millions of preventable death". After all, that's basically true of any possible number. If lives cost $500,000 to save, then Gates would still be "hoarding money" if he didn't spend his $50 billion saving 100,000 people. Gates certainly isn't optimizing for saving exactly as many people as he can right now. So either there's no such person as Bill Gates and we're just being bamboozled to believe that there is, or Gates is trying to do things other than simultaneously throwing all of his money at the shortest-term cause possible without any infrastructure to receive it.

I think the EA movement already tries really hard to push the message that it's mostly talent-constrained and not funding-constrained, and it already tries really hard to convince people to donate to smaller causes where they might have an information advantage. But the estimate that you can save a life for $5000 remains probably true (with normal caveats about uncertainty) and is a really important message to get people thinking about ethics and how they want to contribute.

Comment by yvain on High-precision claims may be refuted without being replaced with other high-precision claims · 2020-01-31T07:28:29.853Z · score: 34 (14 votes) · LW · GW
Likewise for psychiatry, which justifies incredibly high levels of coercion on the basis of precise-looking claims about different kinds of cognitive impairment and their remedies.


You're presenting a specific rule about manipulating logically necessary truths, then treating it as a vague heuristic and trying to apply it to medicine! Aaaaaah!

Suppose a physicist (not even a doctor! a physicist!) tries to calculate some parameter. Theory says it should be 6, but the experiment returns a value of 6.002. Probably the apparatus is a little off, or there's some other effect interfering (eg air resistance), or you're bad at experiment design. You don't throw out all of physics!

Or moving on to biology: suppose you hypothesize that insulin levels go up in response to glucose and go down after the glucose is successfully absorbed, and so insulin must be a glucose-regulating hormone. But you find one guy who just has really high levels of insulin no matter how much glucose he has. Well, that guy has an insulinoma. But if you lived before insulinomas were discovered, then you wouldn't know that. You still probably shouldn't throw out all of endocrinology based on one guy. Instead you should say "The theory seems basically sound, but this guy probably has something weird we'll figure out later".

I'm not claiming these disprove your point - that if you're making a perfectly-specified universally-quantified claim and receive a 100%-confidence 100%-definitely-relevant experimental result disproving it, it's disproven. But nobody outside pure math is in the perfectly-specified universally-quantified claim business, and nobody outside pure math receives 100%-confidence 100%-definitely-relevant tests of their claims. This is probably what you mean by the term "high-precision" - the theory of gravity isn't precise enough to say that no instrument can ever read 6.002 when it should read 6, and the theory of insulin isn't precise enough to say nobody can have weird diseases that cause exceptions. But both of these are part of a general principle that nothing in the physical world is precise enough that you should think this way.

See eg Kuhn, who makes the exact opposite point as this post - that no experimental result can ever prove any theory wrong with certainty. That's why we need this whole Bayesian thing.

Comment by yvain on Are "superforecasters" a real phenomenon? · 2020-01-09T03:05:45.427Z · score: 18 (7 votes) · LW · GW

I was going off absence of evidence (the paper didn't say anything other than that they took the top 2%), so if anyone else has positive evidence, that outweighs what I'm saying.

Comment by yvain on Free Speech and Triskaidekaphobic Calculators: A Reply to Hubinger on the Relevance of Public Online Discussion to Existential Risk · 2020-01-06T06:44:18.725Z · score: 19 (5 votes) · LW · GW

I agree much of psychology etc are bad for the reasons you state, but this doesn't seem to be because everyone else has fried their brains by trying to simulate how to appease triskaidekaphobics too much. It's because the actual triskaidekaphobics are the ones inventing the psychology theories. I know a bunch of people in academia who do various verbal gymnastics to appease the triskaidekaphobics, and when you talk to them in private they get everything 100% right.

I agree that most people will not literally have their buildings burned down if they speak out against orthodoxies (though there's a folk etymology for getting fired which is relevant here). But I appreciate Zvi's sequence on super-perfect competition as a signpost of where things can end up. I don't think academics, organization leaders, etc. are in super-perfect competition the same way middle managers are, but I also don't think we live in the world where everyone has infinite amounts of slack to burn endorsing taboo ideas and nothing can possibly go wrong.

Comment by yvain on Less Wrong Poetry Corner: Walter Raleigh's "The Lie" · 2020-01-06T06:31:06.201Z · score: 7 (3 votes) · LW · GW

I think you might be wrong about how fraud is legally defined. If the head of Pets.com says "You should invest in Pets.com, it's going to make millions, everyone wants to order pet food online", and then you invest in them, and then they go bankrupt, that person was probably biased and irresponsible, but nobody has committed fraud.

If Raleigh had simply said "Sponsor my expedition to El Dorado, which I believe has lots of gold", that doesn't sound like fraud either. But in fact he said:

For the rest, which myself have seen, I will promise these things that follow, which I know to be true. Those that are desirous to discover and to see many nations may be satisfied within this river, which bringeth forth so many arms and branches leading to several countries and provinces, above 2,000 miles east and west and 800 miles south and north, and of these the most either rich in gold or in other merchandises. The common soldier shall here fight for gold, and pay himself, instead of pence, with plates of half-a-foot broad, whereas he breaketh his bones in other wars for provant and penury. Those commanders and chieftains that shoot at honour and abundance shall find there more rich and beautiful cities, more temples adorned with golden images, more sepulchres filled with treasure, than either Cortes found in Mexico or Pizarro in Peru. And the shining glory of this conquest will eclipse all those so far-extended beams of the Spanish nation.

There were no Indian cities, and essentially no gold, anywhere in Guyana.

I agree with you that lots of people are biased! I agree this can affect their judgment in a way somewhere between conflict theory and mistake theory! I agree you can end up believing the wrong stories, or focusing on the wrong details, because of your bias! I'm just not sure that's how fraud works, legally, and I'm not sure it's an accurate description of what Sir Walter Raleigh did.

Comment by yvain on Less Wrong Poetry Corner: Walter Raleigh's "The Lie" · 2020-01-06T06:15:58.436Z · score: 5 (4 votes) · LW · GW

What exactly is contradictory? I only skimmed the relevant pages, but they all seemed to give a pretty similar picture. I didn't get a great sense of exactly what was in Raleigh's book, but all of them (and whoever tried him for treason) seemed to agree it was somewhere between heavily exaggerated and outright false, and I get the same impression from the full title "The discovery of the large, rich, and beautiful Empire of Guiana, with a relation of the great and golden city of Manoa (which the Spaniards call El Dorado)"

Comment by yvain on Less Wrong Poetry Corner: Walter Raleigh's "The Lie" · 2020-01-06T06:14:36.597Z · score: 11 (5 votes) · LW · GW

I'm confused by your confusion. The first paragraph establishes that Raleigh was at least as deceptive as the institutions he claimed to be criticizing. The second paragraph argues that if deceptive people can write famous poems about how they are the lone voice of truth in a deceptive world, we should be more careful about taking claims like that completely literally.

If you want more than that, you might have to clarify what part you don't understand.

Comment by yvain on What is Life in an Immoral Maze? · 2020-01-06T06:08:55.689Z · score: 16 (5 votes) · LW · GW
Questions that will be considered later, worth thinking about now, include: How does this persist? If things are so bad, why aren’t things way worse? Why haven’t these corporations fallen apart or been competed out of business? Given they haven’t, why hasn’t the entire economy collapsed? Why do regular people, aspirant managers and otherwise, still think of these manager positions as the ‘good jobs’ as opposed to picking up pitchforks and torches?

I hope you also answer a question I had when I was reading this: it's percolated down into common consciousness that some jobs are unusually tough and demanding. Medicine, finance, etc. have reputations for being grueling. But I'd never heard that about middle management, and your picture of middle management sounds worse than either. Any thoughts on why knowledge of this hasn't percolated down?

Comment by yvain on Less Wrong Poetry Corner: Walter Raleigh's "The Lie" · 2020-01-04T23:25:45.540Z · score: 51 (14 votes) · LW · GW

Walter Raleigh is also famous for leading an expedition to discover El Dorado. He didn't find it, but he wrote a book saying that he definitely had, and that if people gave him funding for a second expedition he would bring back limitless quantities of gold. He got his funding, went on his second expedition, and of course found nothing. His lieutenant committed suicide out of shame, and his men decided the Spanish must be hoarding the gold and burnt down a Spanish town. On his return to England, Raleigh was tried for treason based on a combination of the attack on Spain (which England was at peace with at the time) and defrauding everyone about the El Dorado thing. He was executed in 1618.

For conflict theorists, the moral of this story is that accusing everyone else of lying and corruption can sometimes be a strategy con men use to deflect suspicion. For mistake theorists, the moral is that it's really easy to talk yourself into a biased narrative where you are a lone angel in a sea full of corruption, and you should try being a little more charitable to other people and a little harsher on yourself.

Comment by yvain on Predictive coding & depression · 2020-01-03T19:27:06.164Z · score: 14 (7 votes) · LW · GW

In this post and the previous one you linked to, you do a good job explaining why your criterion e is possible / not ruled out by the data. But can you explain more about what makes you think it's true? Maybe this is part of the standard predictive coding account and I'm just misunderstanding it, if so can you link me to a paper that explains it?

I'm a little nervous about the low-confidence model of depression, both for some of the reasons you bring up, and because the best fits (washed-out visual field and psychomotor retardation) are really marginal symptoms of depression that you only find in a few of the worst cases. The idea of depression as just a strong global negative prior (that makes you interpret everything you see and feel more negatively) is pretty tempting. I like Friston's attempt to unify these by saying that bad mood is just a claim that you're in an unpredictable environment, with the reasoning apparently being something like "if you have no idea what's going on, probably you're failing" (eg if you have no idea about the social norms in a given space, you're more likely to be accidentally stepping on someone's toes than brilliantly navigating complicated coalitional politics by coincidence). I'm not sure what direction all of this happens in. Maybe if your brain's computational machinery gets degraded by some biochemical insult, it widens all confidence intervals since it can't detect narrow hits, this results in fewer or weaker positive hits being detected, this gets interpreted as an unpredictable world, and this gets interpreted as negative prior on how you're doing?

Comment by yvain on Perfect Competition · 2019-12-29T19:50:41.247Z · score: 9 (4 votes) · LW · GW
Things sometimes get bad. Once things get sufficiently bad that no one can deviate from short-term selfish actions or be a different type of person without being wiped out, things are no longer stable. People cheat on long term investments, including various combinations of things such as having and raising children, maintaining infrastructure and defending norms. The seed corn gets eaten. Eventually, usually when some random new threat inevitably emerges, the order collapses, and things start again. The rise and fall of civilizations.


I'm wondering if you're thinking of https://slatestarcodex.com/2019/08/12/book-review-secular-cycles/ . I think that was what made me realize things worked this way, and it was indeed a big update on the standard narrative. I still haven't decided whether this is just a quirk of systems that have certain agriculture-related dynamics, or a more profound insight about systems in general. I look forward to reading more of what you have to say about this.

I think my answer (not yet written up) to why things aren't worse has something to do with competitions on different time scales - if you have more than zero slack, you want to devote a small amount of your budget to R&D, and then you'll win a long-run competition against a company that doesn't do this. Integrate all the different possible timescales and this gets so confusing that maybe the result barely looks like competition at all. I've been having trouble writing this up and am interested in seeing if you're thinking something similar. Again, really looking forward to reading more.

Comment by yvain on Firming Up Not-Lying Around Its Edge-Cases Is Less Broadly Useful Than One Might Initially Think · 2019-12-29T19:17:19.665Z · score: 33 (11 votes) · LW · GW

At the risk of being self-aggrandizing, I think the idea of axiology vs. morality vs. law is helpful here.

"Don't be misleading" is an axiological commandment - it's about how to make the world a better place, and what you should hypothetically be aiming for absent other considerations.

"Don't tell lies" is a moral commandment. It's about how to implement a pale shadow of the axiological commandment on a system run by duty and reputation, where you have to contend with stupid people, exploitative people, etc.

(so for example, I agree with you that the Rearden Metal paragraph is misleading and bad. But it sounds a lot like the speech I give patients who ask for the newest experimental medication. "It passed a few small FDA trials without any catastrophic side effects, but it's pretty common that this happens and then people discover dangerous problems in the first year or two of postmarketing surveillance. So unless there's some strong reason to think the new drug is better, it's better to stick with the old one that's been used for decades and is proven safe." I know and you know that there's a subtle difference here and the Institute is being bad while I'm being good, but any system that tries to implement reputation loss for the Institute at scale, implemented on a mob of dumb people, is pretty likely to hurt me also. So morality sticks to bright-line cases, at the expense of not being able to capture the full axiological imperative.)

This is part of what you mean when you say the report-drafting scientist is "not a bad person" - they've followed the letter of the moral law as best they can in a situation where there are lots of other considerations, and where they're an ordinary person as opposed to a saint laser-focused on doing the right thing at any cost. This is the situation that morality (as opposed to axiology) is designed for, your judgment ("I guess they're not a bad person") is the judgment that morality encourages you to give, and this shows the system working as designed, ie meeting its own low standards.

And then the legal commandment is merely "don't outright lie under oath or during formal police interrogations" - which (impressively) is probably *still* too strong, in that we all hear about the police being able to imprison basically whoever they want by noticing small lies committed by accident or under stress.

The "wizard's oath" feels like an attempt to subject one's self to a stricter moral law than usual, while still falling far short of the demands of axiology.

Comment by yvain on Maybe Lying Doesn't Exist · 2019-12-25T19:19:02.651Z · score: 27 (9 votes) · LW · GW

EDIT: Want to talk to you further before I try to explain my understanding of your previous work on this, will rewrite this later.

The short version is I understand we disagree, I understand you have a sophisticated position, but I can't figure out where we start differing and so I don't know what to do other than vomit out my entire philosophy of language and hope that you're able to point to the part you don't like. I understand that may be condescending to you and I'm sorry.

I absolutely deny I am "motivatedly playing dumb" and I enter this into the record as further evidence that we shouldn't redefine language to encode a claim that we are good at ferreting out other people's secret motivations.

Comment by yvain on Maybe Lying Doesn't Exist · 2019-12-25T19:11:48.645Z · score: 5 (2 votes) · LW · GW

I say "strategic" because it is serving that strategic purpose in a debate, not as a statement of intent. This use is similar to discussion of, eg, an evolutionary strategy of short life histories, which doesn't imply the short-life history creature understands or intends anything it's doing.

It sounds like normal usage might be our crux. Would you agree with this? I.e., that if most people in most situations would interpret my definition as normal usage and yours as a redefinition project, we should use mine, and vice versa for yours?

Comment by yvain on Maybe Lying Doesn't Exist · 2019-12-22T01:38:26.408Z · score: 25 (7 votes) · LW · GW

Sorry it's taken this long for me to reply to this.

"Appeal to consequences" is only a fallacy in reasoning about factual states of the world. In most cases, appealing to consequences is the right action.

For example, if you want to build a house on a cliff, and I say "you shouldn't do that, it might fall down", that's an appeal to consequences, but it's completely valid.

Or to give another example, suppose we are designing a programming language. You recommend, for whatever excellent logical reason, that all lines must end with a semicolon. I argue that many people will forget semicolons, and then their program will crash. Again, appeal to consequences, but again it's completely valid.

I think of language, following Eliezer's definitions sequence, as being a human-made project intended to help people understand each other. It draws on the structure of reality, but has many free variables, so that the structure of reality doesn't constrain it completely. This forces us to make decisions, and since these are not about factual states of the world (eg what the definition of "lie" REALLY is, in God's dictionary) we have nothing to make those decisions on except consequences. If a certain definition will result in lots of people misunderstanding each other, bad people having an easier time confusing others, good communication failing to occur, or other bad things, then it's fine to decide against it on those grounds, just as you can decide against a programming language design choice on the grounds that it will make programs written in the language more likely to crash, or require more memory, etc.

I am not sure I get your point about the symmetry of strategic equivocation. I feel like this equivocation relies on using a definition contrary to its common connotations. So if I were allowed to redefine "murderer" to mean "someone who drinks Coke", then I could equivocate between "Alice is a murderer (based on the definition where she drinks Coke)" and "Murderers should be punished (based on the definition where they kill people)" and combine them to get "Alice should be punished". The problem isn't that you can equivocate between any two definitions; the problem arises very specifically when we use a definition counter to the way most people traditionally use it. I think (do you disagree?) that most people interpret "liar" to mean an intentional liar. As such, I'm not sure I understand the relevance of the Ruby's coworkers example.

I think you're making too hard a divide between the "Hobbesian dystopia" where people misuse language, versus a hypothetical utopia of good actors. I think of misusing language as a difficult thing to avoid, something all of us (including rationalists, and even including me) will probably do by accident pretty often. As you point out regarding deception, many people who equivocate aren't doing so deliberately. Even in a great community of people who try to use language well, these problems are going to come up. And so just as in the programming language example, I would like to have a language that fails gracefully and doesn't cause a disaster when a mistake gets made, one that works with my fallibility rather than naturally leading to disaster when anyone gets something wrong.

And I think I object-level disagree with you about the psychology of deception. I'm interpreting you (maybe unfairly, but then I can't figure out what the fair interpretation is) as saying that people very rarely lie intentionally, or that this rarely matters. This seems wrong to me - for example, guilty criminals who say they're innocent seem to be lying, and there seem to be lots of these, and it's a pretty socially important thing. I try pretty hard not to intentionally lie, but I can think of one time I failed (I'm not claiming I've only ever lied once in my life, just that this time comes to mind as something I remember and am particularly ashamed about). And even if lying never happened, I still think it would be worth having the word for it, the same way we have a word for "God" that atheists don't just repurpose to mean "whoever the most powerful actor in their local environment is."

Stepping back, we have two short words ("lie" and "not a lie") to describe three states of the world (intentional deception, unintentional deception, complete honesty). I'm proposing to group these (1)(2,3) mostly on the grounds that this is how the average person uses the terms, and if we depart from how the average person uses the terms, we're inviting a lot of confusion, both in terms of honest misunderstandings and malicious deliberate equivocation. I understand Jessica wants to group them (1,2)(3), but I still don't feel like I really understand her reasoning except that she thinks unintentional deception is very bad. I agree it is very bad, but we already have the word "bias" and are so in agreement about its badness that we have a whole blog and community about overcoming it.

Comment by yvain on Free Speech and Triskaidekaphobic Calculators: A Reply to Hubinger on the Relevance of Public Online Discussion to Existential Risk · 2019-12-21T21:26:50.250Z · score: 48 (16 votes) · LW · GW

Maybe I'm misunderstanding you, but I'm not getting why having the ability to discuss involves actually discussing. Compare two ways to build a triskaidekaphobic calculator.

1. You build a normal calculator correctly, and at the end you add a line of code IF ANSWER == 13, PRINT: "ERROR: IT WOULD BE IMPOLITE OF ME TO DISCUSS THIS PARTICULAR QUESTION".

2. You somehow invent a new form of mathematics that "naturally" never comes up with the number 13, and implement it so perfectly that a naive observer examining the calculator code would never be able to tell which number you were trying to avoid.
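
For concreteness, method (1) is just a thin censorship wrapper around a correct calculator - a sketch, with the calculator internals stubbed out:

    # Sketch of method (1): compute correctly, censor only at the output step
    def real_calculator(expression):
        return eval(expression)   # stands in for a correctly-built calculator core

    def triskaidekaphobic_calculator(expression):
        answer = real_calculator(expression)
        if answer == 13:
            return "ERROR: IT WOULD BE IMPOLITE OF ME TO DISCUSS THIS PARTICULAR QUESTION"
        return str(answer)

    print(triskaidekaphobic_calculator("6 + 7"))   # censored
    print(triskaidekaphobic_calculator("6 + 8"))   # "14" - everything else works normally

Everything upstream of the final check stays correct; the distortion is localized. Method (2) has no such localized patch, which is what breaks the cosine-takers below.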

Imagine some people who were trying to take the cosines of various angles. If they used method (1), they would have no problem, since cosines are never 13. If they used method (2), it's hard for me to imagine exactly how this would work but probably they would have a lot of problems.

It sounds like the proposal you're arguing against (and which I want to argue for) - not talking about taboo political issues on LW - is basically (1). We discuss whatever we want, we use logic which (we hope) would output the correct (taboo) answer on controversial questions, but if for some reason those questions come up (which they shouldn't, because they're pretty different from AI-related questions), we instead don't talk about them. If for some reason they're really relevant to some really important issue at some point, then we take the hit for that issue only, with lots of consultation first to make sure we're not stuck in the Unilateralist's Curse.

This seems like the right answer even in the metaphor - if people burned down calculator factories whenever any of their calculators displayed "13", and the sorts of problems people used calculators for almost never involved 13, just have the calculator display an error message at that number.

(...plus doing other activism and waterline-raising work to deal with the fact that your society is insane, but that work isn't going to look like having your calculators display 13 and dying when your factory burns down)

Comment by yvain on Will AI See Sudden Progress? · 2019-12-20T21:56:36.897Z · score: 10 (4 votes) · LW · GW

This project (best read in the bolded link, not just in this post) seemed and still seems really valuable to me. My intuitions around "Might AI have discontinuous progress?" become a lot clearer once I see Katja framing them in terms of concrete questions like "How many past technologies had discontinuities equal to ten years of past progress?". I understand AI Impacts is working on an updated version of this, which I'm looking forward to.

Comment by yvain on Noticing the Taste of Lotus · 2019-12-20T21:53:12.513Z · score: 20 (6 votes) · LW · GW

I was surprised that this post ever seemed surprising, which either means it wasn't revolutionary, or was *very* revolutionary. Since it has 229 karma, seems like it was the latter. I feel like the same post today would have been written with more explicit references to reinforcement learning, reward, addiction, and dopamine. The overall thesis seems to be that you can get a felt sense for these things, which would be surprising - isn't it the same kind of reward-seeking all the way down, including on things that are genuinely valuable? Not sure how to model this.

Comment by yvain on The Bat and Ball Problem Revisited · 2019-12-20T21:49:31.895Z · score: 7 (3 votes) · LW · GW

It's nice to see such an in-depth analysis of the CRT questions. I don't really share drossbucket's intuition - for me the 100 widget question feels counterintuitive the same way as the ball and bat question, but neither feels really aversive, so it was hard for me to appreciate the feelings that generated this post. But this gives a good example of an idea of "training mathematical intuitions" I hadn't thought about before.

Comment by yvain on A LessWrong Crypto Autopsy · 2019-12-20T20:53:55.268Z · score: 17 (6 votes) · LW · GW

Many people pointed out that the real cost of a Bitcoin in 2011 or whenever wasn't the couple of cents that it cost, but the several hours of work it would take to figure out how to purchase it. And that the expected returns needed to be discounted by the significant risk that a Bitcoin purchased in 2011 would be lost or hacked - or by the many hours of work it would have taken to ensure that didn't happen. Also, that there was another hard problem of not selling your 2011-Bitcoins in 2014. I agree that all of these are problems with the original post, and that they significantly soften the parts that depend on "everyone should have bought lots of Bitcoins in 2011". Obviously in retrospect this still would have been the right choice, but it makes it much harder to claim it was obvious at the time.

Comment by yvain on Is Science Slowing Down? · 2019-12-20T20:50:54.663Z · score: 12 (5 votes) · LW · GW

I still endorse most of this post, but https://docs.google.com/document/d/1cEBsj18Y4NnVx5Qdu43cKEHMaVBODTTyfHBa8GIRSec/edit has clarified many of these issues for me and helped quantify the ways that science is, indeed, slowing down.

Comment by yvain on Varieties Of Argumentative Experience · 2019-12-20T20:49:01.114Z · score: 9 (4 votes) · LW · GW

I still generally endorse this post, though I agree with everyone else's caveats that many arguments aren't like this. The biggest change is that I feel like I have a slightly better understanding of "high-level generators of disagreement" now, as differences in priors, contexts, and categorizations - see my post "Mental Mountains" for more.

Comment by yvain on Mental Mountains · 2019-12-18T05:18:41.133Z · score: 7 (3 votes) · LW · GW

I definitely agree with you here - I didn't talk about it as much in this post, but in the psychedelics post I linked, I wrote:

People are not actually very good at reasoning. If you metaphorically heat up their brain to a temperature that dissolves all their preconceptions and forces them to basically reroll all of their beliefs, then a few of them that were previously correct are going to come out wrong. F&CH’s theory that they are merely letting evidence propagate more fluidly through the system runs up against the problem where, most of the time, if you have to use evidence unguided by any common sense, you probably get a lot of things wrong.

The best defense of therapy in this model is that you're concentrating on the beliefs that are currently most dysfunctional, so by regression to the mean you should expect them to get better!

Comment by yvain on Is Rationalist Self-Improvement Real? · 2019-12-10T09:19:49.090Z · score: 13 (6 votes) · LW · GW
I'd similarly worry that the "manioc detoxification is the norm + human societies are as efficient at installing mental habits and group norms as they are at detoxifying manioc" model should predict that the useful heuristics underlying the 'scientific method' (e.g., 'test literally everything', using controls, trying to randomize) reach fixation in more societies earlier.

I'd disagree! Randomized controlled trials have many moving parts, removing any of which makes them worse than useless. Remove placebo control, and your trials are always positive and you do worse than intuition. Remove double-blinding, same. Remove power calculations, and your trials give random results and you do worse than intuition. Remove significance testing, same. Even in our own advanced civilization, if RCTs give a result different from common sense, it's a 50-50 chance which is right; a primitive civilization that replaced its intuitions with the results of proto-RCTs would be a disaster. This ends up like the creationist example where evolution can't use half an eye so eyes don't evolve; obviously this isn't permanently true with either RCTs or eyes, but in both cases it took a long time for all the parts to evolve independently for other reasons.
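
To illustrate just the power-calculation piece (the effect size and sample sizes are invented for the example): without a power calculation, a small trial is close to a coin flip even when the effect is real.

    # Approximate power of a two-arm trial, normal approximation; illustrative numbers
    from math import sqrt
    from statistics import NormalDist
    z = NormalDist()
    def power(d, n_per_arm, alpha=0.05):
        ncp = d * sqrt(n_per_arm / 2)              # noncentrality parameter
        return 1 - z.cdf(z.inv_cdf(1 - alpha / 2) - ncp)
    print(f"{power(0.3, 20):.0%}")     # ~16%: 20 per arm mostly returns noise
    print(f"{power(0.3, 175):.0%}")    # ~80%: the conventional target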

Also, you might be underestimating inferential distance - tribes that count "one, two, many" are not going to be able to run trials effectively. Did you know that people didn't consistently realize you could take an average of more than two numbers until the Middle Ages?

Also, what would these tribes use RCTs to figure out? Whether their traditional healing methods work? St. John's Wort is a traditional healing method, there have now been about half a dozen high-quality RCTs investigating it, with thousands of patients, and everyone is *still* confused. I am pretty sure primitive civilizations would not really have benefited from this much.

I am less sure about trigger-action plans. I think a history of the idea of procrastination would be very interesting. I get the impression that ancient peoples had very confused beliefs around it. I don't feel like there is some corpus of ancient anti-procrastination techniques from which TAPs are conspicuously missing, but why not? And premodern people seem weirdly productive compared to moderns in a lot of ways. Overall I notice I am confused here, but this could be an example where you're right.

I'm confused about how manioc detox is more useful to the group than the individual - each individual self-interestedly would prefer to detox manioc, since they will die (eventually) if they don't. This seems different to me than the prediction market example, since (as Robin has discussed) decision-makers might self-interestedly prefer not to have prediction markets so they can keep having high status as decision-makers.

Comment by yvain on Is Rationalist Self-Improvement Real? · 2019-12-10T09:09:25.894Z · score: 20 (12 votes) · LW · GW

Thanks, all good points.

I think efficient market doesn't just suggest we can't do much better at starting companies. It also means we can't do much better at providing self-help, which is a service that can make people lots of money and status if they do it well.

I'm not sure if you're using index fund investing as an example of rationalist self-help, or just as a metaphor for it. If you're using it as an example, I worry that your standards are so low that almost any good advice could count as rationalist self-help. I think if you're from a community where you didn't get a lot of good advice, being part of the rationalist community can be really helpful in exposing you to it (sort of like the theory where college makes you successful because it inducts you into the upper-middle class). I think I got most of my "invest in index funds"-level good advice before entering the rationalist community, so I didn't count that.

Being part of the rationalist community has definitely improved my life, partly through giving me better friends and partly through giving me access to good ideas of the "invest in index funds" level. I hadn't counted that as part of our discussion, but if I do, then I agree it is great. My archetypal idea of "rationalist self-help" is sitting around at a CFAR workshop trying very hard to examine your mental blocks. I'm not sure if we agree on that or if I'm caricaturing your position.

I'm not up for any gigantic time commitment, but if you want to propose some kind of rationalist self-help exercise that I should try (of the order of 10 minutes/day for a few weeks) to see if I change my mind about it, I'm up for that, though I would also believe you if you said such a halfhearted commitment wouldn't be a good test.

Comment by yvain on Is Rationalist Self-Improvement Real? · 2019-12-09T23:02:57.950Z · score: 13 (6 votes) · LW · GW

You're right in catching and calling out the appeal to consequences there, of course.

But aside from me really caring about the movement, I think part of my thought process is that "the movement" is also the source of these self-help techniques. If some people go into this space and then report back later on what they think, I am worried that this information is less trustworthy than information that would have come from these same people before they started dealing with this question.

Comment by yvain on Is Rationalist Self-Improvement Real? · 2019-12-09T20:18:56.707Z · score: 98 (52 votes) · LW · GW

I have some pretty complicated thoughts on this, and my heart isn't really in responding to you, because I do think some things are helpful for some people. But here's a sketch of what I'm thinking:

First, a clarification. Some early claims - like the ones I was responding to in my 2009 essay - were that rationalists should be able to basically accomplish miracles, become billionaires with minimal work, unify physics with a couple of years of study, etc. I still occasionally hear claims along those lines. I am still against those, but I interpret you as making weaker claims, like that rationalists can be 10% better at things than nonrationalists, after putting in a decent amount of work. I'm less opposed to those claims, especially if "a decent amount of work" is interpreted as "the same amount of work you would need to get good at those things through other methods". But I'm still a little bit concerned about them.

Next: I'm interpreting "rationalist self-help" to mean rationalist ideas and practices that are helpful for getting common real-life goals like financial, social, and romantic success. I'm not including things like doing charity better, for reasons that I hope will become clear later.

These are the kinds of things most people want, which means two things. First, we should expect a lot of previous effort has gone into optimizing them. Second, we should expect that normal human psychology is designed to optimize them. If we're trying to do differential equations, we're outside our brain's design specs; if we're trying to gain status and power, we're operating exactly as designed.

When the brain fails disastrously, it tends to be at things outside the design specs - things that don't matter for what we want. For example, you quoted me describing some disastrous failures in people's understanding of some philosophy around atheism, and I agree that sort of thing happens often. But this is because it's outside of our common sense. I can absolutely imagine a normal person saying "Since I can't prove God doesn't exist, God must exist", but it would take a much more screwed-up person to think "Since I can't prove I can't fly, I'm going to jump off this cliff."

Another example: doctors fail miserably on the Bayes mammogram problem, but usually handle actual breast cancer diagnosis okay. And even diagnosing breast cancer is a little outside common sense and everyday life. Faced with the most chimpish possible version of the Bayes mammogram problem - maybe something like "This guy I met at a party claims he's the king of a distant country, and admittedly he is wearing a crown, but what's the chance he's *really* a king?" - my guess is that people are already near-optimal.
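
For concreteness, here's a minimal sketch of the arithmetic behind the mammogram problem, assuming the usual textbook figures (a 1% base rate, 80% sensitivity, and a 9.6% false positive rate - standard illustration numbers, not anything from the discussion above):

```python
# Bayes mammogram arithmetic, using the standard textbook figures
# (assumptions for illustration only).
prior = 0.01            # P(cancer): 1% of screened women have breast cancer
sensitivity = 0.80      # P(positive | cancer)
false_positive = 0.096  # P(positive | no cancer)

# P(positive) by the law of total probability
p_positive = sensitivity * prior + false_positive * (1 - prior)

# Bayes' theorem: P(cancer | positive)
posterior = sensitivity * prior / p_positive

print(f"P(cancer | positive) = {posterior:.1%}")  # ~7.8%, not the ~80% doctors often guess
```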

If you have this amazing computer perfectly-tuned for finding strategies in a complex space, I think your best bet is just to throw lots and lots of training data at it, then try navigating the complex space.

I think it's ironic that you use practicing basketball as your example here, because rationalist techniques very much are *not* practice. If you want to become a better salesman, practice is going out and trying to make lots of sales. I don't think this is a "rationalist technique" and I think the kind of self-help you're arguing for is very different (though it may involve better ways to practice). We both agree that practice is useful; I think our remaining disagreement is on whether there are things other than practice that are more useful to do, on the margin, than another unit of practice.

Why do I think this is unlikely?

1. Although rationalists have done pretty well for themselves, they don't seem to have done all that remarkably well. Even lots of leading rationalist organizations are led by people who haven't put particular effort into anything you could call rationalist self-help! That's really surprising!

2. Efficient markets. Rationalists developed rationalist self-help by thinking about it for a while. This implies that everyone else left a $100 bill on the ground for the past 4000 years. If there were techniques to improve your financial, social, and romantic success that you could develop just by thinking about them, the same people who figured out the manioc detoxification techniques, or oracle bone randomization for hunting, or all the other amazingly complex adaptations they somehow developed, would have come up with them. Even if they only work in modern society, one of the millions of modern people who wanted financial, social, and romantic success before you would have come up with them. Obviously this isn't 100% true - someone has to be the first person to discover everything - but you should expect the fruits here to be very high up, high enough that a single community putting in a moderate amount of effort shouldn't be able to get too many of them.

(some of this becomes less relevant if your idea of rationalist self-help is just collecting the best self-help from elsewhere and giving it a stamp of approval, but then some of the other considerations apply more.)

3. Rationalist self-help starts looking a lot like therapy. If we're trying to make you a more successful computer programmer using something other than studying computer programming, it's probably going to involve removing mental blocks or something. Therapy has been pretty well studied, and the most common conclusion is that its benefits come mostly from nonspecific factors - the techniques themselves don't seem to have any special power. I am prepared to suspend this conclusion for occasional miracles when extremely charismatic therapists meet exactly the right patient and some sort of non-scaleable flash of lightning happens, but this also feels different from "the techniques do what they're supposed to". If rationalists are trying to do therapy, they are competing with a field of tens of thousands of PhD-level practitioners with all the resources of the academic and health systems who have worked on the problem for decades. This is not the kind of situation that encourages me to think we can make fast progress. See https://slatestarcodex.com/2019/11/20/book-review-all-therapy-books/ for more on this.

4. General skepticism of premature practical application. It took 300 years between Harvey discovering the circulatory system and anyone being very good at treating circulatory disease. It took 50 years between Pasteur discovering germ theory and anyone being very good at treating infections. It took 250 years between Newton discovering gravity and anyone being very good at flying. I have a lower prior than you on good science immediately translating into useful applications. And I am just not too impressed with the science here. Kahneman and Tversky discovered a grab bag of interesting facts, some of which in retrospect were false. I still don't think we're anywhere near the deep understanding of rationality that would make me feel happy here.

This doesn't mean I think rationality is useless. I think there are lots of areas outside our brain's normal design specs where rationality is really useful. And because these don't involve getting sex or money, there's been a lot less previous exploration of the space and the low-hanging fruit hasn't been gobbled up. Or, when the space has been explored, people haven't done a great job formalizing their insights, or the insights haven't spread, or things like that. I am constantly shocked by how much really important knowledge there is sitting around that nobody knows about or thinks about because it doesn't have an immediate payoff.

Along with all of this, I'm increasingly concerned that anything that has payoff in sex or money is an epistemic death zone. Because you can make so much money teaching it, it attracts too much charlatanry to navigate easily, and it subjects anyone who enters to extreme pressure to become charlatan-adjacent. Because it touches so closely on our emotions and sense of self-worth, it's a mind-killer in the same way politics are. Because everybody is so different, there's almost irresistible pressure to push the thing that saved your own life, without checking whether it will help anyone else. Because it's such a noisy field and RCTs are so hard, I don't trust us to be able to check our intuitions against reality. And finally, I think there are whole things lurking out there of the approximate size and concerningness of "people are homeostasis-preserving control systems which will expend their entire energy on undoing any change you make to them" that we just have no idea about even though they have the potential to make everything in this sphere useless if we don't respond to them.

I actually want to expand on the politics analogy. If someone were to say rationality was great at figuring out whether liberalism or conservatism was better, I would agree that this is the sort of thing rationality should be able to do, in principle. But it's such a horrible topic that has the potential to do so much damage to anyone trying to navigate it that I would be really nervous about it - about whether we were really up to the task, and about what it would do to our movement if we tried. These are some of my concerns around self-help too.

Comment by yvain on bgaesop's Shortform · 2019-10-27T09:24:29.090Z · score: 12 (4 votes) · LW · GW

I'd assumed what I posted was the LW meditator consensus, or at least compatible with it.

Comment by yvain on Free Money at PredictIt? · 2019-09-26T17:43:46.914Z · score: 25 (12 votes) · LW · GW

In prediction markets, the cost of capital to do trades is a major distorting factor, as are fees, taxes, and other physical costs, and participants are much less certain of correct prices and much more worried about market impact and about how many others are in the same trade. Almost everyone looking to correct inefficiencies will only fade (bet against) very large and very obvious ones, given all the costs.

https://blog.rossry.net/predictit/ has a really good discussion of how this works, with some associated numbers that show how you will probably outright lose money on even apparently ironclad trades like the 112-total candidates above.
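
To make the fee drag concrete, here's a rough sketch with made-up numbers. PredictIt's fee schedule (10% of profits, 5% of withdrawals) is real; the "guaranteed" 4% gross edge below is hypothetical:

```python
# Rough sketch of PredictIt fee drag on an apparently ironclad trade.
# The fee schedule is PredictIt's real one; the trade itself is invented.
stake = 100.00     # dollars locked up in the position
gross_edge = 0.04  # suppose the basket guarantees 4% gross profit

gross_profit = stake * gross_edge             # $4.00
after_profit_fee = gross_profit * (1 - 0.10)  # PredictIt keeps 10% of profit -> $3.60
balance = stake + after_profit_fee            # $103.60
cash_out = balance * (1 - 0.05)               # 5% withdrawal fee -> $98.42

print(f"net result: ${cash_out - stake:+.2f}")  # about -$1.58, before counting
                                                # months of locked-up capital
```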

Comment by yvain on Could we solve this email mess if we all moved to paid emails? · 2019-08-13T04:09:28.693Z · score: 5 (2 votes) · LW · GW

I'm sorry, I didn't understand that. Yes, this answers my objection (although it might cause other problems, like making me less likely to answer "sorry, I can't do that" rather than just ghosting someone).

Comment by yvain on Could we solve this email mess if we all moved to paid emails? · 2019-08-12T02:03:53.283Z · score: 12 (8 votes) · LW · GW

I think it's great that you're trying this and I hope it succeeds.

But I won't be using it. For me, the biggest problem is lowering the sense of obligation I feel to answer other people's emails. Without that sense of obligation there's no problem - I just delete the email and move on. But part of me feels like I'm incurring a social cost by doing this, so it's harder than it sounds.

I feel like using a service like this would make the problem worse, not better. It would make me feel a strong sense of obligation to answer someone's email if they had paid $5 to send it. What sort of monster deletes an email they know the other person had to pay money to send?

In the same way, I would feel nervous sending someone else a paid email, because I would feel like I was imposing a stronger sense of obligation on them to respond to my request, rather than it being a harmless ask they can either answer or not. This would be true regardless of how important my email was. Meanwhile, people who don't care about other people's feelings won't really be held back, since $5 is not a lot of money for most people in this community.

I think the increased obligation would dominate any tendency for me to get fewer emails, and make this a net negative in my case. I still hope other people try it and report back.

Comment by yvain on How to Ignore Your Emotions (while also thinking you're awesome at emotions) · 2019-08-04T01:48:57.528Z · score: 24 (15 votes) · LW · GW

What would you recommend to people who are doing this (or to people who aren't sure whether they're doing it)?

Comment by yvain on Mistake Versus Conflict Theory of Against Billionaire Philanthropy · 2019-08-02T05:53:28.207Z · score: 22 (6 votes) · LW · GW

I'm a little confused, and I think it might be because you're using "conflict theorist" differently from how I do.

For me, a conflict theorist is someone who thinks the main driver of disagreement is self-interest rather than honest mistakes. There can be mistake theorists and conflict theorists on both sides of the "is billionaire philanthropy good?" question, and on the "are individual actions acceptable even though they're nondemocratic?" question.

It sounds like you're using it differently, so I want to make sure I know exactly what you mean before replying.

"You say you've given up on understanding the number of basically good people who disagree with things you think are obvious and morally obligatory. I suspect there's a big confusion about what 'basically good' means here - I'm making a note of it for future posting - but moving past that for now: when you examine specific cases of such disagreements happening, what do you find, and how often? (I keep writing possible answers, but on reflection it's better not to anchor you.)"

I think I usually find we're working off different paradigms, in the really strong Kuhnian sense of paradigm.

Comment by yvain on Mistake Versus Conflict Theory of Against Billionaire Philanthropy · 2019-08-01T17:34:11.881Z · score: 108 (37 votes) · LW · GW

Rob Reich is a former board member of GiveWell and Good Ventures (i.e. Moskowitz and Tuna) and the people at OpenPhil seem to have a huge amount of respect for him. He responded to my article by tweeting "Really grateful to have my writing taken seriously by someone whose blog I've long enjoyed and learned from" and promising to write a reply soon.

Dylan Matthews, who wrote the Vox article I linked (I don’t know if he is against billionaire philanthropy, but he seems to hold some sympathy for the position), self-describes as EA, has donated a kidney, and switched from opposing work on AI risk to supporting it after reading arguments on the topic.

And here's someone on the subreddit saying that they previously had some sympathy for anti-billionaire-philanthropy arguments but are now more convinced that it's net positive.

I don’t think any of these people fit your description of “people opposed to nerds or to thinking”, “people opposed to all private actions not under ‘democratic control’”, or “people opposed to action of any kind.” They seem like basically good people who I disagree with. I am constantly surprised by how many things that seem obvious and morally obligatory to me can have basically good people disagree with them, and I have kind of given up on trying to understand it, but there we go.

Even if there are much worse people in the movement, I think getting Reich and Matthews alone to dial it down 10% would be very net positive, since they're among the most prominent opponents.

I was concerned about backlash and ran the post by a couple of people I trusted to see if they thought it was net positive, and they all said it was. If you want I'll run future posts I have those concerns about by you too.

Comment by yvain on Dialogue on Appeals to Consequences · 2019-07-19T20:28:53.423Z · score: 38 (10 votes) · LW · GW

Instead of Quinn admitting lying is sometimes good, I wish he had said something like:

“PADP is widely considered a good charity by smart people who we trust. So we have a prior on it being good. You’ve discovered some apparent evidence that it’s bad. So now we have to combine the prior and the evidence, and we end up with some percent confidence that they’re bad.
If this is 90% confidence they’re bad, go ahead. What if it’s more like 55%? What’s the right action to take if you’re 55% sure a charity is incompetent and dishonest (but there's a 45% chance you misinterpreted the evidence)? Should you call them out on it? That’s good in the world where you’re right, but might disproportionately tarnish their reputation in the world where you're wrong. It seems like if you’re 55% sure, you have a tough call. You might want to try something like bringing up your concerns privately with close friends and only going public if they share your opinion, or asking the charity first and only going public if they can’t explain themselves. Or you might want to try bringing up your concerns in a nonconfrontational way, more like ‘Can anyone figure out what’s going on with PADP’s math?’ rather than ‘PADP is dishonest’. If these gentler approaches don't work and lots of other people confirm your intuitions of distrust, then your confidence reaches 90% and you can start doing things more like shouting ‘PADP is dishonest’ from the rooftops.
Or maybe you’ll never reach 90% confidence. Many people think that climate science is dishonest. I don’t doubt many of them are reporting their beliefs honestly - that they’ve done a deep investigation and that’s what they’ve concluded. It’s just that they’re not smart, informed, or rational enough to understand what’s going on, or to process it in an unbiased way. What advice would you give these people about calling scientists out on dishonesty - again given that rumors are powerful things and can ruin important work? My advice to them would be to consider that they may be overconfident, and that there needs to be some intermediate ‘consider my own limitations and the consequences of my irreversible actions’ step in between ‘this looks dishonest to me’ and ‘I will publicly declare it dishonest’. And that step is going to look like an appeal to consequences, especially if the climate deniers are so caught up in their own biases that they can't imagine they might be wrong.
I don’t want to deny that calling out apparent dishonesty when you’re pretty sure of it, or when you’ve gone through every effort you can to check it and it still seems bad, will sometimes (maybe usually) be the best course, but I don’t think it’s as simple as you think.”

...and seen what Carter answered.
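
For what it's worth, here's a toy sketch of the prior-plus-evidence arithmetic in the hypothetical dialogue above, with all numbers invented for illustration:

```python
# Toy version of the prior-vs-evidence arithmetic: a strong prior that the
# charity is fine, combined with moderately damning evidence, still leaves
# you far from certainty. All numbers are hypothetical.
prior_bad = 0.10       # prior that PADP is bad: trusted people vouch for it
likelihood_ratio = 10  # the evidence is 10x likelier if PADP is actually bad

prior_odds = prior_bad / (1 - prior_bad)        # 1:9 odds
posterior_odds = prior_odds * likelihood_ratio  # ~1.1:1 odds
posterior_bad = posterior_odds / (1 + posterior_odds)

print(f"P(bad | evidence) = {posterior_bad:.0%}")  # ~53% - "tough call" territory
```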

Comment by yvain on The AI Timelines Scam · 2019-07-11T07:30:47.811Z · score: 48 (23 votes) · LW · GW

1. It sounds like we have a pretty deep disagreement here, so I'll write an SSC post explaining my opinion in depth sometime.

2. Sorry, it seems I misunderstood you. What did you mean by mentioning business's very short timelines and all of the biases that might make them have those?

3. I feel like this is dismissing the magnitude of the problem. Suppose I said that the Democratic Party was a lying scam that was duping Americans into believing in it, because many Americans were biased to support the Democratic Party for various demographic reasons, or because their families were Democrats, or because they'd seen campaign ads, etc. These biases could certainly exist. But if I didn't even mention that there might be similar biases making people support the Republican Party, let alone try to estimate which was worse, I'm not sure this would qualify as sociopolitical analysis.

4. I was trying to explain why people in a field might prefer that members of the field address disagreements through internal channels rather than the media, for reasons other than that they have a conspiracy of silence. I'm not sure what you mean by "concrete criticisms". You cherry-picked some reasons for believing long timelines; I agree these exist. There are other arguments for believing shorter timelines and that people believing in longer timelines are "duped". What it sounded like you were claiming is that the overall bias is in favor of making people believe in shorter ones, which I think hasn't been proven.

I'm not entirely against modeling sociopolitical dynamics, which is why I ended the sentence with "at this level of resolution". I think a structured attempt to figure out whether there were more biases in favor of long timelines or short timelines (for example, surveying AI researchers on what they would feel uncomfortable saying) would be pretty helpful. I interpreted this post as more like the Democrat example in 3 - cherry-picking a few examples of bias towards short timelines, then declaring short timelines to be a scam. I don't know if this is true or not, but I feel like you haven't supported it.

Bayes' Theorem says that we shouldn't update on information we could get whether or not a hypothesis is true. I feel like you could have written an equally compelling essay "proving" bias in favor of long timelines, of Democrats, of Republicans, or of almost anything; if you feel like you couldn't, I feel like the post didn't explain why you felt that way. So I don't think we should update on the information in this post, and I think the intensity of your language ("scam", "lie", "dupe") is incongruous with the lack of update-able information.
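
In odds form this is just the observation that evidence with a likelihood ratio of one leaves the prior untouched - a sketch of the standard formula:

```latex
% Odds form of Bayes' theorem: evidence E shifts the odds on hypothesis H
% by the likelihood ratio. If E is equally likely under H and not-H, the
% ratio is 1 and the posterior equals the prior - no update.
\[
\frac{P(H \mid E)}{P(\lnot H \mid E)}
  = \frac{P(H)}{P(\lnot H)} \cdot \frac{P(E \mid H)}{P(E \mid \lnot H)}
\]
```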

Comment by yvain on The AI Timelines Scam · 2019-07-11T05:46:44.883Z · score: 194 (73 votes) · LW · GW

1. For reasons discussed in comments on previous posts here, I'm wary of using words like "lie" or "scam" to mean "honest reporting of unconsciously biased reasoning". If I criticized this post by calling you a liar trying to scam us, and then backed down to "I'm sure you believe this, but you probably have some bias, just like all of us", I expect you would be offended. But I feel like you're making this equivocation throughout this post.

2. I agree business is probably overly optimistic about timelines, for about the reasons you mention. But reversed stupidity is not intelligence. Most of the people I know pushing short timelines work in nonprofits, and many of the people you're criticizing in this post are AI professors. Unless you got your timelines from industry, which I don't think many people here did, them being stupid isn't especially relevant to whether we should believe the argument in general. I could find you some field (like religion) where people are biased to believe AI will never happen, but unless we took them seriously before this, the fact that they're wrong doesn't change anything.

3. I've frequently heard people who believe AI might be near say that their side can't publicly voice their opinions, because they'll get branded as loonies and alarmists, and therefore we should adjust in favor of near-termism because long-timelinists get to unfairly dominate the debate. I think it's natural for people on all sides of an issue to feel like their side is uniquely silenced by a conspiracy of people biased towards the other side. See Against Bravery Debates for evidence of this.

4. I'm not familiar with the politics in AI research. But in medicine, I've noticed that doctors who go straight to the public with their controversial medical theory are usually pretty bad, for one of a couple of reasons. Number one, they're usually wrong, people in the field know they're wrong, and they're trying to bamboozle a reading public who aren't smart enough to figure out that they're wrong (but who are hungry for a "Galileo stands up to hidebound medical establishment" narrative). Number two, there's a thing they can do where they say some well-known fact in a breathless tone, and then get credit for having blown the cover of the establishment's lie. You can always get a New Yorker story by writing "Did you know that, contrary to what the psychiatric establishment wants you to believe, SOME DRUGS MAY HAVE SIDE EFFECTS OR WITHDRAWAL SYNDROMES?" Then the public gets up in arms, and the psychiatric establishment has to go on damage control for the next few months, striking an awkward balance between correcting the inevitable massive misrepresentations in the article and insisting that the basic premise is !@#$ing obvious and was never in doubt. When I hear people say something like "You're not presenting an alternative solution" in these cases, they mean something like "You don't have some alternate way of treating diseases that has no side effects, so stop pretending you're Galileo for pointing out a problem everyone was already aware of." See Beware Stephen Jay Gould for Eliezer giving an example of this, and Chemical Imbalance and the followup post for me giving an example of this. I don't know for sure that this is what's going on in AI, but it would make sense.

I'm not against modeling sociopolitical dynamics. But I think you're doing it badly - taking some things that people on both sides feel, applying them to only one side, and concluding that the other side is involved in lies and scams and conspiracies of silence (while later walking those terms back in a disclaimer, after they've had their intended shocking effect).

I think this is one of the cases where we should use our basic rationality tools, like probability estimates. Just from reading this post, I have no idea what probability Gary Marcus, Yann LeCun, or Steven Hansen puts on AGI in ten years (or fifty years, or one hundred years). For all I know, all of them (and you, and me) have exactly the same probability, and their argument is completely political - about which side is dominant vs. oppressed and who should gain or lose status (remember the issue where everyone assumes LWers are overly certain cryonics will work, whereas in fact they're less sure of this than the general population and just describe their beliefs differently). As long as we keep engaging on that relatively superficial monkey-politics "The other side are liars who are silencing my side!" level, we're just going to be drawn into tribalism around the near-timeline and far-timeline tribes, and our ability to make accurate predictions is going to suffer. I think this is worse than any improvement we could get by making sociopolitical adjustments at this level of resolution.