[Crosspost] On Hreha On Behavioral Economics 2021-08-31T18:14:39.075Z
Eight Hundred Slightly Poisoned Word Games 2021-08-09T20:17:17.814Z
Toward A Bayesian Theory Of Willpower 2021-03-26T02:33:55.056Z
Trapped Priors As A Basic Problem Of Rationality 2021-03-12T20:02:28.639Z
Studies On Slack 2020-05-13T05:00:02.772Z
Confirmation Bias As Misfire Of Normal Bayesian Reasoning 2020-02-13T07:20:02.085Z
Map Of Effective Altruism 2020-02-03T06:20:02.200Z
Book Review: Human Compatible 2020-01-31T05:20:02.138Z
Assortative Mating And Autism 2020-01-28T18:20:02.223Z
SSC Meetups Everywhere Retrospective 2019-11-28T19:10:02.028Z
Mental Mountains 2019-11-27T05:30:02.107Z
Autism And Intelligence: Much More Than You Wanted To Know 2019-11-14T05:30:02.643Z
Building Intuitions On Non-Empirical Arguments In Science 2019-11-07T06:50:02.354Z
Book Review: Ages Of Discord 2019-09-03T06:30:01.543Z
Book Review: Secular Cycles 2019-08-13T04:10:01.201Z
Book Review: The Secret Of Our Success 2019-06-05T06:50:01.267Z
1960: The Year The Singularity Was Cancelled 2019-04-23T01:30:01.224Z
Rule Thinkers In, Not Out 2019-02-27T02:40:05.133Z
Book Review: The Structure Of Scientific Revolutions 2019-01-09T07:10:02.152Z
Bay Area SSC Meetup (special guest Steve Hsu) 2019-01-03T03:02:05.532Z
Is Science Slowing Down? 2018-11-27T03:30:01.516Z
Cognitive Enhancers: Mechanisms And Tradeoffs 2018-10-23T18:40:03.112Z
The Tails Coming Apart As Metaphor For Life 2018-09-25T19:10:02.410Z
Melatonin: Much More Than You Wanted To Know 2018-07-11T17:40:06.069Z
Varieties Of Argumentative Experience 2018-05-08T08:20:02.913Z
Recommendations vs. Guidelines 2018-04-13T04:10:01.328Z
Adult Neurogenesis – A Pointed Review 2018-04-05T04:50:03.107Z
God Help Us, Let’s Try To Understand Friston On Free Energy 2018-03-05T06:00:01.132Z
Does Age Bring Wisdom? 2017-11-08T07:20:00.376Z
SSC Meetup: Bay Area 10/14 2017-10-13T03:30:00.269Z
SSC Survey Results On Trust 2017-10-06T05:40:00.269Z
Different Worlds 2017-10-03T04:10:00.321Z
Against Individual IQ Worries 2017-09-28T17:12:19.553Z
My IRB Nightmare 2017-09-28T16:47:54.661Z
If It’s Worth Doing, It’s Worth Doing With Made-Up Statistics 2017-09-03T20:56:25.373Z
Beware Isolated Demands For Rigor 2017-09-02T19:50:00.365Z
The Case Of The Suffocating Woman 2017-09-02T19:42:31.833Z
Learning To Love Scientific Consensus 2017-09-02T08:44:12.184Z
I Can Tolerate Anything Except The Outgroup 2017-09-02T08:22:19.612Z
The Lizard People Of Alpha Draconis 1 Decided To Build An Ansible 2017-08-10T00:33:54.000Z
Where The Falling Einstein Meets The Rising Mouse 2017-08-03T00:54:28.000Z
Why Are Transgender People Immune To Optical Illusions? 2017-06-28T19:00:00.000Z
SSC Journal Club: AI Timelines 2017-06-08T19:00:00.000Z
The Atomic Bomb Considered As Hungarian High School Science Fair Project 2017-05-26T09:45:22.000Z
G.K. Chesterton On AI Risk 2017-04-01T19:00:43.865Z
Guided By The Beauty Of Our Weapons 2017-03-24T04:33:12.000Z
[REPOST] The Demiurge’s Older Brother 2017-03-22T02:03:51.000Z
Antidepressant Pharmacogenomics: Much More Than You Wanted To Know 2017-03-06T05:38:42.000Z
A Modern Myth 2017-02-27T17:29:17.000Z
Highlights From The Comments On Cost Disease 2017-02-17T07:28:52.000Z


Comment by Scott Alexander (Yvain) on Forecasting Newsletter: July 2021 · 2021-08-03T18:41:02.166Z · LW · GW

Can you explain the no-loss competition idea further?

  • If you have to stake your USDC, isn't this still locking up USDC, the thing you were trying to avoid doing?
  • What gives the game tokens value? 
Comment by Scott Alexander (Yvain) on (Brainstem, Neocortex) ≠ (Base Motivations, Honorable Motivations) · 2021-07-22T08:22:18.419Z · LW · GW

Thanks, I read that, and while I wouldn't say I'm completely enlightened, I feel like I have a good basis for reading it a few more times until it sinks in.

I interpret you as saying in this post: there is no fundamental difference between base and noble motivations, they're just two different kinds of plans we can come up with and evaluate, and we resolve conflicts between them by trying to find frames in which one or the other seems better. Noble motivations seem to "require more willpower" only because we often spend more time working on coming up with positive frames for them, because this activity flatters our ego and so is inherently rewarding.

I'm still not sure I agree with this. My own base motivation here is that I posted a somewhat different model of willpower at , which is similar to yours except that it does keep a role for the difference between "base" and "noble" urges. I'm trying to figure out if I still want to defend it against this one, but my thoughts are something like:

- It feels like on stimulants, I have more "willpower": it's easy to take the "noble" choice when it might otherwise be hard. Likewise, when I'm drunk I have less ability to override base motivations with noble ones, and (although I guess I can't prove it) this doesn't seem like a purely cognitive effect where it's harder for me to "remember" the important benefits of my noble motivations. The same is true of various low-energy states, eg tired, sick, stressed - I'm less likely to choose the noble motivation in all of them. This suggests to me that baser and nobler motivations are coming from different places, and stimulants strengthen (in your model) the connection between the noble-motivation-place and the striatum relative to the connection between the base-motivation-place and the striatum, and alcohol/stress/etc weaken it.

- I'm skeptical of your explanation for the "asymmetry" of noble vs. base thoughts. Are thoughts about why I should stay home really less rewarding than thoughts about why I should go to the gym? I'm imagining the opposite - I imagine staying home in my nice warm bed, and this is a very pleasant thought, and accords with what I currently really want (to not go to the gym). On the other hand, thoughts about why I should go to the gym, if I were to verbalize them, would sound like "Ugh, I guess I have to consider the fact that I'll be a fat slob if I don't go, even though I wish I could just never have to think about that".

- Base thoughts seem like literally animalistic desires - hunger seems basically built on top of the same kind of hunger a lizard or nematode feels. We know there are a bunch of brain areas in the hypothalamus etc that control hunger. So why shouldn't this be ontologically different from nobler motivations that are different from lizards'? It seems perfectly sensible that eg stimulants strengthen something about the neocortex relative to whatever part of the hypothalamus is involved in hunger. I guess I'm realizing now how little I understand about hunger - surely the plan to eat must originate in the cortex like every other plan, but it sure feels like it's tied into the hypothalamus in some really important way. I guess maybe hunger could have a plan-generator exactly like every other, which is modulated by hypothalamic connections? It still seems like "plans that need outside justification" vs. "plans that the hypothalamus will just keep active even if they're stupid" is a potentially important dichotomy.

- Base motivations also seem like things which have a more concrete connection to reinforcement learning. There's a really short reinforcement loop between "want to eat candy" and "wow, that was reinforcing", and a really long (sometimes nonexistent) loop between going to the gym and anything good happening. Again, this makes me suspicious that the base motivations are "encoded" in some way that's different from the nobler motivations and which explains why different substances can preferentially reinforce one relative to the other.

- The reasons for thinking of base motivations as more like priors, discussed in that post.

- Kind of a dumb objection, but this feels analogous to other problems where a conscious/intellectual knowledge fails to percolate to emotional centers of the brain, for example someone who knows planes are very safe but is scared of flying anyway. I'm not sure how to use your theory here to account for this situation, whereas if I had a theory that explained the plane phobia problem I feel like it would have to involve a concept of lower-level vs. higher-level systems that would be easy to plug into this problem. 

- Another dumb anecdotal objection, but this isn't how I consciously experience weakness of will. The example that comes to mind most easily is wanting to scratch an itch while meditating, even though I'm supposed to stay completely still. When I imagine my thought process while worrying about this, it doesn't feel like trying to think up new reframings of the plan. It feels like some sensory region of the brain saying "HEY! ITCH! YOU SHOULD SCRATCH IT!" and my conscious brain trying to exert some effort to overcome that. The effort doesn't feel like thinking of new framings, and the need for the effort persists long after every plausible new framing has been thought of. And it does seem relevant that "scratch itch" has no logical justification (it's just a basic animal urge that would persist even if someone told you there was no biological cause of the itch and no way that not scratching it could hurt you), whereas wanting to meditate well has a long chain of logical explanations.

Comment by Scott Alexander (Yvain) on (Brainstem, Neocortex) ≠ (Base Motivations, Honorable Motivations) · 2021-07-13T18:14:55.530Z · LW · GW

Can you link to an explanation of why you're thinking of the brainstem as plan-evaluator? I always thought it was the basal ganglia.

Comment by Scott Alexander (Yvain) on Why do patients in mental institutions get so little attention in the public discourse? · 2021-06-13T07:35:59.008Z · LW · GW

Mental hospitals of the type I worked at when writing that post only keep patients for a few days, maybe a few weeks at most. This means there's no long-term constituency for fighting them, and the cost of errors is (comparatively) low.

The procedures for these hospitals would be hard to change. It's hard to have a law like "you need a judge to approve sending someone to a mental hospital", because maybe someone's trying to kill themselves right now and the soonest a judge has an opening is three days from now. So the standard rule is "use your own judgment and a judge will review it in a week or two", but most psychiatric cases resolve before then and never have to see a judge. In theory patients can sue doctors if they think they were being held improperly, but they almost never get around to doing this and when they do they almost never win, for a combination of "they're usually wrong about the law and sometimes obviously insane" and "judges are biased towards doctors because they seem to know what they're talking about". Also, the law just got done instituting extremely severe and unpredictable punishments for any doctor who doesn't commit someone to a mental hospital and then sees that person do anything bad ever, and the law has kindly decided not to be extremely severe on both sides.

There are other mental hospitals that keep people for months or years, but these do have very strict requirements for getting someone into them and are much more careful.

Comment by Scott Alexander (Yvain) on Alcohol, health, and the ruthless logic of the Asian flush · 2021-06-04T20:52:05.021Z · LW · GW

I have some patients on disulfiram and it works very well when they take it. The problem is definitely that they can choose not to take it if they want alcohol (or sometimes just forget for normal reasons, then opportunistically drink after they realize they've forgotten). 

The implants are a great idea. As far as I know, the reason they're not used is because someone would have to pay for lots and lots of studies and the economics don't work out. Also because there are vague concerns about safety (if something went catastrophically wrong and the entire implant got released at once and then the patient drank, it would be potentially fatal) and ethics (should a realistically-probably-heavily-pressured patient be allowed to make decisions that bind their future selves?). I think this is dumb and we should just do the implant, but I don't think it's mysterious why we don't, or why (in the absence of the implant) disulfiram doesn't solve everything.

Comment by Scott Alexander (Yvain) on The EMH is False - Specific Strong Evidence · 2021-03-18T21:47:55.127Z · LW · GW

I tried to bet on this on Polymarket a few months ago. Their native client for directing money into your account didn't work (I think it was because I was in the US and it wasn't legal under US law). I tried to send money from another crypto account, and it said Polymarket didn't have enough money to pay the Ethereum gas fees to receive my money. It originally asked me to try reloading the page close to an odd numbered GMT hour, when they were sending infusions of money to pay gas fees, but I tried a few times and never got quite close enough. I just checked again and they're asking me to send them more money for gas fees, which I should probably do but which is a tough sell when they just ate the last chunk of money I sent them.

I assume the person you're talking about who made $100K is Vitalik. Vitalik knows much more about making Ethereum contracts work than the average person, and details the very complicated series of steps he had to take to get everything worked out in his blog post. There probably aren't very many people who can do all that successfully, and the people who can are probably busy becoming rich some other way.

Comment by Scott Alexander (Yvain) on Acetylcholine = Learning rate (aka plasticity) · 2021-03-18T18:17:19.826Z · LW · GW

Agreed - see and my writeup at

Comment by Scott Alexander (Yvain) on “PR” is corrosive; “reputation” is not. · 2021-02-16T00:01:46.716Z · LW · GW

Thanks, this is a great clarification.

Comment by Scott Alexander (Yvain) on Still Not in Charge · 2021-02-12T07:59:14.243Z · LW · GW

Thanks for this.

I think the UFH might be more complicated than you're making it sound here - the philosophers debate whether any human really has a utility function.

When you talk about the CDC Director sometimes doing deliberately bad policy to signal to others that she is a buyable ally, I interpret this as "her utility function is focused on getting power". She may not think of this as a "utility function", in fact I'm sure she doesn't, it may be entirely a selected adaptation to execute, but we can model it as a utility function for the same reason we model anything else as a utility function.

I used the example of a Director who genuinely wants the best, but has power as a subgoal since she needs it in order to enact good policies. You're using the example of a Director who really wants power, but (occasionally) has doing good as a subgoal since it helps her protect her reputation and avoid backlash. I would be happy to believe either of those pictures, or something anywhere in between. They all seem to me to cash out as a CDC Director with some utility function balancing goodness and power-hunger (at different rates), and as outsiders observing a CDC who makes some good policy and some bad-but-power-gaining policy (where the bad policy either directly gains her power, or gains her power indirectly by signaling to potential allies that she isn't a stuck-up goody-goody. If the latter, I'm agnostic as to whether she realizes that she is doing this, or whether it's meaningful to posit some part of her brain which contains her "utility function", or metaphysical questions like that).
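The "utility function balancing goodness and power-hunger" can be written as a one-line toy model. A minimal sketch, where the weights and the per-policy scores are my own illustrative assumptions, not anything from the discussion:

```python
# Toy model of a decision-maker whose utility function balances
# "goodness" against power gain, as discussed above. Weights and
# per-policy scores are invented for illustration.

def utility(goodness, power_gain, w_good=1.0, w_power=2.0):
    """Weighted sum: a power-leaning but not goodness-indifferent actor."""
    return w_good * goodness + w_power * power_gain

# (goodness, power_gain) scores, made up for illustration
policies = {
    "ban all antibiotics":        (-10, -5),  # bad and politically costly
    "authorize vaccines":         (  8,  6),  # good and profitable
    "signal to potential allies": ( -2,  4),  # bad but power-gaining
}

best = max(policies, key=lambda name: utility(*policies[name]))
print(best)  # the good-and-profitable policy wins under these weights
```

Under any positive mix of weights, the policy that is both good and power-gaining dominates, and the policy that is bad on both counts is never chosen - which is the observed pattern the paragraph describes.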

I'm not sure I agree with your (implied? or am I misreading you?) claim that destructive decisions don't correlate with political profit. The Director would never ban all antibiotics, demand everyone drink colloidal silver, or do a bunch of stupid things along those lines; my explanation of why not is something like "those are bad and politically-unprofitable, so they satisfy neither term in her utility function". Likewise, she has done some good things, like grant emergency authorization for coronavirus vaccines - my explanation of why is that doing that was both good and obviously politically profitable. I agree there might be some cases where she does things with neither desideratum but I think they're probably rare compared to the above.

Do we still disagree on any of this? I'm not sure I still remember why this was an important point to discuss.

I am too lazy to have opinions on all nine of your points in the second part. I appreciate them, I'm sure you appreciate the arguments for skepticism, and I don't think there's a great way to figure out which way the evidence actually leans from our armchairs. I would point to Dominic Cummings as an example of someone who tried the thing, had many advantages, and failed anyway, but maybe a less openly confrontational approach could have carried the day.

Comment by Scott Alexander (Yvain) on What is it good for? But actually? · 2020-12-17T01:55:17.818Z · LW · GW

Bronze Age war (as per James Scott) was primarily war for captives, because the Bronze Age model was kings ruling agricultural dystopias amidst virgin land where people could easily escape and become hunter-gatherers. The laborers would gradually escape, the country would gradually become less populated, and the king would declare war on a neighboring region to steal their people to use as serfs or slaves.

Iron Age to Industrial Age war (as per Peter Turchin) was primarily war for land, because of Malthus. Until the Industrial Revolution, you needed a certain amount of land to support a unit of population. Population was constantly increasing, land wasn't, and so every so often population would outstrip land, everyone would be starving and unhappy, and something would restore the situation to equilibrium. Absent any other action, that would be some sort of awful civil war or protracted anarchy where people competed for limited resources - aided by wages being very low (so they could hire soldiers easily) and people being very angry (so becoming a pretender and raising an army against the current king was a popular move). Kings' best way to forestall this disaster was to preemptively declare war against a foreign enemy. If they won, they could steal the enemy's land, which resolved the land/population imbalance and fed the excess population. If they lost, then (to be cynical about it), they still eliminated their excess population and successfully resolved the imbalance.

Comment by Scott Alexander (Yvain) on The rationalist community's location problem · 2020-09-25T07:21:45.382Z · LW · GW

The Bay Area is a terrible place to live in many ways. I think if we were selecting for the happiness of existing rationalists, there's no doubt we should be somewhere else.

But if the rationalist project is supposed to be about spreading our ideas and achieving things, it has some obvious advantages. If MIRI is trying to lure some top programmer, it's easier for them to suggest they move to the Bay (and offer them enough money to overcome the house price hurdle) than to suggest they move to Montevideo or Blackpool or even Phoenix. If CEA is trying to get people interested in effective altruism, getting to socialize with Berkeley and Stanford professors is a pretty big plus. And if we're trying to get the marginal person who isn't quite a community member yet but occasionally reads Less Wrong to integrate more, that person is more likely to be in the Bay than anywhere else we could move. I think this is still true despite the coronavirus and fires. Maybe it's becoming less so, but it's hard to imagine any alternative hub that's anywhere near as good by these metrics. *Maybe* Austin.

Separating rationalists interested in quality-of-life from rationalists working for organizations and doing important world-changing work seems potentially net negative.

I think if we were going to move the Berkeley hub, it would have to be to another US hub - most people aren't going to transfer countries, so even if the community as a whole moved, we would need another US hub for Americans who refused to or couldn't emigrate.

I don't think Moraga (or other similar places near the Bay) are worth trying. They're just as expensive as Berkeley, but almost all single-family homes, so it would be harder for poorer people to rent places there. Although there's a BART station, there's not much other transit, and most homes aren't walkable from the BART station, so poorer people without cars would be in trouble. And it's got the same level of fire danger as Berkeley, so we would be splitting the community in two (abandoning the poor people, the people tied to MIRI HQ, etc) while not gaining much more than a scenery upgrade. I think they're a fair alternative option for people who can't stand the squalor and crime of the Bay proper, but mostly in the context of those people moving there and commuting to Berkeley for community events.

If we made a larger-scale move, I think it would be to avoid the high housing costs, fires, blackouts, taxes, and social decay of the Bay. That rules out anywhere else in California - still the same costs, fires, blackouts, and taxes, although some places are marginally less decayed. It also rules out Cascadian cities like Portland and Seattle - only marginally better housing costs, worse fires, and worse social decay (eg violence in Portland). 

If we wanted to stick close enough to California that it was easy to see families/friends/colleagues, there are lots of great cities in or near the Mountain West - Phoenix, Salt Lake, Colorado Springs, Austin. All of those have housing prices well below half that of the Bay (Phoenix's cost-of-housing index is literally 20% of Berkeley's!). Austin is a trendy exciting tech hub, Colorado Springs frequently tops most-liveable lists, Salt Lake City seems unusually well-governed and resilient to potential climate or political crisis, and Phoenix is gratifyingly cheap.

The most successful adjacent past attempt at deliberate-hub-creation like this I know of was the Free State Project, where 20,000 libertarians agreed to create a libertarian hub. They did some analyses, voted on where the hub should be, created an assurance contract where every signatory agreed to move once there were 20,000 signatories, got 20,000 signatories, and moved. They ended up choosing New Hampshire, which means we might want to consider it as well. It's got great housing prices (Manchester is as cheap as Phoenix!), a great economy, beautiful scenery, a vibrant intellectual scene, it's less than an hour's drive to Boston, it's very politically influential (small, swing state, presidential primaries), and (now) has 20,000 libertarians who are interested in moving places and building hubs.

If people are interested in this, I think the first step would be to consult MIRI, CFAR, CEA, etc, and if they say no, decide whether splitting off "the community" from all of them is worth it. If they say yes, or people decide it's worth it to split, then make an organization and take a vote on location. Once you have a location in mind, start an assurance contract where once X people sign, everyone moves to the location (I'm not sure what X would be - maybe 50?)

I think this is a really interesting project, but probably am too tied to my group house to participate myself :(

Comment by Scott Alexander (Yvain) on Moloch and multi-level selection · 2020-08-11T23:14:33.119Z · LW · GW

I mostly agree with this - see eg

Comment by Scott Alexander (Yvain) on Ideology/narrative stabilizes path-dependent equilibria · 2020-06-11T21:30:03.464Z · LW · GW

I think you might find helpful here. It explains legitimacy as a Schelling point. If everyone thinks you're legitimate, you're legitimate. And if everyone expects everyone else to think you're legitimate, you're legitimate.

America has such a strong tradition of democracy that the Constitution makes an almost invincible Schelling point - everyone expects everyone else to follow it because everyone expects everyone else to follow it because...and so on. A country with less of a democratic tradition has less certainty around these points, and so some guy who seizes the treasury might become the best Schelling point anyone has.

Comment by Scott Alexander (Yvain) on English Bread Regulations · 2020-05-18T18:55:16.367Z · LW · GW

Banning fresh bread doesn't decrease human caloric needs. Wouldn't making fresh bread less desirable just mean people replace it with other foods, spending the same amount of money overall (or more, since bread is probably cheaper than its replacement) and removing any benefit from bread price controls? Or was the English government working off a model where people were overconsuming food because of how tasty fresh bread was?

Comment by Scott Alexander (Yvain) on The EMH Aten't Dead · 2020-05-17T11:41:29.036Z · LW · GW

Re: "revisionist history":

You criticize my description in "A Failure, But Not Of Prediction", which was:

The stock market is a giant coordinated attempt to predict the economy, and it reached an all-time high on February 12, suggesting that analysts expected the economy to do great over the following few months. On February 20th it fell in a way that suggested a mild inconvenience to the economy, but it didn’t really start plummeting until mid-March – the same time the media finally got a clue.

As my post said, the market started declining a little in February. Using the S&P link you provide, on February 28, it reached 2954, just 12% lower than its all-time high, then quickly recovered to only 8% lower a few days later. For comparison, the market fell 7% in May 2019 because Donald Trump made a bad tweet, and then everyone laughed it off and forgot about it within a few weeks. I think my claim that "it fell in a way that suggested a mild inconvenience to the economy" is a fair description of this.

It had its next major fall on March 9, reaching a new low (34% off its all-time high) March 23. I think it is fair to say it started plummeting in mid-March, though I would not blame you if you consider March 9 more "early" than "mid". For comparison, Jacob wrote his post warning that the coronavirus would be a big deal in late February, and I wrote one saying the same on March 2.

Some of this depends on the "correct" amount of market crash. I was writing my post in early April, when the market was near its floor. If that was the "correct" amount of market crash, then the early February crash underpredicted it, and the market didn't "get it right" until mid-March. As you write this post now, the market has recovered, and if it's at the "correct" price now, then the early February crash was basically correctly calibrated and the mid-March crash was an overreaction.
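The drawdown percentages in these paragraphs are simple to recompute. A minimal sketch, where the specific S&P 500 levels (the ~3386 February 19 closing high and ~2237 March 23 close) are my figures from public data, not from the original exchange:

```python
# Recompute the drawdown percentages discussed above. The index levels
# used here are assumed figures from public S&P 500 data, not taken
# from the original exchange.

def drawdown_pct(high, price):
    """Percent decline from a prior high."""
    return 100 * (high - price) / high

HIGH = 3386.15  # assumed all-time closing high, Feb 19 2020

print(round(drawdown_pct(HIGH, 2954.22), 1))  # late-February low
print(round(drawdown_pct(HIGH, 2237.40), 1))  # assumed March 23 close
```

This gives roughly 12.8% for the late-February dip and roughly 33.9% for the March 23 trough, close to the figures quoted in the comment.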

To be clear, I think time has proven you correct about the EMH (and this is easy for you to say, now that the market has stabilized). I'm not debating any of the points in your post, just your accusation that I am a "revisionist historian".

Comment by Scott Alexander (Yvain) on SlateStarCodex 2020 Predictions: Buy, Sell, Hold · 2020-05-02T08:37:26.720Z · LW · GW

Thanks, I look forward to seeing how this goes. I'm impressed with you being willing to bet against me on things you know nothing about like my restaurant preferences (not sarcastic, seriously impressed), and I will be *very* impressed if you end up broadly more accurate than I am in that category. In many cases I agree with your criticism once you explain your reasoning.

There was a pretty credible rumor that Kim Jong-un was dead last week when I wrote this, which is why I gave him such a low probability. Today the news is he was seen in public alive (though in theory this could be a sham), so you are probably right, but it made sense when I wrote it.

Comment by Scott Alexander (Yvain) on Evaluating Predictions in Hindsight · 2020-04-18T00:47:32.061Z · LW · GW

Thanks (as always) for your thoughts.

I agree most of your methods for evaluating predictions are good. But I think I mostly have a different use case, in two ways. First, for a lot of things I'm not working off an explicit model, where I can compare predictions made to the model to reality in many different circumstances. When I give Joe Biden X% of the nomination, this isn't coming from a general process that I can check against past elections and other candidates, it's just something like "Joe Biden feels X% likely to win". I think this is probably part of what you mean by hard mode vs. easy mode.

Second, I think most people who try to make predictions aren't trying to do something that looks like "beat the market". Accepting the market price is probably good enough for most purposes for everyone except investors, gamblers, and domain experts. For me the most valuable type of prediction is when I'm trying to operate in a field without a market, either because our society is bad at getting the right markets up (eg predicting whether coronavirus will be a global pandemic, where stock prices are relevant but there's no real prediction market in it) or because it's a more personal matter (eg me trying to decide whether I would be happier if I quit my job). Calibration is one of the few methods that works here, although I agree with your criticisms of it.
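A calibration check of the sort described here can be sketched in a few lines. The bucketing scheme and the sample predictions below are my own illustration, not anything from the comment:

```python
# Toy calibration check: bucket predictions by stated probability and
# compare the average stated probability with the observed frequency.
# Sample predictions are invented for illustration.

def calibration(predictions, n_buckets=5):
    """predictions: list of (stated_probability, outcome) pairs."""
    buckets = [[] for _ in range(n_buckets)]
    for p, outcome in predictions:
        i = min(int(p * n_buckets), n_buckets - 1)
        buckets[i].append((p, outcome))
    report = []
    for bucket in buckets:
        if not bucket:
            continue
        mean_p = sum(p for p, _ in bucket) / len(bucket)
        freq = sum(1 for _, o in bucket if o) / len(bucket)
        report.append((round(mean_p, 2), round(freq, 2), len(bucket)))
    return report

sample = [(0.9, True), (0.9, True), (0.9, False),
          (0.65, True), (0.65, False),
          (0.25, False), (0.25, False), (0.25, True)]
print(calibration(sample))
```

For a well-calibrated predictor, the stated probability and the observed frequency track each other in every bucket; large gaps in either direction indicate over- or under-confidence.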

I'm not sure we disagree on Silver's Trump prediction and superforecasters' Brexit prediction. I agree they did as well as possible with the information that they had and do not deserve criticism. We seem to have a semantic disagreement on whether a prediction that does this (but ascribes less than 50% to the winning side on a binary question) should be called "intelligently-made but wrong" or "right". I'm not really committed to my side of this question except insofar as I want to convey information clearly.

I'm not sure it's possible to do the thing that you're doing here, which is to grade my predictions (with hindsight of what really happened) while trying not to let your hindsight contaminate your grades. With my own hindsight, I agree with most of your criticisms, but I don't know whether that's because you have shown me the error of my ways, or because Scott-with-hindsight and Zvi-with-hindsight are naturally closer together than either of us is to Scott-without-hindsight (and, presumably, Zvi-without-hindsight).

A few cases where I do have thoughts - one reason I priced Biden so low was that in December 2018 when I wrote those it was unclear whether he was even going to run (I can't find a prediction market for that month, but prediction markets a few months later were only in the low 70s or so). Now it seems obvious that he would run, but at the time you could have made good money on InTrade by predicting that. My Biden estimate was higher than the prediction market's Biden estimate at that time (and in fact I made lots of money betting on Biden in the prediction markets in January 2019), so I don't think I was clearly and egregiously too low.

Same with Trump being the GOP nominee. I agree now it seems like it was always a sure thing. But in late 2018, he'd been president for just under two years, it was still this unprecedented situation of a complete novice who offended everyone taking the presidency, we were in the middle of a government shutdown that Trump was bungling so badly that even the Republicans were starting to grumble, and the idea of GOP falling out of love with Trump just felt much more plausible than it does now. It's possible this was still silly even in late 2018, but I don't know how to surgically remove my hindsight.

I will defend my very high confidence on Trump approval below 50, based on it never having gotten above 46 in his presidency so far. While I agree a 9-11 scale event could change that, that sort of thing probably only happens once every ten years or so. Trump got a boost from a rally-round-the-flag effect around COVID, and it was clearly bigger than any other boost he's gotten in his administration, but it only took him up to 45.8% or so, so even very large black swans aren't enough. The largest boost Obama got in his administration, after killing Osama, was only 5 points above baseline, still not enough for Trump to hit 50. And it wouldn't just require an event like this to happen, but to happen at exactly the right time to peak on 1/1/2020.

May staying in power feels wrong now, but she had beaten Labour recently enough that she didn't have to quit if she didn't want to, she had survived a no-confidence motion recently enough that party rules barred another no-confidence vote against her until December (and it probably wouldn't happen exactly in December), and she had failed badly many times before without resigning. So I figured she wasn't interested in resigning just because Brexit was hard, and nobody else could kick her out against her will, so she would probably stay in power. I guess she got tired of failing so many times. You were right and I was wrong, but I don't think you could have (or should have been able to) convinced me of that last year.

Comment by Scott Alexander (Yvain) on How to evaluate (50%) predictions · 2020-04-10T22:31:38.823Z · LW · GW

Correction: Kelsey gave Biden 60% probability in January 2020. I gave him 20% probability in January 2019 (before he had officially entered the race). I don't think these contradict each other.

Comment by Scott Alexander (Yvain) on April Coronavirus Open Thread · 2020-04-01T05:21:10.440Z · LW · GW

No, it says:

The study design does not allow us to determine whether medical masks had efficacy or whether cloth masks were detrimental to HCWs by causing an increase in infection risk. Either possibility, or a combination of both effects, could explain our results. It is also unknown whether the rates of infection observed in the cloth mask arm are the same or higher than in HCWs who do not wear a mask, as almost all participants in the control arm used a mask. The physical properties of a cloth mask, reuse, the frequency and effectiveness of cleaning, and increased moisture retention, may potentially increase the infection risk for HCWs. The virus may survive on the surface of the facemasks,29 and modelling studies have quantified the contamination levels of masks.30 Self-contamination through repeated use and improper doffing is possible. For example, a contaminated cloth mask may transfer pathogen from the mask to the bare hands of the wearer. We also showed that filtration was extremely poor (almost 0%) for the cloth masks. Observations during SARS suggested double-masking and other practices increased the risk of infection because of moisture, liquid diffusion and pathogen retention.31 These effects may be associated with cloth masks... The study suggests medical masks may be protective, but the magnitude of difference raises the possibility that cloth masks cause an increase in infection risk in HCWs.
Comment by Scott Alexander (Yvain) on April Coronavirus Open Thread · 2020-04-01T01:45:02.299Z · LW · GW

This study is skeptical of cloth masks. Does anyone have any thoughts on it, or know any other studies investigating this question?

Comment by Scott Alexander (Yvain) on April Coronavirus Open Thread · 2020-03-31T23:47:45.532Z · LW · GW

In most major countries, daily case growth has switched from exponential to linear, an important first step towards the infection being under control. See the linked graphs for more detail; you can change which countries appear on them. The growth rate in the world as a whole has also turned linear. Since these graphs show growth per day, a horizontal line represents a linear growth rate.
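A minimal sketch of the graph-reading point, with made-up numbers: on a plot of new cases per day, exponential cumulative growth shows up as a rising curve, while linear cumulative growth shows up as a horizontal line.

```python
# Hypothetical numbers, just to illustrate how to read a new-cases-per-day plot.

def daily_new(cumulative):
    """New cases per day, computed from a cumulative case series."""
    return [b - a for a, b in zip(cumulative, cumulative[1:])]

# Exponential phase: cumulative cases grow ~26% per day (doubling every ~3 days).
exponential = [int(100 * 1.26 ** t) for t in range(10)]
# Linear phase: a constant 500 new cases per day.
linear = [exponential[-1] + 500 * t for t in range(10)]

print(daily_new(exponential))  # each entry larger than the last: a rising curve
print(daily_new(linear))       # every entry is 500: a horizontal line
```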

If it were just one country, I would worry it was an artifact of reduced testing. Given that it's happening in almost every country at once, I say it's real.

The time course doesn't really match lockdowns, which were instituted at different times in different countries anyway. Sweden and Brazil, which are infamous for not making any real coordinated effort to stop the epidemic, are showing some of the same positive signs as everyone else, though the relevant graph is a little hard to interpret.

My guess is that this represents increased awareness of social distancing and increased taking-things-seriously starting about two weeks ago, and that this happened everywhere at once because it was more of a media phenomenon than a political one, and the media everywhere reads the media everywhere else and can coordinate on the same narrative quickly.

Comment by Scott Alexander (Yvain) on April Coronavirus Open Thread · 2020-03-31T22:07:26.473Z · LW · GW

Thanks for the shout-out, but I don't think the thing I proposed there is quite the same as hammer and dance. I proposed lockdown, then gradual titration of lockdown level to build herd immunity. Pueyo and others are proposing lockdown, then stopping lockdown in favor of better strategies that prevent transmission. The hammer and dance idea is better, and if I had understood it at the time of writing I would have been in favor of that instead.

(there was an ICL paper that proposed the same thing I did, and I did brag about preempting them, which might be what you saw)

Comment by Scott Alexander (Yvain) on SSC - Face Masks: Much More Than You Wanted To Know · 2020-03-24T17:42:36.669Z · LW · GW

Sorry, by "complete" I meant "against both types of transmission". I agree it was confusing/wrong as written, so I edited it to say "generalized".

Comment by Scott Alexander (Yvain) on Can crimes be discussed literally? · 2020-03-23T17:37:38.148Z · LW · GW

Agreed, it seems very similar to (maybe exactly like) the "Martin Luther King was a criminal" example from there.

Comment by Scott Alexander (Yvain) on March Coronavirus Open Thread · 2020-03-14T03:41:04.817Z · LW · GW

China is following a strategy of shutting down everything and getting R0 as low as possible. This works well in the short term, but they either have to keep everything shut down forever, or risk the whole thing starting over again.

UK is following a strategy of shutting down only the highest-risk people, and letting the infection burn itself out. It's a permanent solution, but it's going to be really awful for a while as the hospitals overload and many people die from lack of hospital care.

What about a strategy in between these two? Shut everything down, then gradually unshut down a little bit at a time. Your goal is to "surf" the border of the number of cases your medical system can handle at any given time (maybe this would mean an R0 of 1?). Any more cases, and you tighten quarantine; any fewer cases, and you relax it. If you're really organized, you can say things like "This is the month for people with last names A - F to go out and get the coronavirus". That way you never get extra mortality from the medical system being overloaded, but you do eventually get herd immunity and the ability to return to normalcy.

This would be sacrificing a certain number of lives, so you'd only want to do it if you were sure that you couldn't make the virus disappear entirely, and sure that there wasn't going to be vaccine or something in a few months that would solve the problem, but it seems like more long-term thinking than anything I've heard so far.

I've never heard of anyone trying anything like this before, but maybe there's never been a relevant situation before.
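The titration idea above can be sketched as a toy feedback loop (this is a control-theory cartoon with invented numbers, not an epidemiological model):

```python
# Toy "surf the capacity line" loop: each week, tighten restrictions if active
# cases exceed hospital capacity, relax them if cases fall below it, and let
# accumulated immunity gradually slow transmission. All numbers are invented.

CAPACITY = 10_000        # active cases the medical system can handle
POPULATION = 1_000_000

cases = 8_000            # current active cases
immune = 0.0             # fraction of the population with immunity
r = 1.0                  # growth factor per week under current restrictions

for week in range(52):
    if cases > CAPACITY:
        r = max(0.7, r - 0.1)   # tighten quarantine
    else:
        r = min(1.3, r + 0.1)   # relax quarantine
    cases = cases * r * (1 - immune)     # immunity damps the spread
    immune = min(1.0, immune + cases / POPULATION)

print(f"immune fraction after a year: {immune:.0%}")
```

In this cartoon the case count hovers near capacity while immunity ratchets up, which is the intended behavior of the strategy.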

Comment by Scott Alexander (Yvain) on The Critical COVID-19 Infections Are About To Occur: It's Time To Stay Home [crosspost] · 2020-03-12T21:36:44.975Z · LW · GW

It sounds like you've found that by March 17, the US will have the same number of cases that Italy had when things turned disastrous.

But the US has five times the population of Italy, and the epidemic in the US seems more spread out compared to Italy (where it was focused in Lombardy). This makes me think we might have another ~3 doubling times (a little over a week) after the time we reach the number of cases that marked the worst phase of Italy, before we get the worst phase here.
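The population adjustment can be made explicit: population alone buys log2(5) ≈ 2.3 extra doublings, and rounding up to ~3 to account for the US epidemic being more geographically spread out gives the "little over a week" figure (the doubling time below is my assumption, not a number from the comment):

```python
import math

# Back-of-envelope: extra doubling times the US gets before reaching Italy's
# per-capita case load. The 2.5-day doubling time is an assumed illustrative value.
population_ratio = 5            # US population / Italy population, roughly
doubling_time_days = 2.5        # assumed doubling time at the time

extra_doublings = math.log2(population_ratio)
extra_days = extra_doublings * doubling_time_days
print(f"{extra_doublings:.1f} extra doublings ~= {extra_days:.0f} extra days")
```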

I agree that it's going to get worse than most people expect sooner than most people expect, and that now is a good time to start staying inside. But (and I might be misunderstanding) I'm not sure if I would frame this as "tell people to stay inside for the next five days", because I do think it's possible that five days from now nothing has gotten obviously worse and then people will grow complacent.

Comment by Scott Alexander (Yvain) on When to Reverse Quarantine and Other COVID-19 Considerations · 2020-03-10T19:24:44.603Z · LW · GW

Have you looked into whether cinchona is really an acceptable substitute for chloroquine?

I'm concerned for two reasons. First, the studies I saw were on chloroquine, and I don't know if quinine is the same as chloroquine for this purpose. They have slightly different antimalarial activity - some chloroquine-resistant malaria strains are still vulnerable to quinine - and I can't find any information about whether their antiviral activity is the same. They're two pretty different molecules and I don't think it's fair to say that anything that works for one will also work for the other. Even if they do work, I don't know how to convert doses. It looks like the usual quinine dose for malaria is about three times the usual chloroquine dose, but I have no idea how that translates to antiviral properties.

Second, I don't know how much actual quinine is in cinchona. Quinine is a pretty dangerous substance, so the fact that the FDA doesn't care if people sell cinchona makes me think there isn't much in it. One paper suggests 6 mg quinine per gram of bark, though it's using literal bark and not the purified bark product they sell in supplement stores. At that rate, using a typical supplement-store product as an example cinchona preparation and naively assuming that quinine dose = chloroquine dose, the dose corresponding to the Chinese studies would be 160 cinchona pills, twice a day, for ten days - a level at which some other alkaloid in cinchona bark could potentially kill you.
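As a sanity check on that arithmetic (the pill size and the chloroquine dose per administration are my guesses for illustration, and the quinine = chloroquine equivalence is, as noted, naive):

```python
# All inputs here are assumptions for illustration only; this is not dosing advice.
quinine_per_gram_bark_mg = 6      # ~6 mg quinine per gram of cinchona bark
bark_per_pill_g = 0.5             # assumed 500 mg of bark per supplement pill
target_dose_mg = 500              # assumed chloroquine dose per administration

quinine_per_pill_mg = quinine_per_gram_bark_mg * bark_per_pill_g   # 3 mg/pill
pills_per_dose = target_dose_mg / quinine_per_pill_mg
print(f"{pills_per_dose:.0f} pills per dose")   # ~167, close to the ~160 in the text
```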

Also, reverse-quarantining doesn't just benefit you, it also benefits the people who you might infect if you get the disease, and the person whose hospital bed you might be taking if you get the disease. I don't know what these numbers are but they should probably figure into your calculation.

Comment by Scott Alexander (Yvain) on Model estimating the number of infected persons in the bay area · 2020-03-09T05:52:36.842Z · LW · GW

I tried to answer the same question here and got very different numbers - somewhere between 500 and 2000 cases now.

I can't see your images or your spreadsheet, so I can't tell exactly where we diverged. One possible issue is that AFAIK most people start showing symptoms after 5 days. 14 days is the preferred quarantine period because it's almost the maximum amount of time the disease can incubate asymptomatically; the average is much lower.
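To illustrate the gap between the typical and maximum incubation period, here is a lognormal sketch; the parameters are my own illustrative choices (picked to give a ~5-day median and a 97.5th percentile near the two-week quarantine bound), not figures from this thread:

```python
import math

# Illustrative lognormal incubation-period model (parameters assumed, not fitted).
median_days = 5.1                 # assumed median incubation period
sigma = 0.43                      # assumed log-scale spread

mu = math.log(median_days)
p975 = math.exp(mu + 1.96 * sigma)    # 97.5th percentile of a lognormal
print(f"median: {median_days} days, 97.5th percentile: {p975:.1f} days")
```

So a 14-day quarantine covers nearly all cases even though the typical case shows symptoms in about 5 days.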

Comment by Scott Alexander (Yvain) on REVISED: A drowning child is hard to find · 2020-02-02T20:32:47.590Z · LW · GW

I've read this. I interpret them as saying there are fundamental problems of uncertainty with saying any number, not that the number $5000 is wrong. There is a complicated and meta-uncertain probability distribution with its peak at $5000. This seems like the same thing we mean by many other estimates, like "Biden has a 40% chance of winning the Democratic primary". GiveWell is being unusually diligent in discussing the ways their number is uncertain and meta-uncertain, but it would be wrong (isolated demand for rigor) to retreat from a best estimate to total ignorance because of this.

Comment by Scott Alexander (Yvain) on REVISED: A drowning child is hard to find · 2020-02-02T20:28:39.113Z · LW · GW

I don't hear EAs doing this (except when quoting this post), so maybe that was the source of my confusion.

I agree Good Ventures could saturate the $5000/life tier, bringing marginal cost up to $10000 per life (or whatever). But then we'd be having this same discussion about saving money for $10000/life. So it seems like either:

1. Good Ventures donates all of its money, tomorrow, to stopping these diseases right now, and ends up driving the marginal cost of saving a life to some higher number and having no money left for other causes or the future, or

2. Good Ventures spends some of its money on stopping diseases, helps drive the marginal cost of saving a life up to some number N, but keeps money for other causes and the future, and for more complicated reasons like not wanting to take over charities, even though it could spend the remaining money on short-term disease-curing at $N/life.

(1) seems dumb. (2) seems like what it's doing now, at N = $5000 (with usual caveats).

It still seems accurate to say that you or I, if we wanted to, could currently donate $5000 (with usual caveats) and save a life. It also seems correct to say, once you've convinced people of this surprising fact, that they can probably do even better by taking that money/energy and devoting it to causes other than immediate-life-saving, the same way Good Ventures is.

I agree that if someone said "since saving one life costs $5000, and there are 10M people threatened by these diseases in the world, EA can save every life for $50B", they would be wrong. Is your concern only that someone is saying this? If so, it seems like we don't disagree, though I would be interested in seeing you link such a claim being made by anyone except the occasional confused newbie.

I'm kind of concerned about this because I feel like I've heard people reference your post as proving that EA is fraudulent and we need to throw it out and replace it with something nondeceptive (no, I hypocritically can't link this, it's mostly been in personal conversations), but I can't figure out how to interpret your argument as anything other than "if people worked really hard to misinterpret certain claims, then joined them together in an unlikely way, it's possible a few of them could end up confused in a way that doesn't really affect the bigger picture."

Comment by Scott Alexander (Yvain) on High-precision claims may be refuted without being replaced with other high-precision claims · 2020-02-01T02:07:47.091Z · LW · GW

An alternate response to this point is that if someone comes off their medication, then says they're going to kill their mother because she is poisoning their food, and the food poisoning claim seems definitely not true, then spending a few days assessing what is going on and treating them until it looks like they are not going to kill their mother anymore seems justifiable for reasons other than "we know exactly what biological circuit is involved with 100% confidence".

(source: this basically describes one of the two people I ever committed involuntarily)

I agree that there are a lot of difficult legal issues to be sorted out about who has the burden of proof and how many hoops people should have to jump through to make this happen, but none of them look at all like "you do not know the exact biological circuit involved with 100% confidence using a theory that has had literally zero exceptions ever".

Comment by Scott Alexander (Yvain) on REVISED: A drowning child is hard to find · 2020-02-01T02:01:12.755Z · LW · GW

I'm confused by your math.

You say 10M people die per year of preventable diseases, and the marginal cost of saving a life is (presumed to be) $5K.

The Gates Foundation and OpenPhil combined have about $50B. So if marginal cost = average cost, their money combined is enough to save everyone for one year.
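The arithmetic behind that, with the numbers as given:

```python
# Rough numbers from the discussion; the real figures carry the usual caveats.
deaths_per_year = 10_000_000          # annual deaths from these preventable diseases
marginal_cost_per_life = 5_000        # estimated marginal cost to save a life, USD
endowment = 50_000_000_000            # Gates Foundation + OpenPhil combined, USD

# If the marginal cost also held as the average cost:
one_year_cost = deaths_per_year * marginal_cost_per_life
print(one_year_cost == endowment)     # True: the endowment covers exactly one year
```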

But marginal cost certainly doesn't equal average cost; average cost is probably orders of magnitude higher. Also, Gates and OpenPhil might want to do something other than prevent all diseases for one year, then leave the world to rot after that.

I agree a "grand experiment" would be neat. But are you sure it's this easy? Suppose we want to try eliminating malaria in Madagascar (chosen because it's an island, so it seems like an especially good test case). It has 6K malaria deaths yearly, so if we use the 5K per life number, that should cost $30 million. But given the marginal vs. average consideration, the real number should probably be much higher, maybe $50K per person. Now the price tag is $300M/year. But that's still an abstraction. AFAIK OpenPhil doesn't directly employ any epidemiologists, aid workers, or Africans. So who do you pay the $300M to? Is there some charity that is willing to move all their operations to Madagascar and concentrate entirely on that one island for a few years? Do the people who work at that charity speak Malagasy? Do they have families who might want to live somewhere other than Madagascar? Do they already have competent scientists who can measure their data well? If not, can you hire enough good scientists, at scale, to measure an entire country's worth of data? Are there scientists willing to switch to doing that for enough money? Do you have somebody working for you who can find them and convince them to join your cause? Is the Madagascar government going to let thousands of foreign aid workers descend on them and use them as a test case? Does OpenPhil employ someone who can talk with the Madagascar government and ask them? Does that person speak Malagasy? If the experiment goes terribly, does that mean we're bad at treating malaria, or that we were bad at transferring our entire malaria-treating apparatus to Madagascar and scaling it up by orders of magnitude on short notice? What if it went badly because there are swamps in Madagascar that the local environmental board won't let anyone clear, and there's nothing at all like that in most malarial countries?

I feel like just saying "run a grand experiment" ignores all of these considerations. I agree there's *some* amount of money that lets you hire/train/bribe everyone you need to make this happen, but by that point maybe this experiment costs $1B/year, which is the kind of money that even OpenPhil and Gates need to worry about. My best guess is that they're both boggled by the amount of work it would take to make something like this happen.

(I think there was something like a grand experiment to eliminate malaria on the island of Zanzibar, and it mostly worked, with transmission rates down 94%, but it involved a lot of things other than bednets, because it turned out most of the difficulty lay in hammering away at the problems that remain after you pick the low-hanging fruit. I don't know if anyone has tried to learn anything from this.)

I'm not sure it's fair to say that if these numbers are accurate then charities "are hoarding money at the price of millions of preventable death". After all, that's basically true of any possible number. If lives cost $500,000 to save, then Gates would still be "hoarding money" if he didn't spend his $50 billion saving 100,000 people. Gates certainly isn't optimizing for saving exactly as many people as he can right now. So either there's no such person as Bill Gates and we're just being bamboozled into believing that there is, or Gates is trying to do things other than simultaneously throwing all of his money at the shortest-term cause possible without any infrastructure to receive it.

I think the EA movement already tries really hard to push the message that it's mostly talent-constrained and not funding-constrained, and it already tries really hard to convince people to donate to smaller causes where they might have an information advantage. But the estimate that you can save a life for $5000 remains probably true (with normal caveats about uncertainty) and is a really important message to get people thinking about ethics and how they want to contribute.

Comment by Scott Alexander (Yvain) on High-precision claims may be refuted without being replaced with other high-precision claims · 2020-01-31T07:28:29.853Z · LW · GW
Likewise for psychiatry, which justifies incredibly high levels of coercion on the basis of precise-looking claims about different kinds of cognitive impairment and their remedies.

You're presenting a specific rule about manipulating logically necessary truths, then treating it as a vague heuristic and trying to apply it to medicine! Aaaaaah!

Suppose a physicist (not even a doctor! a physicist!) tries to calculate some parameter. Theory says it should be 6, but the experiment returns a value of 6.002. Probably the apparatus is a little off, or there's some other effect interfering (eg air resistance), or you're bad at experiment design. You don't throw out all of physics!

Or moving on to biology: suppose you hypothesize that insulin levels go up in response to glucose and go down after the glucose is successfully absorbed, and so insulin must be a glucose-regulating hormone. But you find one guy who just has really high levels of insulin no matter how much glucose he has. Well, that guy has an insulinoma. But if you lived before insulinomas were discovered, then you wouldn't know that. You still probably shouldn't throw out all of endocrinology based on one guy. Instead you should say "The theory seems basically sound, but this guy probably has something weird we'll figure out later".

I'm not claiming these disprove your point - that if you're making a perfectly-specified universally-quantified claim and receive a 100%-confidence 100%-definitely-relevant experimental result disproving it, it's disproven. But nobody outside pure math is in the perfectly-specified universally-quantified claim business, and nobody outside pure math receives 100%-confidence 100%-definitely-relevant tests of their claims. This is probably what you mean by the term "high-precision" - the theory of gravity isn't precise enough to say that no instrument can ever read 6.002 when it should read 6, and the theory of insulin isn't precise enough to say nobody can have weird diseases that cause exceptions. But both of these are part of a general principle that nothing in the physical world is precise enough that you should think this way.

See eg Kuhn, who makes the exact opposite point to this post - that no experimental result can ever prove any theory wrong with certainty. That's why we need this whole Bayesian thing.

Comment by Scott Alexander (Yvain) on Are "superforecasters" a real phenomenon? · 2020-01-09T03:05:45.427Z · LW · GW

I was going off absence of evidence (the paper didn't say anything other than taking the top 2%), so if anyone else has positive evidence, that outweighs what I'm saying.

Comment by Scott Alexander (Yvain) on Free Speech and Triskaidekaphobic Calculators: A Reply to Hubinger on the Relevance of Public Online Discussion to Existential Risk · 2020-01-06T06:44:18.725Z · LW · GW

I agree much of psychology etc are bad for the reasons you state, but this doesn't seem to be because everyone else has fried their brains by trying to simulate how to appease triskaidekaphobics too much. It's because the actual triskaidekaphobics are the ones inventing the psychology theories. I know a bunch of people in academia who do various verbal gymnastics to appease the triskaidekaphobics, and when you talk to them in private they get everything 100% right.

I agree that most people will not literally have their buildings burned down if they speak out against orthodoxies (though there's a folk etymology for getting fired which is relevant here). But I appreciate Zvi's sequence on super-perfect competition as a signpost of where things can end up. I don't think academics, organization leaders, etc. are in super-perfect competition the same way middle managers are, but I also don't think we live in the world where everyone has infinite amounts of slack to burn endorsing taboo ideas and nothing can possibly go wrong.

Comment by Scott Alexander (Yvain) on Less Wrong Poetry Corner: Walter Raleigh's "The Lie" · 2020-01-06T06:31:06.201Z · LW · GW

I think you might be wrong about how fraud is legally defined. If the head of a dot-com pet food startup says "You should invest in us, we're going to make millions, everyone wants to order pet food online", and then you invest in them, and then they go bankrupt, that person was probably biased and irresponsible, but nobody has committed fraud.

If Raleigh had simply said "Sponsor my expedition to El Dorado, which I believe has lots of gold", that doesn't sound like fraud either. But in fact he said:

For the rest, which myself have seen, I will promise these things that follow, which I know to be true. Those that are desirous to discover and to see many nations may be satisfied within this river, which bringeth forth so many arms and branches leading to several countries and provinces, above 2,000 miles east and west and 800 miles south and north, and of these the most either rich in gold or in other merchandises. The common soldier shall here fight for gold, and pay himself, instead of pence, with plates of half-a-foot broad, whereas he breaketh his bones in other wars for provant and penury. Those commanders and chieftains that shoot at honour and abundance shall find there more rich and beautiful cities, more temples adorned with golden images, more sepulchres filled with treasure, than either Cortes found in Mexico or Pizarro in Peru. And the shining glory of this conquest will eclipse all those so far-extended beams of the Spanish nation.

There were no Indian cities, and essentially no gold, anywhere in Guyana.

I agree with you that lots of people are biased! I agree this can affect their judgment in a way somewhere between conflict theory and mistake theory! I agree you can end up believing the wrong stories, or focusing on the wrong details, because of your bias! I'm just not sure that's how fraud works, legally, and I'm not sure it's an accurate description of what Sir Walter Raleigh did.

Comment by Scott Alexander (Yvain) on Less Wrong Poetry Corner: Walter Raleigh's "The Lie" · 2020-01-06T06:15:58.436Z · LW · GW

What exactly is contradictory? I only skimmed the relevant pages, but they all seemed to give a pretty similar picture. I didn't get a great sense of exactly what was in Raleigh's book, but all of them (and whoever tried him for treason) seemed to agree it was somewhere between heavily exaggerated and outright false, and I get the same impression from the full title "The discovery of the large, rich, and beautiful Empire of Guiana, with a relation of the great and golden city of Manoa (which the Spaniards call El Dorado)"

Comment by Scott Alexander (Yvain) on Less Wrong Poetry Corner: Walter Raleigh's "The Lie" · 2020-01-06T06:14:36.597Z · LW · GW

I'm confused by your confusion. The first paragraph establishes that Raleigh was at least as deceptive as the institutions he claimed to be criticizing. The second paragraph argues that if deceptive people can write famous poems about how they are the lone voice of truth in a deceptive world, we should be more careful about taking claims like that completely literally.

If you want more than that, you might have to clarify what part you don't understand.

Comment by Scott Alexander (Yvain) on What is Life in an Immoral Maze? · 2020-01-06T06:08:55.689Z · LW · GW
Questions that will be considered later, worth thinking about now, include: How does this persist? If things are so bad, why aren’t things way worse? Why haven’t these corporations fallen apart or been competed out of business? Given they haven’t, why hasn’t the entire economy collapsed? Why do regular people, aspirant managers and otherwise, still think of these manager positions as the ‘good jobs’ as opposed to picking up pitchforks and torches?

I hope you also answer a question I had when I was reading this: it's percolated down into common consciousness that some jobs are unusually tough and demanding. Medicine, finance, etc have reputations for being grueling. But I'd never heard that about middle management and your picture of middle management sounds worse than either. Any thoughts on why knowledge of this hasn't percolated down?

Comment by Scott Alexander (Yvain) on Less Wrong Poetry Corner: Walter Raleigh's "The Lie" · 2020-01-04T23:25:45.540Z · LW · GW

Walter Raleigh is also famous for leading an expedition to discover El Dorado. He didn't find it, but he wrote a book saying that he definitely had, and that if people gave him funding for a second expedition he would bring back limitless quantities of gold. He got his funding, went on his second expedition, and of course found nothing. His lieutenant committed suicide out of shame, and his men decided the Spanish must be hoarding the gold and burnt down a Spanish town. On his return to England, Raleigh was tried for treason based on a combination of the attack on Spain (which England was at peace with at the time) and defrauding everyone about the El Dorado thing. He was executed in 1618.

For conflict theorists, the moral of this story is that accusing everyone else of being lying and corrupt can sometimes be a strategy con men use to deflect suspicion. For mistake theorists, the moral is that it's really easy to talk yourself into a biased narrative where you are a lone angel in a sea full of corruption, and you should try being a little more charitable to other people and a little harsher on yourself.

Comment by Scott Alexander (Yvain) on Predictive coding & depression · 2020-01-03T19:27:06.164Z · LW · GW

In this post and the previous one you linked to, you do a good job explaining why your criterion e is possible / not ruled out by the data. But can you explain more about what makes you think it's true? Maybe this is part of the standard predictive coding account and I'm just misunderstanding it, if so can you link me to a paper that explains it?

I'm a little nervous about the low-confidence model of depression, both for some of the reasons you bring up, and because the best fits (washed-out visual field and psychomotor retardation) are really marginal symptoms of depression that you only find in a few of the worst cases. The idea of depression as just a strong global negative prior (that makes you interpret everything you see and feel more negatively) is pretty tempting. I like Friston's attempt to unify these by saying that bad mood is just a claim that you're in an unpredictable environment, with the reasoning apparently being something like "if you have no idea what's going on, probably you're failing" (eg if you have no idea about the social norms in a given space, you're more likely to be accidentally stepping on someone's toes than brilliantly navigating complicated coalitional politics by coincidence). I'm not sure what direction all of this happens in. Maybe if your brain's computational machinery gets degraded by some biochemical insult, it widens all confidence intervals since it can't detect narrow hits, this results in fewer or weaker positive hits being detected, this gets interpreted as an unpredictable world, and this gets interpreted as negative prior on how you're doing?

Comment by Scott Alexander (Yvain) on Perfect Competition · 2019-12-29T19:50:41.247Z · LW · GW
Things sometimes get bad. Once things get sufficiently bad that no one can deviate from short-term selfish actions or be a different type of person without being wiped out, things are no longer stable. People cheat on long term investments, including various combinations of things such as having and raising children, maintaining infrastructure and defending norms. The seed corn gets eaten. Eventually, usually when some random new threat inevitably emerges, the order collapses, and things start again. The rise and fall of civilizations.

I'm wondering if you're thinking of the same work I am; I think that was what made me realize things worked this way, and it was indeed a big update on the standard narrative. I still haven't decided whether this is just a quirk of systems that have certain agriculture-related dynamics, or a more profound insight about systems in general. I look forward to reading more of what you have to say about this.

I think my answer (not yet written up) to why things aren't worse has something to do with competitions on different time scales - if you have more than zero slack, you want to devote a small amount of your budget to R&D, and then you'll win a long-run competition against a company that doesn't do this. Integrate all the different possible timescales and this gets so confusing that maybe the result barely looks like competition at all. I've been having trouble writing this up and am interested in seeing if you're thinking something similar. Again, really looking forward to reading more.

Comment by Scott Alexander (Yvain) on Firming Up Not-Lying Around Its Edge-Cases Is Less Broadly Useful Than One Might Initially Think · 2019-12-29T19:17:19.665Z · LW · GW

At the risk of being self-aggrandizing, I think the idea of axiology vs. morality vs. law is helpful here.

"Don't be misleading" is an axiological commandment - it's about how to make the world a better place, and what you should hypothetically be aiming for absent other considerations.

"Don't tell lies" is a moral commandment. It's about how to implement a pale shadow of the axiological commandment on a system run by duty and reputation, where you have to contend with stupid people, exploitative people, etc.

(so for example, I agree with you that the Rearden Metal paragraph is misleading and bad. But it sounds a lot like the speech I give patients who ask for the newest experimental medication. "It passed a few small FDA trials without any catastrophic side effects, but it's pretty common that this happens and then people discover dangerous problems in the first year or two of postmarketing surveillance. So unless there's some strong reason to think the new drug is better, it's better to stick with the old one that's been used for decades and is proven safe." I know and you know that there's a subtle difference here and the Institute is being bad while I'm being good, but any system that tries to implement reputation loss for the Institute at scale, implemented on a mob of dumb people, is pretty likely to hurt me also. So morality sticks to bright-line cases, at the expense of not being able to capture the full axiological imperative.)

This is part of what you mean when you say the report-drafting scientist is "not a bad person" - they've followed the letter of the moral law as best they can in a situation where there are lots of other considerations, and where they're an ordinary person as opposed to a saint laser-focused on doing the right thing at any cost. This is the situation that morality (as opposed to axiology) is designed for, your judgment ("I guess they're not a bad person") is the judgment that morality encourages you to give, and this shows the system working as designed, ie meeting its own low standards.

And then the legal commandment is merely "don't outright lie under oath or during formal police interrogations" - which (impressively) is probably *still* too strong, in that we all hear about the police being able to imprison basically whoever they want by noticing small lies committed by accident or under stress.

The "wizard's oath" feels like an attempt to subject one's self to a stricter moral law than usual, while still falling far short of the demands of axiology.

Comment by Scott Alexander (Yvain) on Maybe Lying Doesn't Exist · 2019-12-25T19:19:02.651Z · LW · GW

EDIT: Want to talk to you further before I try to explain my understanding of your previous work on this, will rewrite this later.

The short version is I understand we disagree, I understand you have a sophisticated position, but I can't figure out where we start differing and so I don't know what to do other than vomit out my entire philosophy of language and hope that you're able to point to the part you don't like. I understand that may be condescending to you and I'm sorry.

I absolutely deny I am "motivatedly playing dumb" and I enter this into the record as further evidence that we shouldn't redefine language to encode a claim that we are good at ferreting out other people's secret motivations.

Comment by Scott Alexander (Yvain) on Maybe Lying Doesn't Exist · 2019-12-25T19:11:48.645Z · LW · GW

I say "strategic" because it is serving that strategic purpose in a debate, not as a statement of intent. This use is similar to discussion of, eg, an evolutionary strategy of short life histories, which doesn't imply the short-life history creature understands or intends anything it's doing.

It sounds like normal usage might be our crux. Would you agree with this? IE that if most people in most situations would interpret my definition as normal usage and yours as a redefinition project, we should use mine, and vice versa for yours?

Comment by Scott Alexander (Yvain) on Maybe Lying Doesn't Exist · 2019-12-22T01:38:26.408Z · LW · GW

Sorry it's taken this long for me to reply to this.

"Appeal to consequences" is only a fallacy in reasoning about factual states of the world. In most cases, appealing to consequences is the right action.

For example, if you want to build a house on a cliff, and I say "you shouldn't do that, it might fall down", that's an appeal to consequences, but it's completely valid.

Or to give another example, suppose we are designing a programming language. You recommend, for whatever excellent logical reason, that all lines must end with a semicolon. I argue that many people will forget semicolons, and then their program will crash. Again, appeal to consequences, but again it's completely valid.
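The semicolon example can be made concrete with a toy sketch (purely hypothetical; this is not any real language's parser): a checker that flags statements missing their terminator, standing in for the crashes users of such a language would hit.

```python
def count_missing_semicolons(source):
    """Toy linter for a hypothetical semicolon-terminated language:
    every non-empty line must end with ';'. Returns the offending line numbers."""
    missing = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        stripped = line.strip()
        if stripped and not stripped.endswith(";"):
            missing.append(lineno)
    return missing

program = "x = 1;\ny = 2\nprint(x + y);"
print(count_missing_semicolons(program))  # line 2 forgot its semicolon
```

However logically elegant mandatory semicolons might be, a design review that counts how often real users trip this check is appealing to consequences, and validly so.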

I think of language, following Eliezer's definitions sequence, as being a human-made project intended to help people understand each other. It draws on the structure of reality, but has many free variables, so that the structure of reality doesn't constrain it completely. This forces us to make decisions, and since these are not about factual states of the world (eg what the definition of "lie" REALLY is, in God's dictionary) we have nothing to make those decisions on except consequences. If a certain definition will result in lots of people misunderstanding each other, bad people having an easier time confusing others, good communication failing to occur, or other bad things, then it's fine to decide against it on those grounds, just as you can decide against a programming language decision on the grounds that it will make programs written in it more likely to crash, or require more memory, etc.

I am not sure I get your point about the symmetry of strategic equivocation. I feel like this equivocation relies on using a definition contrary to its common connotations. So if I were allowed to redefine "murderer" to mean "someone who drinks Coke", then I could equivocate "Alice is a murderer (based on the definition where she drinks Coke)" and "Murderers should be punished (based on the definition where they kill people)" and combine them to get "Alice should be punished". The problem isn't that you can equivocate between any two definitions; the problem arises very specifically when we use a definition counter to the way most people traditionally use it. I think (do you disagree?) that most people interpret "liar" to mean an intentional liar. As such, I'm not sure I understand the relevance of the Ruby's coworkers example.

I think you're making too hard a divide between the "Hobbesian dystopia" where people misuse language, versus a hypothetical utopia of good actors. I think of misusing language as a difficult thing to avoid, something all of us (including rationalists, and even including me) will probably do by accident pretty often. As you point out regarding deception, many people who equivocate aren't doing so deliberately. Even in a great community of people who try to use language well, these problems are going to come up. And so just as in the programming language example, I would like to have a language that fails gracefully and doesn't cause a disaster when a mistake gets made, one that works with my fallibility rather than naturally leading to disaster when anyone gets something wrong.

And I think I object-level disagree with you about the psychology of deception. I'm interpreting you (maybe unfairly, but then I can't figure out what the fair interpretation is) as saying that people very rarely lie intentionally, or that this rarely matters. This seems wrong to me - for example, guilty criminals who say they're innocent seem to be lying, and there seem to be lots of these, and it's a pretty socially important thing. I try pretty hard not to intentionally lie, but I can think of one time I failed (I'm not claiming I've only ever lied once in my life, just that this time comes to mind as something I remember and am particularly ashamed about). And even if lying never happened, I still think it would be worth having the word for it, the same way we have a word for "God" that atheists don't just repurpose to mean "whoever the most powerful actor in their local environment is."

Stepping back, we have two short words ("lie" and "not a lie") to describe three states of the world (intentional deception, unintentional deception, complete honesty). I'm proposing to group these (1)(2,3) mostly on the grounds that this is how the average person uses the terms, and if we depart from how the average person uses the terms, we're inviting a lot of confusion, both in terms of honest misunderstandings and malicious deliberate equivocation. I understand Jessica wants to group them (1,2)(3), but I still don't feel like I really understand her reasoning except that she thinks unintentional deception is very bad. I agree it is very bad, but we already have the word "bias" and are so in agreement about its badness that we have a whole blog and community about overcoming it.

Comment by Scott Alexander (Yvain) on Free Speech and Triskaidekaphobic Calculators: A Reply to Hubinger on the Relevance of Public Online Discussion to Existential Risk · 2019-12-21T21:26:50.250Z · LW · GW

Maybe I'm misunderstanding you, but I'm not getting why having the ability to discuss something requires actually discussing it. Compare two ways to build a triskaidekaphobic calculator.

1. You build a normal calculator correctly, and at the end you add a line of code IF ANSWER == 13, PRINT: "ERROR: IT WOULD BE IMPOLITE OF ME TO DISCUSS THIS PARTICULAR QUESTION".

2. You somehow invent a new form of mathematics that "naturally" never comes up with the number 13, and implement it so perfectly that a naive observer examining the calculator code would never be able to tell which number you were trying to avoid.

Imagine some people who were trying to take the cosines of various angles. If they used method (1), they would have no problem, since cosines are never 13. If they used method (2), it's hard for me to imagine exactly how this would work but probably they would have a lot of problems.
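Method (1) can be sketched in a few lines of Python (a toy illustration, reusing the error string from above):

```python
import math

def taboo_calculator(fn, *args):
    # Method (1): do the math completely normally, then censor
    # only the final display step if the answer is the taboo number.
    answer = fn(*args)
    if answer == 13:
        return "ERROR: IT WOULD BE IMPOLITE OF ME TO DISCUSS THIS PARTICULAR QUESTION"
    return answer

# Cosines are never 13, so these users never even see the error branch.
print(taboo_calculator(math.cos, 0.0))  # 1.0
```

Note that an observer reading the code can tell exactly which number is being avoided, but the underlying math is untouched; method (2), by contrast, would require corrupting the math itself.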

It sounds like the proposal you're arguing against (and which I want to argue for) - not talking about taboo political issues on LW - is basically (1). We discuss whatever we want, we use logic which (we hope) would output the correct (taboo) answer on controversial questions, but if for some reason those questions come up (which they shouldn't, because they're pretty different from AI-related questions), we instead don't talk about them. If for some reason they're really relevant to some really important issue at some point, then we take the hit for that issue only, with lots of consultation first to make sure we're not stuck in the Unilateralist's Curse.

This seems like the right answer even in the metaphor - if people burned down calculator factories whenever any of their calculators displayed "13", and the sorts of problems people used calculators for almost never involved 13, just have the calculator display an error message at that number.

(This is compatible with doing other activism and waterline-raising work to deal with the fact that your society is insane, but that work isn't going to look like having your calculators display 13 and dying when your factory burns down.)

Comment by Scott Alexander (Yvain) on Will AI See Sudden Progress? · 2019-12-20T21:56:36.897Z · LW · GW

This project (best read in the bolded link, not just in this post) seemed and still seems really valuable to me. My intuitions around "Might AI have discontinuous progress?" become a lot clearer once I see Katja framing them in terms of concrete questions like "How many past technologies had discontinuities equal to ten years of past progress?". I understand AI Impacts is working on an updated version of this, which I'm looking forward to.

Comment by Scott Alexander (Yvain) on Noticing the Taste of Lotus · 2019-12-20T21:53:12.513Z · LW · GW

I was surprised that this post ever seemed surprising, which either means it wasn't revolutionary, or was *very* revolutionary. Since it has 229 karma, seems like it was the latter. I feel like the same post today would have been written with more explicit references to reinforcement learning, reward, addiction, and dopamine. The overall thesis seems to be that you can get a felt sense for these things, which would be surprising - isn't it the same kind of reward-seeking all the way down, including on things that are genuinely valuable? Not sure how to model this.

Comment by Scott Alexander (Yvain) on The Bat and Ball Problem Revisited · 2019-12-20T21:49:31.895Z · LW · GW

It's nice to see such an in-depth analysis of the CRT questions. I don't really share drossbucket's intuition - for me the 100 widget question feels counterintuitive the same way as the ball and bat question, but neither feels really aversive, so it was hard for me to appreciate the feelings that generated this post. But this gives a good example of an idea of "training mathematical intuitions" I hadn't thought about before.