Posts

Formalising decision theory is hard 2019-08-23T03:27:24.757Z
Quantifying anthropic effects on the Fermi paradox 2019-02-15T10:51:04.298Z

Comments

Comment by lanrian on Snyder-Beattie, Sandberg, Drexler & Bonsall (2020): The Timing of Evolutionary Transitions Suggests Intelligent Life Is Rare · 2020-11-25T16:02:05.169Z · LW · GW

The paper lists "intelligence" as a potentially hard step, which is of extra interest for estimating AI timelines. However, I find all the convergent evolution described in section 5 of this paper (or described more briefly in this blogpost) to be pretty convincing evidence that intelligence was quite likely to emerge after our first common ancestor with octopuses ~800 mya; and as far as I can tell, this paper doesn't contradict that.

Comment by lanrian on Snyder-Beattie, Sandberg, Drexler & Bonsall (2020): The Timing of Evolutionary Transitions Suggests Intelligent Life Is Rare · 2020-11-25T12:02:14.129Z · LW · GW

We're not licensed to ignore it, and in fact such an update should be done. Ignoring that update represents an implicit assumption that our prior over "how habitable are long-lived planets?" is so weak that the update wouldn't have a big effect on our posterior. In other words, if the beliefs "long-lived planets are habitable" and "Z is much bigger than Y" are contradictory, we should decrease our confidence in both; but if we're much more confident in the latter than the former, we mostly decrease the probability mass we place on the former.

Of course, maybe this could flip around if we get overwhelmingly strong evidence that long-lived planets are habitable. And that's the Popperian point of making the prediction: if it's wrong, the theory making the prediction (ie "Z is much bigger than Y") is (to some extent) falsified.
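
As a toy illustration of that last point (all numbers made up): start out 95% confident that "Z is much bigger than Y" and 50% confident that "long-lived planets are habitable", treat the two as independent, and then learn that they can't both be true. Almost all of the lost probability mass comes off the less-confident belief:

```python
p_z = 0.95  # prior: "Z is much bigger than Y"
p_h = 0.50  # prior: "long-lived planets are habitable"

# Condition on "not both true": renormalize over the three remaining combinations.
normalizer = 1 - p_z * p_h
posterior_z = p_z * (1 - p_h) / normalizer  # ~0.90 – barely moves
posterior_h = (1 - p_z) * p_h / normalizer  # ~0.05 – drops a lot
print(posterior_z, posterior_h)
```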

Comment by lanrian on Embedded Interactive Predictions on LessWrong · 2020-11-21T09:02:15.588Z · LW · GW

Very cool, looking forward to using this!

How does this work with the Alignment Forum? It would be amazing if AFers' predictions were tracked on AF, and all LWers' predictions were tracked in the LW mirror.

Comment by lanrian on Draft report on AI timelines · 2020-11-09T10:12:03.496Z · LW · GW

I implemented the model for 2020 compute requirements in Guesstimate here. It doesn't do anything that the notebook can't do (and it can't do the update against currently affordable compute), but I find the graphical structure very helpful for understanding how it works (especially with arrows turned on in the "View" menu).

Comment by lanrian on Rationalist Town Hall: Pandemic Edition · 2020-10-29T18:43:18.179Z · LW · GW

Whoa, that's surprisingly specific! How do we know it's shorter than 12 months? Do we know of many cases of reinfection?

Comment by lanrian on "Scaling Laws for Autoregressive Generative Modeling", Henighan et al 2020 {OA} · 2020-10-29T13:43:20.237Z · LW · GW

In this case, it seems especially important whether the purported irreducible entropy is below human-level performance (in which case sufficiently scaled models would outperform humans, if the scaling laws hold up) or above human-level (in which case the constant loss isn't irreducible at all, but betrays some limits of the models).

Comment by lanrian on Why indoor lighting is hard to get right and how to fix it · 2020-10-28T19:23:05.914Z · LW · GW

Spectrum and intensity that change continuously throughout the day. There are lamps that do this, and they seem especially pleasant to me for waking up in the morning, but I think this is particularly hard to get right and less important than other things.

My current setup uses a single 800-lumen LIFX bulb for continuously changing light in the morning and evening, and 24 normal bulbs to get high amounts of light during the day (which automatically turn on/off with a socket timer). I think that captures the main benefits of continuous lights, without needing more than one special bulb.

Comment by lanrian on The Darwin Game · 2020-10-20T23:03:47.624Z · LW · GW

Using newlines to figure out what happens after "payload" is fine, as far as I can tell. Multicore's exploit relies on newlines being used when comparing stuff before the payload.

Stuff like CRLF vs LF is a bit awkward, but can maybe be handled explicitly?
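
For concreteness, here's a minimal sketch (not the actual tournament code) of how Python's splitlines can disagree with a naive split on "\n", which is the kind of mismatch an exploit can hide in:

```python
# A string that is one "line" to split('\n') but two lines to splitlines():
payload = "if True:\u2028 pass"   # \u2028 is the Unicode LINE SEPARATOR
print(payload.split('\n'))        # ['if True:\u2028 pass'] – one element
print(payload.splitlines())       # ['if True:', ' pass']   – two elements

# CRLF endings cause a milder version of the same mismatch:
code = "a = 1\r\nb = 2"
print(code.split('\n'))           # ['a = 1\r', 'b = 2'] – stray '\r' left behind
print(code.splitlines())          # ['a = 1', 'b = 2']
```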

Comment by lanrian on The Darwin Game · 2020-10-20T18:21:10.782Z · LW · GW

Yeah, if we'd seen the issue, I think we could've gotten around it just by not using splitlines, which would've been smoother.

Though of course, this exploit updates me towards thinking that there are other vulnerabilities as well.

Comment by lanrian on The Darwin Game · 2020-10-19T22:17:36.862Z · LW · GW

Damn, good job. We should've gone with my suggestion that the whole payload needed to fit on one line, with statements separated by ";" (though maybe this would've caused us to lose so many clique-members out of annoyance that it wouldn't have been worth it).

Comment by lanrian on The Darwin Game · 2020-10-19T21:15:05.724Z · LW · GW

I stand by my reasoning! As long as we don't yield to bullying, simulators are our friends, ensuring that the maximum payout is always paid out.

Comment by lanrian on The Darwin Game · 2020-10-19T21:14:08.853Z · LW · GW

I didn't think about reporting the bug as making a sub-optimal but ethical choice – I just wanted to be part of a clique that worked instead of a clique where people defected. My aversion to lying might have affected my intuitions about what the correct choice was, though, idk ¯\_(ツ)_/¯

Comment by lanrian on The Darwin Game · 2020-10-19T21:08:39.227Z · LW · GW

I believed all the lies! And I might've submitted a simulator if you hadn't told the first one, and would definitely have tried harder to simulator-proof my bot, so you did change my behaviour. Leaving the clique wouldn't have been worth it, though. Even knowing that you lied about the 2nd thing, I assign decent probability to someone crashing all the simulators outside the clique. (I think this is incorrect, though – if you can figure out that you're in a simulation, it's way better to claim that you'll be submitting 3 to scare the simulator into playing 2.)

Comment by lanrian on The Darwin Game · 2020-10-17T08:22:37.980Z · LW · GW

What timezone is the deadline in? Or to be maximally precise – can you give a final submission hour in UTC?

Comment by lanrian on Covid 10/1: The Long Haul · 2020-10-11T08:06:34.997Z · LW · GW

What about as an upper bound? I'm having a harder time generating confounders that make this an underestimate.

Comment by lanrian on Weird Things About Money · 2020-10-05T10:45:46.515Z · LW · GW

Thanks, that way of deriving it makes sense! The point about free trade also seems right. With free trade, EV bettors will buy all risk from Kelly bettors until the former are gone with high probability.

So my point only applies to bettors that can't trade. Basically, in almost every market, the majority of resources are controlled by Kelly bettors; but across all markets in the multiverse, the majority of resources are controlled by EV bettors, because they make bets such that they dominate the markets which contain most of the multiverse's resources.

(Or if there's no sufficiently large multiverse, Kelly bettors will dominate with arbitrarily high probability; but EV bettors will (tautologically) still get the most expected money.)

Comment by lanrian on Weird Things About Money · 2020-10-04T20:58:28.393Z · LW · GW

The point with having a large number of bettors is to assume that they all get independent sources of randomness, so at least some will win all their bets. Handwavy math follows:

Assume that we have n EV bettors and n Kelly bettors (each starting with $1), and that they're presented with a string of bets with 0.75 probability of doubling any money they risk. The EV bettors will bet everything at each time-step, while the Kelly bettors will bet half at each time-step. For any timestep t, there will be an n such that approximately a fraction 0.75^t of the EV bettors have won all their bets (by the law of large numbers), for a total earning of 0.75^t * n * 2^t = 1.5^t * n. Meanwhile, each Kelly bettor will in expectation multiply their earnings by 1.25 each time-step, and so in expectation have 1.25^t after t timesteps. By the law of large numbers, for a sufficiently large n they will in aggregate have approximately 1.25^t * n. Since 1.5^t > 1.25^t, the EV-maximizers will have more money, and we can get an arbitrarily high probability with an arbitrarily large n.
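
Here's a quick Monte Carlo sketch of this argument (my own code, using the same toy parameters as above):

```python
import random

def simulate(n=100_000, t=20, p=0.75, seed=0):
    """Each bet doubles whatever is wagered with probability p. EV bettors wager
    everything each step, Kelly bettors wager half; everyone starts with $1."""
    rng = random.Random(seed)
    ev_total = kelly_total = 0.0
    for _ in range(n):
        ev = kelly = 1.0
        for _ in range(t):
            ev = ev * 2 if rng.random() < p else 0.0    # all-in
            kelly *= 1.5 if rng.random() < p else 0.5   # bet half
        ev_total += ev
        kelly_total += kelly
    return ev_total, kelly_total

ev_total, kelly_total = simulate()
print(f"EV bettors' aggregate:    {ev_total:.3g}")    # ~ n * 1.5^t  ≈ 3.3e8
print(f"Kelly bettors' aggregate: {kelly_total:.3g}")  # ~ n * 1.25^t ≈ 8.7e6
```

With these parameters, only roughly 0.75^20 * n ≈ 300 of the EV bettors survive all 20 bets, but each survivor has $2^20, which is enough to beat the Kelly bettors' aggregate.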

Comment by lanrian on Weird Things About Money · 2020-10-04T19:07:51.049Z · LW · GW

Money-pump arguments, on the other hand, can establish this from other assumptions.

Can you say more about this? Stuart's arguments weren't that convincing to me, absent other assumptions. In particular, it seems like the existence of a contract that exactly cancels out your own contract could increase the value of your own contract; and that there's no guarantee that such a contract exists (or can be made without exposing anyone else to risk that they don't want). Stuart seems to acknowledge this in other parts of the comments, instead referring to the possibility of aggregation.

From this, I'm guessing that you need to assume that the risk is almost independent of the total market value (e.g. because it's small in comparison with the total market value, and independent of all other sources of risk), and there exists an arbitrarily large number of traders whose utility is linear in small amounts of money (that you can spread out the risk between). Are these the necessary assumptions to establish linearity of utility in money?

Comment by lanrian on Weird Things About Money · 2020-10-04T18:49:04.942Z · LW · GW

However, it is a theorem that a diverse market would come to be dominated by Kelly bettors, as Kelly betting maximizes long-term growth rate. This means the previous counterpoint was wrong: expected-money bettors profit in expectation from selling insurance to Kelly bettors, but the Kelly bettors eventually dominate the market.

I haven't seen the theorem, so correct me if I'm wrong, but I'd guess it says that for any fixed number of bettors, there exists a time at which the Kelly bettors dominate the market with arbitrarily high probability. (Alternate phrasing: a market with a finite number of bettors would be dominated by Kelly bettors over infinite time.) But if we flip it around, we can also say that for any fixed time-horizon, there exists a number of bettors such that the EV-maximizers dominate the market throughout that time with arbitrarily high probability. (Alternate phrasing: a market with an infinite number of bettors would be dominated by EV-maximizers for any finite time.)

I don't see why we should necessarily prefer the first ordering of the quantifiers over the second.

Comment by lanrian on Postmortem to Petrov Day, 2020 · 2020-10-04T17:15:12.749Z · LW · GW

This was also mentioned in the comments of On Destroying the World.

Comment by lanrian on Postmortem to Petrov Day, 2020 · 2020-10-04T10:20:35.141Z · LW · GW

Two different formulations of the problem that Chris faced:

  1. Chris got a message saying that he had to enter the codes, or else the frontpage would be destroyed. He believed it, and thought that he had to enter the codes to save the frontpage. Arguably, if he had destroyed the frontpage by inaction (i.e., if the message had been real, and failing to enter the codes would've caused the destruction of the frontpage) he would have been far less chastised by the local culture than if he had destroyed the frontpage by action (i.e., what actually happened). In this case, is it more in the spirit of Petrov to take the action that your local culture will blame you the least for, or the action that you honestly think will save the frontpage?
  2. Chris got a message that he had to enter the codes, or else bad things would happen, just like Petrov got a warning that the US had launched nukes, and that the Russian military needed to be informed. The message wasn't real, and in fact, the decision with the least bad consequences was to ignore it. In this case, is it more in the spirit of Petrov to consider that a message might not be what it claims to be (and to accurately determine that it isn't), or to just believe it?

I don't know what the take-away is. Maybe we should celebrate Petrov's skepticism/perceptiveness more, and not just his willingness to not defer to superiors.

Comment by lanrian on Postmortem to Petrov Day, 2020 · 2020-10-04T09:36:43.664Z · LW · GW

This doesn't mean that it's a good idea to blow up the frontpage because it's more fun, or whatever. I think it's probably better to not blow up the frontpage, but the case for this is based on meta-level things like ~trust and ~culture, and I think you do need to go to that level to make a convincing consequentialist case for not blowing up the frontpage. The stakes just aren't high enough that the direct consequences dominate. (And it's hard to raise the stakes until that's false, because that would mean we're risking more than we stand to gain.)

Unfortunately, this makes the situation pretty disanalogous to Petrov's. Petrov defied the local culture (following orders) because he thought that reporting the alarm would have bad consequences. But in the LessWrong tradition, the direct consequences matter less than the effects on the local culture; and the reputational consequences point in the opposite direction, encouraging people to not press the button.

(Though from skimming the Wikipedia article, it's unclear exactly how much Petrov's reputation suffered. It seems like he was initially praised, then reprimanded for not filing the correct paperwork. He's been quoted both as saying that he wasn't punished, and as saying that he was made a scapegoat.)

Comment by lanrian on Postmortem to Petrov Day, 2020 · 2020-10-04T08:51:58.224Z · LW · GW

Yeah, I'm not sure if the net effect of the blown-up frontpage is positive or negative for me, but it's definitely dominated by enjoyability/learning from posts about it, rather than the inability to see the frontpage for a day. (Very similar to how the value of a game is dominated by the value of the time spent thinking about it, while playing.) I didn't even try to access the frontpage on the relevant day, but I don't think it would have been much of an inconvenience, anyway; I could have just gone to the EA forum or my RSS feed for similar content (or to greaterwrong for the same content, if that was still up).

Comment by lanrian on Covid 10/1: The Long Haul · 2020-10-02T08:41:11.259Z · LW · GW

From the article you linked:

The epidemiologist said analysis of the app data – from 3.6m UK users – found 12 per cent have symptoms longer than 30 days, and one in 200 for more than 90 days.

Having symptoms after 90 days seems like a far better measure of long-term problems than having symptoms after 30 days, and 0.5% doesn't sound crazy as an estimate for long-term problems. They say they haven't accounted for sampling bias, though, which makes me doubt the methodology overall, as sampling bias could be huge over 90-day timespans.

Comment by lanrian on The strategy-stealing assumption · 2020-09-14T21:49:35.417Z · LW · GW

My impression of commitment races and logical time is that the amount of computation we use in general doesn't matter; but that things we learn that are relevant to the acausal bargaining problems do matter. Concretely, using computation during a competitive period to e.g. figure out better hardware cooling systems should be innocuous, because it matters very little for bargaining with other civilisations. However, thinking about agents in other worlds, and how to best bargain with them, would be a big step forward in logical time. This would mean that it's fine to put off acausal decisions however long we want to, assuming that we don't learn anything that's relevant to them in the meantime.

More speculatively, this raises the issue of whether some things in the competitive period would be relevant for acausal bargaining. For example, causal bargaining with AIs on Earth could teach us something about acausal bargaining. If so, the competitive period would advance us in logical time. If we thought this was bad (which is definitely not obvious), maybe we could prevent it by making the competitive AI refuse to bargain with other worlds, and precommitting to eventually replacing it with a naive AI that hasn't updated on anything that the competitive AI has learned. The naive AI would be as early in logical time as we were when we coded it, so it would be as if the competitive period never happened.

Comment by lanrian on How to teach things well · 2020-08-31T18:35:27.892Z · LW · GW

There is some research on knowledge graphs as a data-structure, and as a tool in AI. Wikipedia and a bunch of references.

Comment by lanrian on Covid 8/27: The Fall of the CDC · 2020-08-30T08:37:30.287Z · LW · GW

The statement is in the first 20 seconds of this video: https://twitter.com/US_FDA/status/1297662384060981248

Comment by lanrian on Do you vote based on what you think total karma should be? · 2020-08-26T19:46:04.996Z · LW · GW

I could see this argument going the other way. If a post is loved by 45% of people, and meh to 55% of people, then if everyone uses target karma, the meh voters will downvote it to a meh position. As you say, the final karma will become people's median opinion; and the median opinion does not highlight things that minorities love.

However, if everyone votes solely based on their opinion, 45% will upvote the comment, and 55% won't vote at all. That means that it will end up in an overall quite favorable spot, as long as most comments are upvoted by less than half of readers.

I think both systems would have to rely on some people not always voting on everything. The non-TK system relies on there being large variability in how prone people are to voting (which I think exists; beware the typical mind fallacy... maybe another poll on how often people vote?), whereas the TK system relies on people abstaining if they're uncertain about how valuable something is to other people.
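
A toy simulation of the difference (my own sketch; the 45%/55% split and the target values are made up):

```python
import random

def settle_target_karma(targets, n_votes=10_000, seed=0):
    """Target-karma voting: each voter nudges the score toward their preferred total."""
    rng = random.Random(seed)
    karma = 0
    for _ in range(n_votes):
        target = rng.choice(targets)
        karma += 1 if target > karma else (-1 if target < karma else 0)
    return karma

# 45% love the post (would like it to sit at 50 karma), 55% think it's meh (5 karma).
targets = [50] * 45 + [5] * 55
print(settle_target_karma(targets))        # settles just above the meh target (~5–6), i.e. the median opinion
print(sum(1 for t in targets if t == 50))  # upvote-only score: 45, a quite favorable spot
```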

Comment by lanrian on How good is humanity at coordination? · 2020-07-23T13:37:31.431Z · LW · GW

Using examples is neat. I'd characterize the problem as follows (though the numbers are not actually representative of my beliefs, I think it's way less likely that everybody dies). Prior:

  • 50%: Humans are relatively more competent (hypothesis C). The probability that everyone dies is 10%, the probability that 5% survive is 20%, the probability that everyone survives is 70%.
  • 50%: Humans are relatively less competent. The probability that everyone survives is 10%, the probability that only 5% survive is 20%, the probability that everyone dies is 70%.

Assume we are in a finite multiverse (which is probably false) and take our reference class to only include people alive in the current year (whether the nuclear war happened or not). (SIA doesn't care about reference classes, but SSA does.) Then:

  • SSA thinks
    • Notice we're in a world where everyone survived (as opposed to only 5%) ->
      • if C is true, the probability of this is 0.7/(0.7+0.2*0.05)=70/71
      • if C isn't true, the probability of this is 0.1/(0.1+0.2*0.05)=10/11
      • Thus, the odds ratio is 70/71:10/11.
    • Our prior being 1:1, the resulting probability is ~52% that C is true.
  • SIA thinks
    • Notice we're alive ->
      • the world where C is true contains (0.7+0.2*0.05)/(0.1+0.2*0.05)=0.71/0.11 times as many people, so the update is 71:11 in favor of C.
    • Notice we're in a world where everyone survived (as opposed to only 5%).
      • The odds ratio is 70/71:10/11, as earlier.
    • So the posterior odds ratio is (71:11) x (70/71:10/11)=70:10, corresponding to a probability of 87.5% that C is true.
    • Note that we could have done this faster by not separating it into two separate updates. The world where C is true contains 70/10 times as many people as the world where C is false, which is exactly the posterior odds. This is what I meant when I said that the updates balance out, and this is why SIA doesn't care about the reference classes.

Note that we only care about the number of people surviving after a nuclear accident because we've included them in SSA's reference class. But I don't know why people would want to include exactly those people in the reference class, and nobody else. If we include every human who has ever been alive, we have a large number of people alive regardless of whether C is true or not, which makes SSA give predictions relatively similar to SIA's. If we include a huge number of non-humans whose existence isn't affected by whether C is true or not, SSA is practically identical to SIA. This arbitrariness of the reference class is another reason to be sceptical of any argument that uses SSA (and to be sceptical of SSA itself).
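
For concreteness, the two updates above can be reproduced with a few lines of arithmetic (same toy numbers as in the list):

```python
# Toy numbers from the list above; C = "humans are relatively more competent".
surviving_fraction = {"all survive": 1.0, "5% survive": 0.05, "all die": 0.0}
p_C    = {"all survive": 0.7, "5% survive": 0.2, "all die": 0.1}
p_notC = {"all survive": 0.1, "5% survive": 0.2, "all die": 0.7}

def observers(p):
    # Expected (relative) number of present-day observers under a hypothesis.
    return sum(p[w] * frac for w, frac in surviving_fraction.items())

def p_all_survived_given_alive(p):
    # P(we're in the everyone-survived world | we exist), within one hypothesis.
    return p["all survive"] * 1.0 / observers(p)

# SSA: update only on which world we find ourselves in, given that we exist.
ssa_odds = p_all_survived_given_alive(p_C) / p_all_survived_given_alive(p_notC)
print("SSA P(C) =", ssa_odds / (1 + ssa_odds))  # ~0.52

# SIA: additionally weight each hypothesis by how many observers it contains.
sia_odds = ssa_odds * observers(p_C) / observers(p_notC)
print("SIA P(C) =", sia_odds / (1 + sia_odds))  # 0.875
```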

Comment by lanrian on How good is humanity at coordination? · 2020-07-22T21:47:44.173Z · LW · GW

There's probably some misunderstanding, but I'm not immediately spotting it when rereading. You wrote:

Seems like it's "much weaker" evidence [[for X]] if you buy something like SIA, and only a little weaker evidence if you buy something like SSA.

Going by the parent comment, I'm interpreting this as

  • it = "we didn't observe nukes going off"
  • X = "humans are competent at handling dangerous technology"

I think that

  • SIA thinks that "we didn't observe nukes going off" is relatively stronger evidence for "humans are competent at handling dangerous technology" (because SIA ignores observer selection effects, and updates naively).
  • SSA thinks that "we didn't observe nukes going off" is relatively weaker evidence for "humans are competent at handling dangerous technology" (because SSA doesn't update against hypothesis which would kill everyone).

Which seems to contradict what you wrote?

Comment by lanrian on How good is humanity at coordination? · 2020-07-22T16:11:41.817Z · LW · GW

Hm, interesting. This suggests that, if we're in a simulation, nuclear war is relatively more likely. However, all such simulations are likely to be short-lived, so if we're in a simulation, we shouldn't care about preventing nuclear war for longtermist reasons (only for short-termist ones). And if we think we're sufficiently likely to be outside a simulation for long-term concerns to dominate short-termist ones (obligatory reference), then we should just condition on not being in a simulation, and then I think this point doesn't matter.

Comment by lanrian on How good is humanity at coordination? · 2020-07-22T15:59:23.264Z · LW · GW

This argument sounds like it's SSA-ish (it certainly doesn't work for SIA). I haven't personally looked into this, but I think Anders Sandberg uses SSA for his analysis in this podcast, where he claims that taking observer selection effects into account changes the estimated risk of nuclear war by less than a factor of 2 (search for "not even twice"), because of some mathy details making use of near-miss statistics. So if one is willing to trust Anders to be right about this (I don't think the argument is written up anywhere yet?), observer selection effects wouldn't matter much regardless of your anthropics.

Comment by lanrian on How good is humanity at coordination? · 2020-07-22T15:41:53.311Z · LW · GW

Disagree. SIA always updates towards hypotheses that allow more people to exist (the Self Indication Assumption is that your own existence as an observer indicates that there are more observers), which makes for an update that nuclear war is rare, since there will exist more people in the multiverse if nuclear accidents are rare. This exactly balances out the claim about selection effects – so SIA corresponds to the naive update-rule which says that world-destroying activities must be rare, since we haven't seen them. The argument about observer selection effects only comes from SSA-ish theories.

Note that, in anthropic dilemmas, total consequentialist ethics + UDT makes the same decisions as SIA + CDT, as explained by Stuart Armstrong here. This makes me think that total consequentialists shouldn't care about observer selection effects.

This is complicated by the fact that infinities break both anthropic theories and ethical theories. UDASSA might solve this. In practice, I think UDASSA behaves a bit like a combination of SSA and SIA, but somewhat closer to SIA, though I haven't thought a lot about this.

Comment by lanrian on The New Frontpage Design & Opening Tag Creation! · 2020-07-09T08:21:45.297Z · LW · GW
Here <INSERT LINK TO TAG GUIDELINES> are some rough tag-guidelines about what makes a good tag.

Is there no guideline yet? Or is there supposed to be a link here?

Comment by lanrian on (answered: yes) Has anyone written up a consideration of Downs's "Paradox of Voting" from the perspective of MIRI-ish decision theories (UDT, FDT, or even just EDT)? · 2020-07-08T19:13:01.470Z · LW · GW

As an aside, for really large populations, it would probably be socially optimal to only have a small fraction of the population voting (at least if we ignore things like legitimacy, feeling of participation, etc). As long as that fraction is randomly sampled, you could get good statistical guarantees that the outcome of the election would be the same as if everyone voted. South Korea did a pretty cool experiment where they exposed a representative sample of 500 people to pro- and anti-nuclear experts, and then let them decide how much nuclear power the country should have.

I don't think this is why CDTers refuse to vote, though.

Comment by lanrian on (answered: yes) Has anyone written up a consideration of Downs's "Paradox of Voting" from the perspective of MIRI-ish decision theories (UDT, FDT, or even just EDT)? · 2020-07-08T18:56:49.835Z · LW · GW

It was especially cool in that it said that even altruist CDTers can’t account for the rationality of voting in sufficiently large elections.

That's pretty surprising. I checked out the page, and he unfortunately doesn't motivate what kind of model he's using, so it's hard to verify. From the book:

If the importance of the election is presumed proportionate to the size of the electorate, then for large enough elections, expected-utility calculations cannot justify the effort of voting by appeal to the small but heavily weighted possibility that your vote will be a tiebreaker. The odds of that outcome decrease faster than linearly with the number of voters, so the expected value of your vote as a tiebreaker approaches zero— even taking account of the value to everyone combined, not just yourself. Given enough voters, then, the causal value (even to everyone) of your vote is overshadowed by the inconvenience to you of going out to vote."

In an election with two choices, in a model where everybody has 50% chance of voting for either side, I don't think the claim is true. Maybe he's assuming that the outcomes of elections become easier to predict as they grow larger, because individual variability becomes less important? If everyone has a 51% probability of voting for a certain side, the election would be pretty much guaranteed for an arbitrarily large population, in which case a CDTer wouldn't have any reason to vote (even if there was a coalition of CDTers who could swing the election). I'm not sure if it's true that elections in larger countries are more predictable, though.
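
A quick numeric check of that (my own toy model, not the book's): the probability that your vote breaks an exact tie, weighted by the size of the electorate, doesn't shrink at all in the 50/50 model, but collapses as soon as one side has a predictable 51% edge.

```python
from math import lgamma, exp, log

def p_pivotal(n_other_voters, p):
    """P(the other voters split exactly evenly), i.e. your vote decides the outcome.
    Computed in log-space because the binomial pmf underflows for large n."""
    if n_other_voters % 2:
        return 0.0
    k = n_other_voters // 2
    log_pmf = lgamma(n_other_voters + 1) - 2 * lgamma(k + 1) + k * (log(p) + log(1 - p))
    return exp(log_pmf)

for n in (10**3, 10**5, 10**7):
    # Weight by n, per "importance proportionate to the size of the electorate".
    print(n, n * p_pivotal(n, 0.50), n * p_pivotal(n, 0.51))
```

With p = 0.5 the weighted value actually grows like √n, which is why the book's claim seems to need some assumption along the lines of elections becoming more predictable as they grow.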

Comment by lanrian on Could someone please start a bright home lighting company? · 2020-05-04T22:10:28.418Z · LW · GW

Check out our product, it’s a far-cry from the $400 you mentioned but many times less expensive and thinner than coelux.

How expensive is it? Maybe there's some reason that you don't want an exact price in the comparison, but can you give a rough range?

Comment by lanrian on How does iterated amplification exceed human abilities? · 2020-05-03T17:23:21.073Z · LW · GW

If you picked the median human by mathematical ability, and put them in this setup, I would be rather surprised if they produced a valid proof of Fermat's last theorem.

I would too. IDA/HCH doesn't have to work with the median human, though. It's ok to pick an excellent human, who has been trained for being in that situation. Paul has argued that it wouldn't be that surprising if some humans could be arbitrarily competent in an HCH-setup, even if some couldn't.

Comment by lanrian on The Sandwich Argument · 2020-04-09T08:31:56.441Z · LW · GW

The research of Philip Tetlock shows that forecasters achieve better Brier scores when they exaggerate their confidence.

They showed that it's good to extremise the predictions of teams, when combining predictions that agree with each other, but I don't think that individual forecasters were systematically underconfident.

Comment by lanrian on Implications of the Doomsday Argument for x-risk reduction · 2020-04-03T08:59:48.831Z · LW · GW

If ancestor simulations are one of the main uses of cosmic resources, we probably will go extinct soon (somewhat depending on how you define extinction), because we're probably in an ancestor simulation that will be turned off. If the simulators were to keep us alive for billions of years, it would be pretty unlikely that we didn't find ourselves living in those billions of years, by the same logic as the doomsday argument.

Comment by lanrian on April Coronavirus Open Thread · 2020-03-31T22:17:58.053Z · LW · GW

Link to paper, the relevant figure is on page 12.

Comment by lanrian on March Coronavirus Open Thread · 2020-03-26T17:14:51.785Z · LW · GW

That article is based on a twitter thread that is based on this article that is based on the parliamentary hearing that Wei Dai linked. The twitter thread distorted the article a lot, and seems to be mostly speculation.

Comment by lanrian on Authorities and Amateurs · 2020-03-25T20:23:21.921Z · LW · GW

The gaussian assumption makes no difference.

As I said, I do agree that the piece's qualitative conclusion was correct. However, the Gaussian assumption does make a large quantitative difference. Comparing it to the extreme: if we always have the maximum number of people in ICUs, continuously, the time until herd immunity would be 4.9 years, which is a factor of 3 less than what the normal assumption gives you. Although that is still clearly too much, it's only one or two additional factors of 3 away from being reasonable. This extreme isn't even that unrealistic; something like it could plausibly be achieved if the government continuously alternated between more and less lock-down, keeping the average R close to 1.

To be clear, I think that it's good that the post was written, but I think it would have been substantially better if it had used a constant number of infected people. If you're going to use unrealistic models (which, in many cases, you should!) it's good practice to use the most conservative model possible, to ensure that your conclusion holds no matter what, and to let your reader see that it holds no matter what. In addition, it would have been simpler (you just have to multiply/divide 3 numbers), and it would have looked less like realistic-and-authoritative-math to the average reader, which would have better communicated its real epistemic status.
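
For what it's worth, the "multiply/divide 3 numbers" version of the constant-load calculation looks something like this (every input below is a placeholder guess of mine, not a figure from the post, so the output differs from the 4.9 years above):

```python
# All inputs are illustrative placeholders, not the post's figures.
people_to_infect  = 200e6   # rough herd-immunity threshold for a US-sized country
icu_fraction      = 0.05    # assumed share of infections needing ICU care
icu_days_per_case = 10      # assumed average ICU stay, in days
icu_beds          = 100e3   # assumed ICU beds available for COVID patients

icu_bed_days = people_to_infect * icu_fraction * icu_days_per_case
years_at_full_capacity = icu_bed_days / icu_beds / 365
print(round(years_at_full_capacity, 1))  # ~2.7 years with these made-up inputs
```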

Comment by lanrian on Authorities and Amateurs · 2020-03-25T11:38:52.045Z · LW · GW

Nostalgebraist posted a reasonable critique of Flattening The Curve Is a Deadly Delusion, explaining how it's incorrect to assume that flattening won't reduce the total number of cases, and that it doesn't make sense to assume a normal distribution. I think that the piece's main point was correct, though: that the curve would need to get crazy flat for hospitals to not be overloaded.

3 days later, the Imperial College study made the same point in a much better and more rigorous way (among lots of other good points), but it was less widely shared on social media.

I'm not sure what the conclusion here is. Non-experts will sometimes make false assumptions and get the details wrong, but they're still capable of making good points that only require you to multiply numbers together, and will do so a few days faster and in a way that's more memetically fit than papers from experts?

Comment by lanrian on Against Dog Ownership · 2020-03-23T14:19:54.850Z · LW · GW

This made me curious about where most people get their dogs from. Apparently, something like 34% are purchased from breeders and 23% are obtained from shelters, according to https://www.aspca.org/animal-homelessness/shelter-intake-and-surrender/pet-statistics (though their numbers don't add up to 100%, and I'm not sure why). Getting them from friends/relatives is also pretty common, at 20%.

Comment by lanrian on Are veterans more self-disciplined than non-veterans? · 2020-03-23T10:48:44.626Z · LW · GW

Is lack of discipline a big problem at an average workplace? I would expect most offices to provide sufficiently good social incentives that most people spend most of their time working, in which case any productivity-boon from increased discipline would be swamped by other (selection) effects from military training.

Increased discipline could translate to notably higher productivity for undergrads, PhD students, or people who work from home, though. My impression is that such people struggle much more with procrastination.

Relatedly, China seems to be doing their best to teach discipline in schools, so you could look at whether that seems to be working. This is obviously mixed in with lots of ongoing social incentives, though. Relevant ssc: https://slatestarcodex.com/2020/01/22/book-review-review-little-soldiers/

Comment by lanrian on What are good ways of convincing someone to rethink an impossible dream? · 2020-03-19T09:49:06.259Z · LW · GW

Why would you need to? Couldn't you just convince them that other people won't like their idea? In the second example above, there seems to be ample evidence that other people aren't interested.

Comment by lanrian on March Coronavirus Open Thread · 2020-03-16T21:12:42.331Z · LW · GW

Isn't this exactly what "flatten the curve" is about? Because a lot of people are talking about that as a solution, including some governments.

The main problem is that the curve needs to get really flat for hospitals to be able to treat everyone. Depending on how overwhelmed you want your hospitals to be, you could be in lock-down for several years. Some calculations in this article.

Comment by lanrian on A Significant Portion of COVID-19 Transmission Is Presymptomatic · 2020-03-14T10:38:17.124Z · LW · GW

Insofar as the virus mostly spreads through presymptomatic transmission in some countries, that's almost certainly because the people with symptoms are all isolated. Symptomatic people definitely spread the disease.

It'd be interesting to know whether the R of these populations was <1 at the time the studies were done, though. If so, presymptomatic transmission might be insufficient to sustain exponential growth, as long as all symptomatic transmission is prevented.
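
A rough way to frame that last question (my own back-of-envelope, not from the linked studies):

```python
# Both numbers are assumptions for illustration, not estimates from the studies.
r_total = 2.5               # assumed reproduction number with no interventions
presymptomatic_share = 0.4  # assumed fraction of transmission that is presymptomatic

# If all symptomatic transmission were prevented, the effective R would be:
r_presymptomatic_only = r_total * presymptomatic_share
print(r_presymptomatic_only)  # 1.0 here – right at the threshold for exponential growth
```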

Comment by lanrian on March Coronavirus Open Thread · 2020-03-11T21:09:36.931Z · LW · GW

Angela Merkel says that 60-70% of Germany is likely to be infected. That's useful if people believe that it won't infect that many. Example source, though you can google for others.

If they're willing to believe a redditor's summary, this one says that WHO says that 20% of infected people needed hospital treatment for weeks. (If they want primary sources, maybe you could find those claims in the links / somewhere else.)

Putting together 1 and 2 (and generalising from Germany to whatever country they're in), they ought to be convinced that it's pretty severe.