Jimrandomh's Shortform

post by jimrandomh · 2019-07-04T17:06:32.665Z · LW · GW · 105 comments

This post is a container for my short-form writing. See this post [LW(p) · GW(p)] for meta-level discussion about shortform.


Comments sorted by top scores.

comment by jimrandomh · 2020-04-14T19:16:48.052Z · LW(p) · GW(p)

I am now reasonably convinced (p>0.8) that SARS-CoV-2 originated in an accidental laboratory escape from the Wuhan Institute of Virology.

1. If SARS-CoV-2 originated in a non-laboratory zoonotic transmission, then the geographic location of the initial outbreak would be drawn from a distribution which is approximately uniformly distributed over China (population-weighted); whereas if it originated in a laboratory, the geographic location is drawn from the commuting region of a lab studying that class of viruses, of which there is currently only one. Wuhan has <1% of the population of China, so this is (order of magnitude) a 100:1 update.
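The location argument in point 1 is a Bayesian update in odds form; it can be sketched as a toy calculation (the prior odds below are a made-up illustration, not a figure from this comment; only the ~100:1 likelihood ratio comes from the population-share argument):

```python
def posterior_odds(prior_odds, likelihood_ratio):
    """Bayesian update in odds form: posterior odds = prior odds * likelihood ratio."""
    return prior_odds * likelihood_ratio

# Likelihood ratio from location: P(outbreak starts in Wuhan | lab escape) ~ 1,
# vs P(outbreak starts in Wuhan | zoonosis) ~ 0.01 (Wuhan's population share).
location_lr = 100  # the order-of-magnitude "100:1 update" from point 1

# Hypothetical prior odds of lab escape vs zoonosis, before the location evidence.
prior = 1 / 50

odds = posterior_odds(prior, location_lr)
probability = odds / (1 + odds)
print(f"posterior odds {odds:.1f}:1, probability {probability:.2f}")
```

With these illustrative priors, the single location datapoint moves a 2% hypothesis to roughly two-to-one odds, which is why so much of the thread below focuses on whether the 100:1 ratio itself is justified.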

2. No factor other than the presence of the Wuhan Institute of Virology and related biotech organizations distinguishes Wuhan or Hubei from the rest of China. It is not the location of the bat-caves that SARS was found in; those are in Yunnan. It is not the location of any previous outbreaks. It does not have documented higher consumption of bats than the rest of China.

3. There have been publicly reported laboratory escapes of SARS twice before in Beijing, so we know this class of virus is difficult to contain in a laboratory setting.

4. We know that the Wuhan Institute of Virology was studying SARS-like bat coronaviruses. As reported in the Washington Post today, US diplomats had expressed serious concerns about the lab's safety.

5. China has adopted a policy of suppressing research into the origins of SARS-CoV-2, which they would not have done if they expected that research to clear them of scandal. Some Chinese officials are in a position to know.

To be clear, I don't think this was an intentional release. I don't think it was intended for use as a bioweapon. I don't think it underwent genetic engineering or gain-of-function research, although nothing about it conclusively rules this out. I think the researchers had good intentions, and screwed up.

Replies from: lbThingrb, BossSleepy, Lukas_Gloor, MakoYass, Pattern, MathieuRoy, Chris_Leong, Jayson_Virissimo, Spiracular, habryka4, Andrew_Clough
comment by lbThingrb · 2020-04-15T04:12:26.980Z · LW(p) · GW(p)

This Feb. 20th Twitter thread from Trevor Bedford argues against the lab-escape scenario. Do read the whole thing, but I'd say that the key points not addressed in the parent comment are:

Data point #1 (virus group): #SARSCoV2 is an outgrowth of circulating diversity of SARS-like viruses in bats. A zoonosis is expected to be a random draw from this diversity. A lab escape is highly likely to be a common lab strain, either exactly 2002 SARS or WIV1.

But apparently SARSCoV2 isn't that. (See pic.)

Data point #2 (receptor binding domain): This point is rather technical, please see preprint by @K_G_Andersen, @arambaut, et al at http://virological.org/t/the-proximal-origin-of-sars-cov-2/398… for full details.
But, briefly, #SARSCoV2 has 6 mutations to its receptor binding domain that make it good at binding to ACE2 receptors from humans, non-human primates, ferrets, pigs, cats, pangolins (and others), but poor at binding to bat ACE2 receptors.
This pattern of mutation is most consistent with evolution in an animal intermediate, rather than lab escape. Additionally, the presence of these same 6 mutations in the pangolin virus argues strongly for an animal origin: https://biorxiv.org/content/10.1101/2020.02.13.945485v1…
Data point #3 (market cases): Many early infections in Wuhan were associated with the Huanan Seafood Market. A zoonosis fits with the presence of early cases in a large animal market selling diverse mammals. A lab escape is difficult to square with early market cases.
Data point #4 (environmental samples): 33 out of 585 environmental samples taken from the Huanan seafood market showed as #SARSCoV2 positive. 31 of these were collected from the western zone of the market, where wildlife booths are concentrated. 15/21 http://xinhuanet.com/english/2020-01/27/c_138735677.htm…
Environmental samples could in general derive from human infections, but I don't see how you'd get this clustering within the market if these were human derived.

One scenario I recall seeing somewhere that would reconcile lab-escape with data points 3 & 4 above is that some low-level WIV employee or contractor might have sold some purloined lab animals to the wet market. No idea how plausible that is.

Replies from: ChristianKl, None, rudi-c
comment by ChristianKl · 2020-04-23T08:24:00.495Z · LW(p) · GW(p)
Data point #3 (market cases): Many early infections in Wuhan were associated with the Huanan Seafood Market. A zoonosis fits with the presence of early cases in a large animal market selling diverse mammals. A lab escape is difficult to square with early market cases.

Given the claim from Botao Xiao's "The possible origins of 2019-nCoV coronavirus" that this seafood market was located 300 m from a lab (which may or may not be true), the market cases don't seem to reduce the odds of a lab escape.

comment by [deleted] · 2020-04-19T06:01:07.441Z · LW(p) · GW(p)

If it was a lab-escape and the CCP knew early enough, they could simply manufacture the data to point at the market as the origin.

comment by Rudi C (rudi-c) · 2020-06-12T12:06:30.568Z · LW(p) · GW(p)

We need to update down on any complex, technical datapoint that we don’t fully understand, as China has surely paid researchers to manufacture hard-to-evaluate evidence for its own benefit (regardless of the truth of the accusation). This is a classic technique that I have seen a lot in propaganda against laypeople, and there is every reason it should have been employed against the “smart” people in the current coronavirus situation.

comment by Randomized, Controlled (BossSleepy) · 2021-03-17T21:45:52.185Z · LW(p) · GW(p)

The most recent episode of the 80k podcast had Andy Weber on it. He was the US Assistant Secretary of Defense, "responsible for biological and other weapons of mass destruction".

Towards the end of the episode he casually drops quite the bomb:

Well, over time, evidence for natural spread hasn’t been produced, we haven’t found the intermediate species, you know, the pangolin that was talked about last year. I actually think that the odds that this was a laboratory-acquired infection that spread perhaps unwittingly into the community in Wuhan is about a 50% possibility... And we know that the Wuhan Institute of Virology was doing exactly this type of research [gain of function research].  Some of it — which was funded by the NIH for the United States — on bat Coronaviruses. So it is possible that in doing this research, one of the workers at that laboratory got sick and went home. And now that we know about asymptomatic spread, perhaps they didn’t even have symptoms and spread it to a neighbor or a storekeeper. So while it seemed an unlikely hypothesis a year ago, over time, more and more evidence leaning in that direction has come out. And it’s wrong to dismiss that as kind of a baseless conspiracy theory. I mean, very, very serious scientists like David Relman from Stanford think we need to take the possibility of a laboratory accident seriously.

The included link is to a statement from the US Embassy in Georgia, which to me seems surprisingly blunt, calling out the CCP for obfuscation, and documenting events at the WIV, going so far as to speculate that they were doing bio-weapons research there.

comment by Lukas_Gloor · 2020-04-15T21:48:01.286Z · LW(p) · GW(p)

What about allegations that a pangolin was involved? Would they have had pangolins in the lab as well or is the evidence about pangolin involvement dubious in the first place?

Edit: Wasn't meant as a joke. My point is: why did initial analyses conclude that the SARS-CoV-2 virus is adapted to receptors of animals other than bats, suggesting that it had an intermediary host, quite likely a pangolin? This contradicts the story of "bat researchers kept a bat-only virus in a lab and accidentally released it."

Replies from: Spiracular
comment by Spiracular · 2020-05-09T19:28:03.796Z · LW(p) · GW(p)

I think it's probably a virus that was merely identified in pangolins, but whose primary host is probably not pangolins.

The pangolins they sequenced weren't asymptomatic carriers at all; they were sad smuggled specimens that were dying of many different diseases simultaneously.

I looked into this semi-recently, and wrote up something here [LW(p) · GW(p)].

The pangolins were apprehended in Guangxi, which shares some of its border with Yunnan. Neither of these provinces is directly contiguous with Hubei (Wuhan's province), fwiw. (map)

comment by MakoYass · 2020-04-19T00:21:04.778Z · LW(p) · GW(p)

How do you know there's only one lab in China studying these viruses?

comment by Pattern · 2020-04-15T18:43:32.794Z · LW(p) · GW(p)
1. If SARS-CoV-2 originated in a non-laboratory zoonotic transmission, then the geographic location of the initial outbreak would be drawn from a distribution which is approximately uniformly distributed over China (population-weighted); whereas if it originated in a laboratory, the geographic location is drawn from the commuting region of a lab studying that class of viruses, of which there is currently only one. Wuhan has <1% of the population of China, so this is (order of magnitude) a 100:1 update.

This is an assumption.

While it might be comparatively correct, I'm not sure about the magnitude. Under the circumstances, perhaps we should consider the possibility that there is something we don't know about Wuhan that makes it more likely.

3. There have been publicly reported laboratory escapes of SARS twice before in Beijing, so we know this class of virus is difficult to contain in a laboratory setting.

That's nice to know.

comment by Chris_Leong · 2020-04-15T03:41:56.595Z · LW(p) · GW(p)

Maybe they don't know whether it escaped or not. Maybe they just think there's a chance the evidence will implicate them, and figure it's not worth the risk: there will only be consequences if there is definite proof that it escaped from one of their labs, not mere speculation.

Or maybe they want to argue that it didn't come from China? I think they've already been pushing this angle.

comment by Jayson_Virissimo · 2020-04-14T20:30:57.836Z · LW(p) · GW(p)

Not sure if you have seen this yet, but they conclude:

Our analyses clearly show that SARS-CoV-2 is not a laboratory construct or a purposefully manipulated virus...

Are they assuming a false premise or making an error in reasoning somewhere?

Replies from: jimrandomh, habryka4
comment by jimrandomh · 2020-04-14T20:42:28.238Z · LW(p) · GW(p)

First, a clarification: whether SARS-CoV-2 was laboratory-constructed or manipulated is a separate question from whether it escaped from a lab. The main reason a lab would be working with SARS-like coronavirus is to test drugs against it in preparation for a possible future outbreak from a zoonotic source; those experiments would involve culturing it, but not manipulating it.

But also: If it had been the subject of gain-of-function research, this probably wouldn't be detectable. The example I'm most familiar with, the controversial 2012 US A/H5N1 gain of function study, used a method which would not have left any genetic evidence of manipulation.

comment by habryka (habryka4) · 2020-04-14T20:36:15.983Z · LW(p) · GW(p)

The article says: 

Our analyses clearly show that SARS-CoV-2 is not a laboratory construct or a purposefully manipulated virus


It is so effective at attaching to human cells that the researchers said the spike proteins were the result of natural selection and not genetic engineering.

I think the article just says that the virus did not undergo genetic engineering or gain-of-function research, which is also what Jim says above. 

Replies from: Jayson_Virissimo, jimrandomh
comment by Jayson_Virissimo · 2020-04-14T20:41:09.250Z · LW(p) · GW(p)

Ah, yes: their headline is very misleading then! It currently reads "The coronavirus did not escape from a lab. Here's how we know."

I'll shoot the editor an email and see if they can correct it.

EDIT: Here's me complaining about the headline on Twitter.

comment by jimrandomh · 2020-04-14T20:44:29.588Z · LW(p) · GW(p)

Genetic engineering is ruled out, but gain-of-function research isn't.

comment by Spiracular · 2020-09-15T18:55:20.546Z · LW(p) · GW(p)

A Chinese virology researcher released a report claiming that SARS-CoV-2 might even be genetically manipulated after all? After assessing it, I'm not really convinced by the GMO claims, but the RaTG13 story definitely seems to have something weird going on.

The report claims that the RaTG13 genome release was a cover-up (it does look like something's fishy with RaTG13, although it might be different than Yan thinks), and claims that ZC45 and/or ZXC21 was the actual backbone (I'm feeling super-skeptical of this bit, but it has been hard for me to confirm either way).

https://zenodo.org/record/4028830#.X2EJo5NKj0v (aka Yan Report)

RaTG13 Looks Fishy

Looks like something fishy happened with RaTG13, although I'm not convinced that genetic modification was involved. This is an argument built on pre-prints, but they appear to offer several different lines of evidence that something weird happened here.

Simplest story (via R&B): It looks like people first sequenced this virus in 2016, under the name "BtCOV/4991", using mine samples from 2013. And for some reason, WIV re-released the sequence as "RaTG13" at a later date?

(edit: I may have just had a misunderstanding. Maybe BtCOV/4991 is the name of the virus as sequenced from miner-lungs, RaTG13 is the name of the virus as sequenced from floor droppings? But in that case, why is the "fecal" sample reading so weirdly low-bacteria? And they probably are embarrassed that it took them that long to sequence the fecal samples, and should be.)

A paper by Indian researchers Rahalkar and Bahulikar ( https://doi.org/10.20944/preprints202005.0322.v1 ) notes that BtCoV/4991, sequenced in 2016 by the same Wuhan Institute of Virology researchers (and taken from 2013 samples of a mineshaft that gave miners deadly pneumonia), was very similar to, and likely the same as, RaTG13.

A preprint by Rahalkar and Bahulikar (R&B) ( doi: 10.20944/preprints202008.0205.v1 ) notes that the fraction of bacterial genomes in the RaTG13 "fecal" sample was ABSURDLY low ("only 0.7% in contrast to 70-90% abundance in other fecal swabs from bats"). Something's weird there.

A more recent weird datapoint: a pre-print Yan referenced ( https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7337384/ ) found (in its graphs; the wording left it unclear) that a RaTG13 protein didn't competently bind their bat ACE2 samples, but did bind their rat, mouse, human, and pig ACE2. It's supposedly a horseshoe bat virus (sequenced by the Wuhan lab), so this seems hecka fishy to me.

(Sure, their bat samples weren't precisely the same species, but they tried 2 species from the same genus. SARS-2 DID bind for their R. macrotis bat sample, so it seems extra-fishy to me that RaTG13 didn't.).

((...oh. According to the R&B paper about the mineshaft, it was FILTHY with rats, bats, poop, and fungus. And the CoV genome showed up in only one of ~280 samples taken. If it's like that, who the hell knew if it came from a rat or bat?))

At this point, RaTG13 is genuinely looking pretty fishy to me. It might actually take evidence of a conspiracy theory in the other direction for me to go back to neutral on that.

E-Protein Similarity? Meh.

I'm not finding the Protein-E sequence similarity super-convincing in itself, because while the logic is fine, it's very multiple-hypothesis-testing flavored.

I'm still looking into the ZC45 / ZXC21 claim, which I'm currently feeling skeptical of. Here's the paper that characterized those: doi: 10.1038/s41426-018-0155-5 . It's true that it was by people working at "Research Institute for Medicine of Nanjing Command." However, someone on twitter used BLAST on the E-protein sequence, and found a giant pile of different highly-related SARS-like coronaviruses. I'm trying to replicate that analysis using BLAST myself, and at a skim the 100% results are all more SARS-CoV-2, and the close (95%) results are damned diverse. ...I don't see ZC in them, it looks like it wasn't uploaded. Ugh. (The E-protein is only 75 amino acids long anyway. https://www.ncbi.nlm.nih.gov/protein/QIH45055.1 )
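For intuition on what the identity comparisons above are measuring, a naive pairwise version can be sketched in a few lines. (The sequences below are short made-up stand-ins, not real E-protein data; real comparisons use BLAST's local alignment against a database, which this toy does not do.)

```python
def percent_identity(a: str, b: str) -> float:
    """Percent of matching positions between two pre-aligned, equal-length
    sequences. A much cruder measure than BLAST, which finds and scores
    local alignments rather than comparing position-by-position."""
    if len(a) != len(b):
        raise ValueError("sequences must be aligned to equal length")
    matches = sum(x == y for x, y in zip(a, b))
    return 100 * matches / len(a)

# Hypothetical toy fragments standing in for E-protein sequences -- not real viral data.
seq_a = "MADSNGTITVEELKKLLEQ"
seq_b = "MADSNGTITVEELKQLLEQ"
print(f"{percent_identity(seq_a, seq_b):.1f}% identity")  # one mismatch in 19 positions
```

Note that for a protein as short and conserved as E (75 amino acids), even near-100% identity is weak evidence of a specific shared ancestor, which is the multiple-hypothesis-testing worry above.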

A different paper mentions extreme S2-protein similarity of early COVID-19 samples to ZC45, but that protein is highly conserved, which makes this a less surprising or meaningful result. (E was claimed to be fast-evolving, so its identicality would have been more surprising, but I couldn't confirm it.) https://doi.org/10.1080/22221751.2020.1719902


I think Yan offers a reasonable argument that a method could have been used that avoids obvious genetic-modification "stitches," instead using methods that are hard to distinguish from natural recombination events (ex: recombination in yeast). Sounds totally possible to me.

The fact that the early SARS-CoV-2 samples were already quite adapted to human ACE2 and didn't have the rapid-evolution you'd expect from a fresh zoonotic infection is something a friend of mine had previously noted, probably after reading the following paper (recommended): https://www.biorxiv.org/content/10.1101/2020.05.01.073262v1 (Zhan, Deverman, Chan). This fact does seem fishy, and had already pushed me a bit towards the "Wuhan lab adaptation & escape" theory.

comment by habryka (habryka4) · 2020-04-14T19:20:11.091Z · LW(p) · GW(p)

Wuhan has <1% of the population of China, so this is (order of magnitude) a 100:1 update.

I agree that this is technically correct, but the prior for "escaped specifically from a lab in Wuhan" is also probably ~100 times lower than the prior for "escaped from any biolab in China", which makes this sentence feel odd to me. I feel like I have reasonable priors for "direct human-to-human transmission" vs. "accidentally released from a lab", but don't have good priors for "escaped specifically from a lab in Wuhan".

Replies from: jimrandomh
comment by jimrandomh · 2020-04-14T19:25:16.438Z · LW(p) · GW(p)

I agree that this is technically correct, but the prior for "escaped specifically from a lab in Wuhan" is also probably ~100 times lower than the prior for "escaped from any biolab in China"

I don't think this is true. The Wuhan Institute of Virology is the only biolab in China with a BSL-4 certification, and therefore is probably the only biolab in China which could legally have been studying this class of virus. While the BSL-3 Chinese Institute of Virology in Beijing studied SARS in the past and had laboratory escapes, I expect all of that research to have been shut down or moved, given the history, and I expect a review of Chinese publications will not find any studies involving live virus testing outside of WIV. While the existence of one or two more labs in China studying SARS would not be super surprising, the existence of 100 would be extremely surprising, and would be a major scandal in itself.

Replies from: Benito, habryka4
comment by Ben Pace (Benito) · 2020-04-14T19:53:46.743Z · LW(p) · GW(p)

Woah. That's an important piece of info. The lab in Wuhan is the only lab in China allowed to deal with this class of virus. That's very suggestive info indeed.

Replies from: jimrandomh, leggi
comment by jimrandomh · 2020-04-14T19:55:58.508Z · LW(p) · GW(p)

That's overstating it. They're the only BSL-4 lab. Whether BSL-3 labs were allowed to deal with this class of virus is something that someone should research.

Replies from: howie-lempel, howie-lempel, Benito, leggi
comment by Howie Lempel (howie-lempel) · 2020-04-15T15:29:42.417Z · LW(p) · GW(p)

[I'm not an expert.]

My understanding is that SARS-CoV-1 is generally treated as a BSL-3 pathogen or a BSL-2 pathogen (for routine diagnostics and other relatively safe work) and not BSL-4. At the time of the outbreak, SARS-CoV-2 would have been a random animal coronavirus that hadn't yet infected humans, so I'd be surprised if it had more stringent requirements.

Your OP currently states: "a lab studying that class of viruses, of which there is currently only one." If I'm right that you're not currently confident this is the case, it might be worth adding some kind of caveat or epistemic status flag or something.


Some evidence:

comment by Howie Lempel (howie-lempel) · 2020-04-23T07:17:54.545Z · LW(p) · GW(p)

Do you still think there's a >80% chance that this was a lab release?

comment by Ben Pace (Benito) · 2020-04-14T20:37:53.846Z · LW(p) · GW(p)

Thank you for the correction.

comment by leggi · 2020-04-23T08:21:24.513Z · LW(p) · GW(p)
Whether BSL-3 labs were allowed to deal with this class of virus, is something that someone should research.

Did anyone do some research?

---

(SARSr-CoV) makes the BSL-4 list on Wikipedia.

But what's the probability that animal-based coronaviruses (being very widespread in a lot of species) were restricted to BSL-4 labs?

---

COVID19 and BSL according to:

W.H.O. Laboratory biosafety guidance related to the novel coronavirus (2019-nCoV)

Non-propagative diagnostic laboratory work including, sequencing, nucleic acid amplification test (NAAT) on clinical specimens from patients who are suspected or confirmed to be infected with nCoV, should be conducted adopting practices .... ... in the interim, Biosafety Level 2 (BSL-2) in the WHO Laboratory Biosafety Manual, 3rd edition remains appropriate until the 4th edition replaces it.
Handling of material with high concentrations of live virus (such as when performing virus propagation, virus isolation or neutralization assays) or large volumes of infectious materials should be performed only by properly trained and competent personnel in laboratories capable of meeting additional essential containment requirements and practices, i.e. BSL-3.

The CDC: Interim Laboratory Biosafety Guidelines for Handling and Processing Specimens Associated with Coronavirus Disease 2019 (COVID-19)

comment by leggi · 2020-04-23T08:25:41.422Z · LW(p) · GW(p)

It would be important information if it was true. But is it true?

(SARSr-CoV) makes the BSL-4 list on Wikipedia but coronaviruses are widespread in a lot of species and I can't find any evidence that they are restricted to BSL-4 labs.

comment by habryka (habryka4) · 2020-04-14T19:45:38.032Z · LW(p) · GW(p)

Ok, that makes sense to me. I didn't have much of a prior on the Wuhan lab being much more likely to have been involved in this kind of research.

comment by Andrew_Clough · 2020-04-17T14:53:56.259Z · LW(p) · GW(p)

Do we have any good sense of the extent to which researchers from the Wuhan Institute of Virology are flying out across China to investigate novel pathogens or sites where novel pathogens might emerge?

comment by jimrandomh · 2021-03-24T20:50:26.819Z · LW(p) · GW(p)

In a comment here [LW(p) · GW(p)], Eliezer observed that:

OpenBSD treats every crash as a security problem, because the system is not supposed to crash and therefore any crash proves that our beliefs about the system are false and therefore our beliefs about its security may also be false because its behavior is not known

And my reply to this grew into something that I think is important enough to make as a top-level shortform post.

It's worth noticing that this is not a universal property of high-paranoia software development, but an unfortunate consequence of using the C programming language and of systems programming. In most programming languages and most application domains, crashes only rarely point to security problems. OpenBSD is this paranoid, and needs to be this paranoid, because its architecture is fundamentally unsound (albeit unsound in a way that all the other operating systems born in the same era are also unsound). This presents a number of analogies that may be useful for thinking about future AI architectural choices.

C has a couple of operations (use-after-free, buffer-overflow, and a few multithreading-related things) which expand false beliefs in one area of the system into major problems in seemingly-unrelated areas. The core mechanic of this is that, once you've corrupted a pointer or an array index, this generates opportunities to corrupt other things. Any memory-corruption attack surface you search through winds up yielding more opportunities to corrupt memory, in a supercritical way, eventually yielding total control over the process and all its communication channels. If the process is an operating system kernel, there's nothing left to do; if it's, say, the renderer process of a web browser, then the attacker gets to leverage its communication channels to attack other processes, like the GPU driver and the compositor. This has the same sub-or-supercriticality dynamic.

Some security strategies try to keep there from being any entry points into the domain where there might be supercritically-expanding access: memory-safe languages, linters, code reviews. Call these entry-point strategies. Others try to drive down the criticality ratio: address space layout randomization, W^X, guard pages, stack guards, sandboxing. Call these mitigation strategies. In an AI-safety analogy, the entry-point strategies include things like decision theory, formal verification, and philosophical deconfusion; the mitigation strategies include things like neural-net transparency and ALBA.
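The sub-or-supercritical dynamic described above can be made concrete with a toy branching-process calculation (the values of r here are illustrative, not measurements of any real codebase):

```python
def expected_opportunities(r: float, generations: int) -> float:
    """Expected cumulative corruption opportunities after a number of
    generations, when each exploited opportunity yields r new ones on
    average: the geometric series 1 + r + r^2 + ... """
    return sum(r ** g for g in range(generations + 1))

# Supercritical regime (illustrative r > 1, e.g. raw memory-unsafe code):
# each opportunity yields more than one new one, so attacker access explodes.
print(expected_opportunities(1.5, 20))

# Subcritical regime (mitigations like ASLR, W^X, and sandboxing push r < 1):
# the chain fizzles out, converging toward 1 / (1 - r) = 2 here.
print(expected_opportunities(0.5, 20))
```

In this toy model, entry-point strategies try to deny the attacker the initial opportunity entirely, while mitigation strategies are attempts to drive r below 1 so that whatever foothold exists cannot compound.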

Computer security is still, in an important sense, a failure: reasonably determined and competent attackers usually succeed. But by the metric "market price of a working exploit chain", things do actually seem to be getting better, and both categories of strategies seem to have helped: compared to a decade ago, it's both more difficult to find a potentially-exploitable bug, and also more difficult to turn a potentially-exploitable bug into a working exploit.

Unfortunately, while there are a number of ideas that seem like mitigation strategies for AI safety, it's not clear if there are any metrics nearly as good as "market price of an exploit chain". Still, we can come up with some candidates--not candidates we can precisely define or measure, currently, but candidates we can think in terms of, and maybe think about measuring in the future, like: how much optimization pressure can be applied to concepts, before perverse instantiations are found? How much control does an inner optimizer need to start with, in order to take over an outer optimization process? I don't know how to increase these, but it seems like a potentially promising research direction.

Replies from: zac-hatfield-dodds
comment by Zac Hatfield Dodds (zac-hatfield-dodds) · 2021-03-25T02:56:03.622Z · LW(p) · GW(p)

It's worth noticing that this is not a universal property of high-paranoia software development, but an unfortunate consequence of using the C programming language and of systems programming. In most programming languages and most application domains, crashes only rarely point to security problems.

I disagree. While C is indeed terribly unsafe, it is always the case that a safety-critical system exhibiting behaviour you thought impossible is a serious safety risk - because it means that your understanding of the system is wrong, and that includes the safety properties.

comment by jimrandomh · 2020-06-06T22:29:39.921Z · LW(p) · GW(p)

Despite the justness of their cause, the protests are bad. They will kill at least thousands, possibly as many as hundreds of thousands, through COVID-19 spread. Many more will be crippled. The deaths will be disproportionately among dark-skinned people, because of the association between disease severity and vitamin D deficiency.

Up to this point, R was about 1; not good enough to win, but good enough that one more upgrade in public health strategy would do it. I wasn't optimistic, but I held out hope that my home city, Berkeley, might become a green zone.

Masks help, and being outdoors helps. They do not help nearly enough.

George Floyd was murdered on May 25. Most protesters protest on weekends; the first weekend after that was May 30-31. Due to ~5-day incubation plus reporting delays, we don't yet know how many were infected during that first weekend of protests; we'll get that number over the next 72 hours or so.

We are now in the second weekend of protests, meaning that anyone who got infected at the first protest is now close to peak infectivity. People who protested last weekend will be superspreaders this weekend; the jump in cases we see over the next 72 hours will be about *the square root* of the number of cases that the protests will generate.

Here's the COVID-19 case count dashboard for Alameda County and for Berkeley. I predict that 72 hours from now, Berkeley's case count will be 170 (50% CI 125-200; 90% CI 115-500).
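The square-root claim above can be cashed out as a generation-counting toy model. (The jump of 55 below is a hypothetical number for illustration, not a prediction from this comment.)

```python
import math

def implied_total(observed_first_generation: int) -> int:
    """One reading of the square-root heuristic: if second-generation spread
    multiplies first-generation cases by a factor on the order of the first
    generation's own size, total ~ g * g, so the observed jump g ~ sqrt(total)."""
    return observed_first_generation ** 2

# Illustrative only: an observed 72-hour jump of 55 first-generation cases
# would, under this reading, imply about 55**2 protest-attributable cases.
jump = 55
total = implied_total(jump)
print(total, math.isqrt(total))
```

The key assumption is that the cases visible in the next 72 hours are only the first generation, while the protests ultimately seed roughly one more full generation of comparable multiplicative size.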

(Crossposted on Facebook; abridgeposted on Twitter.)

Replies from: jessica.liu.taylor
comment by jessicata (jessica.liu.taylor) · 2020-06-10T15:45:15.495Z · LW(p) · GW(p)

It's been over 72 hours and the case count is under 110, as would be expected from linear extrapolation.


comment by jimrandomh · 2021-01-07T23:08:30.430Z · LW(p) · GW(p)

For reducing CO2 emissions, one person working competently on solar energy R&D has thousands to millions of times more impact than someone taking normal household steps as an individual. To the extent that CO2-related advocacy matters at all, most of the impact probably routes through talent and funding going to related research. The reason for this is that solar power (and electric vehicles) are currently at inflection points, where they are in the process of taking over, but the speed at which they do so is still in doubt.

I think the same logic now applies to veganism vs meat-substitute R&D. Consider the Impossible Burger in particular. Nutritionally, it seems to be on par with ground beef; flavor-wise it's pretty comparable; price-wise it's recently appeared in my local supermarket at about 1.5x the price. There are a half dozen other meat-substitute brands at similar points. Extrapolating a few years, it will soon be competitive on its own terms, even without the animal-welfare angle; extrapolating twenty years, I expect vegan meat-imitation products will be better than meat on every axis, and meat will be a specialty product for luddites and people with dietary restrictions. If this is true, then interventions which speed up the timeline of that change are enormously high leverage.

I think this might be a general pattern, whenever we find a technology and a social movement aimed at the same goal. Are there more instances?

comment by jimrandomh · 2020-09-30T01:17:52.126Z · LW(p) · GW(p)

According to Fedex tracking, on Thursday, I will have a Biovyzr. I plan to immediately start testing it, and write a review.

What tests would people like me to perform?

Tests that I'm already planning to perform:

To test its protectiveness, the main test I plan to perform is a modified Bitrex fit test. This is where you create a bitter-tasting aerosol and confirm that you can't taste it. The normal test procedure won't work as-is because the Biovyzr is too large for a plastic hood, so I plan to go into a small room and have someone (wearing a respirator themselves) spray copious amounts of Bitrex at the input fan and at any spots that seem high-risk for leaks.

To test that air exiting the Biovyzr is being filtered, I plan to put on a regular N95, use the inside-out glove to create Bitrex aerosol inside the Biovyzr, and see whether someone in the room without a mask is able to smell it.

I will verify that the Biovyzr is positive-pressure by running a straw through an edge, creating an artificial leak, and seeing which way the air flows through the leak.

I will have everyone in my house try wearing it (5 adults of varied sizes), have them all rate its fit and comfort, and get as many of them to do Bitrex fit tests as I can.

comment by jimrandomh · 2020-02-15T02:20:40.348Z · LW(p) · GW(p)

I suspect that, thirty years from now with the benefit of hindsight, we will look at air travel the way we now look at tetraethyl lead. Not just because of nCoV, but also because of disease burdens we've failed to attribute to infections, in much the same way we failed to attribute crime to lead.

Over the past century, there have been two big changes in infectious disease. The first is that we've wiped out or drastically reduced most of the diseases that cause severe, attributable death and disability. The second is that we've connected the world with high-speed transport links, so that the subtle, minor diseases can spread further.

I strongly suspect that a significant portion of unattributed and subclinical illnesses are caused by infections that counterfactually would not have happened if air travel were rare or nonexistent. I think this is very likely for autoimmune conditions, which are mostly unattributed, are known to sometimes be caused by infections, and have risen greatly over time. I think this is somewhat likely for chronic fatigue and depression, including subclinical varieties that are extremely widespread. I think this is plausible for obesity, where it is approximately #3 of my hypotheses.

Or, put another way: the "hygiene hypothesis" is the opposite of true.

Replies from: leggi, adam_scholl
comment by leggi · 2020-02-20T04:42:12.592Z · LW(p) · GW(p)

Some comments:

we've wiped out or drastically reduced most of the diseases that cause severe, attributable death and disability

We've wiped out or drastically reduced some diseases in some parts of the world. There are still plenty of infectious diseases out there: HIV, influenza, malaria, tuberculosis, cholera, Ebola, infectious forms of pneumonia, diarrhoea, hepatitis...

we've connected the world with high-speed transport links, so that the subtle, minor diseases can spread further.

Disease has always spread wherever people go, far and wide. It just took longer over land and sea (rather than appearing as nodes on the global maps we can watch these days).

... very likely for autoimmune conditions ... have risen greatly over time

"Autoimmune conditions" covers a long list of conditions lumped together because they involve the immune system 'going wrong' (and the immune system is, at least to me, a mind-bogglingly complex system).

Given the wide range of conditions that could be "auto-immune" saying they've risen greatly over time is vague. Data for more specific conditions?

Increased rates of autoimmune conditions could just be due to increased recognition, diagnosis, and recording of cases (I don't think so, but it should be considered).

What things other than high-speed travel have also changed in that time-frame that could affect our immune systems? The quality of the air we breathe, the food we eat, the water we drink, our environment, levels of exposure to fauna and flora, exposure to chemicals, pollutants...? Air travel is just one factor.

I think this is somewhat likely for chronic fatigue and depression, including subclinical varieties that are extremely widespread.

Fatigue and depression are clinical symptoms - they are either present or not (to what degree, mild/severe, is another matter) - so sub-clinical is poor terminology here. Sub-clinical disease has no recognisable clinical findings; undiagnosed/unrecognised would be closer. But I agree there are widespread issues with health and well-being these days.

Or, put another way: the "hygiene hypothesis" is the opposite of true.

Opposite of true?  Are you saying you believe the "hygiene hypothesis" is false?

In which case, that's a big leap from your reasoning above.

comment by Adam Scholl (adam_scholl) · 2020-02-15T19:21:12.892Z · LW(p) · GW(p)

I'm curious about your first and second hypothesis regarding obesity?

Replies from: jimrandomh
comment by jimrandomh · 2020-02-18T00:32:27.427Z · LW(p) · GW(p)

Disruption of learning mechanisms by excessive variety and separation between nutrients and flavor. Endocrine disruption from adulterants and contaminants (a class including but not limited to BPA and PFOA).

comment by jimrandomh · 2019-09-12T01:19:07.010Z · LW(p) · GW(p)

Eliezer has written about the notion of security mindset [LW · GW], and there's an important idea that attaches to that phrase, which some people have an intuitive sense of and ability to recognize, but I don't think Eliezer's post quite captured the essence of the idea, or presented anything like a usable roadmap of how to acquire it.

An1lam's recent shortform post [LW(p) · GW(p)] talked about the distinction between engineering mindset and scientist mindset, and I realized that, with the exception of Eliezer and perhaps a few people he works closely with, all of the people I know of with security mindset are engineer-types rather than scientist-types. That seemed like a clue; my first theory was that the reason for this is because engineer-types get to actually write software that might have security holes, and have the feedback cycle of trying to write secure software. But I also know plenty of otherwise-decent software engineers who don't have security mindset, at least of the type Eliezer described.

My hypothesis is that to acquire security mindset, you have to:

  • Practice optimizing from a red team/attacker perspective,
  • Practice optimizing from a defender perspective; and
  • Practice modeling the interplay between those two perspectives.

So a software engineer can acquire security mindset because they practice writing software which they don't want to have vulnerabilities, they practice searching for vulnerabilities (usually as an auditor simulating an attacker rather than as an actual attacker, but the cognitive algorithm is the same), and they practice going meta when they're designing the architecture of new projects. This explains why security mindset is very common among experienced senior engineers (who have done each of the three many times), and rare among junior engineers (who haven't yet). It explains how Eliezer can have security mindset: he alternates between roleplaying a future AI-architect trying to design AI control/alignment mechanisms, roleplaying a future misaligned-AI trying to optimize around them, and going meta on everything-in-general. It also predicts that junior AI scientists won't have this security mindset, and probably won't acquire it except by following a similar cognitive trajectory.

Which raises an interesting question: how much does security mindset generalize between domains? Ie, if you put Theo de Raadt onto a hypothetical future AI team, would he successfully apply the same security mindset there as he does to general computer security?

Replies from: An1lam
comment by NaiveTortoise (An1lam) · 2019-09-12T02:13:38.445Z · LW(p) · GW(p)

I like this post!

Some evidence that security mindset generalizes across at least some domains: the same white hat people who are good at finding exploits in things like kernels seem to also be quite good at finding exploits in things like web apps, real-world companies, and hardware. I don't have a specific person to give as an example, but this observation comes from going to a CTF competition and talking to some of the people who ran it about the crazy stuff they'd done that spanned a wide array of different areas.

Another, slightly different example: Wei Dai is someone who I actually knew about outside of Less Wrong from his early work on cryptocurrency stuff, so he was at least at one point involved in a security-heavy community (I'm of the opinion that early cryptocurrency folks were on average much better about security mindset than the average current cryptocurrency community member). Based on his posts and comments, he generally strikes me as having security-mindset-style thinking, and from my perspective he has contributed a lot of good stuff to AI alignment.

Theo de Raadt is notoriously... opinionated, so it would definitely be interesting to see him thrown on an AI team. That said, I suspect someone like Ralph Merkle, who's a bona fide cryptography wizard (he co-invented public key cryptography and invented Merkle trees!) and is heavily involved in the cryonics and nanotech communities, could fairly easily get up to speed on AI control work and contribute from a unique security/cryptography-oriented perspective. In particular, now that there seems to be more alignment/control work that involves at least exploring issues with concrete proposals, I think someone like this would have less trouble finding ways to contribute. That said, having cryptography experience in addition to security experience does seem helpful. Cryptography people are probably more used to combining their security mindset with their math intuition than your average white-hat hacker.

Replies from: jimrandomh, Wei_Dai
comment by jimrandomh · 2019-09-13T22:24:50.632Z · LW(p) · GW(p)

I'm kinda confused about the relation between cryptography people and security mindset. Looking at the major cryptographic algorithm classes (hashing, symmetric-key, asymmetric-key), it seems pretty obvious that the correct standard algorithm in each class is probably a compound algorithm -- hash by xor'ing the results of several highly-dissimilar hash functions, etc, so that a mathematical advance which breaks one algorithm doesn't break the overall security of the system. But I don't see anyone doing this in practice, and also don't see signs of a debate on the topic. That makes me think that, to the extent they have security mindset, it's either being defeated by political processes in the translation to practice, or it's weirdly compartmentalized and not engaged with any practical reality or outside views.
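As a toy illustration of the compound-algorithm idea (not a vetted construction; as the replies note, combining hash functions securely is subtle, and concatenation has better-understood collision-resistance guarantees than XOR):

```python
import hashlib

def combined_hash(data: bytes) -> bytes:
    """Toy combiner: concatenate digests of two structurally dissimilar
    hash functions (SHA-2 and SHA-3), so that a mathematical advance
    breaking one primitive doesn't break the combined collision resistance."""
    return hashlib.sha256(data).digest() + hashlib.sha3_256(data).digest()

def xor_combined_hash(data: bytes) -> bytes:
    """XOR combiner, as suggested above: shorter output, but with weaker
    guarantees than concatenation (e.g. collision resistance can degrade)."""
    a = hashlib.sha256(data).digest()
    b = hashlib.sha3_256(data).digest()
    return bytes(x ^ y for x, y in zip(a, b))

digest = combined_hash(b"hello")
print(len(digest))  # 64: two 32-byte digests concatenated
```

SHA-256 (Merkle-Damgård over ARX-style compression) and SHA3-256 (sponge over Keccak) rest on very different internal structures, which is what makes them a natural dissimilar pair for this kind of hedge.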

Replies from: Wei_Dai, An1lam
comment by Wei_Dai · 2019-09-15T01:09:19.661Z · LW(p) · GW(p)

Combining hash functions is actually trickier than it looks, and some people are doing research in this area and deploying solutions. See https://crypto.stackexchange.com/a/328 and https://tahoe-lafs.org/trac/tahoe-lafs/wiki/OneHundredYearCryptography. It does seem that if cryptography people had more of a security mindset (that are not being defeated) then there would be more research and deployment of this already.

comment by NaiveTortoise (An1lam) · 2019-09-14T21:50:20.849Z · LW(p) · GW(p)

In fairness, I'm probably over-generalizing from a few examples. For example, my biggest inspiration from the field of crypto is Daniel J. Bernstein, a cryptographer who's in part known for building qmail, which has an impressive security track record & guarantee. He discusses principles for secure software engineering in this paper, which I found pretty helpful for my own thinking.

To your point about hashing the results of several different hash functions, I'm actually kind of surprised to hear that this might help protect against the sorts of advances I'd expect to break hash algorithms. I was under the very amateur impression that basically all modern hash functions relied on the same numerical algorithmic complexity (and number-theoretic results). If there are any resources you can point me to about this, I'd be interested in getting a basic understanding of the different assumptions hash functions can depend on.

comment by Wei_Dai · 2019-09-15T01:12:22.846Z · LW(p) · GW(p)

Can you give some specific examples of me having security mindset, and why they count as having security mindset? I'm actually not entirely sure what it is or that I have it, and would be hard pressed to come up with such examples myself. (I'm pretty sure I have what Eliezer calls "ordinary paranoia" at least, but am confused/skeptical about "deep security".)

Replies from: An1lam
comment by NaiveTortoise (An1lam) · 2019-09-15T04:28:52.001Z · LW(p) · GW(p)

Sure, but let me clarify that I'm probably not drawing as hard a boundary between "ordinary paranoia" and "deep security" as I should be. I think Bruce Schneier's and Eliezer's buckets for "security mindset" blended together in the months since I read both posts. Also, re-reading the logistic success curve post reminded me that Eliezer calls into question whether someone who lacks security mindset can identify people who have it. So it's worth noting that my ability to identify people with security mindset is itself suspect by this criterion (there's no public evidence that I have security mindset, and I wouldn't claim that I have a consistent ability to do "deep security"-style analysis).

With that out of the way, here are some of the examples I was thinking of.

First of all, at a high level, I've noticed that you seem to consistently question assumptions other posters are making and clarify terminology when appropriate. This seems like a prerequisite for security mindset, since it's a necessary first step towards constructing systems.

Second and more substantively, I've seen you consistently raise concerns about human safety problems [LW · GW] (also here [LW(p) · GW(p)]). I see this as an example of security mindset because it requires questioning the assumptions implicit in a lot of proposals. The analogy to Eliezer's post here would be that ordinary paranoia is trying to come up with more ways to prevent the AI from corrupting the human (or something similar), whereas I think a deep security solution would look more like avoiding the assumption that humans are safe altogether, and instead seeking clear guarantees that our AIs will be safe even if we ourselves aren't.

Last, you seem to be unusually willing to point out flaws in your own proposals, the prime example being UDT. The most recent example of this is your comment about the bomb argument, but I've seen you do this quite a bit and could find more examples if prompted. On reflection, this may be more of an example of "ordinary paranoia" than "deep security", but it's still quite important in my opinion.

Let me know if that clarifies things at all. I can probably come up with more examples of each type if requested, but it will take me some time to keep digging through posts and comments so figured I'd check in to see if what I'm saying makes sense before continuing to dig.

Replies from: riceissa
comment by riceissa · 2020-02-01T06:04:59.355Z · LW(p) · GW(p)

This comment [LW(p) · GW(p)] feels relevant here (not sure if it counts as ordinary paranoia or security mindset).

comment by jimrandomh · 2020-10-11T22:57:36.859Z · LW(p) · GW(p)

I am working on a longer review of the various pieces of PPE that are available, now that manufacturers have had time to catch up to demand. That review will take some time, though, and I think it's important to say this now:

The high end of PPE that you can buy today is good enough to make social distancing unnecessary, even if you are risk averse, and is more comfortable and more practical for long-duration wear than a regular mask. I don't mean the Biovyzr (which has not yet shipped all the parts for its first batch) or the AIR Microclimate (which has not yet shipped anything), though these hold great promise and may be good budget options.

If you have a thousand dollars to spare, you can get a 3M Versaflo TR-300N+. This is a hospital-grade positive air pressure respirator with a pile of certifications; it is effective at protecting you from getting COVID from others. Most of the air leaves through filter fabric under the chin, which I expect makes it about as effective at protecting others from you as an N95. Using it does not require a fit-test, but I performed one anyways with Bitrex, and it passed (I could not pass a fit-test with a conventional face-mask except by taping the edges to my skin). The Versaflo doesn't block view of your mouth, gives good quality fresh air with no resistance, and doesn't muffle sound very much. Most importantly, Amazon has it in stock (https://www.amazon.com/dp/B07J4WCK6R) so it doesn't involve a long delay or worry about whether a small startup will come through.

comment by jimrandomh · 2019-07-04T17:09:37.876Z · LW(p) · GW(p)

Bullshit jobs are usually seen as an absence of optimization: firms don't get rid of their useless workers because that would require them to figure out who they are, and risk losing or demoralizing important people in the process. But alternatively, if bullshit jobs (and cover for bullshit jobs) are a favor to hand out, then they're more like a form of executive compensation: my useless underlings owe me, and I will get illegible favors from them in return.

What predictions does the bullshit-jobs-as-compensation model make, that differ from the bullshit-jobs-as-lack-of-optimization model?

Replies from: mr-hire
comment by Matt Goldenberg (mr-hire) · 2019-07-04T17:59:28.850Z · LW(p) · GW(p)

When I tried to inner sim the "bullshit jobs as compensation" model, I expected to see a very different world than I do see. In particular, I'd expect the people in bullshit jobs to have been unusually competent, smart, or powerful before they were put in the bullshit job, and this is not in fact what I think actually happens.

The problem is that the kind of person who wants a bullshit job is not typically the kind of person you'd necessarily want a favor from. One use for bullshit jobs could be to help the friends (or more likely the family) of someone who does "play the game." This I think happens more often, but I still think the world would be very different if this were the main use case for bullshit jobs. In particular, I'd expect most bullshit jobs to be isolated from the rest of the company, such that they don't have ripple effects. This doesn't seem to be the case, as many bullshit jobs exist in management.

When I inquired about the world I actually do see, I got several other potential reasons for bullshit jobs that may or may not fit the data better:

  • Bullshit jobs as pre-installed scapegoats: Lots of middle management might fit into this role. This could be viewed as a favor (I'll give you a cushy job now in exchange for you throwing yourself on the sword when the time comes.) However, I think the predictive model is to view it in terms of the Gervais principle: The clueless middle managers don't realize they're being manipulated by the sociopaths.
  • Bullshit jobs as a way to make people feel important: Let's say you have a preinstalled scapegoat. You need to keep them happy enough that they'll stay in their position and not ask too many questions. One way to do that, for a certain type of person, is to give them underlings. But if you gave them underlings with real jobs they could screw things up for the organization, so you give them underlings with bullshit jobs.
    • Another instance of this that I imagined might happen: Someone is really great at what they do (say they're a 10x employee), but to feel important wants to be a manager. You know if you don't promote them you'll lose them, but you know they'll be an awful manager. You promote them, give them a couple of underlings with a bullshit job, and now they're still only a 4x employee because they spend a lot of their time managing, but you still manage to squeeze a little bit of productivity out of the deal. This one I'm less sure about, but it's interesting because it turns the Peter principle on its head.

Edit: As I continued to inner sim the above reasons, a few feedback loops began to become clear:

  • To be a proper scapegoat, your scapegoat has to seem powerful within the organization. But to prevent them from screwing things up, you can't give them real power. This means, the most effective scapegoats have lots of bullshit jobs underneath them.
  • There are various levels of screwup. I might not realize I'm a scapegoat for the very big events above me, but still not want to get blamed for the very real things that happen on the level of organization I actually do run. One move I have is to hire another scapegoat who plays the game one level below me, install them as a manager, and then use them as a scapegoat. Then there's another level at which they get blamed for things that happen on their level, and this can recurse for several levels of middle management.
  • Some of the middle management installed as scapegoats might accidentally get their hands on real power in the organization. Because they're bad managers, they're bad at figuring out what jobs are needed. This then becomes the "inefficiency" model you mentioned.
Replies from: Benquo
comment by Benquo · 2019-07-05T00:37:28.696Z · LW(p) · GW(p)
In particular, I'd expect the people in bullshit jobs to have been unusually competent, smart, or powerful before they were put in the bullshit job, and this is not in fact what I think actually happens.

Moral Mazes claims that this is exactly what happens at the transition from object-level work to management - and then, once you're at the middle levels, the main traits relevant to advancement (and value as an ally) are the ones that make you good at coalitional politics, favor-trading, and a more feudal sort of loyalty exchange.

Replies from: mr-hire
comment by Matt Goldenberg (mr-hire) · 2019-07-05T02:52:13.197Z · LW(p) · GW(p)

Do you think that the majority of direct management jobs are bullshit jobs? My intuition is that especially the first level of management that is directly managing programmers is a highly important coordination position.

comment by jimrandomh · 2021-05-10T04:07:17.237Z · LW(p) · GW(p)

I think Berkeley may, to little fanfare, have achieved herd immunity and elimination of COVID-19. The test positivity rate on this dashboard is 0.22%. I'm having a hard time pinning down exactly what the false-positive rate of COVID-19 PCR is, probably due to the variety of labs and test kits, but a lot of estimates I've seen have been higher than that.
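A quick Bayes sketch (with illustrative numbers, not measured ones) shows why the false-positive rate matters so much here: observed positivity can never fall below the false-positive rate times the uninfected fraction, so an FPR estimate above 0.22% is inconsistent with the dashboard; and even a modest FPR implies most positives are false when true prevalence is very low.

```python
# Back-of-envelope: what fraction of positive PCR results are false,
# given an assumed false-positive rate (FPR) and true prevalence?
# All prevalence/FPR/sensitivity numbers below are illustrative assumptions.

def false_positive_fraction(prevalence, sensitivity, fpr):
    """Fraction of all positive test results that are false positives."""
    true_pos = prevalence * sensitivity
    false_pos = (1 - prevalence) * fpr
    return false_pos / (true_pos + false_pos)

# If the FPR were 0.5%, observed positivity could not be as low as 0.22%:
observed_positivity = 0.0022
assumed_fpr = 0.005
print(assumed_fpr * (1 - 0.001) > observed_positivity)  # True: contradiction

# Even with a lower FPR of 0.1% and true prevalence of 0.1%,
# over half of the positives would be false:
print(round(false_positive_fraction(0.001, 0.9, 0.001), 2))  # 0.53
```

This is the shape of the argument, not a claim about the actual lab procedures in use; labs may retest positives, which changes the effective FPR substantially.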

I expect people closer to the Berkeley department of health would have better information one way or another. A little caution is warranted in telling people COVID is gone, since unvaccinated people dropping all precautions and emerging en masse would not necessarily be herd immune.

Replies from: ChristianKl
comment by ChristianKl · 2021-05-11T10:34:56.252Z · LW(p) · GW(p)

I'm having a hard time pinning down exactly what the false-positive rate of COVID-19 PCR is, probably due to the variety of labs and test kits, but a lot of estimates I've seen have been higher than that.

That should make you update towards those estimates being faulty, because they can't be true; don't just round them down.

I don't think the Berkeley department of health is as stupid as you propose. In cases where the test has a false-positive rate like that I would expect them to test positively tested people another time to make sure that they are actually positive. 

comment by jimrandomh · 2020-04-04T05:45:01.934Z · LW(p) · GW(p)

This tweet raised the question of whether masks really are more effective if placed on sick people (blocking outgoing droplets) or if placed on healthy people (blocking incoming droplets). Everyone in public or in a risky setting should have a mask, of course, but we still need to allocate the higher-quality vs lower-quality masks somehow. When sick people are few and are obvious, and masks are scarce, masks should obviously go on the sick people. However, COVID-19 transmission is often presymptomatic, and masks (especially lower-quality improvised masks) are not becoming less scarce over time.

If you have two people in a room and one mask, one infected and one healthy, which person should wear the mask? Thinking about the physics of liquid droplets, I think the answer is that the infected person should wear it.

  1. A mask on a sick person prevents the creation of fomites; masks on healthy people don't.
  2. Outgoing particles have a larger size and shrink due to evaporation, so they'll penetrate a mask less, given equal kinetic energy. (However, kinetic energies are not equal; they start out fast and slow down, which would favor putting the mask on the healthy person. I'm not sure how much this matters.)
  3. Particles that stick to a mask but then un-stick lose their kinetic energy in the process, which helps if the mask is on the sick person, but doesn't help if the mask is on the healthy person.

Overall, it seems like for a given contact-pair, a mask does more good if it's on the sick person. However, mask quality also matters in proportion to the number of healthy-sick contacts it affects; so, upgrading the masks of all of the patients in a hospital would help more than upgrading the masks of all the workers in that hospital, but since the patients outnumber the workers, upgrading the workers' masks probably helps more per-mask.

Replies from: MakoYass
comment by MakoYass · 2020-04-19T01:20:01.091Z · LW(p) · GW(p)

Wearing a surgical mask, I get the sense it tends to form more of a seal when inhaling, less when exhaling. (like a valve). If this is common, it would be a point in favour of having the healthy person wear them.

comment by jimrandomh · 2020-10-24T05:24:09.854Z · LW(p) · GW(p)

This was initially written in response to "Communicating effective altruism better--Jargon" by Rob Wiblin (Facebook link), but stands alone well and says something important. Rob argues that we should make more of an effort to use common language and avoid jargon, especially when communicating to audiences outside of our subculture.

I disagree.

If you're writing for a particular audience and can do an editing pass, then yes, you should cut out any jargon that your audience won't understand. A failure to communicate is a failure to communicate, and there are no excuses. For public speaking and outreach, your suggestions are good.

But I worry that people will treat your suggestions as applying in general, and try to extinguish jargon terms from their lexicon. People have only a limited ability to code-switch. Most of the time, there's no editing pass, and the processes of writing and thinking are commingled. The practical upshot is that people are navigating a tradeoff between using a vocabulary that's widely understood outside of their subculture, and using the best vocabulary for thinking clearly and communicating within their subculture.

When it comes to thinking clearly, some of the jargon is load-bearing. Some of it is much more load-bearing than it looks. On the margin, people should be using jargon more.

I'm the author of Rationality Cardinality (http://carddb.rationalitycardinality.com/card/all/). The premise of the game is, I curated a collection of concepts that I thought it was important for people to be familiar with, optimized the definitions, and mixed them together with some jokes. I've given a lot of thought to what makes good jargon terms, and the effects that using and being immersed in jargon has on people.

I'm also a developer of LessWrong, a notoriously jargon-heavy site. We recently integrated a wiki, and made it so that if a jargon term links to the appropriate wiki page, you can hover over it for a quick definition. In the medium to long term, we hope to also have some mechanisms for getting jargon terms linked without the post author needing to do it, like having readers submit suggested linkifications, or a jargon-bot similar to what they have on the SpaceX wiki (which scans for keywords and posts a comment with definitions of all of them).
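A minimal sketch of the jargon-bot idea might look like the following. The glossary entries here are invented for illustration; the real mechanism would pull definitions from the wiki:

```python
import re

# Hypothetical glossary; in practice this would be loaded from the wiki.
GLOSSARY = {
    "steelmanning": "giving the strongest version of a position you disagree with",
    "Moloch": "a personification of multipolar coordination failures",
}

def jargon_comment(post_text: str) -> str:
    """Scan a post for known jargon terms and build a definitions comment,
    one 'term: definition' line per term found."""
    found = [term for term in GLOSSARY
             if re.search(r"\b" + re.escape(term) + r"\b", post_text, re.IGNORECASE)]
    return "\n".join(f"{term}: {GLOSSARY[term]}" for term in sorted(found))

print(jargon_comment("I tried steelmanning the argument about Moloch."))
```

A real version would need to handle inflected forms, avoid matching inside quotes and code, and rate-limit itself, but the core is just keyword scanning against a curated glossary.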

Jargon condenses ideas, but the benefit of condensation isn't speed. Short phrases are more accessible to our thoughts, and more composable. The price of replacing "steelmanning" with "giving the best defense of a position" is to less often notice that steelmanning is an option, or that someone is doing it. The price of replacing "Moloch" with "coordination problems" is to stop noticing when what look like villain-shaped problems are actually coordination problems instead.

Much of our jargon is writers' crystallized opinions about which concepts we should have available, and the jargon is the mechanism for doing so. If we reject those opinions, we will not notice what we fail to notice. We will simply see less clearly.

Appendix: A few illustrative examples from the slides

If I replaced the term "updated" with "changed my mind" in my lexicon, then I'd get tripped up whenever I wanted to tell someone my probability estimate had gone from 10% to 20%, or (worse) when I wanted to tell them my probability estimate had gone up, but didn't want to commit to a new estimate. Ie, the power of the word "updating" is not that it's extra precise, it's that it's *imprecise* in a way that's useful.

Replacing "agenty" with "proactive and independent-minded" feels like obliterating the concept entirely, in a way that feels distinctly Orwellian. I think what's actually going on here is that this concept requires a lot more words to communicate, but it also happens to be a concept that the villains in Orwell's universe would actually try to erase, and this substitution would actually erase it.

Replacing "credence" with "estimate of the probability" would imply the existence of a person-independent probability to be argued over. This is a common misunderstanding, attached to a conversational trap, and this trap is enough of a problem in practice that I think I'd rather be occasionally inscrutable than lead people into it.

Replies from: Viliam, mikkel-wilson
comment by Viliam · 2020-10-25T18:01:05.190Z · LW(p) · GW(p)

Now I would like to see an article that would review the jargon, find the nearest commonly used term for each term, and explain the difference the way you did (or possibly admit that there is no important difference).

comment by MikkW (mikkel-wilson) · 2020-10-28T02:22:50.603Z · LW(p) · GW(p)

Why does the link for rationality cardinality go through facebook?

Replies from: jimrandomh
comment by jimrandomh · 2020-10-28T18:48:00.717Z · LW(p) · GW(p)

This comment was crossposted with Facebook, and Facebook auto-edited the link while I was editing it there. Edited now to make it a direct link.

comment by jimrandomh · 2019-07-04T17:22:49.463Z · LW(p) · GW(p)

The discussion so far on cost disease seems pretty inadequate, and I think a key piece that's missing is the concept of Hollywood Accounting. Hollywood Accounting is what happens when you have something that's extremely profitable, but which has an incentive to not be profitable on paper. The traditional example, which inspired the name, is when a movie studio signs a contract with an actor to share a percentage of profits; in that case, the studio will create subsidiaries, pay all the profits to the subsidiaries, and then declare that the studio itself (which signed the profit-sharing agreement) has no profits to give.

In the public contracting sector, you have firms signing cost-plus contracts, which are similar; the contract requires that profits don't exceed a threshold, so they get converted into payments to de-facto-but-not-de-jure subsidiaries, favors, and other concealed forms. Sometimes this involves large dead-weight losses, but the losses are not the point, and are not the cause of the high price.

In medicine, there are occasionally articles which try to figure out where all the money is going in the US medical system; they tend to look at one piece, conclude that that piece isn't very profitable so it can't be responsible, and move on. I suspect this is what's going on with the cost of clinical trials, for example; they aren't any more expensive than they used to be, they just get allocated a share of the profits from R&D ventures that're highly profitable overall.

Replies from: pktechgirl
comment by Elizabeth (pktechgirl) · 2019-07-04T20:27:18.131Z · LW(p) · GW(p)
they aren't any more expensive than they used to be, they just get allocated a share of the profits from R&D ventures that're highly profitable overall.

Did you mean "allocated a share of the costs"? If not, I am confused by that sentence.

Replies from: jimrandomh
comment by jimrandomh · 2019-07-04T20:46:52.061Z · LW(p) · GW(p)

I'm pretty uncertain how the arrangements actually work in practice, but one possible arrangement is: You have two organizations, one of which is a traditional pharmaceutical company with the patent for an untested drug, and one of which is a contract research organization. The pharma company pays the contract research organization to conduct a clinical trial, and reports the amount it paid as the cost of the trial. They have common knowledge of the chance of success, of the probability distribution of future revenue for the drug, of how much it costs to conduct the trial, and of how much it costs to insure away the risks. So the amount the first company pays to the second is the cost of the trial, plus a share of the expected profit.

Pharma companies making above-market returns are subject to political attack from angry patients, but contract research organizations aren't. So if you control both of these organizations, you would choose to allocate all of the profits to the second organization, so you can defend yourself from claims of gouging by pleading poverty.
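To make the arrangement concrete, here is a toy version of the accounting. All the numbers are invented for illustration; nothing here is sourced from real pharma financials.

```python
# Illustrative sketch of the profit-shifting arrangement described above.
# All figures are made up; the point is the accounting identity.

true_trial_cost = 50_000_000   # what running the trial actually costs
expected_profit = 150_000_000  # shared expected profit from the venture

# The pharma company pays the commonly-controlled contract research
# organization (CRO) a fee that bundles the real cost with the profit...
cro_fee = true_trial_cost + expected_profit

# ...and then reports that entire fee as "the cost of the clinical trial".
reported_trial_cost = cro_fee

# Observers who look only at the pharma company's books see a trial that
# appears 4x more expensive than it really is, and a company with thin
# margins -- while the CRO quietly books the profit.
overstatement = reported_trial_cost / true_trial_cost
print(overstatement)  # → 4.0
```

With these hypothetical figures, the "cost of trials" line item absorbs the entire expected profit, which is exactly the pattern that makes each individual piece of the system look unprofitable when examined in isolation.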

Replies from: pktechgirl
comment by Elizabeth (pktechgirl) · 2019-07-04T21:05:40.922Z · LW(p) · GW(p)

Ah, that makes sense. Thanks for explaining.

comment by jimrandomh · 2020-08-15T18:20:02.630Z · LW(p) · GW(p)

Suppose LessWrong had a coauthor-matchmaking feature. There would be a section where you could see other peoples' ideas for posts they want to write, and message them to schedule a collaboration session. You'd be able to post your own ideas, to get collaborators. There would be some quality-sorting mechanism so that if you're a high-tier author, you can restrict the visibility of your seeking-collaborators message to other high-tier authors.

People who've written on LessWrong, and people who've *almost* written on LessWrong but haven't quite gotten a post out: Would you use this feature? If so, how much of a difference do you think it would make in the quantity and quality of your writing?

Replies from: MakoYass
comment by MakoYass · 2020-08-16T06:18:56.778Z · LW(p) · GW(p)

I think it could be very helpful, if only for finding people to hold me to account and encourage me to write. Showing me that someone gets what I want to do, and would appreciate it.

comment by jimrandomh · 2019-07-09T02:20:15.059Z · LW(p) · GW(p)

Among people who haven't learned probabilistic reasoning, there's a tendency to push the (implicit) probabilities in their reasoning to the extremes; when the only categories available are "will happen", "won't happen", and "might happen", too many things end up in the will/won't buckets.

A similar, subtler thing happens to people who haven't learned the economics concept of elasticity. Some example (fallacious) claims of this type:

  • Building more highway lanes will cause more people to drive (induced demand), so building more lanes won't fix traffic.
  • Building more housing will cause more people to move into the area from far away, so additional housing won't decrease rents.
  • A company made X widgets, so there are X more widgets in the world than there would be otherwise.

This feels like it's in the same reference class as the traditional logical fallacies, and giving it a name - "zero elasticity fallacy" - might be enough to significantly reduce the rate at which people make it. But it does require a bit more concept-knowledge than most of the traditional fallacies, so, maybe not? What happens when you point this out to someone with no prior microeconomics exposure, and does logical-fallacy branding help with the explanation?
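A toy linear market makes the distinction concrete. The numbers below are invented, and only the direction of the effects matters:

```python
# Toy linear housing market illustrating the zero-elasticity fallacy.
# Demand is downward-sloping but finite: quantity demanded = 3000 - rent.

def clearing_rent(stock):
    # The market clears when quantity demanded equals the housing stock.
    return 3000 - stock

before = clearing_rent(1000)  # rent with 1000 units: 2000
after = clearing_rent(1200)   # build 200 more units: rent falls to 1800

# Both things are true at once: every new unit fills up ("induced
# demand") AND the price falls. The fallacy is concluding from the
# first observation that the second can't happen.
assert after < before
```

The same structure applies to the highway example: new lanes filling up is evidence of high demand, not evidence that the added capacity produced no benefit.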

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2019-07-09T12:50:32.584Z · LW(p) · GW(p)
Building more highway lanes will cause more people to drive (induced demand), so building more lanes won't fix traffic.

Is this really fallacious? I'm asking because while I don't know the topic personally, I have some friends who are really into city planning. They've said that this is something which is pretty much unambiguously accepted in the literature, now that we've had the time to observe lots and lots of failed attempts to fix traffic by building more road capacity.

A quick Googling seemed to support this, bringing up e.g. this article which mentions that:

In this paper from the Victoria Transport Policy Institute, author Todd Litman looks at multiple studies showing a range of induced demand effects. Over the long term (three years or more), induced traffic fills all or nearly all of the new capacity. Litman also modeled the costs and benefits for a $25 million line-widening project on a hypothetical 10-kilometer stretch of highway over time. The initial benefits from congestion relief fade within a decade.
Replies from: habryka4
comment by habryka (habryka4) · 2019-07-11T01:53:19.186Z · LW(p) · GW(p)

Yeah, I do agree that for the case of traffic, elasticity is pretty close to 1. Importantly, that doesn't mean building more capacity is a bad idea; it's actually indicative of demand for road capacity being really high, meaning the marginal value of building more is likely also really high.

comment by jimrandomh · 2020-09-10T22:05:55.915Z · LW(p) · GW(p)

Vitamin D reduces the severity of COVID-19, with a very large effect size, in an RCT.

Vitamin D has a history of weird health claims around it failing to hold up in RCTs (this SSC post has a decent overview). But, suppose the mechanism of vitamin D is primarily immunological. This has a surprising implication:

It means negative results in RCTs of vitamin D are not trustworthy.

There are many health conditions where having had a particular infection, especially a severe case of that infection, is a major risk factor. For example, 90% of cases of cervical cancer are caused by HPV infection. There are many known infection-disease pairs like this (albeit usually with smaller effect size), and presumably many unknown pairs as well.

Now suppose vitamin D makes you resistant to getting a severe case of a particular infection, which increases risk of a cancer at some delay. Researchers do an RCT of vitamin D for prevention of that kind of cancer, and their methodology is perfect. Problem: What if that infection wasn't common at the time and place the RCT was performed, but is common somewhere else? Then the study will give a negative result.

This throws a wrench into the usual epistemic strategies around vitamin D, and around every other drug and supplement where the primary mechanism of action is immune-mediated.
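A minimal deterministic sketch of this argument, with made-up probabilities: in this model, vitamin D's only effect is reducing severe cases of an infection that raises later cancer risk.

```python
# All probabilities are invented for illustration.

def expected_cancer_rate(infection_prevalence, takes_vitamin_d):
    baseline_risk = 0.01                    # cancer risk with no infection
    # Vitamin D cuts the chance that an infection becomes severe.
    p_severe = infection_prevalence * (0.2 if takes_vitamin_d else 0.6)
    return baseline_risk + 0.05 * p_severe  # severe infection adds risk

def rct_effect(infection_prevalence):
    # Difference in cancer rates between control and treatment arms.
    return (expected_cancer_rate(infection_prevalence, False)
            - expected_cancer_rate(infection_prevalence, True))

# Where the infection circulates, the RCT finds a real protective effect;
# where it doesn't, a methodologically perfect RCT finds exactly nothing.
print(round(rct_effect(0.5), 6))  # 0.01
print(rct_effect(0.0))            # 0.0
```

The external validity failure is entirely invisible from inside the trial: both studies are flawless, and they measure different things.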

Replies from: capybaralet
comment by capybaralet · 2020-09-15T07:27:21.193Z · LW(p) · GW(p)

Sounds like a very general criticism that would apply to any effect that is very strong/consistent in circumstances where a very-high-variance (e.g. binary) latent variable takes on a certain value (and the effect is 0 otherwise...).

I wonder how meta-analyses typically deal with that...(?) http://rationallyspeakingpodcast.org/show/rs-155-uri-simonsohn-on-detecting-fraud-in-social-science.html suggested that very large anomalous effects are usually evidence of fraud, and that meta-analyses may try to prevent a single large effect size study from dominating (IIRC).

comment by jimrandomh · 2021-01-25T18:31:25.120Z · LW(p) · GW(p)

What those drug-abuse education programs we all went though should have said:

It is a mistake to take any drug until after you've read its Wikipedia page, especially the mechanism, side effects, and interactions sections, and its Erowid page, if applicable. All you children on Ritalin right now, your homework is to go catch up on your required reading and reflect upon your mistake. Dismissed.

(Not a vagueblog of anything recent, but sometimes when I hear about peoples' recreational-drug or medication choices, I feel like Quirrell in HPMOR chapter 26, discussing a student who cast a high-level curse without knowing what it did.)

comment by jimrandomh · 2021-03-16T05:47:35.634Z · LW(p) · GW(p)

It's looking likely that the pandemic will de facto end on the Summer Solstice.

Biden promised vaccine availability for everyone on May 1st. May 1st plus two weeks to get appointments plus four weeks spacing between two doses of Moderna plus one week waiting for full effectiveness, is June 19. The astronomical solstice is June 20, which is a Sunday.
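The date arithmetic can be checked directly:

```python
from datetime import date, timedelta

# May 1 availability, plus two weeks to get an appointment, four weeks
# between Moderna doses, and one week for full effectiveness.
fully_protected = date(2021, 5, 1) + timedelta(weeks=2 + 4 + 1)

print(fully_protected)                  # 2021-06-19
print(fully_protected.strftime("%A"))   # Saturday
```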

Things might not go to plan, if the May 1st vaccine-availability deadline is missed, or a vaccine-evading strain means we have to wait for a booster. No one's organizing the details yet, as far as I know. But with all those caveats aside:

It's going to be a hell of a party.

Replies from: Measure
comment by Measure · 2021-03-16T14:06:51.834Z · LW(p) · GW(p)

My understanding was that the May 1st date was "Everyone's now allowed to sign up for an appointment, but you may be at the end of a long queue." How long after that do you think it will take to get a vaccine to everyone who wants one?

Replies from: gerald-monroe
comment by Gerald Monroe (gerald-monroe) · 2021-03-18T03:26:25.835Z · LW(p) · GW(p)

Currently, 2.4 million shots/day.  Note that throughput is always limited by the rate-limiting step, and there are many bottlenecks, so taking the 'current' rate and extrapolating only a modest increase is the most conservative estimate.

210 million adults.  Only 0.7 of them need to be vaccinated for the risk to plummet for everyone else.   A quick bit of napkin math says we need 294 million doses to fully vaccinate that 70%, and we are at 52 million now.  (294-52) = 242 million / 2.4 = 100.8 more days.

This is why the lesser J&J vaccine is actually so useful - if we switched all the vaccine clinics and syringe supplies to J&J overnight (if there was enough supply of the vaccine itself) suddenly we only need 121 million doses to vaccinate everyone, or 50.4 more days.
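The napkin math above, spelled out with the same mid-March 2021 figures:

```python
# Figures as quoted in the comment above; all approximate.
adults = 210e6
coverage = 0.7       # fraction of adults that need vaccinating
doses_given = 52e6
shots_per_day = 2.4e6

# Two-dose vaccines: doses needed to cover 70% of adults.
two_dose_total = adults * coverage * 2  # 294 million
days_two_dose = (two_dose_total - doses_given) / shots_per_day

# One-dose J&J: the remaining need is halved.
days_one_dose = (two_dose_total - doses_given) / 2 / shots_per_day

print(round(days_two_dose, 1))  # 100.8
print(round(days_one_dose, 1))  # 50.4
```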

The reality is that increasing efforts will probably help, and the J&J is helping, but sooner or later a bottleneck will be hit that can't be bypassed quickly (like a syringe shortage), so I would predict the real number of days to fall in that (50, 100) day interval.

There are 94 days between now and June 19.  Also, a certain percentage of the population are going to refuse the shot in order to be contrarian, or because they earnestly believe their aunt's Facebook rants.  Moreover, the 'get an appointment' game means the tech-savvy/people who read Reddit get an advantage over folks who aren't.

So for those of us reading this who don't yet qualify, it doesn't appear that it will be much longer.  

comment by jimrandomh · 2020-06-10T04:28:01.197Z · LW(p) · GW(p)

Twitter is an unusually angry place. One reason is that the length limit makes people favor punchiness over tact. A less well-known reason is that in addition to notifying you when people like your own tweets, it gives a lot of notifications for people liking replies to you. So if someone replies to disagree, you will get a slow drip of reminders, which will make you feel indignant.

LessWrong is a relatively calm place, because we do the opposite: under default settings, we batch upvote/karma-change notifications together to only one notification per day, to avoid encouraging obsessive-refresh spirals.

Replies from: Pattern
comment by Pattern · 2020-06-11T16:27:52.894Z · LW(p) · GW(p)

I also think there's less engagement on LW.* While it might depend on the part of Twitter, there's a lot more replying going on there. Sometimes it seems like there are 100 replies to a tweet, in contrast to posts with zero comments. This necessarily means replies will overlap a lot more than they do on LW. Imagine getting 3 distinct comments on a short post on LW, versus a thread of tweets with 30 responses that mostly boil down to the same 3 responses, sent because people respond without seeing other responses. (And if there are hundreds of very similar responses, asking people to read responses is asking people to read a very boring epic.)

And getting one critical reply, versus the same critical reply from 10 people, even when it's the same fraction of responses, probably affects people differently - if only because it's annoying to see the same message over and over again.

*This could be the case (the medium probably helps) even if that engagement was all positive.

comment by jimrandomh · 2020-02-10T02:21:34.520Z · LW(p) · GW(p)

Some software costs money. Some software is free. Some software is free, with an upsell that you might or might not pay for. And some software has a negative price: not only do you not pay for it, but someone third party is paid to try to get you to install it, often on a per-install basis. Common examples include:

  • Unrelated software that comes bundled with software you're installing, which you have to notice and opt out of
  • Software advertised in banner ads and search engine result pages
  • CDs added to the packages of non-software products

This category of software is frequently harmful, but I've never seen it called out by this economic definition. For laypeople, about 30% of computer security is recognizing the telltale signs of this category of software, and refusing to install it.

Replies from: Viliam, mr-hire
comment by Viliam · 2020-02-10T21:43:27.840Z · LW(p) · GW(p)

I wonder what would be a non-software analogy of this.

Perhaps those tiny packages with labels "throw away, do not eat" you find in some products. That is, in a parallel world where 99% of customers would actually eat them anyway. But even there it isn't obvious how the producer would profit from them eating the thing. So, no good analogy.

comment by Matt Goldenberg (mr-hire) · 2020-02-10T23:44:50.729Z · LW(p) · GW(p)

I'm trying to wrap my head around the negative price distinction. A business can't be viable if the cost of user acquisition is higher than the lifetime value of a user.

Most software companies spend money on advertising, and then have to make that money back somehow. In a direct business model, they'll charge the users of the software directly. In an indirect business model, they'll charge a third party for access to the users or to an asset that the users have. Facebook is more of an indirect business model, where they charge advertisers for access to the users' attention and data.

In my mind, the above is totally fine. I choose to pay with my attention and data as a user, and know that they will be sold to advertisers. Viewing this as "negatively priced" feels like a convoluted way to understand the business model, however.

Some malware makes money by trying to hide the secondary market they're selling into. For instance, by sneaking in a default browser search that sells your attention to advertisers, or selling your computer's idle time to a botnet without your permission. This is egregious in my opinion, but it's not the indirect business model that is bad here, it's the hidden costs that they lie about or obfuscate.

Replies from: jimrandomh
comment by jimrandomh · 2020-02-11T19:05:03.917Z · LW(p) · GW(p)

User acquisition costs are another frame for approximately the same heuristic. If software has ads in an expected place, and is selling data you expect them to sell, then you can model that as part of the cost. If, after accounting for all the costs, it looks like the software's creator is spending more on user acquisition than they should be getting back, it implies that there's another revenue stream you aren't seeing, and the fact that it's hidden from you implies that you probably wouldn't approve of it.
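As arithmetic, the heuristic is just the following (all figures hypothetical):

```python
# Hidden-revenue-stream heuristic, with invented example numbers.

def implied_hidden_revenue(acquisition_cost, visible_lifetime_value):
    # If the vendor pays more per install than the revenue you can see,
    # the difference must come from a revenue stream you can't see.
    return max(0.0, acquisition_cost - visible_lifetime_value)

# e.g. a vendor paying $3.00 per install of "free" software whose
# visible ads are worth perhaps $0.50 per user:
print(implied_hidden_revenue(3.00, 0.50))  # 2.5
```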

Replies from: mr-hire
comment by Matt Goldenberg (mr-hire) · 2020-02-11T19:26:03.020Z · LW(p) · GW(p)

Ahhh I see, so you're making roughly the same distinction of "hidden revenue streams".

comment by jimrandomh · 2021-02-01T19:25:36.020Z · LW(p) · GW(p)

Lack-of-adblock is a huge mistake. On top of the obvious drain on attention, slower loading times everywhere, and surveillance, ads are also one of the top mechanisms by which computers get malware.

When I look over someone's shoulder and see ads, I assume they were similarly careless in their choice of which books to read.

Replies from: SaidAchmiz, ete
comment by Said Achmiz (SaidAchmiz) · 2021-02-01T23:59:33.260Z · LW(p) · GW(p)

Note that many people don’t know about ad blockers:

As usual, I use Google Surveys to run a weighted population survey. On 2019-03-16, I launched a n = 1000 one-question survey of all Americans with randomly reversed order, with the following results: […]

… I am however shocked by the percentage claiming to not know what an adblocker is: 72%! I had expected to get something more like 10–30%. As one learns reading surveys, a decent fraction of every population struggles with basic questions like whether the Earth goes around the Sun or vice-versa, so I would be shocked if they knew of ad blockers but I expected the remaining 50%, who are driving this puzzle of “why advertising avoidance but not adblock installation?”, to be a little more on the ball, and be aware of ad blockers but have some other reason to not install them (if only myopic laziness).

But that appears to not be the case. There are relatively few people who claim to be aware of ad blockers but not be using them, and those might just be mobile users whose browsers (specifically, Chrome, as Apple’s Safari/iOS permitted adblock extensions in 2015), forbid ad blockers.

(I highly recommend reading that entire section of the linked page, where gwern describes the results of several follow-up surveys he ran, and conclusions drawn from them.)

comment by plex (ete) · 2021-02-01T21:06:05.067Z · LW(p) · GW(p)

One day we will be able to wear glasses which act as adblock for real life, replacing billboards with scenic vistas. 

Replies from: mr-hire
comment by Matt Goldenberg (mr-hire) · 2021-02-02T03:40:02.717Z · LW(p) · GW(p)

And they will also be able to do the opposite, placing ads over scenic vistas

Replies from: Viliam
comment by Viliam · 2021-02-05T12:28:54.091Z · LW(p) · GW(p)

They will also send data about "what you looked at, how long" to Google servers, to prepare even better customized ads for you.

But people will be more worried about giant pop-up ads suddenly covering their view while they are trying to cross the street.

comment by jimrandomh · 2021-04-02T17:44:49.705Z · LW(p) · GW(p)

Some people have a sense of humor. Some people pretend to be using humor, to give plausible deniability to their cruelty. On April 1st, the former group becomes active, and the latter group goes quiet.

This is too noisy to use for judging individuals, but it seems to work reasonably well for evaluating groups and cultures. Humor-as-humor and humor-as-cover weren't all that difficult to tell apart in the first place, but I imagine a certain sort of confused person could be pointed at this in order to make the distinction salient.

Replies from: Yoav Ravid
comment by Yoav Ravid · 2021-04-03T05:28:44.118Z · LW(p) · GW(p)

I'm not sure that's true. I think the second kind also uses April 1st as a way to justify more cruelty than usual.

comment by jimrandomh · 2021-03-03T01:29:53.414Z · LW(p) · GW(p)

There is a rumor of RSA being broken. By which I mean something that looks like a strange hoax made it to the front on Hacker News. Someone uploaded a publicly available WIP paper on integer factorization algorithms by Claus Peter Schnorr to the Cryptology ePrint Archive, with the abstract modified to insert the text "This destroyes the RSA cryptosystem." (Misspelled.)

Today is not the Recurring Internet Security Meltdown Day. That happens once every month or two, but not today in particular.

But this is a good opportunity to point out a non-obvious best practice around cryptographic key-sizes, which is this: Whatever key size is accepted as the standard, you want your SSH keys and your PGP keys to be one size bigger, so that if a gradually rising tide of mathematical advances causes a cryptography meltdown, you won't be caught in the wave where everyone else gets pwned at once.

So I recommend making sure, if you're using RSA for your SSH keys, that they are 4096-bit (as opposed to the current ssh-keygen default of 3072-bit).
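Assuming a standard OpenSSH setup, generating a larger key and checking the size of an existing one looks like this (the file paths are just examples; adjust to taste):

```shell
# Generate a new 4096-bit RSA keypair.
ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa_4096

# Check the size of an existing key; the first field of the
# output is the bit length.
ssh-keygen -l -f ~/.ssh/id_rsa.pub
```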

Replies from: gerald-monroe
comment by Gerald Monroe (gerald-monroe) · 2021-03-05T08:58:19.253Z · LW(p) · GW(p)

While this sounds cool, what sort of activities are you thinking you need to encrypt?  Consider the mechanisms for how information leaks.

  a. Are you planning or coordinating illegal acts? The way you get caught is one of your co-conspirators reporting you.

  b. Are you protecting your credit card and other financial info? The way it leaks is through a third-party handler, not your own machine.

  c. Protecting trade secrets? The way they get leaked is a coworker copying the info and bringing it to a competitor.

  d. Protecting crypto? Use an offline wallet. Too much protection and you will have the opposite problem.

Countless people - probably a substantial fraction of the entire population, maybe the majority - have had their credit and identity records leaked in various breaches.  They have easily hackable webcams exposed on the internet.  Skimmers capture their credit cards periodically.  And... nothing major happens to them.

comment by jimrandomh · 2021-04-14T16:10:18.996Z · LW(p) · GW(p)

On October 26, 2020, I submitted a security vulnerability report to the Facebook bug bounty program. The submission was rejected as a duplicate. As of today (April 14), it is still not fixed. I just resubmitted, since it seems to have fallen through the cracks or something. However, I consider all my responsible disclosure responsibilities to be discharged.

Once an Oculus Quest or Oculus Quest 2 is logged in to a Facebook account, its login can't be revoked. There is login-token revocation UI in Facebook's Settings>Security and Login menu, but changing the account password and revoking the login there does not work.

One practical impact of this is that if your Facebook account is ever compromised, and the attacker uses this vulnerability, they have permanent access.

The other practical impact is that if someone has unsupervised access to your unlocked Quest headset, and they use the built-in web browser to go to facebook.com, they have full access to your Facebook account, including Messenger, without having to do anything special at all. This means that if you've ever made a confidentiality agreement regarding something you discussed on Facebook Messenger, you probably can't lend your headset to anyone, ever.

Additionally, the lock-screen on the Oculus Quest 2 does not have a strict enough rate limit; it gives unlimited tries at 2/minute, so trying all lock-screen combinations takes approximately 35 days. This can be done without network access, and can be automated with some effort. So if someone steals a *locked* Oculus Quest 2, they can also use that to break into your Facebook account. There is almost certainly a much faster way to do this involving disassembling the device, but this is bad enough.
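For the timing claim: the exact size of the Quest's pattern space isn't stated above, but ~100,000 combinations is what the quoted ~35 days at 2 tries/minute implies, so that figure is used here as an assumption.

```python
# Back-of-envelope for the lock-screen brute force described above.

def days_to_exhaust(combinations, tries_per_minute=2):
    return combinations / (tries_per_minute * 60 * 24)

# ~100,000 combinations is inferred from the stated ~35 days,
# not from any published spec.
print(round(days_to_exhaust(100_000), 1))  # 34.7
```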

Replies from: BossSleepy
comment by Randomized, Controlled (BossSleepy) · 2021-04-14T19:03:43.419Z · LW(p) · GW(p)

Is your logic that releasing this heinous vuln to the public is more likely to pressure FB to do something about it? Because if so, I'm not sure that LW is a forum with enough public spotlight to generate pressure. OTOH, I imagine some percentage of readers here aren't well-aligned but are looking for informational edge, in which case it's possible this does more harm than good?

I'm not super-confident in this model -- eg, it also seems entirely possible to me that lots of FB security engineers read the site and one or more will be shouting ZOMG! any moment over this.

Replies from: jimrandomh
comment by jimrandomh · 2021-04-15T00:09:06.452Z · LW(p) · GW(p)

I'm posting here (cross-posted with my FB wall and Twitter) mostly to vent about it, and to warn people that sharing VR headsets has infosec implications they may not have been aware of. I don't think this comment will have much effect on Facebook's actions.

comment by jimrandomh · 2020-02-27T19:53:20.056Z · LW(p) · GW(p)

The Diamond Princess cohort has 705 positive cases, of which 4 are dead and 36 serious or critical. In China, the reported ratio of serious/critical cases to deaths is about 10:1, so figure there will be 3.6 more deaths. From this we can estimate a case fatality rate of 7.6/705 ~= 1%. Adjust upward to account for cases that have not yet progressed from detection to serious, and downward to account for the fact that the demographics of cruise ships skew older. There are unlikely to be any undetected cases in this cohort.

Replies from: steve2152, Dagon
comment by Steven Byrnes (steve2152) · 2020-02-27T21:10:43.280Z · LW(p) · GW(p)

Hang on, maybe I'm being stupid, but I don't get the 3.6. Why not say 36+4=40 serious/critical cases and the 10%=4 of them have already passed away?

Replies from: jimrandomh
comment by jimrandomh · 2020-02-27T21:25:28.713Z · LW(p) · GW(p)

You're right, adding deaths+.1*serious the way I did seems incorrect. But, since not all of the serious cases have recovered yet, that would seem to imply that the serious:deaths ratio is worse in the Diamond Princess than it is in China, which would be pretty strange. It's not clear to me that the number of serious cases is as up to date as the number of positive tests.

So, widen the error bars some more I guess?

comment by Dagon · 2020-02-27T20:57:54.057Z · LW(p) · GW(p)

How many passengers were exposed? Capacity of 2670; I haven't seen (and haven't looked that hard for) how many actual passengers and crew were aboard when the quarantine started. So maybe over 1/4 of those exposed became positive, 6% of those positive became serious, and 10% of those fatal.

Assuming it escapes quarantine and most of us are exposed at some point, that leads to an estimate of 0.0015 (call it 1/6 of 1%) of fatality. Recent annual deaths are 7.7 per 1000, so best guess is this adds 20%, assuming all deaths happen in the first year and any mitigations we come up with don't change the rate by much. I don't want to downplay 11.5 million deaths, but I also don't want to overreact (and in fact, I don't know how to overreact usefully).
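The chain of fractions above, written out (rough figures from this thread):

```python
# Dagon's estimate as a chain of conditional fractions; all rough.
p_positive_given_exposed = 705 / 2670  # ~26% of those aboard
p_serious_given_positive = 40 / 705    # ~6% (serious + dead)
p_fatal_given_serious    = 4 / 40      # ~10%

p_fatal = (p_positive_given_exposed
           * p_serious_given_positive
           * p_fatal_given_serious)
print(round(p_fatal, 4))           # 0.0015

# Against a baseline of 7.7 annual deaths per 1000, that's roughly
# a 20% bump if everyone were exposed within one year.
print(round(p_fatal / 0.0077, 2))  # 0.19
```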

I'd love to know how many of the serious cases have remaining disability. The duration and impact of surviving cases could easily be the difference between unpleasantness and disruption that doubles the death rate, and societal collapse that kills 10x or more as many as the disease does directly.