Why did no LessWrong discourse on gain of function research develop in 2013/2014?

post by ChristianKl · 2021-06-18T23:35:35.918Z · LW · GW · 24 comments

This is a question post.

While I wasn't at 80% probability of a lab leak when Eliezer asserted it a month ago, I'm now at 90%. It will take a while for this to filter through society, but I feel like we can already look at what we ourselves got wrong.  

In the 2014 LessWrong survey, more people considered bioengineered pandemics a global catastrophic risk than AI. At the time there was a public debate about gain of function research. One editorial described the risks of gain of function research as follows:

Insurers and risk analysts define risk as the product of probability times consequence. Data on the probability of a laboratory-associated infection in U.S. BSL3 labs using select agents show that 4 infections have been observed over <2,044 laboratory-years of observation, indicating at least a 0.2% chance of a laboratory-acquired infection (5) per BSL3 laboratory-year. An alternative data source is from the intramural BSL3 labs at the National Institutes of Allergy and Infectious Diseases (NIAID), which report in a slightly different way: 3 accidental infections in 634,500 person-hours of work between 1982 and 2003, or about 1 accidental infection for every 100 full-time person-years (2,000 h) of work (6).

A simulation model of an accidental infection of a laboratory worker with a transmissible influenza virus strain estimated about a 10 to 20% risk that such an infection would escape control and spread widely (7). Alternative estimates from simple models range from about 5% to 60%. Multiplying the probability of an accidental laboratory-acquired infection per lab-year (0.2%) or full-time worker-year (1%) by the probability that the infection leads to global spread (5% to 60%) provides an estimate that work with a novel, transmissible form of influenza virus carries a risk of between 0.01% and 0.1% per laboratory-year of creating a pandemic, using the select agent data, or between 0.05% and 0.6% per full-time worker-year using the NIAID data.
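
As a quick sanity check, the quoted figures multiply out exactly as stated. A minimal sketch, using only the numbers from the editorial above:

```python
# Reproduce the editorial's risk multiplication (all inputs quoted above).
p_infection_per_lab_year = 4 / 2044       # >=0.2% per BSL3 lab-year (select agent data)
p_infection_per_worker_year = 1 / 100     # ~1% per full-time worker-year (NIAID data)
p_escape_low, p_escape_high = 0.05, 0.60  # chance a lab infection spreads widely

print(f"select agent data: {p_infection_per_lab_year * p_escape_low:.3%} to "
      f"{p_infection_per_lab_year * p_escape_high:.3%} per lab-year")
print(f"NIAID data: {p_infection_per_worker_year * p_escape_low:.3%} to "
      f"{p_infection_per_worker_year * p_escape_high:.3%} per worker-year")
```

This prints roughly 0.010% to 0.117% per lab-year and 0.050% to 0.600% per worker-year, matching the editorial's 0.01% to 0.1% and 0.05% to 0.6%.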

Even at the lower bar of 0.05% per full-time worker-year, it seems crazy that society continued playing Russian roulette. We could have seen the issue and protested. EAs could have created organizations to fight against gain of function research. Why didn't we speak every Petrov Day about the necessity of stopping gain of function research? Organizations like OpenPhil should go through the Five Whys and model why they messed this up and didn't fund the cause. What needs to change so that we as rationalists and EAs are able to organize against tractable risks that our society takes without good reason?

Answers

answer by habryka · 2021-06-19T00:17:35.771Z · LW(p) · GW(p)

I feel like this just happened? There were a good number of articles written about this; see for example this article by the Global Priorities Project on GoF research: 

http://globalprioritiesproject.org/wp-content/uploads/2016/03/GoFv9-3.pdf 

I also remember a number of other articles by people working on biorisk, but I would have to dig them up. Overall, I had a sense there was a bunch of funding, a bunch of advocacy, and a bunch of research on this topic.

comment by ChristianKl · 2021-06-19T08:55:36.368Z · LW(p) · GW(p)

I searched LessWrong itself for "gain of function" and it didn't bring up much. Searching for it on OpenPhil finds it mentioned a few times, so it seems that while OpenPhil came in contact with the topic, they failed to identify it as a cause area that needed funding. 

All the hits on OpenPhil are from 2017 and earlier, and in 2018 the Trump administration ended the ban on gain of function research. That should have been a moment of public protest by our community.

Replies from: AllAmericanBreakfast
comment by DirectedEvolution (AllAmericanBreakfast) · 2021-06-19T21:29:33.671Z · LW(p) · GW(p)

Are you saying we should have been marching in the streets and putting up banners? Criticizing the end of the ban more in public? Or taking steps against it, somehow, using alternative mechanisms like advocacy with any contacts we might have in the world of virology?

Replies from: ChristianKl
comment by ChristianKl · 2021-06-19T21:51:11.469Z · LW(p) · GW(p)

The first step would be to do similar things as we do with other X-risks. In the case of OpenPhil, the topic should have been important enough for them to task a researcher with summarizing the state of the topic and what should be done. That's the OpenPhil procedure for dealing with topics that matter. 

That analysis might have resulted in the observation that this Marc Lipsitch guy seems to have a good grasp of the subject, and then in funding him with a million per year to do something. 

It's not clear that funding Lipsitch would have been enough, but it would at least have counted as "we tried to do something with our toolkit". 

With research, it's hard to know in advance what you'll find if you invest in a bunch of smart people to think about a topic and how to deal with it.

In retrospect, finding out that the NIH illegally funneled money to Baric and Shi in circumvention of the moratorium imposed by the Office of Science and Technology Policy, and then challenging that publicly, might have prevented this pandemic. Being part of a scandal about illegal transfer of funds would likely have seriously damaged Shi's career, given the importance of being seen as respectable in China. 

Finding that out at the time would have required reading a lot of papers to understand what was going on, but I think it's quite plausible that a researcher who read through the top 200 gain of function research papers attentively and tried to get a good model of what was happening might have caught it. 

Replies from: habryka4
comment by habryka (habryka4) · 2021-06-20T03:29:27.196Z · LW(p) · GW(p)

Some relevant links: 

Don't think they prove anything, but seem useful references.

Replies from: ChristianKl
comment by ChristianKl · 2021-06-23T17:57:03.067Z · LW(p) · GW(p)

I do think they suggest the situation is better than I initially thought, given that funding the Lipsitch/Johns Hopkins Center for Health Security is a good idea.

I read through their report Research and Development to Decrease Biosecurity Risks from Viral Pathogens:

How could the problem eventually be solved or substantially alleviated? We believe that if a subset of the following abilities/resources were developed, the risk of a globally catastrophic pandemic would be substantially reduced:

  • A better selection of well-stocked, broad-spectrum antiviral compounds with low potential for development of resistance
  • Ability to confer immunity against a novel pathogen in fewer than 100 days
  • Widespread implementation of intrinsic biocontainment technologies that can reliably contain viral pathogens in the lab without impairing research
  • Improved countermeasures for non-viral conventional pathogens
  • Rapid, inexpensive, point-of-care diagnostics for all known pathogens
  • Inexpensive, ubiquitous metagenomic sequencing
  • Targeted countermeasures for the most dangerous viral pathogens

I do think that list misses finding ways to reduce gain of function research, and instead encourages it by funding "Targeted countermeasures for the most dangerous viral pathogens". 

Not talking about the tradeoffs between developing measures against viruses and the risk caused by gain of function research seems to me a big omission. Not speaking about the dangers of gain of function research likely reduces conflicts with virologists.

The report suggests to me that they let themselves be conned by researchers who frame developing immunity against a novel pathogen in fewer than 100 days as being about developing new vaccination platforms, when it was mostly about regulation and finding ways to verify drug safety in short amounts of time. 

Fighting for changes in laws about drug regulation means getting into conflicts, while funding vaccine platforms is conflict-free. 

Unsexy approaches, like reducing the number of surfaces touched by multiple people or researching better air filters and humidifiers to reduce transmission of all viruses, are also off the roadmap.

comment by ChristianKl · 2021-06-19T13:50:18.639Z · LW(p) · GW(p)

I have now read the paper, and given what we saw last year, the market mechanism they proposed seems flawed. If an insurance company had been responsible for paying out the damage created by the pandemic, that company would be insolvent and unable to pay for the damage (with all the counterparty risk that comes with a major insurance company going bankrupt). At the same time, the suppression of the lab leak hypothesis would have been even stronger, since the existence of a billion-dollar company would depend on people not believing in the lab leak hypothesis.

In general, the paper only addresses the meta level of how to think about risks generally. What would have been required is to actually think about how high the risk is and communicate that it's serious enough that other people should pay attention. The paper could have cited Marc Lipsitch's risk assessment in the introduction to frame the issue, but instead talked about it in a more abstract way that doesn't get the reader to think the issue is worth paying attention to.

It seems to falsely propagate the idea that the risk was very low by saying "However, in the case of potential pandemic pathogens, even a very low probability of accident could be unacceptable given the consequences of a global pandemic", when the risk estimate that Marc Lipsitch made wasn't of an order that anyone should consider low. 

It seems like the paper was an opportunity to say something general about risk management, and FHI used it to express their general ideas of risk management while failing to actually look at the risk in question.

Just imagine someone saying about AI risk, "Even a very low chance of AI killing all humans is unacceptable. We should get AI researchers and AI companies to buy insurance against the harm created by AI risk." The paper isn't any different from that. 

answer by Anders_H · 2021-06-19T13:24:24.227Z · LW(p) · GW(p)

Here is a data point not directly relevant to Less Wrong, but perhaps to the broader rationality community:  

Around this time, Marc Lipsitch organized a website and an open letter warning publicly about the dangers of gain-of-function research. I was a doctoral student at HSPH at the time, and shared this information with a few rationalist-aligned organizations. I remember making an offer to introduce them to Prof. Lipsitch, so that maybe he could give a talk. I got the impression that the Future of Life Institute had some communication with him, and I see from their 2015 newsletter that there is some discussion of his work, but I am not sure if anything more concrete came out of this.

My impression was that while they considered this important, this was more of a catastrophic risk than an existential risk, and therefore outside their core mission. 

comment by ChristianKl · 2021-06-19T13:39:06.955Z · LW(p) · GW(p)

While this crisis was a catastrophe and not an existential challenge, it's unclear why that has to be the case in general.

The claim that global catastrophic risk isn't part of the FLI mission seems strange to me. It's the thing the Global Priorities Project of CEA focuses on (global catastrophic risk is mentioned more prominently in the Global Priorities Project than X-risk). 

FLI does say on its website that one of its five areas is:

Biotechnology and genetics often inspire as much fear as excitement, as people worry about the possibly negative effects of cloning, gene splicing, gene drives, and a host of other genetics-related advancements. While biotechnology provides incredible opportunity to save and improve lives, it also increases existential risks associated with manufactured pandemics and loss of genetic diversity.

It seems to me like an analysis that treats cloning (and climate change) as an X-risk but not gain of function research is seriously flawed. 

It does seem to me that they messed up in a major way and should do the Five Whys, just as OpenPhil should be required to. 

Treating climate change as an X-risk but not gain of function research suggests too much trust in experts and doing what's politically convenient instead of fighting the battles that are important. This was easy mode, and they messed up. 

Donors to both organizations should request an analysis of what went wrong.

comment by Anders_H · 2021-06-19T13:31:35.344Z · LW(p) · GW(p)

Here is a video of Prof. Lipsitch at EA Global Boston in 2017. I haven't watched it yet, but I would expect him to discuss gain-of-function research:  https://forum.effectivealtruism.org/posts/oKwg3Zs5DPDFXvSKC/marc-lipsitch-preventing-catastrophic-risks-by-mitigating

Replies from: ChristianKl
comment by ChristianKl · 2021-06-19T15:46:44.618Z · LW(p) · GW(p)

He only addresses it indirectly, by saying we shouldn't develop very targeted approaches (which is what gain of function research is about) and should instead fund broader interventions. The talk doesn't mention the specific risk of gain of function research. 

answer by jdfaben · 2021-06-19T11:53:51.767Z · LW(p) · GW(p)

I can't speak for LessWrong as a whole, but I looked into this a little bit around that time, and concluded that actually it looked like things were heading in a sensible direction. In particular, towards the end of 2014, the US government stopped funding gain of function research: https://www.nature.com/articles/514411a, and there seemed to be a growing consensus/understanding that it was dangerous. I think anyone doing (at least surface-level) research in 2014/early 2015 could have reasonably concluded that this wasn't a neglected area. That does leave open the question of what I did wrong in not noticing that the moratorium was lifted 3 years later...

comment by ChristianKl · 2021-06-19T12:12:53.941Z · LW(p) · GW(p)

It seems that when a dangerous practice is stopped pending safety review, it makes sense to schedule a moment in the future to review how the safety review turned out. 

Maybe a way forward would be:

Whenever something done by a lot of scientists is categorically stopped pending safety review, make a Metaculus question about how the safety review is likely to turn out. 

That way, when the safety review turns out negatively, it triggers an event that's seen by a bunch of people, who can then write a LessWrong post about it?

That leaves the question of whether there are any comparable moratoria out there that we should look at more.  

comment by GeneSmith · 2021-06-20T18:23:35.326Z · LW(p) · GW(p)

Eliezer seemed to think that the ban on funding for gain of function research in the US simply led to research grants going to labs outside the US (the Wuhan Institute of Virology in particular). He doesn't really cite any sources here, so I can't do much to fact-check his hypothesis.

Upon further googling, this gets murkier. Here's a very good article that goes into depth about what the NIH did and didn't fund at WIV and whether such research counts as "gain of function research".

Some quotes from the article:

The NIH awarded a $3.4 million grant to the non-profit organization EcoHealth Alliance Inc. over six years, funding research to study the risk of bat coronavirus emergence. This sum of money was administered by the National Institute of Allergy and Infectious Diseases (NIAID), the institute of the NIH directed by Fauci. EcoHealth Alliance then awarded part of the money to the Wuhan Institute of Virology ($598,500 over five years).

...

This framework defined PPP as a pathogen that is “likely highly transmissible” and “likely highly virulent and likely to cause significant morbidity and/or mortality in humans”. An enhanced PPP is one that results “from the enhancement of the transmissibility and/or virulence of a pathogen”. Under this framework, enhanced PPPs do not include pathogens that are naturally circulating and have been recovered from nature.

...

Stanley Perlman, a microbiologist at the University of Iowa, told FactCheck.org that EcoHealth’s research was about “trying to see if these viruses can infect human cells and what about the spike protein on the virus determines that.” According to FactCheck.org, Perlman did not think there was anything in the EcoHealth grant description that would be gain-of-function research.

...

A 2017 study published by researchers at the Wuhan Institute of Virology, listing the NIH as a funding body, appears related to this grant[4]. The researchers wanted to test whether the spike protein of new wild coronaviruses, which they isolated in bats, would allow the coronaviruses to enter human cells.

The problem with studying coronaviruses is that they are hard to culture in the lab[5]. To carry out their study, the researchers used the genetic sequence of a coronavirus (WIV1) that does replicate in vitro (in the lab) and inserted the spike proteins of the newly isolated viruses. In this way, they could test whether the newly isolated viruses could replicate in human cells in a lab dish.

Data included in the publication[4] showed that these experiments did not enhance the viruses’ infectivity. The experiments therefore did not make viruses more dangerous to humans or more transmissible.

There are differing opinions on whether or not what the researchers at WIV did counts as gain of function research:

However, Richard Ebright, professor of chemistry and chemical biology at Rutgers University and a critic of gain-of-function research, told the Washington Post that “the research was—unequivocally—gain-of-function research. The research met the definition for gain-of-function research of concern under the 2014 Pause.”

And Kevin Esvelt, a biologist at the MIT Media Lab, stated in a fact-check by PolitiFact that “certain techniques that the researchers used seemed to meet the definition of gain-of-function research”.

On the other hand, Joel Wertheim, an evolutionary biologist at the University of California San Diego, told PolitiFact that the experiments carried out in the 2017 study, despite using recombinant RNA technology, don’t meet the criteria for gain-of-function research in virology.

So to summarize: from what we know, researchers at WIV inserted a spike protein from a naturally occurring coronavirus into another coronavirus that was capable of replicating in a lab and infecting human cells. But the genome of this resulting virus seems too different from that of the pandemic-causing coronavirus for it to have been a direct ancestor.

Overall I don't feel like enough people are linking their sources when they make statements like "I'd give the lab leak hypothesis a probability of X%".

Replies from: ChristianKl, ChristianKl
comment by ChristianKl · 2021-06-20T19:42:45.452Z · LW(p) · GW(p)

I think Eliezer ignores how important prestige is for the Chinese. We got them to outlaw human cloning by telling them that doing it would put the Chinese academic community in a bad light. 

We likely could have done the same with gain of function research. For the Chinese, having their first biosafety level 4 lab was likely mostly about prestige. Having no biosafety level 4 labs while a lot of other countries had them wasn't acceptable to the Chinese, because it suggested that they weren't advanced enough. 

I do think it would have been possible to make a deal that gives China the prestige it wants for its scientists without endangering everyone for it.  

So to summarize: from what we know, researchers at WIV inserted a spike protein from a naturally occurring coronavirus into another coronavirus that was capable of replicating in a lab and infecting human cells. But the genome of this resulting virus seems too different from that of the pandemic-causing coronavirus for it to have been a direct ancestor.

The Chinese took down their database of all the viruses they had in their possession on September 26, 2019. In their own words, they took it down because of a hacking attack during the pandemic (which suggests that for them the pandemic starts somewhere in September). If we had the database, we would likely find a more closely related virus in it. Given that the point of creating the database in the first place was to help us in a coronavirus pandemic, taking it down and not giving it to anyone is a clear sign that there's something in it that would implicate them.

On the other hand, Joel Wertheim, an evolutionary biologist at the University of California San Diego, told PolitiFact that the experiments carried out in the 2017 study, despite using recombinant RNA technology, don’t meet the criteria for gain-of-function research in virology.

Basically, people outside of the virology community told them that they had to stop after 75 CDC scientists were exposed to anthrax and, a few weeks later, other scientists found a few vials of smallpox in their freezer.

The reaction of the virology community was to redefine what counts as gain of function research and continue endangering everyone. 

It's like Wall Street people, when asked whether they do insider trading, saying: "According to our definition of what insider trading means, we didn't." 

Overall I don't feel like enough people are linking their sources when they make statements like "I'd give the lab leak hypothesis a probability of X%".

I have written all my sources up at https://www.lesswrong.com/posts/wQLXNjMKXdXXdK8kL/fauci-s-emails-and-the-lab-leak-hypothesis

Replies from: GeneSmith
comment by GeneSmith · 2021-06-21T18:08:21.275Z · LW(p) · GW(p)

Wow, this is quite the post! I've been looking for a post like this on LessWrong going over the lab leak hypothesis and the evidence for and against it, but I must have missed this one when you posted it.

I have to say, this looks pretty bad. I think I still have a major blind spot, which is that I've read much more about the details of the lab leak hypothesis than I have about the natural origin hypothesis, so I still don't feel like I can judge the relative strength of the two. That being said, I think it is looking more and more likely that the virus was engineered in the course of research and accidentally leaked from the lab.

Thanks for writing this up. I'm surprised more of this info doesn't show up in other articles I've read on the origins of the pandemic.

Replies from: ChristianKl
comment by ChristianKl · 2021-06-21T18:26:53.606Z · LW(p) · GW(p)

I'm surprised more of this info doesn't show up in other articles I've read on the origins of the pandemic.

I was too when I researched it. I think it tells us something about the amount of effort that went into narrative control.

Take for example Huang Yanling, who at the start of the pandemic was called "patient zero" until someone discovered that she works at the Wuhan Institute of Virology, and the Chinese started censoring information about her. The fact that the NIH asked the EcoHealth Alliance where Huang Yanling is suggests that the US government (which has the CIA/NSA, who wiretap a lot and hack people to try to get some idea of what's going on) considers this an important piece of information. 

Why doesn't the name appear in the New York Times? Very odd...

It seems impossible for a simple he-said/she-said article about the questions from the NIH to EcoHealth to appear in any of the major publications. 

comment by ChristianKl · 2021-06-23T18:53:15.815Z · LW(p) · GW(p)

After reading more, it seems that according to John Holdren (head of the Office of Science and Technology Policy), the Chinese came to US politicians to discuss how topics like gain of function research should be regulated:

The top Chinese people came to talk through what the implications of these technologies are, and how we should think as a global science community about regulating them.

China's leaders aren't completely irresponsible. They messed up in Wuhan by allowing the lab to operate without enough trained personnel to run it safely, but I would expect it was a combination of the goal of having the lab on the one hand, and information about the safety issues not reaching the right people on the other, because the people responsible for the lab didn't want to look bad. 

I doubt that Xi Jinping knew that he had a biosafety level 4 lab without enough trained personnel to be run safely.

Replies from: GeneSmith
comment by GeneSmith · 2021-06-24T03:34:13.525Z · LW(p) · GW(p)

I think the fact that mistakes like this are so understandable is precisely why gain of function research is dangerous. One mistake can lead to a multi-year pandemic and kill 10 million people. With those stakes, I don't think anyone should be doing gain of function research that could lead to human deaths if pathogens escaped.

answer by Anders_H · 2021-06-21T14:20:49.547Z · LW(p) · GW(p)

I found the original website for Prof. Lipsitch's "Cambridge Working Group" from 2014 at http://www.cambridgeworkinggroup.org/. While the website does not focus exclusively on gain-of-function, this was certainly a recurring theme in his public talks. 

The list of signatories (which I believe has not been updated since 2016) includes several members of our community (apologies to anyone who I have missed):

  • Toby Ord, Oxford University
  • Sean O hEigeartaigh, University of Oxford
  • Daniel Dewey, University of Oxford
  • Anders Sandberg, Oxford University
  • Anders Huitfeldt, Harvard T.H. Chan School of Public Health
  • Viktoriya Krakovna, Harvard University PhD student
  • Dr. Roman V. Yampolskiy, University of Louisville
  • David Manheim, 1DaySooner

 

Interestingly, there was an opposing group arguing in favor of this kind of research, at http://www.scientistsforscience.org/. I do not recognize a single name on their list of signatories.

comment by ChristianKl · 2021-06-21T14:50:27.741Z · LW(p) · GW(p)

That's interesting. That leaves the question of why the FHI mostly stopped caring about it after 2016. 

Past that point, https://www.fhi.ox.ac.uk/wp-content/uploads/Lewis_et_al-2019-Risk_Analysis.pdf and https://www.fhi.ox.ac.uk/wp-content/uploads/C-Nelson-Engineered-Pathogens.pdf do touch on gain of function research, but completely ignore the issue of potential lab leaks and only treat it as an interesting biohazard topic. 

My best guess is that it's like in math, where applied researchers are lower status than theoretical researchers, and thus everyone wants to be seen as addressing the theoretical issues. 

Infohazards are a great theoretical topic, and discussing generalized methods to let researchers buy insurance for side effects of their research is a great theoretical topic as well. 

Given that Lipsitch didn't talk directly about gain of function research at EA Global Boston in 2017, but instead tried to speak at a higher level about more generalized solutions, he might also have felt social pressure to address the issue in a more theoretical manner rather than in a more applied manner where he told people about the risks of gain of function research. 

If he had instead said on stage at EA Global Boston in 2017, "I believe that the risk of gain of function research is between 0.05% and 0.6% per full-time researcher-year," this would have been awkward and created uncomfortable conflict. Talking about it in a more theoretical manner, on the other hand, allows a listener to just think "Hey, Lipsitch seems like a really smart guy." 

I don't want to say that as a critique of Lipsitch, given that he actually did the best work. I do however think that EA Global having a social structure that gets people to act that way is a systematic flaw. 

What do you think about that thesis?

24 comments


comment by gilch · 2021-06-19T05:38:34.425Z · LW(p) · GW(p)

In the 2014 LessWrong survey, more people considered bioengineered pandemics a global catastrophic risk than AI.

It goes back further than that. Pandemic (especially the bioengineered type) was also rated as the greater risk in the 2012 survey [LW · GW], and also the most feared in the 2011 survey [LW · GW], which was the earliest one I could find that asked the question.

It seems like it has been one of the global catastrophic risks we've taken most seriously here at LessWrong from the beginning. It's one of our cached memes. It's a large part of the reason that we rationalists, as a subculture, were able to react to the coronavirus threat so much more quickly than the mainstream. It was a possibility we had considered seriously a decade before it happened.

comment by Shmi (shminux) · 2021-06-19T06:04:37.570Z · LW(p) · GW(p)

Eliezer's X-risk emphasis has always been about extinction-level events, and a pandemic ain't one, so it didn't get a lot of attention from... the top.

Replies from: ChristianKl
comment by ChristianKl · 2021-06-19T08:49:18.829Z · LW(p) · GW(p)

Events that kill 90% of the human population can easily be extinction-level events, and in 2014 more LessWrongers believed that pandemics would do that than believed AI would.

Replies from: shminux, interstice
comment by Shmi (shminux) · 2021-06-19T23:49:11.580Z · LW(p) · GW(p)

I don't disagree that it was discussed on LW... I'm just pointing out that there was little interest from the founder himself.

comment by interstice · 2021-06-19T15:36:07.712Z · LW(p) · GW(p)

Killing 90% of the human population would not be enough to cause extinction. That would put us at a population of 800 million, higher than the population in 1700.

Replies from: ChristianKl
comment by ChristianKl · 2021-06-19T15:45:55.918Z · LW(p) · GW(p)

Shminux claims that Eliezer's emphasis was always on X-risk and not global catastrophic risks. If that's true, why was the LW survey tracking global catastrophic risks and not X-risk?

Replies from: interstice
comment by interstice · 2021-06-19T15:54:32.790Z · LW(p) · GW(p)

I actually agree with you there; there was always discussion of GCR along with extinction risks (though I think Eliezer in particular was more focused on extinction risks). However, they're still distinct categories: even the deadliest of pandemics is unlikely to cause extinction.

Replies from: ChristianKl
comment by ChristianKl · 2021-06-23T18:16:24.729Z · LW(p) · GW(p)

Modern civilisation depends a lot on collaboration. I think it's plausible that extinction happens downstream of the destabilization caused by a deadly pandemic, especially as the tech level grows.  

Replies from: DPiepgrass
comment by DPiepgrass · 2021-06-27T03:32:15.578Z · LW(p) · GW(p)

That doesn't ring true to me. I'm curious why you think that, even though I'm irrationally short-termist: "100% is actually much worse than 90%" says my brain dryly, but I feel like a 90% deadly event is totally worth worrying about a lot!

comment by Liron · 2021-06-20T03:40:46.596Z · LW(p) · GW(p)

A related question is why the topic of GoF research still didn't get much LW discussion in 2020.

Replies from: ChristianKl
comment by ChristianKl · 2021-06-21T09:38:10.139Z · LW(p) · GW(p)

For my part I would say that in 2020 thinking about how to deal with the pandemic was a topic that reduced the available attention for other topics.

I also mistakenly thought that Fauci & Co were just incompetent and not actively hostile. 

comment by ChristianKl · 2021-06-21T23:16:32.706Z · LW(p) · GW(p)

After spending two days reading more, 90% now feels way too low and 99% more reasonable, because so many different strings of evidence point at the same conclusion: that it was a lab leak. 

Replies from: gilch
comment by gilch · 2021-06-22T00:14:32.029Z · LW(p) · GW(p)

I already think a lab leak is more likely than not, and did months ago when I first heard the circumstantial case for the hypothesis, but I'm nowhere near 99%. I'd say ~65%, but that's just my current prior, and I might not be calibrated that well. The fact that other rationalists are more confident about this than I am makes me want to update in that direction, but I also don't want to double-count my evidence. I'm also worried about confirmation bias creeping in. Can you summarize the strongest points and their associated Bayes factors? Or factors with error bars, if you prefer?
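
To illustrate the kind of bookkeeping I'm asking for (a toy sketch; all numbers here are made up for illustration, not my actual estimates):

```python
# Toy example: combining independent Bayes factors multiplies the odds.
prior_odds = 1.0                 # start at 50/50 between lab leak and natural origin
bayes_factors = [3.0, 1.5, 2.0]  # per item: P(evidence | leak) / P(evidence | natural)

posterior_odds = prior_odds
for bf in bayes_factors:
    posterior_odds *= bf         # assumes the pieces of evidence are independent

posterior_prob = posterior_odds / (1.0 + posterior_odds)
print(f"posterior odds {posterior_odds:g} : 1 -> P(lab leak) = {posterior_prob:.0%}")
# -> posterior odds 9 : 1 -> P(lab leak) = 90%
```

The independence assumption is exactly where double-counting creeps in, which is why I'd want the factors itemized.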

Replies from: ChristianKl
comment by ChristianKl · 2021-06-22T10:56:36.245Z · LW(p) · GW(p)

(Sources and so on are in https://www.lesswrong.com/posts/wQLXNjMKXdXXdK8kL/fauci-s-emails-and-the-lab-leak-hypothesis) [LW · GW]

I don't factor Bayes factors together when I'm on Metaculus either, so that's not really how I reason. If you want them, I would be interested in yours for the following pieces of evidence.

How do you get a probability as high as 35% for a natural origin? Can you provide Bayes factors for that?

There's a database for coronaviruses. We (the international community) funded it to help us when the time came to deal with a coronavirus pandemic. The Wuhan Institute of Virology (WIV) took it down in late 2019 and doesn't give it to anyone. Nobody complains about that in a way that brings it into the mainstream narrative, even though all those virologists who believe in their field should think the database is valuable for fighting the pandemic. If it isn't, why are we funding that research in the first place?

According to the US government information there was unusual activity in the WIV in October 2019 with at least significantly reduced cell phone traffic and likely also road blocks.

Three people from the WIV seem to have gone to hospital in November 2019 with flu- or COVID-19-like symptoms in the same week.

Huang Yanling, who at the beginning of the pandemic was called patient zero, was a WIV employee, and US government requests to account for what's up with her currently go unanswered. 

The security at the WIV was so bad that they asked the US for help in 2018, because they didn't have enough skilled people to operate their biosafety level 4 lab safely. The Chinese care about saving face; things must have been bad for them to tell the US that they didn't have enough people to operate their lab safely.

There are six separate biological reasons why the virus looks like it came from a lab. 

The bats are more than 1,000 km away from Wuhan. It's quite unclear how they would have naturally infected people in Wuhan. 

An amazing amount of effort went into suppressing the story, likely with a lot of collateral damage that made us react less well to the pandemic. Google, Facebook, and Twitter started censoring in early February, and that might have been part of the reason why it took us so long to respond. 

As Bret Weinstein said, given that the virus looks so much like it came from Wuhan, the next most likely alternative explanation would be that someone went through a lot of effort to release it in Wuhan to make the WIV look bad. If that were the case, however, it's unclear why the WIV doesn't allow outside inspections to clear their name. 

If all the information we had was that Huang Yanling, who was in the beginning called "patient zero", was a WIV employee, that alone might warrant more than 65%. I mean, what are the odds that "patient zero" for a pandemic caused by a coronavirus is randomly an employee of a lab studying coronaviruses?

And, on top of that, a lab studying coronaviruses with known safety problems? 

Replies from: gilch
comment by gilch · 2021-06-22T17:05:36.728Z · LW(p) · GW(p)

How do you get a probability as high as 35% for a natural origin? Can you provide Bayes factors for that?

I guess that's fair. I don't really think that way either, but I want to learn how. I think numbers become especially important when coordinating evidence with others like this. My older prior favored the natural origin hypothesis, because that's what was reported in the news. I heard the case for the lab leak and updated from there.

There's a database for coronaviruses.

Authoritarians in general and the Chinese in particular would reflexively cover up anything that's even potentially embarrassing as a matter of course. I can't call a coverup more likely in a natural origin scenario, but it's still pretty likely, so this is weak evidence.

unusual activity in the WIV in October 2019

Didn't know this one, but that's pretty vague. Source?

Three people from the WIV seem to have gone to hospital in November 2019 with flu- or COVID-19-like symptoms in the same week.

The first confirmed case wasn't until December 8th, last I heard. Still, Wuhan is Wuhan. Even assuming a natural origin, we'd expect people from WIV to be more vigilant than the general public. Three at once is hardly more evidence than one, because they could have given it to each other. I do think this favors the leak hypothesis, because the timing is suggestive, but it seems weak. Could this have been some other disease? How early in November?

Huang Yanling, who at the beginning of the pandemic was called patient zero, was a WIV employee, and US government requests to account for what's up with her currently go unanswered.

Again, coverup is a matter of course for these guys.

The security at the WIV was so bad that they asked the US for help

Not very strong by itself.

There are six separate biological reasons why the virus looks like it came from a lab.

Need more details here.

The bats are more than 1,000 km away from Wuhan.

I knew about this one. This combined with the fact that the biosafety 4 WIV is in Wuhan is most of what got me to thinking the leak was more likely than not.

An amazing amount of effort went into suppressing the story [...] Google, Facebook, and Twitter

Why? And does this have anything to do with whether it was a leak or not? These are primarily American companies that are already censored in China. This was during the Trump era, when the Left was trying to fight him any way they could. "Racist" has been their favorite ad hominem lately. Unless you can establish that China was behind this, and put in more effort than would be expected as a matter of course, I don't think this is evidence of anything other than normal American political bickering. But we've already counted the coverup as weak evidence. We can't count it again.

As Bret Weinstein said

This doesn't seem to be saying anything new. Weinstein does at least have gears in his models, but seems dangerously close to crackpot territory. I don't think he's a conspiracy theorist yet, but he also seems subject to the normal human biases, and doesn't seem to be trying to correct for them the way a rationalist would. It's not obvious to me that his next most likely explanation is the next most likely.

why the WIV doesn't allow outside inspections

Again, coverup as a matter of course. Nothing new here.

I mean, what are the odds that "patient zero" for a pandemic caused by a coronavirus is randomly an employee of a lab studying coronaviruses?

"Patient zero" is the earliest that could be identified, not necessarily the first to get it. That an employee of a lab studying coronaviruses would notice first doesn't seem that strange, even if it had been circulating in Wuhan for a bit before. This does seem to favor a leak. How strong this evidence is depends a lot on more details. I could see this being very strong or fairly weak depending on the exact circumstances.

Replies from: ChristianKl
comment by ChristianKl · 2021-06-22T17:35:32.618Z · LW(p) · GW(p)

Authoritarians in general and the Chinese in particular would reflexively cover up anything that's even potentially embarrassing as a matter of course. I can't call a coverup more likely in a natural origin scenario, but it's still pretty likely, so this is weak evidence.

I think it's embarrassing to withhold a database that was created to help us fight a pandemic in times of a pandemic. It's bad for any future Chinese researcher who wants to collaborate with the West if it's clear that we can't count on resources we create together with China to actually be available in a crisis. Additionally, why did the coverup start in September 2019?

"Patient zero" is the earliest that could be identified, not necessarily the first to get it. 

Yes, but if you take 1 billion Chinese and maybe 200 employees of the WIV, what are the odds that "patient zero" is from the WIV?

5,000,000 to 1.

Unless you can establish that China was behind this, and put in more effort than would be expected as a matter of course, I don't think this is evidence of anything other than normal American political bickering. 

No, it was American/international suppression, because the NIH funded gain of function research involving the WIV in violation of the ban in 2015 and didn't put it through the safety review process that was instituted in 2017. 

Why? And does this have anything to do with whether it was a leak or not? These are primarily American companies that are already censored in China.

It's about how important it was for Farrar to get through to Tedros on the 2nd of February and have Tedros decide while talking about ZeroHedge, with Tedros announcing the next day that he was cooperating with Google/Twitter to fight "misinformation" and ZeroHedge being banned from Twitter that day.

It's complex, but if you want to understand the point, I have it written down in https://www.lesswrong.com/posts/wQLXNjMKXdXXdK8kL/fauci-s-emails-and-the-lab-leak-hypothesis [LW · GW]

The first confirmed case wasn't until December 8th, last I heard.

Confirmed cases are different from "cases the US intelligence services know about because they launched a cyberattack on the WIV and all the private and professional emails of its employees". 

Didn't know this one, but that's pretty vague. Source? 

It's the letter that the NIH sent the EcoHealth Alliance, with questions that have to be answered before they will give funding to the EcoHealth Alliance again. Generally, if you want sources, read https://www.lesswrong.com/posts/wQLXNjMKXdXXdK8kL/fauci-s-emails-and-the-lab-leak-hypothesis [LW · GW]

Replies from: gilch
comment by gilch · 2021-06-22T20:35:45.500Z · LW(p) · GW(p)

Yes, but if you take 1 billion Chinese and maybe 200 employees of the WIV, what are the odds that "patient zero" is from the WIV?

5,000,000 to 1.

This is obviously not the right calculation, and I expected better from a rationalist. I've already counted the fact that it started in Wuhan where they happen to have a biosafety 4 lab studying coronaviruses as the strongest evidence in favor of the leak. You may feel I didn't count it strongly enough, but that's a different argument. What does the entire population of China have to do with it after that point? Nothing. You're being completely arbitrary by drawing the boundary there. Why not the entire world?

The population of Wuhan, maybe, but we can probably narrow it down more than that, and then we also have to account for the fact that the WIV employees would be much more likely to report anything out of the ordinary when it comes to illness. For the rest of Wuhan at the time, the most common symptoms would have been reported as "the flu" or "a cold". Mild cases are common, and at least a third of people have no noticeable symptoms at all, especially early on with the less virulent original variant.

The population of Wuhan is about 8.5 million, and the number of staff at WIV, I think, was more like 600. So that's more like 14,000 : 1. I think WIV staff could easily be 20x more likely to notice that the disease was novel, so that's more like 700 : 1. That's still pretty strong evidence, but nowhere near what you're proposing.
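
Spelled out (a minimal sketch; the population and staff counts are rough estimates, and the 20x adjustment is my guess):

```python
# Arithmetic behind the 14,000 : 1 and 700 : 1 figures (rough estimates only).
wuhan_population = 8_500_000
wiv_staff = 600

# Odds against patient zero being WIV staff purely by chance.
raw_odds = wuhan_population / wiv_staff    # ~14,167 : 1

# Guess: WIV staff are ~20x more likely to be noticed/recorded as patient zero.
reporting_bias = 20
adjusted_odds = raw_odds / reporting_bias  # ~708 : 1

print(f"raw: {raw_odds:,.0f} : 1, adjusted: {adjusted_odds:,.0f} : 1")
```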

Replies from: ChristianKl
comment by ChristianKl · 2021-06-22T20:57:53.290Z · LW(p) · GW(p)

This is obviously not the right calculation, and I expected better from a rationalist. I've already counted the fact that it started in Wuhan where they happen to have a biosafety 4 lab studying coronaviruses as the strongest evidence in favor of the leak.

I have 99% as my likelihood for the lab leak, not 99.9999%. I don't suggest that 5,000,000 to 1 should be the end number; it's just a rough calculation. 

I am on Metaculus often enough, and have played the credence game enough, not to go for the 99.9% that Dr. Roland Wiesendanger proposes. 

I think WIV staff could easily be 20x more likely to notice that the disease was novel, so that's more like 700 : 1. 

If that's your calculation, how can you justify only 65%, especially when that's only one of the pieces of evidence? 

comment by Ofer (ofer) · 2021-06-20T09:59:32.536Z · LW(p) · GW(p)

(This isn't an attempt to answer the question, but…) My best guess is that info hazard concerns reduced the amount of discourse on GoF research to some extent.

Replies from: philh
comment by philh · 2021-06-25T13:16:59.187Z · LW(p) · GW(p)

Can you be more specific? My vague impression is that if GoF research is already happening, talking about GoF research isn't likely to be an info hazard because the info is already in the heads of the people in whose heads it's hazardous.

Replies from: ChristianKl, ofer
comment by ChristianKl · 2021-06-25T22:26:49.464Z · LW(p) · GW(p)

The debate about gain of function research started as a debate about infohazards when Fouchier and Kawaoka modified H5N1 in 2011 and published the modified sequence. 

It's possible that gain of function research is therefore mentally associated with infohazards. The more recent FHI papers, for example, mention gain of function research only in relation to infohazards and not the problem of lab leaks in labs doing gain of function research.

The OpenPhil analysis, which speaks of gain of function research by calling it dual-use research, also has a frame that suggests the problem is about possible military use or someone stealing engineered viruses and intentionally spreading them.

This seems to reflect the general human bias that we have an easier time imagining other humans intentionally creating harm than accidentally creating harm. It's quite similar to naive people thinking that the problem with AGI is humans using AGIs for nefarious ends. 

Replies from: philh
comment by philh · 2021-06-27T09:17:32.373Z · LW(p) · GW(p)

(I'm not sure to what extent you're trying to "give background info" versus "be more specific about how people thought of GoF research as an infohazard" versus "be more specific about how GoF research actually was an infohazard" versus other things, so I might be talking past you a bit here.)

The debate about gain of function research started as a debate about infohazards when Fouchier and Kawaoka modified H5N1 in 2011 and published the modified sequence.

So this seems to me likely to be an infohazard that was found through GoF research, but not obviously GoF-research-as-infohazard. That is, even if we grant that the modified sequence was an infohazard and a mistake to publish, it doesn't then follow that it's a mistake to talk about GoF research in general. Because when GoF research is already happening, it's already known within certain circles, and those circles disproportionately contain the people we'd want to keep the knowledge from. It might be the case that talking about GoF research is a mistake, but it's not obviously so.

What I'm trying to get at is that "info hazard concerns" is pretty vague and not very helpful. What were people concerned about, specifically, and was it a reasonable thing to be concerned about? (It's entirely possible that people made the mental leap from "this thing found through GoF is an infohazard" to "GoF is an infohazard", but if so it seems important to realize that that's a leap.)

a frame that suggests that possible military use or someone stealing engineered viruses and intentionally spreading them is what the problem is about.

Here, too: if this is what we're worried about, it's not clear that "not talking about GoF research" helps the problem at all.

comment by Ofer (ofer) · 2021-06-25T13:30:06.954Z · LW(p) · GW(p)

Now (after all the COVID-19 related discourse in the media), it indeed seems a lot less risky to mention GoF research. (You could have made the point that "GoF research is already happening" prior to COVID-19; but perhaps a very small fraction of people then were aware that GoF research was a thing, making it riskier to mention).

Replies from: philh
comment by philh · 2021-06-25T15:30:30.565Z · LW(p) · GW(p)

I agree probably only a small fraction of people were aware that GoF research was a thing until recently. I would assume that fraction included most of the people who were capable of acting on the knowledge. (That is, the question isn't "what fraction of people know about GoF research" but "what fraction of people who are plausibly capable of causing GoF research to happen know about it".) But maybe that depends on the specific way you think it's hazardous.