One of the main places Americans look for information on coronavirus is the Centers for Disease Control and Prevention (still commonly abbreviated CDC, from the days before “and Prevention” was in the name). That’s natural; “handling contagious epidemics” is not their only job, but it is one of their primary ones, and they position themselves as the authority. At a time when so many things are uncertain, it saves a lot of anxiety (and time, and money) to have an expert source you can turn to for solid advice.
Unfortunately, the CDC has repeatedly given advice with lots of evidence against it. Below is a list of actions from the CDC that we believe are misleading or otherwise indicative of an underlying problem. If you know of more examples or have information on any of these (for or against), please comment below and we will incorporate it into this post.
Dismissed Risk of Infection Via Packages
On the CDC’s coronavirus FAQ page as of 2020-03-04, under “Am I at risk for COVID-19 from a package or products shipping from China?”, they say:
“In general, because of poor survivability of these coronaviruses on surfaces, there is likely very low risk of spread from products or packaging that are shipped over a period of days or weeks at ambient temperatures.”
However, this metareview found that various coronaviruses remained infectious for days at room temperature on certain surfaces (cardboard, alas, was not tested) and potentially for weeks at lower temperatures. The CDC’s answer is probably correct for packages from China, and possibly even for domestic packages with 2-day shipping, but it is incorrect to say that coronaviruses in general have low survivability on surfaces, and to the best of my ability to determine, we don’t have the experiments that would prove deliveries are safe.
The criteria are intended to serve as guidance for evaluation. Patients should be evaluated and discussed with public health departments on a case-by-case basis. For severely ill individuals, testing can be considered when exposure history is equivocal (e.g., uncertain travel or exposure, or no known exposure) and another etiology has not been identified.
(The CDC describes this change as happening on 2020-02-12, however the Wayback Machine did not capture the page that day).
Based on this announcement on 2020-02-14, such testing as could detect community exposure was happening in only 5 major cities. And as of 2020-03-01 only 472 tests had been done in total, so no testing of any kind could have been happening very often.
Between 2020-02-27 and 2020-02-28, the primary guidelines on this page were amended to
However guidance went out on the same day (the 28th) that only listed China as a risk (and even then, only medium risk unless they had been exposed to a confirmed case or travelled to Hubei specifically).
Testing Kits the CDC Sent to Local Labs were Unreliable
Hamstrung Detection by Banning 3rd Party Testing (HHS/FDA, not CDC)
One reason the CDC used such stringent criteria for determining who to test was that they had a very limited ability to test, hamstrung further by the faulty tests sent to local labs. Normally private testing would fill the gap, but the department of Health and Human Services invoked emergency measures that created a requirement for special approval of tests, and the FDA didn’t grant it to anyone (source).
There are multiple harrowing stories of people with obvious symptoms and exposure to the virus being turned away from testing, often against a doctor’s pleas:
There is also a rumor that the first case caught in Seattle, which has since turned into the US epicenter of the disease, was caught by a research lab using a loophole to perform unauthorized testing (raising the possibility that it’s worse elsewhere and simply hasn’t been caught).
Ceased to Report Number of Tests Run
Until 2020-03-02, the CDC reported how many SARS-CoV-2 tests it had run. On March 2nd, it stopped (before, after). There are many potential reasons for this, none of which inspire confidence. The official reason, as told to reporter Kelsey Piper, is that the number would no longer be representative now that states are running their own tests. So, in the best case, the CDC cannot coordinate well enough to count tests performed by other labs.
Gave False Reassurances About Recovered Individuals
As of this writing (2020-03-05), the CDC’s “Share Facts” page states that “Someone who has completed quarantine or has been released from isolation does not pose a risk of infection to other people.”
While it is certainly true that being released from quarantine implies a significantly reduced risk, the quarantine typically performed is not stringent enough to say that released people pose no risk. The quarantine procedure used by the CDC lasts 14 days; if symptoms have not appeared by then, people are released.
While an epidemic is still contained, safely quarantining at-risk people means choosing a quarantine period long enough to be confident that, if they haven't shown symptoms, they don't have the disease. When a disease is still contained, this should be risk averse, since a single infected person could start an outbreak. The CDC's 14-day quarantine period was not long enough to catch the cases detailed above.
This was foreseeable. This paper, published Feb 6, estimated the distribution of incubation periods, including the incubation periods of outliers.
The relevant row is the 99th percentile row, which estimates the longest incubation period per 100 people. If you quarantined 100 people, one of them would have an incubation period at least that long. The paper estimates this using three different methods; two of those estimates are greater than 14 days, and all three estimates put significant probability on incubation periods longer than 14 days.
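To make the tail risk concrete, here is a small simulation. The lognormal parameters below are illustrative stand-ins (chosen so the 99th-percentile incubation period lands near 14 days, in the range the paper estimates), not the paper's fitted values; the point is only how often a group of 100 quarantined people contains someone whose incubation outlasts the quarantine:

```python
import math
import random

random.seed(0)

# Illustrative lognormal incubation-period distribution:
# median ~5.1 days, spread chosen so the 99th percentile is ~14.5 days.
# These parameters are stand-ins, NOT the paper's fitted values.
MU = math.log(5.1)   # log-scale location
SIGMA = 0.45         # log-scale spread

def incubation_days():
    """Draw one person's incubation period, in days."""
    return random.lognormvariate(MU, SIGMA)

def group_has_long_incubation(n=100, cutoff=14.0):
    """True if any of n quarantined people incubates longer than cutoff days."""
    return any(incubation_days() > cutoff for _ in range(n))

trials = 10_000
escapes = sum(group_has_long_incubation() for _ in range(trials))
print(f"P(at least one of 100 exceeds 14 days) ~ {escapes / trials:.2f}")
```

Under these stand-in parameters, a bit over 1% of individuals exceed 14 days, so a majority of 100-person quarantine groups contain at least one such person. That is the sense in which a per-person tail probability of "only" 1% is not reassuring at the group level.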
There are also reports of the virus re-emerging in patients who were believed to have recovered.
Conflated Genetics and Environmental Exposure
This is a tough topic to write about.
Cruelty to people because they have or might have a disease is never okay. And the vast majority of people who were cruel to Asian-appearing people in the early days of an epidemic were doing it to healthy people out of knee jerk fear and antagonism, not a measured, well-informed cost-benefit analysis. When the CDC claimed on 2020-02-29 that "People of Asian descent, including Chinese Americans, are not more likely to get COVID-19 than any other American." they were surely trying to dampen attacks on people who had done nothing wrong and were hurting no one.
But the statement is false. Chinese-Americans are more likely to travel to China or associate with people who have, and thus were more likely to catch SARS-CoV-2. This doesn’t mean they are more likely to catch it given exposure, but they were more likely to be exposed.
The CDC admits this in the page specifically on stigma (2020-02-24), saying “People—including those of Asian descent—who have not recently traveled to China or been in contact with a person who is a confirmed or suspected case of COVID-19 are not at greater risk of acquiring and spreading COVID-19 than other Americans.”
However, that same anti-stigma page goes on to say “Viruses cannot target people from specific populations, ethnicities, or racial backgrounds.” This is also false. Roughly 10% of Europeans carry a CCR5 mutation that confers resistance to HIV (with effective immunity in the ~1% who carry two copies), a protection not found in people originating from other areas. So we know it is technically possible for a virus to have differential effects based on ancestry.
Does SARS-CoV-2 in particular have race-related effects? There are people claiming Asian men are more susceptible to SARS-CoV-2 than others due to a higher expression of a certain protein (example). Other people dispute this (example). Right now it is very much an open question.
We can see why the CDC prioritized calming racially-motivated violence over fully explaining their confusion over an unanswered question. It might have been the highest-utility thing to do. But it is important to know that “misrepresenting data in order to produce better actions from the public” is a thing the CDC does.
CDC does not recommend that people who are well wear a facemask to protect themselves from respiratory diseases, including COVID-19.
The Surgeon General (who is not directly part of the CDC) takes a stronger tack:
While we can’t hold the CDC responsible for the Surgeon General, they are being conflated in a lot of news articles saying or implying that masks are useless for healthy people. They’re (probably) not.
Our best guess is that the CDC is trying to conserve masks for health care professionals and others with the highest need, in the face of a looming mask shortage. That could easily be the optimum mask allocation. I can’t prove the lie wasn’t justified for the greater good. But it is another example of the CDC placing “getting the outcome it wants” over “telling people the literal truth.”
What Does This Mean?
These errors we’ve highlighted tend towards errors of omission: saying something is completely safe when it’s not, saying something is unhelpful when it is, saying the current state is less dangerous than it is. You should include that bias when processing new information from the CDC. Notably we’re not saying any of the things they do recommend are bad: to the best of our knowledge, you should be washing your hands and not touching your face. Vaccines are (mostly) great. But I would not take the CDC saying an activity is safe or unnecessary as the last word on the subject.
Addendum: A whistleblower claims that CDC wanted to advise elderly and fragile people to not fly on commercial airlines, but removed this advice at the White House's direction.
Where the CDC and White House are in conflict, I believe the CDC is more credible (and I believe this is consensus); however, this looks like a clear-cut case where the CDC's political situation forced it to be less honest and understate risk.
The person who spoke to the AP on condition of anonymity did not have authorization to talk about the matter. The person did not have direct knowledge about why the decision to kill the language was made or who made the call.
That doesn't seem like a very strong source for the claim, or for what exactly was said. It sounds like a fair degree of reading between the lines based on one observation (the passage was removed). The administration denies the claim.
On Sunday, Dr. Anthony Fauci — the head of infectious diseases at the National Institutes of Health and a member of the White House Coronavirus Task Force — said “no one overruled anybody.”
But how many of this class of people have been infected (or infected others) due to flying? Alternatively, how do those numbers stack up against cruise-ship travel? Did the CDC provide advice on that?
Also missing, from what I can tell, is the timing of these events: when did the CDC want to provide that advice, and when was the decision to make the edit made? They did update their website last Friday.
If we're splitting that hair, then one should question whether any whistle was actually blown, or whether the AP simply reported hearsay. However, Fauci certainly seems to be a person who would have direct knowledge, so one might take his statement as factual and thus as a refutation of the reported hearsay.
I agree with some of the sentiments in this post, but I think the claim in the second paragraph "Unfortunately, the CDC has repeatedly given advice with lots of evidence against it", is poorly supported. It suggests that the CDC has given advice that is not just incomplete or somewhat off-base, but that is ineffective and should be ignored. I don't think the points that deal with advice meet that standard:
Packages: The CDC quote explicitly refers to packages from China, so this is more a matter of missing advice about what to do in other cases than of bad advice.
Masks: At the end of the day, "don't buy masks" seems like good advice that ought to be followed. I get the annoyance that the CDC or others might be trying to downplay the fact masks can help healthy individuals, but that doesn't mean the recommendation is wrong.
Genetics and Environment: The general sentiment of "please keep in mind the odds of a Chinese-American having COVID-19 is very similar to anyone else having COVID-19" is pretty good advice. Sure, you can nitpick the language and say the CDC implied "exactly equal" instead of "very similar", but I think it's pretty pedantic to use that to justify calling this "advice with lots of evidence against it". The general point that is trying to be made here is correct.
Gave False Reassurances About Recovered Individuals: We should have some epistemological humility here; one paper published a month ago and a handful of anecdotes shouldn't give us a lot of certainty that the 14-day period suggested by most experts is a mistake. Even if it were, it's possible that people not in quarantine and going about their day-to-day lives have a higher chance of having COVID-19 than people who were exposed and then quarantined for 14 days, due to the increased chance of other, incidental exposures. If the general bit of advice here is "treat people who left quarantine as you would anyone else who you have no reason to believe is infected", that seems like pretty good advice. I suppose you could argue that, given the uncertainty, one should be slightly more careful about people who left quarantine very recently, but again, in my mind that is an insufficient caveat to justify calling this "advice with lots of evidence against it".
We should remember that much of the CDC's website is meant for the general public, and is mostly trying to remove naive misconceptions people might have (e.g., that people who have been quarantined, or Chinese-Americans, are very likely to be infectious). It is not trying to convey very precise statistical information or make detailed technical claims about COVID-19, and shouldn't be interpreted that way. From that perspective, the CDC's advice discussed in this post seems fine. There might be some issues with the phrasing or the details, and maybe there are things missing, but I think it's a stretch to call it erroneous.
But right now, there is no source we could give an uninformed person and say “all you need to do is listen to them”.
A lot of your arguments are of the form "they're saying something untrue in an effort to get people to do the right thing". So isn't pointing an uninformed person at the CDC the correct thing to do, since we assume that on reading it they'll end up doing the right thing?
Separate from the infohazardness of this post (discussed in other comments and fairly specific to the audience), it seems weird to prefer truth over consequences in what we tell arbitrary uninformed people who have no interest in rationality and just want to know what the best thing to do is?
The CDC offers a pretty short list of things to do as far as prevention goes. Surely that can't be all there is. Why not post something similar to our Justified Practical Advice thread [LW · GW]? At least with low cost/risk ideas like copper tape and taking vitamin D.
And for more unclear or controversial things like wearing a mask, why not offer a nuanced discussion of the trade-offs involved?
The fact that they haven't done these things reduces their credibility in my eyes.
The CDC's role is to protect the public as a whole, and to communicate with them in ways that minimize the burden of diseases. That doesn't mean you shouldn't trust the CDC, just that you shouldn't assume their goal is to advance epistemic purity. But as far as I can tell, treating them as your sole source and doing exactly what they say, and encouraging others to do the same, would make us all better off than following most of the personal advice LessWrong is offering.
If the CDC says "disposable masks reduce your chance of becoming infected very slightly," (which is likely true if you use them properly, which, to be clear, most people won't do,) what happens next? The entirely predictable result is that hospitals will not be able to buy them, hospital staff gets sick more often, and then there are staff shortages when they are needed most, leading to far more deaths. That almost certainly makes people as a whole worse off, so they don't do that. (People who wanted to be virtuous instead of selfish might even decide to only do what the CDC recommends.)
The CDC also need to communicate in ways that idiots won't misconstrue, and a nuanced discussion of interventions that are unscalable or that could be dangerous if done wrong, or that are difficult to do, would be similarly a really stupid thing for the CDC to publish. Maybe a few examples would help.
"disposable masks reduce your chance of becoming infected very slightly," (which is likely true if you use them properly, which, to be clear, most people won't do)
I am confused why you would say this, after this [LW(p) · GW(p)] thread, which suggested a 60%-80% reduced infection rate for influenza-like viruses, and after you said you updated on the value of masks when worn by the general population without being fitted. "Very slightly, if you wear them properly" does not seem at all compatible with the evidence, and also seems clearly contradicted by the emphasis that the Chinese government puts on the use of masks. I again would ask for a source for the claim that masks that aren't worn properly have very little effectiveness.
You're saying that the post is interested in supporting defecting and causing societal harm for personal benefit? I hope that isn't the case, but if it is, we should be far clearer in condemning the provision of information to support people doing this.
Our best guess is that the CDC is trying to conserve masks for health care professionals and others with the highest need, in the face of a looming mask shortage. That could easily be the optimum mask allocation. I can’t prove the lie wasn’t justified for the greater good. But it is another example of the CDC placing “getting the outcome it wants” over “telling people the literal truth.”
As far as I can tell, the CDC hasn't uttered a literal lie about this. In the link, they only say "CDC does not recommend that people who are well wear a facemask to protect themselves from respiratory diseases, including COVID-19", which is a recommendation, rather than a statement of efficacy. It could be motivated by a desire to stop mask-hoarding, as you say, or by the belief that typical usage of masks (including reuse, frequently readjusting the mask and thereby touching your face, etc) actually harms people more than it helps them.
(It's interesting that the link also says "The use of facemasks is also crucial for health workers and people who are taking care of someone in close settings (at home or in a health care facility)." This is (i) an admission that masks can protect you when you're close to someone sick, and (ii) does provide an incentive for people to hoard masks, if they think they're going to be taking care of someone.)
Regardless, it's fair to say that they're placing "getting the outcome it wants" at least over "telling people the full truth", and that this is a strike against the CDC's trustworthiness.
Edit: This SSC says that the CDC has been advising the public against using masks for a long time, so whatever they're saying, they're probably saying it for different reasons than to stop hoarding.
As far as I can tell, the CDC hasn't uttered a literal lie about this
They definitely haven't written down a literal lie. A lot of news articles say or imply one, though, and people are walking away with the impression that the CDC has anti-recommended masks. A friend suggested they're more actively discouraging masks in press conferences, but I couldn't find proof, so I left that out.
It's certainly possible that uninformed usage of masks is net-negative, and that it's not possible to inform the general public of correct usage. I haven't seen any evidence of that though. Meanwhile, China is requiring them.
To be clear, China started requiring mask usage, but also put in place price controls on masks, and limited mask purchases to 2 per week. Then they ensured that companies were building factories almost overnight to mass produce them. These might be good ideas, but as with many other things, it's not within CDC's abilities to do, so I think it's reasonable for the CDC to do what it can to actually reduce risks.
And "don't trust CDC because they haven't lied but they didn't advise things that might help but would be harmful overall to the public" is one hell of a take.
In English, if the word "quarantine" is applied to an infection-avoiding isolation period of either more or less than 40 days, that's arguably an abuse of linguistic tradition that reveals whoever says it to be in need of remedial education.
Maybe? *I* probably need remedial education, too! Very prestigious linguists have asserted here or there that linguistics is a descriptivist science, and so, from their very prestigious perspective, any use of language is as good as any other use of language...
Still, it does give one pause.
How many people in public health read or write Latin anymore? Maybe there are some things that people used to take so MUCH for granted that no one thought to spell them out? Like "40-day periods should last 40 days" is basically a tautology. Should THAT go into a medical book and become testable knowledge for doctors?
It would be scary for medical inferences based in the obvious literal meaning of words to be valid, so they are probably not valid. I'm sure everything is fine.
The relevant row is the 99th percentile row, which estimates the longest incubation period per 100 people. If you quarantined 100 people, one of them would have an incubation period at least that long.
This doesn't seem correct to me, but I'm not a statistician and am not quite sure what we're doing with the percentiles.
However, the confidence interval should be a statement about the likelihood that the true mean will be found within the stated range (so a 5% chance we got it really wrong). Based on that, I don't follow the claim that at least one of the 100 people should have an incubation period at least as long as that (I assume "that" is the estimated mean value).
This section is kind of confusing, and I have tweaked the wording a little bit to try to be clearer. The reason for the confusion is that there are two nested distributions here.
The first is that when a bunch of people get infected, they have different incubation periods; some of them start showing symptoms more quickly than others. This is what the 99th percentile refers to. This makes us uncertain about the incubation period that a particular person will have, but it is not a confidence interval; if we learned how long the incubation periods were for a very large number of people, it wouldn't make the 99th-percentile person's incubation period any closer to the mean incubation period.
The second distribution is our uncertainty about the first distribution; we don't know exactly what fraction of people will have extra-long incubation periods, or how long those periods will be--but we would if we observed enough people. This uncertainty is what the 9.7-17.2, 10.9-20.6, and 12.6-32.2 ranges are referring to.
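A quick simulation may make the two layers concrete (the lognormal parameters here are illustrative stand-ins, not the paper's fit): the population's true 99th percentile is one fixed number, while repeated small studies produce a whole spread of estimates of that number, and it is this spread that the paper's credible intervals describe.

```python
import math
import random
import statistics

random.seed(1)

MU, SIGMA = math.log(5.1), 0.45   # illustrative stand-in parameters

def true_p99():
    """Exact 99th percentile of the lognormal: exp(mu + 2.326*sigma)."""
    return math.exp(MU + 2.326 * SIGMA)

def estimated_p99(n=180):
    """Estimate the 99th percentile from one hypothetical study of n patients."""
    sample = sorted(random.lognormvariate(MU, SIGMA) for _ in range(n))
    return sample[int(0.99 * n)]   # simple empirical percentile

# First layer: person-to-person variation fixes a true 99th percentile.
# Second layer: finite studies only estimate it, with spread.
estimates = [estimated_p99() for _ in range(2000)]
cuts = statistics.quantiles(estimates, n=40)   # 2.5%, 5%, ..., 97.5% cut points
print(f"true 99th percentile: {true_p99():.1f} days")
print(f"middle 95% of study estimates: {cuts[0]:.1f} to {cuts[-1]:.1f} days")
```

The interval printed on the second line is analogous to the paper's credible interval: it quantifies uncertainty about the 99th percentile itself, and observing more people would shrink it. It would not shrink the person-to-person variation, which is the first distribution.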
After some more looking and thinking I still find both the claim and the answer a bit confusing. Given the, to me, somewhat cryptic comment below which seems to have a some backing, I want to see if I can figure out where I'm missing something everyone sees as so obvious.
The days for incubation at the 99% level were estimates of the longest incubation period, in days, that we should expect. Am I still on the same page with everyone on that?
If so, then we have the 95% CI range about that estimate of the longest expected incubation period. The paper calls the CI a credible interval, which is a term I've never heard used in statistics; I had taken CI to be the standard confidence interval for the estimated value. From what I can understand, the credible interval and the older (I suppose) confidence interval are similar but not quite the same: the credible interval appears to be narrower for small samples than the confidence interval, but they tend to converge in the limit.
If they are really similar concepts then I would think the same interpretation applies as I was using before. That is one cannot say a very strong statement about the estimated value per se using CI ranges. The CI is telling us that the "true" value has a likelihood of falling between the upper and lower range but we don't really know where.
So if credible intervals do work like confidence intervals, then the claim that out of 100 quarantined people one would have an incubation period at least as long as the estimated days (11.9, 14.1, or 18.5) is not a correct interpretation. What we should be able to say is that we have a credibility (or confidence) level of 95% that the longest period we would observe falls between the lower and upper bounds.
This is certainly possible, and it will never be possible to fully rule out second exposures in cases like this. But note that the 19- and 27-day outliers were not included in the data used by the linked paper that estimated a >14day right tail, and I think it's unlikely for untraced second exposures to have influenced its conclusion.
There are cases where you can know which exposure is responsible because you did RNA sequencing and can use the mutations to trace the route of infection. Given that we however don't have cheap enough RNA sequencing to widely deploy it, it seems to me unlikely that the 27-day outlier is backed by such considerations.
...during disasters, people can show strikingly altruistic behavior, but interventions by authorities can backfire if they fuel mistrust or treat the public as an adversary rather than people who will step up if treated with respect. Given that even homemade masks may work better than no masks, wearing them might be something to direct people to do while they stay at home more, as we all should.
We will no doubt face many challenges as the pandemic moves through our societies, and people will need to cooperate. The sooner we create the conditions under which such cooperation can bloom, the better off we all will be.
I am puzzled by a somewhat amusing phenomenon. There are thousands of people on social media screaming "Stop buying masks! They are useless". That's intriguing. If they are useless, why do you care?
...It is even stranger that some people reply: "hospitals need the masks". So... hospitals think that masks are useful? Then they are not useless. In fact, they seem to be indispensable. Then the correct statement would be: "Stop buying masks! They are extremely useful!"
...Did you notice that perfectly healthy World Health Organization officials always wear masks during their news briefings to reporters? It's because they now believe that you can transmit the virus even if you don't have the symptoms, so potentially anyone around you (healthy or sick) may be contagious.
...I am not competent enough to judge how effective a mask can be in the case of this covid-19. I am just intrigued that so many people have joined the anti-mask crusade despite these obvious logical contradictions.
He’s very independent and doesn’t try to compete in the attention landscape like most blogs, so I take it as a fairly strong datapoint that these are fairly obvious inconsistencies to the public.
[The] CDC created a test requiring a slow RT-PCR reaction on a specific model of machine to be run overnight, not designing the right primers, and not realizing this for a month. This was both strategically (using 30-year-old technology) and tactically (designing wrong primers) incompetent. I would expect most graduate students to do better.
He also feels that the CDC is giving lousy information. In their FAQ, their answer to whether your child is at risk for Covid-19 fails to mention that children reliably have much milder disease courses than adults. He says:
It’s clear that kids get less sick if at all. Why doesn’t the CDC say so? It won’t hurt to tell the truth! If you provide such lousy information, people will stop trusting you.
I think this is consistent with the primary goal of communication from major institutions being to prevent people from doing stupid things, over and above being open and honest.
Dot points at end of masks section seem misplaced.
This should probably be in block quotes in the post:
The criteria are intended to serve as guidance for evaluation. Patients should be evaluated and discussed with public health departments on a case-by-case basis. For severely ill individuals, testing can be considered when exposure history is equivocal (e.g., uncertain travel or exposure, or no known exposure) and another etiology has not been identified.
It may be possible that a person can get COVID-19 by touching a surface or object that has the virus on it and then touching their own mouth, nose, or possibly their eyes, but this is not thought to be the main way the virus spreads.
It's certainly contrary to most sources I've seen. Instead CDC claim it spreads "between people who are in close contact with one another (within about 6 feet)" (i. e. through droplets in the air).
Note that "survived as an [artificially-generated] aerosol" does not mean that aerosols are generated in substantial numbers in realistic scenarios, nor does it say anything about how infectious the aerosol route is. (Also note that the "3 hour" figure in the preprint's original abstract was grossly misleading; the preprint has been updated to remove it. The real figure implied by their data is longer.)
I agree that a lot of aspects of the early response have been less than ideal. And I'm seriously worried about the CDC being affected by the White House's explicitly stated goal of under-reacting to the pandemic.
With that said, I think it's important to keep in mind just how important trust in public health institutions will be for the next year.
There already is an enormous amount of misinformation out there competing for people's attention today. Even if mistakes have been made, the CDC is still far more reliable than the White House and many other loud sources, so we seriously don't want to reduce the CDC's share of information being acted on by the public.
In 2009, a *lot* of people initially expressed that they would want a vaccine shot when available, only to refuse it when it ultimately hit the market out of (unfounded) concerns about vaccine safety. 
Just imagine how horrible it will be if people don't trust a licensed SARS-COV-2 vaccine once we finally have it. Trust in the CDC will be an important part of avoiding that.
For these reasons, I urge people to consider the info-hazards associated with discourse surrounding the CDC's credibility. This is not meant as an all-things-considered judgment that you shouldn't critique the CDC; just be mindful of this risk.
Even if the claim were usually true on longer time scales, I doubt that pointing out an organisation's mistakes and not-entirely-truthful statements usually increases trust in it on the short time scales that might matter most here. Reforming organizations and rebuilding trust usually takes time.
Also important in these considerations are the meta-level issues that arise when dealing with appeals to consequences [LW · GW]. There are other considerations too: for instance, if the fact that the CDC has made mistakes is never disseminated, it becomes harder to hold it accountable for those mistakes and to ensure they don't happen again.
This post is a bad idea and it would be better if it were taken down. It's "penny-wise, pound-foolish" applied to epistemology and I would be utterly shocked if this post had a net positive effect.
I wrote a big critique outlining why I think it's bad, but I couldn't keep it civil and don't want to spend another hour editing it to be, so I'll keep it brief and to the point: lesswrong has been a great source of info and discussion on COVID-19 in the past couple of weeks, much better than most mainstream sources, but as usual, I don't recommend the site to friends or family because I know posts like this always pop up and I don't want to expose people to this obvious info hazard or be put in the position of defending why I recommended a community that posts info hazards like this.
As a mostly-lurker, I'm really just raising my hand here and saying "posts like this make me extremely uncomfortable and unwilling to recommend this community to others." Obviously not everyone wants this community to become mainstream and I'm really not trying to make anyone feel bad, but I think it's worth mentioning since other than David Manheim, I don't see my opinion represented in the comments yet and it looks like it's a minority one.
(Obviously it's up to the author whether or not to remove the post - I'm not requesting anything here, just expressing my preferences.)
In any case, the OP seems to be presenting true (as far as I can tell) and useful (potentially life-saving, in fact!) information. If you’re going to casually drop labels like “infohazard” in reference to it, you ought to do a lot better than a justification-free “this is bad”. Civil or not, I’d like to see that critique.
If you think the OP is harmful, by all means do not let civility stop you from posting a comment that may mitigate that harm! If you really believe what you’re saying, that comment may save lives. So let’s have it!
EDIT: Like Zack [LW(p) · GW(p)], I will strong-upvote this extended critique if you post it.
TL;DR: “Infohazard” means any kind of information that could be harmful in some fashion. Let’s use “cognitohazard” to describe information that could specifically harm the person who knows it.
An information hazard is a concept coined by Nick Bostrom in a 2011 paper for Review of Contemporary Philosophy. He defines it as follows: “Information hazard: A risk that arises from the dissemination or the potential dissemination of (true) information that may cause harm or enable some agent to cause harm.”
[Not the original poster, but I'll give it a shot]
This argument seems to hinge mostly on whether the majority of those expected to read this content end up being LessWrong regulars or not - with the understanding that going viral (e.g. a reddit hug of death) would drastically shift that distribution.
Even accepting everything in the post as true on its face, it's unlikely such info would take the CDC out of the top 5 sources of info on this for the average American, but it's understandable that people would come away with a different conclusion if led here by some sensationalist clickbait headline and primed to do so. That entire line of argument is incredibly speculative, but necessarily so if viral inbound links boost your readership by two orders of magnitude. Harm and total readership would be very sensitive to the framing and virality of the referrer. It's maybe relevant to ask if content on this forum has gone viral previously and, if so, to what degree it was helpful/harmful.
I'm not really decided one way or the other, but that private/members-only post option sounds like a really good idea. It sounds like there's some substance to this disagreement, but it also has a Pascal's-mugging character to it that makes me very reluctant to endorse the "info hazard" claim. Harm reduction seems like a reasonable middle ground.
I think that the definition is completely clear. "Information hazards are risks that arise from the dissemination or the potential dissemination of true information that may cause harm or enable some agent to cause harm. Such hazards are often subtler than direct physical threats, and, as a consequence, are easily overlooked." This has nothing to do with existential risk.
If lower trust in the CDC will save lives, facts that reduce trust are not an infohazard, and if lower trust in the CDC will lead to more deaths, they are. So - GIVEN THAT THE FACTS ARE TRUE, the dispute seems to be about different predictive models, not confusion about what an infohazard is. Even then, the problem here is that the prediction itself is not sufficiently specific. Lower trust among what group, for example? Most LessWrongers are unlikely to decide to oppose vaccines, but there are people who read LessWrong who would.
But again, the claims were in some cases incorrect; they confuse the CDC with the Trump administration more broadly, and many are unreasonable post-hoc judgments about what the CDC should have done that I think make the CDC look worse than a reasonable observer would conclude.
I would be utterly shocked if this post had a net positive effect.
To be clear, I expect this post to be directly positive for my friends and family to read. From my vantage point CDC is recommending severe underpreparation, and I need my mum, dad, and their families and friends to stop listening to the CDC and go and prepare for several months of self-imposed quarantine. I’ve fortunately gotten my mum to stockpile food, but I expect my dad will be harder, and I am glad I have a resource showing that the CDC is not to be trusted over simple math and common sense.
I am aware that losing institutional trust means coordination problems, but if your institution lies you mustn’t prop it up anyway just because it has power. Through misleading and dangerous advice, they’re forcing our hand here when we try to have an honest conversation, not the other way around.
(You said you didn’t want to spend hours on this, so please don’t feel obliged to reply, I just wanted to reply to the thing you said that seemed false in my personal experience.)
I wrote a big critique outlining why I think it's bad, but I couldn't keep it civil and don't want to spend another hour editing it to be
If you post it anyway (maybe a top-level post for visibility?), I'll strong-upvote it. I vehemently disagree with you, but even more vehemently than that, I disagree with allowing this class of expense to conceal potentially-useful information, like big critiques. (As it is written of the fifth virtue, "Those who wish to fail must first prevent their friends from helping them.")
I'm really not trying to make anyone feel bad
Shouldn't you? If the OP is actually harmful, maybe the authors should feel bad for causing harm! Then the memory of that feeling might stop them from causing analogous harms in analogous future situations. That's what feelings are for, evolutionarily speaking.
Personally, I disapprove of this entire class of appeals-to-consequences (simpler to just say clearly what you have to say, without trying to optimize how other people will feel about it), but if you find "This post makes the community harder to defend, which is bad" compelling, I don't see why you wouldn't also accept "Making the authors feel bad would make the community easier to defend (in expectation), which is good".
If you post it anyway (maybe a top-level post for visibility?), I'll strong-upvote it. I vehemently disagree with you, but even more vehemently than that, I disagree with allowing this class of expense to conceal potentially-useful information, like big critiques.
I think you're ignoring the harms from posting something uncivil. Civility is an extremely important norm. I would not support something that is directly insulting, even if it is an important critique.
However, I did strong-upvote this comment (meaning sirjackholland [LW · GW]'s comment on this post) and I applaud them both for not publishing their original critique and for expressing their position anyway.
I don't recommend the site to friends or family because I know posts like this always pop up and I don't want to expose people to this...
This is just basically correct! Good job! :-)
Arguably, most thoughts that most humans have are either original or good, but not both. People seriously attempting to have good, original, pragmatically relevant thoughts about nearly any topic normally just shoot themselves in the foot. This has been discussed [LW · GW] ad [LW · GW] nauseam [LW · GW].
This place is not good for cognitive children, and indeed it MIGHT not be good for ANYONE! It could be that "speech to persuade" is simply a cultural and biological adaptation of the brain which primarily exists to allow people to trick other people into giving them more resources, and the rest is just a spandrel at best.
It is admirable that you have restrained yourself from spreading links to this website to people you care about and you should continue this practice in the future. One experiment per family is probably more than enough.
HOWEVER, also, you should not try to regulate speech here so that it is safe for dumb people without the ability to calculate probabilities, detect irony, doubt things they read, or otherwise tolerate cognitive "ickiness" that may adhere to various ideas not normally explored or taught.
There is a possibility that original thinking is valuable, and it is possible that developing the capacity for such thinking through the consideration of complex topics is also valuable. This site presupposes the value of such cognitive experimentation, and then follows that impulse to whatever conclusions it leads to.
Regulating speech here to a level so low as to be "safe for anyone to be exposed to" would basically defeat the point of the site.
Edit: This is a type of post that should have been vetted with someone for infohazards and harms before being posted, and (Further edit) I think it should have been removed by the authors, though censorship is obviously counterproductive at this point.
Infohazards are a real thing, as is the Unilateralist's Curse. (Edit to add: No, infohazards and the unilateralist's curse are not about existential or global catastrophic risk. Read the papers.) And right now, overall, reduced trust in the CDC will almost certainly kill people. Yes, their current political leadership is crappy, and blameworthy for a number of bad decisions - but that doesn't change the fact that undermining them now is a very bad idea.
Yes, the CDC has screwed up many times, but publicly blaming them for things that were non-obvious (like failing to delay sending out lab kits for further testing), or for things that they screwed up and that everyone paying attention, including them, now realizes they got wrong (like being slow to allow outside testing), in the middle of a pandemic seems like exactly the kind of consequence-blind action that LessWrongers should know better than to engage in.
Disclaimer: I know lots of people at CDC, including some in infectious diseases, and have friends there. They are human, and get things wrong under pressure - and perhaps there are people who would do better, but that's not the question at hand.
You're wrong about this. Trust in the CDC is not a single-variable scale and not a generically useful resource. Trust in the CDC is a mix of peoples' estimation of the CDC's competence, and their estimation of whether the CDC is biased towards under-response or over-response. It is severely harmful for people to over-estimate the CDC's competence, or to fail to recognize that the CDC is biased towards under-response.
Having previously over-estimated CDC's competence caused many parties which could have been bypassing the CDC to create and deploy tests, to fail to respond in time. I expect that decision-makers currently relying on the CDC's competence will implement distancing measures and ban gatherings much too late.
The main reason we might want people to over-estimate the CDC's competence is that this trust could be used to solve coordination problems. However, the coordination problems that CDC could plausibly solve--closing airports, banning public gatherings, and implementing quarantines--are problems that it solves using legal power, not using generic community trust. To the extent that community trust is required to implement such measures, knowing that the CDC has been consistently biased towards under-response will make it easier, to a greater degree than knowing that they've been incompetent will make it harder.
My evaluation is that reducing trust in the CDC has net-positive consequences. But note that, separately, I don't think an evaluation of this depth is typically required before truthfully speaking about an organization's credibility. I expect that nearly all of the time, when trading off between speaking truth and empowering an institution, speaking truth is the correct move, and those who think otherwise will be mistaken.
I can't reply to all of this now, but in short, yes, it's not a single variable scale, and yes, over-reliance on government was on net very harmful to this point. But no, most of the CDC's influence isn't legal powers, since the US legal system simply doesn't work that way. The CDC cannot tell governors what to do, nor can the president - that's all about their ability to persuade people that they should be listened to, and it's going to be critical if and when they get their shit together.
I also think several object-level claims in the post are *wildly* off base in several places, in part showing a basic lack of understanding of the issues, and in part claiming retrospectively that they should have known things that no one knew in advance. It claims the CDC should have approved testing that they weren't part of the approval process for - no, HHS isn't the same as the CDC, nor is the FDA. It says the CDC needed to respond with tests more quickly, but they evidently should have gone slower with distributing lab tests to ensure they didn't have a false-positive problem. The CDC correctly told people that focusing on masks would be a bad idea, because taking focus away from handwashing is really fucking stupid given the relative efficacy and scalability of the two.
And your prior assumption that on balance it's better to attack an institution instead of empowering it is predicated on the claims 1) being true, and 2) directly having a bearing on whether the institution should be trusted. In this case, I've noted that 1 is false, and for 2, I don't think that directly attacking their credibility for admitted missteps is either necessary or helpful in telling the truth.
You're right that the blocks to testing were largely caused by HHS and the FDA, not the CDC. We described that in the text, but I agree that there's too large a risk someone skims the headings and misses that. I think it's important to include because it's entangled with things the CDC did do, but I've edited the heading to be clearer.
I think you're confusing the CDC with the US government generally in many more places, and have failed to differentiate in ways that are misleading to readers. And as I said, I think you're both blaming the CDC for things they got right, like discouraging use of already scarce masks by the uninfected public, and wrong to blame the CDC for mistakes that are only clear in retrospect.
This is a type of post that should have been vetted with someone for infohazards and harms before being posted, and pending that, I think it should be deleted by moderators or removed by the authors.
As a response to this, the moderator team did indeed reach out (CC'ing David) to one of the people I think David and I both consider to be among the best informed decision-makers in biorisk. With their permission, here is the key excerpt from their response:
> [Me summarizing David:] David is under the impression that people like Elizabeth and Jim are under an obligation to show posts like this to people in biorisk like yourself and definitely not publish if you had any objections (and that posts that don't do so should be immediately deleted). Do you think they are under that obligation and that we should delete posts of this type?
I do not think they are under an obligation to do this. If the post contained object-level nonobvious content related to generating or exacerbating biorisks, I would consider them under a moral obligation to do so, the strength of which would depend on the particulars of the situation.
If the post overemphasizes the degree to which the CDC has handled the outbreak badly only mildly-moderately, or based on reasonable-seeming lines of argumentation in my view, I'd likely consider that within the reasonable range of opinions/perspectives to hold and share on forums like LW. If the post was highly misleading, such that I thought it communicated the wrong picture of the CDC, then I'd think it was epistemically virtuous to make top-level updates, and if the authors refused to do that, writing a counter-post explaining why their post was misleading would seem like a good thing to do to me, though not something I'd want to demand, if I were in a position to demand such a thing, which I don't consider myself to be.
Overall, my sense is that you made a prediction that people in biorisk would consider this post an infohazard that had to be prevented from spreading (you also reported this post to the admins, saying that we should "talk to someone who works in biorisk at FHI, Openphil, etc. to confirm that this is a really bad idea").
We have now done so, and in this case others did not share your assessment (and I expect most other experts would give broadly the same response). I think the authors were correct in predicting a response like this if they had run it by anyone else, and I also don't think they were under any obligation to run the post by anyone else. This is not in any way a post that is particularly likely to contain infohazards, and I feel very comfortable with people posting posts in this general reference class without running them by anyone else first.
Of course, please continue to point out any errors and ask for factual corrections to the post. And downvote the post if you think it is overall more misleading than helpful. A really big reason for posting things like this publicly is so that we can correct any errors and collectively promote the most important information to our attention. But it seems clear to me that this post does not constitute any significant infohazard that the LessWrong team should prevent from spreading.
I do also think that it is important for LessWrong to have a good infohazard policy, in particular for more object-level ideas, both in biorisk and artificial intelligence. In those domains, I would have probably followed your recommended policy of drafting the post until we had run the post by some more people. I am also happy to chat more with you about what our policies in these more object-level domains should be.
It does seem to me that your comments on this post (and your private messages, and postings to other online groups warning of infohazards in this space) have overall been quite damaging to good discourse norms, and I would strongly request that you stop asking people to take posts down, in particular in the way you have here. Our ability to analyze ideas on the basis of their truth-value, and not the basis of their political competitiveness and implications is one of our core strengths on LessWrong, and it appears to me that in this thread you've at least once argued for conclusions you think are prosocial, but not actually true [LW(p) · GW(p)], which I think is highly damaging.
You've also claimed that hard to access expert-consensus was on your side, when it evidently is not, which I think is also really damaging, since I do think our ability to coordinate around actually dangerous infohazards requires accurate information about the beliefs of our experts, and it seems to me that overall people will walk away with a worse model of that expert consensus after reading your comments.
Most of the consensus that has been built around infohazards in the bio-x-risk community is about the handling of potentially dangerous technological inventions, and major security vulnerabilities. You claimed here (and other places) that this consensus also applied to criticizing government institutions during times of crisis, which I think is wrong, and also has very little chance of actually ever reaching consensus (at least in crises of this type).
The effects of your comments have also been quite significant. The authors of this post have expressed large amounts of stress to me and others. I (and others on the mod team like Ben [LW · GW]) have spent multiple hours dealing with this, and overall I expect authority-based criticism like this to have very large negative chilling effects that I think will make our overall ability to deal with this crisis (and others like it) quite a bit worse. You have also continued writing comments like this in private messages and other forums adjacent to LessWrong, with similar negative effects. While I don't have jurisdiction over those places, I can only implore you strongly to cease writing comments of this type, and if you think something is spreading misinformation, to instead just criticize it on the object-level. Here, on LessWrong, where I do have jurisdiction, I still don't think I am likely to invoke my moderator powers, but I am going to strong-downvote any future comments like this (and have already done so for this one).
If you do believe that we should change our infohazard policies to include cases like this, then you are welcome to argue for that by making a new top-level post. But please don't claim that we already have norms, policies and broad buy-in, and that a post like this should have already been taken down, which is just evidently wrong.
I will of course leave Jim and/or Elizabeth to give their thoughts on the ethics of the situation, but I was surprised to see you take this line David, so I wanted to briefly share my perspective with you.
The US govt is majorly failing to deal with coronavirus, and in many worlds the fatalities will be massive (10 million plus). At some point it will be undeniable, and hopefully the CDC and so on will be able to give the important quarantining advice, and I'll support them at that time.
But in the meantime, my honest impression, for reasons accurately described above about their recommendations (claiming things like masks aren't helpful, and that community spread hasn't happened when they'd in principle not tested for it), is that they've been dishonest and misleading, leading people to substantially underestimate the risk.
Perhaps you think the post is actually false on those charges, and if so that criticism is good and proper and I endorse it wholeheartedly. But if not, I'm understanding your position to be that we should not point out the dishonesty and misleading information for the greater good. While I can imagine many politicians are indeed in that situation, I always feel that here, on LessWrong, we should try to actually be able to talk honestly and openly about the truth of the matter, and be a place where we can actually build an accurate map and not systematically self-deceive for this reason. That's my perspective on the matter that leads me to be pretty positively disposed to the above post ethically.
(I'm overloaded with various emergency prep stuff today and can't have a super long convo today – should be able to reply tomorrow though.)
(10 hour time zone lags make conversations like this hard.)
My claim is not that it's certainly true that this is bad, and should not have been said. I claim that there is a reasonable chance that it could be bad, and that for that reason alone, it should have been checked with people and discussed before being posted.
I also claim that the post is incorrect on its merits in several places, as I have responded elsewhere in the thread. BUT, as Bostrom notes in his paper, which people really need to read, infohazards aren't a problem because they are false, they are a problem because they are damaging. So if I thought this post were entirely on point with its criticisms, I would have been far more muted in my response, but still have bemoaned the lack of judgement in not bothering to talk to people before posting it. But in that case, I might have agreed that while the infohazard concerns were real, they would be outweighed by truth seeking norms on LW. I'm not claiming that we need censorship of claims here, but we do need standards [LW · GW], and those standards should certainly include expecting people to carefully vet potential infohazards and avoid unilateralist curse issues before posting.
I want to be clear with you about my thoughts on this David. I've spent multiple hundreds of hours thinking about information hazards, publication norms, and how to avoid unilateralist action, and I regularly use those principles explicitly in decision-making. I've spent quite some time thinking about how to re-design LessWrong to allow for private discussion and vetting for issues that might lead to e.g. sharing insights that lead to advances in AI capabilities. But given all of that, on reflection, I still completely disagree that this post should be deleted, or that the authors were taking worrying unilateralist action, and I am happy to drop 10+ hours conversing with you about this.
Let me give my thoughts on the issue of infohazards.
I am honestly not sure what work you think the term is doing in this situation, so I'll recap what it is for everyone following. In history, there has been a notion that all science is fundamentally good, that all knowledge is good, and that science need not ask ethical questions of its exploration. Much of Bostrom's career has been to draw the boundaries of this idea and show where it is false. For example, one can build technologies that a civilization is not wise enough to use correctly, that lead to degradation of society and even extinction (you and I are both building our lives around increasing the wisdom of society so that we don't go extinct). Bostrom's infohazards paper is a philosophical exercise, asking at every level of organisation what kinds of information can hurt you. The paper itself has no conclusion, and ends with an exhortation toward freedom of speech, its point is simply to help you conceptualise this kind of thing and be able to notice in different domains. Then you can notice the tradeoff and weigh it properly in your decision-making.
So, calling something an infohazard merely means that it's damaging information. An argument that has a false conclusion is an infohazard, because it might cause people to believe a false conclusion. Publishing private information is an infohazard, because it allows adversaries to attack you better, but we still often publish infohazardous private material because it contributes to the common good (e.g. listing your home address on public Facebook events helps people burgle your house, but it's worth it to let friends find you). Now, the one kind of infohazard that there is consensus on in the x-risk community focused on biosecurity is sharing specific technological designs for pathogens that could kill masses of people, or sharing information about system weaknesses that are presently subject to attack by adversaries (for obvious reasons I won't give examples, but Davis Kingsley helpfully published an example that is no longer true in this post [LW · GW], if anyone is interested), so I assume that this is what you are talking about, as I know of no other infohazard that there is a consensus about in the bio-x-risk space that one should take great pains to silence and punish defectors on.
The main reason Bostrom's paper is brought up in biosecurity is in the context of arguing that the spread of specific technological designs for various pathogens and or damaging systems shouldn't be published or sketched out in great detail. As Churchill was shocked by Niels Bohr's plea to share the nuclear designs with the Russians, because it would lead to the end of all war (to which Churchill said no and wondered if Bohr was a Russian spy), it might be possible to have buildable pathogens that terrorists or warring states could use to hurt a lot of people or potentially cause an existential catastrophe. So it would be wise to (a) have careful publication practises that involve the option of not-publishing details of such biological systems and (b) not publicise how to discover such information.
Bostrom has put a lot of his reputation on this being a worrying problem that you need to understand carefully. If someone on LessWrong were sharing e.g. their best guess at how to design and build a pathogen that could kill 1%, 10% or possibly 100% of the world's population, I would be in quite strong agreement that as an admin of the site I should preliminarily move the post back into their drafts, talk with the person, encourage them to think carefully about this, and connect them to people I know who've thought about this. I can imagine that the person has reasonable disagreements, but if it seemed like the person was actively indifferent to the idea that it might cause damage, then I can't stop them writing anywhere on the internet, but LessWrong has very good SEO and I don't want that to be widely accessible so it could easily be the right call to remove their content of this type from LessWrong. This seems sensible for the case of people posting mechanistic discussion of how to build pathogens that would be able to kill 1%+ of the population.
Now, you're asking whether we should treat criticism of governmental institutions during a time of crisis in the same category that we treat someone posting pathogens designs or speculating on how to build pathogens that can kill 100 million people. We are discussing something very different, that has a fairly different set of intuitions.
Is there an argument here that is as strong as the argument that sharing pathogen designs can lead to an existential catastrophe? Let me list some reasons why this action is in fact quite useful.
Helping people inform themselves about the virus. As I am writing this message, I'm in a house meeting attempting to estimate the number of people in my area with the disease, and what levels of quarantine we need to be at and when we need to do other things (e.g. can we go to the grocery store, can we accept amazon packages, can we use Uber, etc). We're trying to use various advice from places like the CDC and the WHO, and it's helpful to know when I can just trust them to have done their homework versus taking them as helpful but that I should re-do their thinking with my own first-principles models in some detail.
Helping necessary institutional change happen. The coronavirus is not likely to be an existential catastrophe. I expect it will likely kill over 1 million people, but is exceedingly unlikely to kill a couple percent of the population, even given hospital overflow and failures of countries to quarantine. This isn't the last hurrah from that perspective, and so a naive maxipok utilitarian calculus would say it is more important to improve the CDC for future existential biorisks rather than making sure to not hinder it in any way today. I think that standard policy advice is that stuff gets done quickly in crisis time, and I think that creating public, common knowledge of the severe inadequacies of our current institutions at this time, not ten years later when someone writes a historical analysis, but right now, is the time when improvements and changes are most likely to happen. I want the CDC to be better than this when it comes to future bio-x-risks, and now is a good time to very publicly state very clearly what it's failing at.
Protecting open, scientific discourse. I'm always skeptical of advice to not publicly criticise powerful organisations because it might cause them to lose power. I always feel like, if their continued existence and power is threatened by honest and open discourse... then it's weird to think that it's me who's defecting on them when I speak openly and honestly about them. I really don't know what deal they thought they could make with me where I would silence myself (and every other free-thinking person who notices these things?). I'm afraid that was not a deal that was on offer, and they're picking the wrong side. Open and honest discourse is always controversial and always necessary for a scientifically healthy culture.
So the counterargument has to be that there is a downside here strong enough to outweigh these benefits. Importantly, Bostrom shows that information should be hidden and made secret when sharing it might lead to an existential catastrophe.
Could criticising the government here lead to an existential catastrophe?
I don't know your position, but I'll try to paint a picture; let me know if this sounds right. I think you believe something like the following is a possibility. This post, or a successor like it, goes viral (virus-based wordplay unintended) on Twitter, leading to a consensus that the CDC is incompetent. Later on, the CDC recommends mass quarantine in the US, and the population follows the letter but not the spirit of the recommendation, meaning many people break quarantine and die.
So that's a severe outcome. But it isn't an existential catastrophe.
(Is the coronavirus itself an existential catastrophe? As I said above, this doesn’t seem like it’s the case to me. Its death rate seems to be around 2% when given the proper medical treatment (respirators and the like), and so given hospital overload will likely be higher, perhaps 3-20% (depending on the variation in age of the population). My understanding is that it will likely peak at a maximum of 70% of any given highly connected population, and it's worth remembering that much of humanity is spread out and not based in cities where people see each other all of the time.
I think the main world in which this is an existential catastrophe is the world where getting the disease does not confer immunity after you recover from it. This means a constant cycle of the disease amongst the whole population, without being able to develop a vaccine. In that world, things are quite bad, and I'm not really sure what we'll do then. That quickly moves me from "The next 12 months will see a lot of death and I'm probably going to be personally quarantined for 3-5 months and I will do work to ensure the rationality community and my family is safe and secured" to "This is the sole focus of my attention for the foreseeable future."
Importantly, I don't really see any clear argument for which way criticism of the CDC plays out in this world.)
And I know there are real stakes here. Even though you need to go against CDC recommendation today and stockpile, in the future the CDC will hopefully be encouraging mass quarantine, and if people ignore that advice then a fraction of them will die. But there are always life-and-death stakes to speaking honestly about failures of important institutions. Early GiveWell faced the exact same situation, criticising charities saving lives in developing countries. One can argue that this kills people by reducing funding for these important charities. But it was worth it a million times over, because we've since coordinated around far more effective charities and saved far more lives. We need to discuss governmental failure here in order to save more lives in the future.
(Can I imagine taking down content about the coronavirus? Hm, I thought about it for a bit, and I can imagine that, if a country was under mass quarantine, if people were writing articles with advice about how to escape quarantine and meet people, that would be something we'd take down. There's an example. But criticising the government? It's like a fundamental human right, and not because it would be inconvenient to remove, but because it's the only way to build public trust. It makes no sense to me to silence it.)
The reason you mustn’t silence discussion when you think the consequences are bad is that the truth is powerful and has surprising consequences. Bostrom has argued that this principle no longer holds when existential risk is at stake, but if you think he believes this applies elsewhere, let me quote the end of his paper on information hazards.
Even if our best policy is to form an unyielding commitment to unlimited freedom of thought, virtually limitless freedom of speech, an extremely wide freedom of inquiry, we should realize not only that this policy has costs but that perhaps the strongest reason for adopting such an uncompromising stance would itself be based on an information hazard; namely, norm hazard: the risk that precious yet fragile norms of truth-seeking and truthful reporting would be jeopardized if we permitted convenient exceptions in our own adherence to them or if their violation were in general too readily excused.
Footnote on Unilateralism
I don't see a reasonable argument that this was close enough to such a situation that writing this is a dangerous unilateralist action. This isn't a situation where 95% of people think it's bad but 5% think it's good.
If you want to know whether we've lifted the unilateralist's curse here on LessWrong, you need look no further than the Petrov Day event that we ran, and see what the outcome was. That was indeed my attempt to help LessWrong practise and self-signal that we don't take unilateralist action. But this case is neither an x-risk infohazard nor worrisome unilateralist action. It’s just two people doing their part in helping us draw an accurate map of the territory.
Have you considered whether your criticism itself may have been a damaging infohazard (e.g. in causing people to wrongly place trust in the CDC and thereby dying, in negatively reinforcing coronavirus model-building, in increasing the salience of the "infohazard" concept which can easily be used to illegitimately maintain a state of disinformation, in reinforcing authoritarianism in the US)? How many people did you consult before posting it? How carefully did you vet it?
If you don't think the reasons I mentioned are good reasons to strongly vet it before posting, why not?
I have discussed the exact issue of public trust in institutions during pandemics with experts in this area repeatedly in the past.
There are risks in increasing the salience of infohazards, and I've talked about this point as well. The consensus in both the biosecurity world, and in EA in general, is that infohazards are underappreciated relative to the ideal, and should be made more salient. I've also discussed the issues with disinformation with experts in that area, and it's very hard to claim that people in general are currently too trusting of government authority in the United States - and the application to LW specifically makes me think that people here are less inclined to trust government than the general public, though their distrust is probably more justifiable. But again, the concern isn't just about die-hard LessWrongers reading the post; it's about the risks if it spreads further.
But aside from that, I think there is no case to be made that the criticisms I noted as off-base on the object level are infohazards. Pointing out that the CDC isn't in charge of the FDA's decision, or pointing out that the CDC distributed tests *too quickly* and had an issue which they corrected, hardly seems problematic.
The consensus in both the biosecurity world, and in EA in general, is that infohazards are underappreciated relative to the ideal, and should be made more salient.
Note that I pretty strongly disagree with this. I really wish people would talk less about infohazards, in particular when people talk about reputational risks. My sense is that a quite significant fraction of EAs share this assessment, so calling it consensus seems quite misleading.
I've also discussed the issues with disinformation with experts in that area, and it's very hard to claim that people in general are currently too trusting of government authority in the United States
I also disagree with this. My sense is that on average people are far too trusting of government authority, and much less trust would probably improve things, though it obviously depends on the details of what kind of trust. Trust in the rule of law is very useful. Trust in the economic policies of the United States, or its ability to do long-term planning, appears widespread and usually quite misplaced. I don't think your position is unreasonable to hold, but calling its negation "very hard to claim" seems wrong to me, since again many people I think we both trust a good amount disagree with your position.
For point one, I agree that for reputation discussions, infohazards are probably overused, and I used it that way here. I should probably have been clearer about this in my own head, as I was incorrectly lumping infohazards together. In retrospect I regret bringing this up, rather than focusing on the fact that I think the post was misleading in a variety of ways on the object level.
For point two, I also think you are correct that there is not much consensus in some domains - when I say they are clearly not trusting enough, I should have explicitly (instead of implicitly) made my claim about public health. So in economics, governance, legislation, and other places, people are arguably too trusting overall - not obviously, but at least arguably. The other side is that most people who aren't trusting of government in those areas are far too overconfident in crazy pet theories (gold standard, monarchy, restructuring courts, etc.) compared to what government espouses - just as they are in public health. So I'm skeptical of the argument that lower trust in general, or more assumptions that the government is generically probably screwing up in a given domain, would actually be helpful.
Cool, then I think we mostly agree on these points.
I do want to say that I am very grateful about your object-level contributions to this thread. I think we can probably get to a stage where we have a version of the top-level post that we are both happy with, at least in terms of its object-level claims.
Thanks for answering. It sounds like, while you have discussed general points with others, you have not vetted this particular criticism. Is there a reason you think a higher standard should be applied to the original post?
In large part, I think there needs to be a higher standard for the original post because it got so many things wrong. And at this point, I've discussed this specific post, and had my judgement confirmed three times by different people in this area who don't want to be involved. But also see my response to Oliver below where I discuss where I think I was wrong.
The underlying statistical phenomenon is just regression to the mean: if people aren't perfect about determining how good something is, then the one who does the thing is likely to have overestimated how good it is.
I agree that people should take this kind of statistical reasoning into account when deciding whether to do things, but it's not at all clear to me that the "Unilateralist's Curse" catchphrase is a good summary of the policy you would get if you applied this reasoning evenhandedly: if people aren't perfect about determining how bad something is, then the one who vetoes the thing is likely to have overestimated how bad it is.
In order for the "Unilateralist's Curse" effect to be more important than the "Unilateralist's Blessing" effect, I think you need additional modeling assumptions to the effect that the payoff function is such that more variance is bad. I don't think this holds for the reference class of "blog posts criticizing institutions"? In a world with more variance in blog posts criticizing institutions, we get more good criticisms and more bad criticisms, which sounds like a good deal to me!
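The statistical effect under discussion can be made concrete with a small simulation (a sketch added for illustration; the agent count, noise level, and true value are arbitrary assumptions, not taken from Bostrom's paper): five agents each see the true value of a mildly harmful action plus independent Gaussian noise, and we compare how often the action gets taken under a unilateral rule (any one agent can trigger it) versus a majority rule.

```python
import random

def p_action(rule, n_agents=5, true_value=-0.5, noise_sd=1.0, trials=50_000):
    """Probability that the action is taken under a given decision rule,
    when each agent sees the true value plus independent Gaussian noise."""
    taken = 0
    for _ in range(trials):
        # Count how many agents' noisy estimates come out positive.
        votes = sum(random.gauss(true_value, noise_sd) > 0 for _ in range(n_agents))
        if rule == "unilateral" and votes >= 1:             # any one agent suffices
            taken += 1
        elif rule == "majority" and votes > n_agents // 2:  # group consensus needed
            taken += 1
    return taken / trials

random.seed(0)
# The action's true value is -0.5 (mildly harmful), but each noisy
# estimate is positive about 31% of the time, so under the unilateral
# rule some agent almost always favors acting.
p_uni = p_action("unilateral")
p_maj = p_action("majority")
print(f"unilateral: {p_uni:.2f}, majority: {p_maj:.2f}")
```

With these assumed parameters the unilateral rule takes the harmful action roughly five times as often as the majority rule (about 0.84 vs 0.17). The symmetric point above also holds: a unanimity-veto rule (`votes == n_agents`) would under-take mildly good actions, which is the "Unilateralist's Blessing" side of the same regression effect.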
I think you should read Bostrom's actual paper for why this is a more compelling argument specifically when dealing with large risks. And it is worth noting that the reference class isn't "blog posts criticizing institutions" - which I'm in favor of - it's "blog posts attacking the credibility of the only institution that can feasibly respond to an incipient epidemic just as the epidemic is taking off and the public is unsure what to do about it."
I would support a policy where, if an LW post starts to go viral, then original authors or mods are encouraged to add disclaimers to the top of posts that they wouldn't otherwise need to add when writing for the LW audience. As SSC sometimes does.
I would not support a policy where LW authors always preemptively write for a general audience.
Here we face the tragedy of "reference class tennis". When you don't know how much to trust your own reasoning vs. someone else's, you might hope to defer to the historical record for some suitable reference class of analogous disputes. But if you and your interlocutor disagree on which reference class is appropriate, then you just have the same kind of problem again.
I really don't think this is a reference class tennis problem, given that I'm criticizing a specific post for specific reasons, not making an argument that we should judge this on the basis of a specific reference class.
And given that, I'm still seeing amazingly little engagement of the object level question of whether the criticisms I noted are valid.
If and to the degree and in the circumstances and ways that the CDC is trustworthy, I desire to believe that the CDC is trustworthy.
If and to the degree and in the circumstances and ways that the CDC is untrustworthy, I desire to believe that the CDC is untrustworthy.
Let me not become attached to beliefs I may not want.
If you tell me that my statement that someone else is lying to us about important factual information that we need to get right in order to keep us and our friends and loved ones safe is true but harmful, and I need to delete my statement, because it is important that people believe the lying liars who are lying for our own good, and I should exercise prior restraint before I point out such things?
I too am surprised by this objection coming from David. But I also want to point out that it seems like it is mostly David's objection, and the vast majority here are supportive of the post.
It also seems like David thinks the post contains errors, and he says he would not have been anything like this vocal otherwise. Obviously we should work out quickly whether or not the post does contain errors, and correct any we find.
Hopefully this clarifies things for you a bit, but I am making essentially 3 claims. I'd be happy to know which of these you disagree with, if any.
First, to restate the idea of infohazards as it relates to the Litany of Tarski: this is a personal litany. It does not apply to making public statements, especially ones put in places where the people who will be negatively affected by them will likely see them. Otherwise, I might apply the litany to say "If I am going to unconditionally cooperate in this prisoner's dilemma, I desire that everyone knows I will unconditionally cooperate in this prisoner's dilemma." This is obviously wrong and dumb.
Second, the claims in the post don't have the simple relationship with trustworthiness that one might assume, and some of the claims are in fact misleading. These bear further discussion.
Most obviously, blaming the CDC for the FDA and HHS not allowing 3rd party detection kits is somewhere between false and misleading.
In some cases, it's only clear in retrospect that the CDC got this wrong. Perhaps you think they should do better, but that's different than saying they are untrustworthy, or not credible.
There is a difference between "these facts make the CDC look bad" and "the CDC is untrustworthy." As I said elsewhere in comments, a number of points here are in that category.
There are situations where the CDC did basically exactly the right thing, and the claim that they are untrustworthy is based on bad analysis. An example is discouraging use of face masks, which is exactly the correct recommendation given both the limited supply, and the fact that most people who are buying and hoarding them aren't going to use them correctly. They didn't even misrepresent the evidence - there really is evidence that community use of face masks doesn't help. And even if not, the fact that the CDC makes good public recommendations seems like a really bad reason to encourage people to distrust them.
Other places, they did the right thing, and are being blamed for the fact that things went wrong. For example, distributing testing kits quickly was really important, so they did. The fact that one of the chemicals supplied was no good, and that this was detected before any kits were used, seems like a great reason to think the CDC is doing a good job, both in rushing, and in making sure nothing went wrong by catching their mistake before the kits started being used.
Third, given that the authors said they realized it might be bad, this should never have been posted without discussion with someone external. Instead, they went ahead and posted it without asking anyone. Lesswrong should have higher standards than this.
Third, given that the authors said they realized it might be bad, this should never have been posted without discussion with someone external.
Suppose I’m a Less Wrong member who sometimes makes posts. Suppose I have some thoughts on this whole virus thing and I want to write down those thoughts and post them on Less Wrong.
You’re suggesting that after I write down what I think, but before I publish the post, I should consult with “someone external”.
But with whom? Are you proposing some general guideline for how to determine when a post should go through such consultation, and how to determine with whom to consult, and how to consult with them? If so, please do detail this process. I, for one, haven’t the foggiest idea how I would, in the general case, discern when a to-be-published post of mine needs to be vetted by some external agent, and how to figure out who that should be, etc.
This whole business of having people vet our posts seems like it’s easy to propose in retrospect as a purported unsatisfied criterion of posting a given post, but not so easy to satisfy in prospect. Perhaps I’m misunderstanding you. In any case, I should like to read your thoughts on the aforesaid guidelines.
(By the way, what assurances of vetting would satisfy you? Suppose the OP had contained a note: “This post has been vetted by X.”. And suppose otherwise the post were unchanged. For what value(s) of X would you now have no quarrel with the post?)
I'm proposing that literally anyone in the EA biosecurity world would have been a good place to start. Almost any of them would either have a response, or have a much better idea of who to ask. Just as with "hey, I have an idea for how someone could misuse AI" - running the potentially dangerous idea by almost anyone in AI safety is enough for people to say either "I really wouldn't worry," or "maybe ask person Y," or "Holy shit, no."
As for what value of X, I'd be happy if basically anyone that had done work in biosecurity was asked. Anyone who signed up for / attended the Catalyst summit, for example. Or anyone who has posted about biosecurity on the EA forum. I know most of them, and on the whole I trust their judgement. Maybe I'm wrong, but in this case, I think most of their judgement would be to say either that it needs to be edited, or that it should probably be checked with someone at Open Phil or FHI before posting, since it's potentially a bad idea.
Most obviously, blaming the CDC for the FDA and HHS not allowing 3rd party detection kits is somewhere between false and misleading.
Please support this claim. It seems obvious that they shat the bed (don't know which agency, let god sort them out for now, history and FOIA requests will sort them out in the future). It seems obvious from reading the news that many many local and commercial labs would have been ready with capacity a lot sooner than they are if FDA/CDC/HHS conglomerate got out of the way sooner.
It's quite plausible that this is due to Trump pressure, history will sort this out, but my estimation of guilt will likely just move from "weasel" to "weak for not resisting", and the facts remain the same
The CDC is not the same as HHS or the FDA: they have different staff, are in different locations, and have different goals (42 USC 6a versus 42 USC 43 and 21 USC).
Given that, I'm not sure why we should trust the CDC more or less because of the actions of the FDA. I'm not sure why this claim needs further support. Note that the CDC has no legal or other authority over what tests non-federal government laboratories can perform. They do have biosafety oversight over certain types of labs, but that's mostly irrelevant to allowing them to do tests, and there is no claim that the CDC banned research. And if we are asking the question that this post purports to answer - should we trust the CDC - it makes quite a difference whether the decision being discussed was something they had control over.
... many local and commercial labs would have been ready with capacity a lot sooner than they are if FDA/CDC/HHS conglomerate got out of the way sooner.
If you want to know whether the "FDA/CDC/HHS conglomerate" should be blamed, I'd ask whether you think they are all the same thing, or whether this question is incoherent. As noted above, they aren't the same, so I claim the question is mostly incoherent. You might suggest that they are all part of the same government, so they should be lumped together. I'd suggest that you could ask whether you should trust the "DR_Manhattan/Davidmanheim/Elizabeth/jimrandomh conglomerate" in our judgement about whether to differentiate between these agencies. Clearly, of course, our judgement differs, but we're all part of the same web site, so maybe we can all be lumped together. If that doesn't make sense, good.
All the data I've seen indicates it was a poor interplay between the FDA and HHS that caused the CDC to be the only source of tests (because the FDA and HHS were the ones with the legal power to do so, and it's recorded that they used it). It's included on this list because it interacted with decisions the CDC *did* make. I don't think it's misleading, because we noted which agency did what, and have since edited the section header to make it clear even to skimmers.
It interacted with them, but it's not clear to me that it interacted in a way that's relevant to the credibility of the CDC.
The examples are "a list of actions from the CDC that we believe are misleading or otherwise indicative of an underlying problem", but this isn't an action from the CDC and it doesn't obviously indicate a problem at the CDC.
This is a very serious concern that we discussed before publishing - especially the parts about masks and potential racial differences. Ultimately we made some accommodations but decided that publishing was the best thing, for the following reason:
The usefulness of trust in the CDC is not independent of the quality of the job the CDC is doing. There is a level of mishandling bad enough that excess trust in the CDC would cost people's lives. I don't know if we're at that level- I sure hope we're not, both selfishly and altruistically- but it is really important to know when we are. And if we shut down information sharing on the assumption that trust in the CDC is good, we rob ourselves of the ability to identify that. Blind (performance of) trust also precludes the possibility that the CDC could be induced into a better response.
I'm curious what information would make you change your mind about trust in the CDC being net positive, and how that information would be accessible.
Compelling evidence that we were wrong on any individual assertion would of course change my mind on sharing that particular assertion. Examples:
An N-week follow up showing that recovered individuals were not shedding virus and/or that close contacts weren't getting infected. (I've gone back and forth on N here. I think six is the minimum and the longer the better).
Evidence that the CDC's webpage guidelines were just for show and we were performing South-Korea-like drive-by screenings (although, uh, that would bring up different concerns).
Properly controlled studies of attempts to get people to use masks showing that it led to a higher transmission rate.
And evidence that I was wrong on enough assertions would change my mind on the thesis, so I would of course withdraw it.
As to what would change my mind even if I still thought the post was true... If I found it was driving people to listen to worse sources, I would at least regret the order in which we'd published. However I don't know how I could know which source was worst without an open sharing of the problems with all of them.
I go back and forth on whether simply sufficiently bad consequences would be enough to change my mind. I'm attracted to the consequentialist framework that says they should be. But in a world where posts like this are discouraged, how can I know what the consequences really are? Maybe people are net-benefiting from their trust in the CDC because it leads them to do things like vaccinate and wash their hands- but how could I trust the numbers saying that? How could I know vaccination and hand washing were even good, if it was possible to suppress evidence that they weren't?
An option that I think should be on the table (at least to consider) is "the post is accessible to LessWrongers, but requires a log-in, so it can't go viral among people who have a lot less context".
This requires a feature we don't currently have, but I think we'll want sooner or later for political stuff, and is not that hard to build.
Right now I think this post is basically purely beneficial (I expect the people reading it to think critically about it and to have access to good information), but if I found the post had gone viral I'd become much more uncertain. (This is not to say I'd think it was harmful; I'd just have much wider error bars.)
The level of handwringing about this post seems completely out of proportion when there are many thousands of people coming up with all sorts of COVID-related conspiracy theories on facebook and twitter. If it went viral my guess is that it would actually increase trust in the CDC by giving people a more realistic grounding for their vague suspicions.
We do, and that's the point. It's not "hey, we're not as bad as them so don't complain to us!". It's that there is already a lot of distrust out there, and giving people something to latch onto with "see, I knew the CDC wasn't being honest with me!" can keep them from spiraling out of control with their distrust, since at least they know where it ends.
Mild well sourced criticism is way more encouraging of trust than no criticism under obvious threat of censorship because the alternative isn't "they must be perfect" it's "if they have to hide it, the problems are probably worse than 'mild'".
I responded to this on a different thread, but aside from the factual issues, this isn't "mild well sourced criticism." The post says the CDC is so untrustworthy that we can't point uninformed people to it as a valid place to learn things, and there is literally no decent source for what people should do. That's way beyond what anyone else credible was saying.
I think that requiring a login would reduce my concern about this post by 95%. But given that it doesn't require one, you can't wait for a post to go viral before deciding it was bad; you need to decide not to post, or to remove the post, beforehand.
I go back and forth on whether simply sufficiently bad consequences would be enough to change my mind.
This makes me far more convinced that we need to address the infohazard concerns, which I tried to raise, rather than debate consequences directly - which everyone seems to agree are plausibly very bad, likely just fine, and somewhat unclear. There is a process issue that I see here - as far as I've read, you as an author decided that there were significant potential concerns, decided that they might be minimal enough to be fine, and then - without discussing the issue - unilaterally chose to post anyways.
This seems like the very definition of Unilateralist's curse, and if we can't get this right here on lesswrong, I'm terrified of how we'll do with AI risk.
Secondarily, for " Compelling evidence that we were wrong on any individual assertion would of course change my mind on sharing that particular assertion," I'll point to the bizarre blaming of the CDC for HHS and FDA's failure to allow independent testing.
And for the final point, about masks: there is no compelling reason to say the CDC should be encouraging their use, given that the vast majority of people don't know how to use them and, from what I have seen and heard from people in biosecurity in the US, are almost all misusing them, so the possible benefit is minimal at best. But even if masks are on net effective, the CDC's discouragement would be due to a reasonable disagreement about social priorities during a potential pandemic.
However, I think that you should be more charitable than even that in your post. If there is compelling reason to think that the decisions made were eminently reasonable given the information CDC had at the time, blaming them for not knowing what you know now, with far more information, seems like a poor reason to say we should not trust them. And other than their general hesitation to be alarmist, which is a real failing but one that is a good decision for institutional reasons, "I can see this was dumb in hindsight" seems to cover most of the remaining points you made.
1) I heard that you actually didn't ignore the unilateralist's curse when preparing this, and got outside feedback.
2) The claims were both correct and relevant to the CDC, (see my response to jimrandomh)
I'd change my mind about CDC if I were convinced that these (or similar criticisms) were correct, as above, and were fair criticisms given the fact that you're speaking post-hoc from an epistemically superior vantage point of having more information than they did when they made their decisions. And remember that CDC is an organization with legal constraints that make them unable to do some of the things you think are good ideas, and that they have been operating under a huge staff shortage due to years of a hiring freeze and budget cuts.
And remember that CDC is an organization with legal constraints that make them unable to do some of the things you think are good ideas, and that they have been operating under a huge staff shortage due to years of a hiring freeze and budget cuts.
These sound like reasons to trust the CDC even less, is that what you meant?
For me it is, indeed, a reason to put less weight on their analysis and to expect less useful work/analysis to be done by them in the short to medium term.
But I think this consideration also weakens certain types of arguments about the CDC's lack of judgment/untrustworthiness. For example, an argument like "they did this, but should have done better" loses part of its Bayesian weight if the organization likely made a lot of decisions under time pressure and other constraints. And things are more likely to go wrong if you're under-staffed and hence prioritize more aggressively.
I don't expect to have good judgment here, but it seems to me that "testing kits the CDC sent to local labs were unreliable" might fall into this category. It might have been the right call for them to distribute tests quickly and ~skip ensuring that the tests didn't have a false positive problem.
A better example: one might criticize the CDC for lack of advice aimed at the vulnerable demographics. But the absence might result not from lack of judgment but from political constraints. E.g. jimrandomh writes:
Addendum: A whistleblower claims that CDC wanted to advise elderly and fragile people to not fly on commercial airlines, but removed this advice at the White House's direction.
Upd: this might be indicative of other negative characteristics of the CDC (which might contribute to unreliability), but I don't know enough about the US government to assess it.
I want to apologize, and make sure there is a clear record of what I think both on the object level, and about my comment, in retrospect. (For other mistakes I made, not related to this comment, see here.)
I handled this very poorly, and wasted a significant amount of people's time. I still think that the claims in the post were materially misleading (and think some of the claims still are, after edits). The authors replaced the section saying not to listen to the CDC with a very different disclaimer, which now says: "Notably we’re not saying any of the things they do recommend are bad." I think we should have a clear norm that potentially harmful things need a much greater degree of caution than this post displayed. But calling for it to be removed was stupid.
Above and beyond my initial comment, critically, I screwed up by being pissed off and responding angrily below about what I saw as an uninformed and misleading post, and continued to reply to comments without due consideration of the people involved in both the original post and the comments. This was in part due to personal biases, and in part due to personal stress, which is not an excuse. This led to what can generously be described as a waste of valuable people's time, at a particularly bad time. I have apologized to some of those involved already, but wanted to do so publicly here as well.
Reviewing the arguments
I initially said the post should have been removed. I also used the term "infohazard" in a way that was alarmist - my central claim was that it was damaging and misleading, not that it was an infohazard in the global catastrophic risk sense that people assumed.
Several counterarguments and responses to my claim that the post should be taken down were advanced. I originally responded poorly, so I want to review them here, along with my view on the strength of each.
1) I should not have been a jerk.
I was dismissive and annoyed about what seemed to me to be many obvious factual errors. My attitude was a mistake. It was also stupid for a number of reasons, and at the very least I should have contacted the authors directly and privately, and been less confrontational. Again, I apologize.
2) Telling people to check with others before posting, and threatening to remove posts which were not so checked, is censorship, which is harmful in other ways.
As I mentioned above, saying the post should be removed was stupid, but I maintain, as I did then, that when a person is unsure about whether saying something is a good idea, and it is consequential enough to matter, they should ask for some outside advice. I think this should be a basic norm, one that lesswrong and the rationality community should not just recommend but, where feasible, try to enforce. I do think that there was a reasonable sense of urgency in getting the message out in this case, and that excuses some level of failure to vet the information carefully.
3) We should encourage people to say true things even when harmful, or as one person said "I'd want people to err heavily on the side of sharing information even if it might be dangerous."
This stops short of Nietzschean honesty, but I still don't think this holds up well. First, as I said, I think the post was misleading, so this simply does not apply. But the discussion in the comments and privately pushed on this more, and I think it's useful to clarify what I claimed. I agree that we should not withhold information which could be important because of a vague concern, and if this post were correct, it would fall under that umbrella. However, what the post seemed to me to be trying to do was collect misleading statements to make it clearer that a bad organization is, in fact, bad - playing level 2 regardless of truth. That seems obviously unacceptable. I do not think lying is acceptable to pursue level 2 goals in Zvi's explanation of Simulacra [LW · GW], except in dire circumstances.
But the principle advocated here says to default to level 1 brutal / damaging honesty far more often than I think is advisable, not to lie. My initial impression was that the CDC was doing far better than it in fact was, and that the negative impacts of the post were greatly under-appreciated.
I can understand why the balance of how much truth to say when the effect is damaging is critical, and think that Lesswrong's norms are far better than those elsewhere. I agree that the bare minimum of not actively lying is insufficient, but as I said above, I disagree with others about how far to go in saying things that might be harmful because they are true.
4) We should not attempt to play political games by shielding bad organizations and ignoring or obscuring the truth in order to build trust incorrectly.
I think this is a claim that people should never play level 3 [LW · GW]. I endorse this. I agree that I was attempting to defend an institution that was doing poorly from claims that it was doing poorly, on the basis that a significant fraction of those claims were unfair. As I said above, this would have been a legitimate defense if those claims were in fact unfair. In retrospect, the organization was far worse than I thought at the time, as I realized far too late, and discussed more here [LW · GW]. On the other hand, many of the claims were in fact misleading, and I don't think that false attacks on bad things are OK either.
Note to downvoters: While I disagree with this comment, it expresses a real concern and opens a conversation that does very much need to happen. So I've upvoted it back out of the negatives, and think it should probably stay positive.
This is a harmful post, and should be deleted or removed.
This comment was outside of LW norms. It came off as a blunt attempt to shut down discussion, with very little in the way of justification for doing so. This is in no way a clear-cut infohazard, and even if it were, I'm not convinced that shutting down discussion of things that might be infohazards is a good policy, especially on a relatively obscure site centered around truth seeking. Statements this confident about issues this complicated should only be made after some extensive analysis and discussion of the situation. jtm's presentation of the issue struck me as far more tempered and far less adversarial. I'd encourage Davidmanheim to supplant his comment with a more fleshed-out version of his position.
I am shocked to hear that people need proof something is an infohazard before deciding that the issue needs to be discussed BEFORE posts like this go live. I see no evidence that any such discussion occurred, and in fact the responses above seem to indicate that they didn't.
But I did change the phrasing, so as not to claim I was trying to shut down discussion. The point I was making, however, remains.
I am shocked to hear that people need proof something is an infohazard before deciding that the issue needs to be discussed BEFORE posts like this go live.
I think there's a few issues here:
1. When deciding to take down a post due to infohazard concerns, what should that discussion look like?
2. How thorough should the vetting process for a post be before it gets posted, especially given infohazard and unilateralist's curse considerations?
3. Is this post an infohazard, and if so, how dangerous is it?
My previous comment was with regards to 1.
With regards to 2, it's a matter of thresholds. Especially on this forum, I'd want people to err heavily on the side of sharing information even if it might be dangerous. I wouldn't want people to stop themselves from sharing information unless the potential infohazard was extremely dangerous or a threat to the continued existence of this community. This is mainly due to the issues that crop up once you start blinding yourself. As I understand it, 2 people discussed this issue before posting, and deemed it worthwhile to post anyway. To me, that seems like more than enough caution for the level of risk that I've seen associated with this post. Granted, I don't think the authors took the unilateralist's curse into account, and that's something that everyone should keep in mind when thinking about posting potential infohazards. It would also be nice to have some sort of "expanding circles of vetting," where a potentially spicy post can start off only being seen by select individuals, then people above a certain karma threshold, then behind a login by the rest of the LW community, and then as a post viewable and linkable by the general public.
The cat left the bag a month ago, and in a significantly less levelheaded and more memetically virulent form. This post won't change that, and is at worst a concise summary of what has already been said.
1) Yes, this discussion is important, but it should have taken place before a post like this was posted.
2) The standard for something that is admittedly an infohazard can't be "the authors themselves think the thing they are doing is a good idea." And knowing most of the people in this space, I strongly suspect that if anyone in biosecurity in EA had read this, they would have said that it needs at least further consideration and a rewrite. Perhaps I am wrong, but I think it is reasonable to ask someone in the relevant area, rather than simply having the authors discuss it among themselves.
People have many avenues of vetting things before making a public post - it's not like the set of people in EA who work on biosecurity is a secret, and they have publicly offered to vet potential infohazards in other places.
3) I disagree with you that most of the criticism in other places was of the form exhibited here. Claiming we shouldn't trust the CDC seems dangerous to me. And this post isn't simply repeating what others have said. Liberal news sources often note that the Trump administration isn't trustworthy, or say that the CDC has screwed up, but as far as I have seen, they *don't* claim that the organization is fundamentally untrustworthy, as this post does. In my view, the central framing changes the tone of the rest of the points from "the CDC doesn't get everything right, and we should be cautious about blindly accepting their claims" to "the CDC is fundamentally so broken that it should be actively ignored." While I'm sure that some lesswrong readers are going to marginally and responsibly update their beliefs in light of the new information presented here based on their well-reasoned understanding of the US government and its limitations, many readers will not.
Would your opinion change significantly if we changed the wording to highlight that this is an opinion on the trustworthiness of the CDC in this moment, with these constraints, rather than a fundamental property of the CDC?
I read that exactly the opposite way - it says they discussed it, not that they consulted anyone external, much less that they checked with people and were told that there was a consensus that this would be fine. Unilateralist curse isn't solved by saying "I thought about it, and talked with my co-author, and this seems OK."