Posts

New SARS-CoV-2 variant 2020-12-20T21:22:54.711Z
My prediction for Covid-19 2020-05-31T23:25:17.814Z

Comments

Comment by TheMajor on Covid 7/22: Error Correction · 2021-07-23T12:14:57.677Z · LW · GW

The Dutch festival actually was a two-day event with a capacity of 10,000 people per day. But it is reasonable to assume that some people attended both days, so the total number of distinct participants is lower than 20,000, and the infection rate is correspondingly unknown but somewhere between 5% and 10%.
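A quick sanity check on those bounds (assuming roughly 1,000 reported infections, which is the figure implied by the 5-10% range; treat that number as a placeholder):

```python
def attack_rate_bounds(cases, daily_capacity, days):
    """Bound the attack rate when attendees may overlap across days.

    Distinct attendees range from daily_capacity (everyone attended
    every day) down to... well, up to daily_capacity * days (no overlap).
    """
    max_attendees = daily_capacity * days   # nobody attended twice
    min_attendees = daily_capacity          # everyone attended both days
    return cases / max_attendees, cases / min_attendees

# Hypothetical ~1,000 infections at a 2-day, 10,000-per-day event:
low, high = attack_rate_bounds(1000, 10_000, 2)
print(low, high)  # 0.05 and 0.1, i.e. between 5% and 10%
```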

Comment by TheMajor on One Study, Many Results (Matt Clancy) · 2021-07-22T08:58:22.772Z · LW · GW

Just wanted to confirm you have accurately described my thoughts, and I feel I have a better understanding of your position as well now.

Comment by TheMajor on One Study, Many Results (Matt Clancy) · 2021-07-21T09:29:41.669Z · LW · GW

I agree with your reading of my points 1, 2, 4, and 5, but think we are not seeing eye to eye on points 3 and 6. It also saddens me that you condensed the paragraph on how I would like to view the how-much-should-we-trust-science landscape down to its least important sentence (point 4), at least from my point of view.

As for point 3, I do not want to make a general point about the reliability of science at all. I want to discuss what tools we have to evaluate the accuracy of any particular paper or claim, so that we can have more appropriate confidence across the board. I think this is the most important discussion regardless of whether it increases or decreases general confidence. In my opinion, attempting to give a 0th-order summary by discussing the average change in confidence from this approach is doing more harm than good. The sentence "You just want to make the general point that you can't trust everything you read, with the background understanding that sometimes this is more important, and sometimes less." is exactly backwards from what I am trying to say.

For point 6, I think it might be very relevant to point out that I'm European, and the anti-vax and global warming denialism really is not that popular around where I live. They are more considered stereotypes of being untrustworthy than properly held beliefs, thankfully. But ignoring that, I think that most of the people influencing social policy and making important decisions are leaning heavily on science, and unfortunately particularly on the types of science I have the lowest confidence in. I was hoping to avoid going into great detail on this, but as short summary I think it is reasonable to be less concerned with the accuracy of papers that have low (societal) impact and more concerned with papers that have high impact. If you randomly sample a published paper on Google Scholar or whatever I'll happily agree that you are likely to find an accurate piece of research. But this is not an accurate representation of how people encounter scientific studies in reality. I see people break the fourth virtue all the way from coffeehouse discussions to national policy debates, which is so effective precisely because the link between data and conclusion is murky. So a lot of policy proposals can be backed by some amount of references. Over the past few years my attempts to be more even have led me to strongly decrease my confidence in a large number of scientific studies, if only to account for the selection effect that these, and not others, were brought to my attention.

Also I think psychology and nutrition are doing a lot better than they were a decade or two ago, which I consider a great sign. But that's more of an aside than a real point.

Comment by TheMajor on One Study, Many Results (Matt Clancy) · 2021-07-20T18:52:33.850Z · LW · GW

I've upvoted you for the clear presentation. Most of the points you state are beliefs I held several years ago, and sounded perfectly reasonable to me. However, over time the track record of this view worsened and worsened, to the point where I now disagree not so much on the object level as with the assumption that this view is valuable to have. I hope you'll bear with me as I try to give explaining this a shot.

I think the first, major point of disagreement is whether the target audience of a paper like this is the "level 1" readers. To me it seems like the target audience consists of scientists and science fans, most of whom already have a lot of faith in the accuracy of the scientific process. It is completely true that showing this piece to someone who has managed to work their way into an unreasonable belief can make it harder for them to escape that particular trap, but unfortunately that doesn't make it wrong. That's the valley of bad rationality and all that. In fact, I think that strongly supports my main original claim - there are so many ways of using sophisticated arguments to get to a wrong conclusion, and only one way to accurately tally up the evidence, that it takes skill and dedication to get to the right answer consistently.

I'm sorry to hear about your friend, and by all means try to keep them away from posts like this. If I understand correctly, you are roughly saying "Science is difficult and not always accurate, but posts like this overshoot on the skepticism. There is some value in trusting published peer-reviewed science over the alternatives, and this view is heavily underrepresented in this community. We need to acknowledge this to dodge the most critical of errors, and only then look for more nuanced views on when to place exactly how much faith in the statements researchers make." I hope I'm not misrepresenting your view here, this is a statement I used to believe sincerely. And I still think that science has great value, and published research is the most accurate source of information out there. But I no longer believe that this "level 2 view", extrapolating (always dangerous :P) from your naming scheme, is a productive viewpoint. I think the nuance that I would like to introduce is absolutely essential, and that conflating different fields of research or even research questions within a field under this umbrella does more harm than good. In other words, I would like to discuss the accuracy of modern science with the understanding that this may apply to smaller or larger degree to any particular paper, exactly proportional to the hypothetical universe-separating ability of the data I introduced earlier. I'm not sure if I should spell that out in great detail every couple of sentences to communicate that I am not blanket arguing against science, but rather comparing science-as-practiced with truthfinding-in-theory and looking for similarities and differences on a paper-by-paper basis.

Most critically, I think the image of 'overshooting' or 'undershooting' trust in papers in particular or science in general is damaging to the discussion. Evaluating the accuracy of inferences is a multi-faceted problem. In some sense, I feel like you are pointing out that if we are walking in a how-much-should-I-trust-science landscape, to a lot of people the message "it's really not all it's cracked up to be" would be moving further away from the ideal point. And I agree. But simultaneously, I do not know of a way to get close (not "help the average person get a bit closer", but get really close) to the ideal point without diving into this nuance. I would really like to discuss in detail what methods we have for evaluating the hard work of scientists to the best of our ability. And if some of that, taken out of context, forms an argument in the arsenal of people determined to metaphorically shoot their own foot off that is a tragedy but I would still like to have the discussion.

As an example, in your quote block I love the first paragraph but think the other 4 are somewhere between irrelevant and misleading. Yes, this discussion will not be a panacea to the replication crisis, and yes, without prior experience comparing crackpots to good sources you may well go astray on many issues. Despite all that, I would still really like to discuss how to evaluate modern science. And personally I believe that we are collectively giving it more credit than it deserves, which is spread in complicated ways between individual claims, research topics and entire fields of science.

Comment by TheMajor on One Study, Many Results (Matt Clancy) · 2021-07-20T10:01:47.465Z · LW · GW

That is very interesting, mostly because I do exactly think that people are putting too much faith in textbook science. I'm also a little bit uncomfortable with the suggested classification.

I have high confidence in claims that I think are at low risk of being falsified soon, not because it is settled science but because this sentence is a tautology. The causality runs the other way: if our confidence in the claim is high, we provisionally accept it as knowledge.

By contrast, I am worried about the social process of claims moving from unsettled to settled science. In my personal opinion there is an abundance of overconfidence in what we would call "settled science". The majority of the claims therein are likely to be correct and hold up under scrutiny, but the bar is still lower than I would prefer.

But maybe I'm way off the mark here, or maybe we are splitting hairs and describing the same situation from a different angle. There is lots of good science out there, and you need overwhelming evidence to justify questioning a standard textbook. But there is also plenty of junk that makes it all the way into lecture halls, never mind all the previous hoops it had to pass through to get there. I am very worried about the statistical power of our scientific institutes in separating truth from fiction, and I don't think the settled/unsettled distinction helps address this.

Comment by TheMajor on One Study, Many Results (Matt Clancy) · 2021-07-19T23:43:10.242Z · LW · GW

It seems to me that we should be really careful before extrapolating from the specific datasets, methods, and subfields these researchers are investigating into others. In particular, I'd like to see some care put into forecasting and selecting research topics that are likely or unlikely to stand up to a multiteam analysis.

I think this is good advice, but only when taken literally. In my opinion there is more than sufficient evidence to suggest that the choices made by researchers (pick any of the descriptions you cited) have a significant impact on the conclusions of papers across a wide variety of fields. Indeed, I think this should be the default assumption until proven otherwise. I'd motivate this primarily by the argument that there are many different ways to draw a wrong conclusion (especially under uncertainty), but only one right way to weigh up all the evidence. Put differently, I think undue influence of arbitrary decisions is the default, and it is only through hard work and collective scientific standards that we stand a chance of avoiding this.

Comment by TheMajor on One Study, Many Results (Matt Clancy) · 2021-07-19T23:37:08.663Z · LW · GW

I've seen calls to improve all the things that are broken right now: <list>

I think this is a flaw in and of itself. There are many, many ways to go wrong, and the entire standard list (p-hacking, selective reporting, multiple stopping criteria, you name it) should be interpreted more as symptoms than as causes of a scientific crisis.

The crux of the whole scientific approach is that you empirically separate hypothetical universes. You do this by making your universe-hypotheses spit out predictions, and then verify them. It seems to me that by and large this process is ignored or even completely absent when we start asking difficult soft science questions. And to clarify: I don't particularly blame any researcher, or institute, or publishing agency or peer doing some reviewing. I think that the task at hand is so inhumanly difficult that collectively we are not up to it, and instead we create some semblance of science and call it a day.

From a distanced perspective, I would like my entire scientific process to look like reverse-engineering a big black box labeled 'universe'. It has input buttons and output channels. Our paradigms postulate correlations between input settings and outputs, and then an individual hypothesis makes a claim about the input settings. We track forward what outputs would be caused by each possible input setting, observe reality, and update with Bayesian odds ratios.
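That odds-ratio update can be made concrete with a toy sketch (all numbers here are made up purely for illustration):

```python
def update_odds(prior_odds, likelihood_ratio):
    """Bayesian update in odds form: posterior odds = prior odds * LR."""
    return prior_odds * likelihood_ratio

def odds_to_prob(odds):
    """Convert odds (in favor) to a probability."""
    return odds / (1 + odds)

# Two hypothetical universes, equally likely a priori (odds 1:1).
# We observe an output that is 4x as likely under universe A:
posterior_odds = update_odds(1.0, 4.0)
print(odds_to_prob(posterior_odds))  # 0.8
```

The hard part, as the surrounding discussion argues, is not this arithmetic but knowing the likelihood ratio: how much more probable the observed data is under one hypothesis than the other.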

The problem is frequently that the data we are relying on is influenced by an absolutely gargantuan number of factors - as an example in the OP, the teenage pregnancy rate. I have no trouble believing that statewide schooling laws have some impact on this, but possibly so do for example above-average summer weather, people's religious background, the ratio of boys to girls in a community, economic (in)stability, recent natural disasters and many more factors. So having observed the teenage pregnancy rates, inferring the impact of the statewide schooling laws is a nigh impossible task. Even just trying to put this into words my mind immediately translated this to "what fraction of the state-by-state variance in teenage pregnancy rates can be attributed to this factor, and what fraction to other factors" but even this is already an oversimplification - why are we comparing states at a fixed time, instead of tracking states over time, or even taking each state-time snapshot as an individual dataset? And why is a linear correlation model accurate, who says we can split the multi-factor model into additive components (implied by the fractions)?

The point I am failing to make is that in this case it is not at all clear what difference in the pregnancy rates we would observe if the statewide schooling laws had a decidedly negative, small negative, small positive or decidedly positive impact, as opposed to one or several of the other factors dominating the observed effects. And without that causal connection we can never infer the impact of these laws from the observed data. This is not a matter of p-hacking or biased science or anything of the sort - the approach doesn't have the (information theoretic) power to discern the answer we are looking for in the first place, i.e. to single out the true hypothesis from between the false ones.


As for your pragmatic question of how we can tell whether a study is to be trusted: I'd recommend asking experts in your field first, and only listening to cynics second. If you insist on asking me, my method is to evaluate whether or not it seems plausible to me that, assuming the conclusion of the paper holds, this would show up as the effect announced in the paper. Simultaneously I try to think of several other explanations for the same data. If either of these attempts gives some resounding result I tend to chuck the study in the bin. This approach is fraught with confirmation bias ("it seems implausible to me because my view of the world suggests you shouldn't be able to measure an effect like this"), but I don't have a better model of the world to consult than my model of the world.

Comment by TheMajor on One Study, Many Results (Matt Clancy) · 2021-07-19T18:52:30.549Z · LW · GW

Thank you for the wonderful links, I had no idea that (meta)research like this was being conducted. Of course it doesn't do to draw conclusions from just one or two papers like that; we would need a bunch more to be sure that we really need a bunch more before we can accept the conclusion.

Jokes aside, I think there is a big unwarranted leap in the final part of your post. You correctly state that just because the outcome of research seems to not replicate we should not assume evil intent (subconscious or no) on the part of the authors. I agree, but also frankly I don't care. The version of Science Nihilism you present almost seems like strawman Nihilism: "Science does not replicate therefore everything a Scientist says is just their own bias". I think a far more interesting statement would be "The fact that multiple well-meaning scientists get diametrically opposed results using the same data and techniques, which are well-accepted in the field, shows that the current standards in Science are insufficient to draw the type of conclusions we want."

Or, from a more information-theoretic point of view, our process of honest effort by scientists followed by peer review and publication is not a sufficiently sharp tool to assign numbers to the questions we're asking, and a large part of the variance in the published results is indicative of numerous small choices by the researchers instead of indicative of patterns in the data. Whether or not scientists are evil shills with social agendas (hint: they're mostly not) is somewhat irrelevant if the methods used won't separate truth from fiction. To me that's proper Science Nihilism, none of this 'intent' or 'bias' stuff.

In a similar vein I wonder if the page count of the robustness check is really an indication of a solution to this problem. The alternative seems bleak (well, you did call it Nihilism), but maybe we should allow for the possibility that the entire scientific process as commonly practiced is insufficiently powerful to answer these research questions (for example, maybe the questions are ill-posed). To put it differently, to answer a research question we need to separate the hypothetical universes where it has one answer from the hypothetical universes where it has a different answer, and then observe data to decide which universe we happen to be in. In many papers this link between the separation and the data to be observed is so tenuous that I would be surprised if the outcome was determined by anything but the arbitrary choices of the researchers.

Comment by TheMajor on How my school gamed the stats · 2021-02-21T11:31:49.701Z · LW · GW

What do you mean 'problem'? Everybody involved wants the inspection to go well, the correlation between the outcome of the inspection and the quality of the school/firm's books is incidental at best.

Comment by TheMajor on Covid 2/18: Vaccines Still Work · 2021-02-19T16:44:44.048Z · LW · GW

This is a very good point, and in my eyes explains the observations pretty much completely. Thanks!

Comment by TheMajor on Covid 2/18: Vaccines Still Work · 2021-02-19T14:13:56.590Z · LW · GW

(yet it was contained in the UK, which is great and suggests I'm talking BS)

I continue to be extremely surprised by the UK decline in numbers. The Netherlands is reporting a current estimated R of 1.1-1.2 for the English strain and 0.8-0.9 for the wild types. They furthermore estimate that just over half of all newly reported cases are English strain by now. But the UK daily cases have dropped by 80% in 40 days, which at a reproduction time of 6 days would mean R = 0.79 throughout.
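The arithmetic behind that implied R, for anyone who wants to vary the assumptions (the 6-day reproduction time is the figure used in this comment, not a settled value):

```python
def implied_r(case_ratio, days, generation_time=6):
    """Infer the reproduction number R from a change in daily cases.

    If cases multiply by R every generation_time days, then
    case_ratio = R ** (days / generation_time), so we invert that.
    """
    return case_ratio ** (generation_time / days)

# An 80% drop in 40 days means cases are at 20% of the starting level:
print(round(implied_r(0.2, 40), 2))  # ~0.79
```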

In the past I suggested a few potential, not mutually exclusive, explanations:

  1. The UK has implemented significantly more effective measures, and if we just copy them we can totally beat the English strain.
  2. The height of the UK peak in the second week of January was caused by the Christmas and New Year's holiday craze, which caused significant delayed reporting ('better take that test after I visit all my friends and family, otherwise I won't be allowed to join them') and massively inflates the peak, and therefore also the apparent decay.
  3. The Dutch models are crap.
  4. The UK numbers are crap.
  5. The English strain has spread throughout the London area so rapidly that it hit local group immunity, and the plummet afterwards is caused by a lack of geographical spread. Once this picks up again the UK will see a stark rise in cases.

I previously put my money on hypothesis number 5, but as time goes on it steadily loses credibility. If anybody has a suggestion for what's going on in the UK right now I'm all ears, I am currently not taking their drop in cases at face value.

Comment by TheMajor on Covid 2/11: As Expected · 2021-02-12T11:32:01.127Z · LW · GW

The loss of life and health of innocent people who got suckered into a political issue without considering the ramifications?

I mean, the group of people who hold out on getting a vaccine as long as possible will definitely be harder to convince than the average citizen. But with the numbers we have (death rate, long-term health conditions, effectiveness of vaccines), are you seriously suggesting trying to help them is not cost-effective? From the post I gather you're talking about tens of millions of people in the USA alone, if not 100M+.

Comment by TheMajor on Covid 2/11: As Expected · 2021-02-12T11:30:54.877Z · LW · GW

I personally have a very tough time fitting your interpretation into my model of the world. To me the popularity and actions of Facebook et al. are mostly disconnected from our ability to communicate with family and close friends.

In my opinion the timeline seems to be a little more as follows:

  • People are on Facebook and Twitter and other social media platforms both to stay in touch with friends and to complain about the outgroup.
  • COVID-19 hit, significantly reducing quality of life everywhere. People realign their political discussions and notions of outgroup along COVID-lines - are you a believer in lockdowns and masks and science or the opposite? This temporarily supersedes other political discussions, not because people have wonderfully unique and insightful opinions on COVID countermeasures but because this is the biggest event happening and as such is necessarily political.
  • After approximately one year of lockdowns and countermeasures people have sunk significant parts of their public profile into their thoughts regarding COVID. A large portion of the public, as well as officials, will support silencing opposition if only to retain a coherent public image (after all, if communication on COVID is not more important than free speech, what have you been doing all these months?).
  • Facebook rises to the occasion and offers to selflessly censor people according to criteria set by the WHO.

I'd like to couple this with a prediction that Facebook will not start censoring older messages by the WHO and other Respected Officials. I see Facebook's cooperation more as a power grab with plausible deniability than as a preference for certain messages (officially endorsed) over others (crackpot/other). It only exists through the support of the very serious people, so it would be counterproductive to start challenging them on their own history.

Lastly I think that if you genuinely want to have a heart-to-heart with your friends and family it is silly to restrict yourself to communicating via Facebook. Call them, start a blog, meet somewhere outside for a walk if you want. This has the twin benefit of you not having to worry about issues being 'controversial' as defined by Facebook, and them not having to publicly change their thoughts over your message. Also it is much less embarrassing if it turns out you were unbelievably overconfident all along.

Comment by TheMajor on Quadratic, not logarithmic · 2021-02-08T08:51:07.874Z · LW · GW

You are correct, but the hope is that the probabilities involved stay low enough that a linear approximation is reasonable. Using for example https://www.microcovid.org/, typical events like a shopping trip carry infection risks well below 1% (depending on location, duration of activity, precautions, etc.).
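The linear approximation in question is the first-order expansion of the compounding formula; a sketch with made-up numbers:

```python
def exact_risk(p, n):
    """Probability of at least one infection over n independent events,
    each with per-event probability p."""
    return 1 - (1 - p) ** n

p, n = 0.005, 10  # 0.5% per event, ten events (illustrative numbers)
print(exact_risk(p, n))  # ~0.0489, close to the linear estimate n*p = 0.05
```

For per-event risks well below 1% the linear sum overestimates only slightly, which is why adding up microCOVIDs works; at 10%+ per event the approximation starts to break down.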

Comment by TheMajor on Vaccinated Socializing · 2021-02-02T17:02:48.495Z · LW · GW

I meant after the first shot, sorry for the confusion.

Comment by TheMajor on Vaccinated Socializing · 2021-02-02T09:43:47.108Z · LW · GW

I think ojno has a point. Furthermore, to the best of my knowledge the protection from the vaccines takes a bit of time (10 days? 14 days?) to kick in after the vaccination. Arguably "proceed with the same caution as before" is a better message than "go nuts, dance and hug and visit all your friends" in this period, and for simplicity's sake this has become the default message.

Who am I kidding, this is of course because we don't want vaccination to be unfair. If you get social benefits from being vaccinated (by not having to abide by some of the restrictions) then the prioritisation discussion would be even fiercer than it is now. Plus, the more Sacrifices to the Gods you publicly support (h/t Zvi) the more of a Serious Person you are, which the CDC tries very hard to be.

Comment by TheMajor on Lessons I've Learned from Self-Teaching · 2021-01-24T16:15:48.543Z · LW · GW

MathOverflow has a discussion on it. In short:

  • This area definition is equivalent to the standard definition, although this was (to me) not immediately obvious.
  • Some statements (linearity of integrals, for example) are obvious from the one definition, while others (the Monotone Convergence Theorem) are obvious from the other definition. Unfortunately, proving that the two definitions are equivalent is pretty much the proof for these statements (assuming the other definition).
  • The general approach of "given a claim, test it on indicator functions, then simple functions, then all integrable positive functions, then all integrable functions, then (if desired) integrable complex functions" is called the standard machine of measure theory, so there is educational benefit to seeing it.

Comment by TheMajor on Covid 1/21: Turning the Corner · 2021-01-22T10:23:06.839Z · LW · GW

It was pointed out to me that it is really not accurate to consider the UK daily COVID numbers as a single data-point. There could be any number of possible explanations for the decrease in the numbers. Some possible explanations include:

  1. The current lockdown and measures are sufficient to bring the English variant to R<1.
  2. The current measures bring the English variant to an R slightly above 1, and the wild variants to R well below 1, and because nationally the English variant is not dominant yet (even though it is in certain regions) this gives a national R<1.
  3. The English strain has spread so aggressively regionally that group immunity effects in the London area have significantly slowed the spread, while not spreading as quickly geographically.

Most notably, hypotheses 2 & 3 predict that the stagnation will soon reverse back into acceleration (with hypothesis 3 predicting a far higher rate than 2), as the English variant becomes more prevalent throughout the rest of the UK. Let's hope the answer is door number 1?

Comment by TheMajor on Covid: The Question of Immunity From Infection · 2021-01-21T10:53:00.798Z · LW · GW

To what extent does 'positive PCR test' equate to 'infectious'? Or is there some other good indicator? I know most health authorities say something like 'if you have been in contact with a person who tested positive, you have to be careful for X days from the point they are no longer symptomatic/from their first negative test', so I assumed the two are (somewhat) related.

Comment by TheMajor on Covid 1/14: To Launch a Thousand Shipments · 2021-01-14T20:39:55.988Z · LW · GW

To the best of my knowledge there are four inaccurate but not-completely-moronic reasons for sticking with a 2-dose vaccination plan. Just to be clear: none of these arguments convincingly suggests that 2-dose will be a better method to combat the pandemic.

  1. Many officials may be convinced that "no Proper Scientific Procedure has investigated this" is identical to "there is no knowledge". In non-pandemic times, if you squint juuust right, this looks like a cost-benefit analysis of delaying medical research versus endorsing crackpot pharmaceutics. I find it more than plausible that many people (and certainly most bureaucracies) are not capable of adjusting this argument to a pandemic. In their defense, you have to be somewhat of an expert in the field to make the cost-benefit assessment on a case-by-case basis (even though it is obvious in this case).
  2. Are there legal/reputational risks to publicly supporting 1-dose vaccines before the Medical Establishment has given it a seal of approval? This would explain why nobody blinked now that they are the norm - people were simply waiting for some agency to accept the blame if in hindsight it turned out to be a mistake.
  3. 80% is noticeably lower than 95%, so you can expect about 4 times as many thrillseekers to take the vaccine, go to the local mall, lick every object they can find and come down with something terrible. It could even be COVID. This is awful for public perception of the vaccine. Or, taking less of an extreme, people might risk-compensate to the point where 2x80% is not as much better than 1x95% as naive math might suggest (although I fail to see how it could ever close the gap. People aren't compensating that much.... right?).
  4. At certain points during the distribution it is conceivable that increasing the immunity in a particularly vulnerable subgroup of the population from 80% to 95% might have a higher impact (on the death toll, medical systems, you name it) than increasing the immunity of an arbitrary selected subgroup of the remainder of the population from 0% to 80%. This chance is bigger if you instituted some messed up prioritization on your subgroups in the first place (see: everywhere).

Anyway, the case for 1-dose is overwhelming. I just wanted to point out how otherwise intelligent people might get this question so incredibly wrong, seeing as I've run into shades of all four of these arguments in the past.
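The factor of 4 in point 3 comes straight from the failure rates, i.e. one minus the efficacy (the 80% and 95% figures are the ones used above, not precise trial results):

```python
def breakthrough_ratio(efficacy_low, efficacy_high):
    """How many times more breakthrough infections to expect at the
    lower efficacy, all else equal."""
    return (1 - efficacy_low) / (1 - efficacy_high)

# 80% efficacy leaves a 20% failure rate; 95% leaves 5%:
print(breakthrough_ratio(0.80, 0.95))  # ~4x as many breakthrough cases
```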

Comment by TheMajor on Covid 1/7: The Fire of a Thousand Suns · 2021-01-07T22:41:27.090Z · LW · GW

Oh, it’s so much worse than that. What happens when the central planner combines threats to those who don’t distribute all the vaccine doses they get, with other threats to those who let someone ‘jump the line’? Care to solve for the equilibrium? 

You conclude that vaccination facilities will reduce their orders so that they are guaranteed to be able to distribute them all. I think in practice it is much easier to cook the books and/or destroy vaccines as necessary.

More pressingly, this is the first mention I've run into of the potential seriousness of the South African variant. But (perhaps for the first time since February) it really seems to be the case that "more data is needed before we can make an informed judgment on this"?

Comment by TheMajor on Collider bias as a cognitive blindspot? · 2020-12-31T13:56:39.494Z · LW · GW

There has been previous discussion about this on LessWrong. In particular, this is precisely the focus of Why the tails come apart, if I'm not mistaken.

If I remember correctly that very post caused a brief investigation into an alleged negative correlation between chess ability and IQ, conditioning on very high chess ability (top 50 or something). Unfortunately I don't remember the conclusion.

Edit: and now I see Mo Nastri already pointed this out. Oops.

Comment by TheMajor on New SARS-CoV-2 variant · 2020-12-28T08:36:19.766Z · LW · GW

Your point on alternative hypotheses is well taken, I only mentioned the superspreader one since that was considered the main possibility for strong relative growth of one variant over another without increased infectiousness. Could you expand on the likelihood of any of these being true/link to discussion on them?

Comment by TheMajor on My Model of the New COVID Strain and US Response · 2020-12-27T18:33:45.745Z · LW · GW

I also thought this, but was told this was not the case (without sources though). If you are right then the scaling assumption is probably close to accurate. I tried briefly looking for more information on this but found it too complicated to judge (for example, papers summarizing contact tracing results in order to determine the relative importance of superspreader events are too complicated for me to undo their selection effects - in particular, the ones I saw were limited to confirmed cases, or sometimes even confirmed cases with known source).

EDIT: if I check microCOVID, for example, they state that the chance of catching it during a 1-hour dinner with another person who has been confirmed to have COVID is probably between 0.2% and 20%. The relevant event risks for group spread (as opposed to personal risk evaluations) are conditional on at least one person present having COVID. So is this interval a small chance or a large chance? I wouldn't be surprised if ~10% is sufficiently high that the linearity assumption becomes questionable, and a 1-hour dinner is far from the most risky event people are participating in.

Comment by TheMajor on My Model of the New COVID Strain and US Response · 2020-12-27T16:31:03.355Z · LW · GW

I agree that this means particular interactions would have a larger risk increase than the 70% cited (again, or whatever average you believe in).

In the 24-minute video in Zvi's weekly summary Vincent Racaniello makes the same point (along with many other good points), with the important additional fact that he is an expert (as far as I can tell?). The problem is that this leaves us in the market for an alternative explanation of the UK data, both their absolute increase in cases as well as the relative growth of this particular variant as a fraction of all sequenced COVID samples. There are multiple possible but unlikely explanations, such as superspreaders, 'mild' superspreaders along with a 'mild' increase in infectiousness, or even downright inflated numbers due to mistakes or political motives. To me all of these sound implausible, but if the biological prior on a mutation causing such extreme differences is sufficiently low they might still be likely a posteriori explanations.

I commented something similar on Zvi's summary, but I don't know how to link to comments on posts. It has a few more links motivating the above.

Comment by TheMajor on My Model of the New COVID Strain and US Response · 2020-12-27T09:59:43.499Z · LW · GW

I had a long discussion on this very topic, and wanted to share my thoughts somewhere. So why not here.

Disclaimer: I am not an expert on any of this.

The scaling assumption (if the new strain has an R of 1.7 when the old one has an R of 1, then we need countermeasures pulling the old one down to 0.6 to get the new one to 0.6 * 1.7 ≈ 1) is almost certainly too pessimistic an estimate, but I have no clue by how much. A lot of high risk events (going to a concert, partying with 10+ people in a closed room for an entire night, having a multiple-hour Christmas dinner with the entire family) will become less than linearly more risky. I interpreted the "70%" (after some initial confusion) to represent an increase in risk per event or unit time of exposure. But if you are sharing the same air with possibly contagious people for a long period of time your risk is all the way on the saturated end of the geometric distribution, and it simply can't go above 100%. So high risk events will likely stay high risk events.

At the same time, I expect a lot of medium and low risk events to become almost proportionally more risky. This includes events like having one or two people over for dinner while keeping the room properly ventilated, going to supermarkets, going to the office and using public transport. Something that has been bugging me is that the increase in R-value has been deduced from the actual increased rate at which it spreads, so it is simply not possible that every activity has less than 70% (or whatever number you believe in) increased risk, since that is apparently the population average under the UK lockdown level 2 conditions. So some of this nonlinearity has already been factored in, making it very difficult to say what stronger lockdowns would mean.

In conclusion, I think it is possible that, even if the new variant is 70% more transmissible, lockdown conditions that would have pushed the old strain down to 0.7 or only 0.8 might be sufficient to contain this new strain, and of course if the new strain is less transmissible than this we have even more leeway. At the same time I have absolutely no clue how to get a reliable estimate of the "old R needed".
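The saturation effect described above can be sketched numerically. This is a toy model of my own (not from any of the linked sources): treat each event as a simple exposure process where the chance of infection is 1 - exp(-hazard), so risk can never exceed 100%, and look at what multiplying the hazard by 1.7 does at different baseline risk levels.

```python
import math

def event_risk(hazard):
    # Probability of catching it at one event, modeled as 1 - exp(-hazard):
    # a simple exposure model in which risk saturates below 100%
    return 1 - math.exp(-hazard)

for hazard in [0.01, 0.1, 1.0, 3.0]:
    old = event_risk(hazard)
    new = event_risk(1.7 * hazard)
    print(f"baseline {old:6.1%} -> new {new:6.1%} (x{new / old:.2f})")
```

Low-risk events scale almost exactly by 1.7, while an event that was already ~95% risky only gets a few percent riskier: exactly the pattern above, where high risk events stay high risk events and the sub-linearity shows up at the top end.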

Comment by TheMajor on Covid 12/24: We’re F***ed, It’s Over · 2020-12-26T13:44:20.056Z · LW · GW

My father sent me this video (24 min) that makes the case for all of this being mostly a nothingburger. Or, to be more precise, he says he has only low confidence instead of moderate confidence that the new strain is substantially more infectious, which therefore means don’t be concerned. Which is odd, since even low confidence in something this impactful should be a big deal! It points to the whole ‘nothing’s real until it is proven or at least until it is the default outcome’ philosophy that many people effectively use.

I think this is a great video; it explained a lot of things very clearly. I'm not a biologist/epidemiologist/etc., and this video was very clear and helpful. In particular the strong prior "a handful of mutations typically does not lead to massive changes in reproduction rate" is a valuable insight that makes a lot of sense.

That being said, the main arguments against this new variant being a large risk seem to be:

  • The prior mentioned above.
  • The fact that current estimates of increased transmission rates are based on PCR testing, which does not identify variants.
  • The possibility of alternative explanations for the increase in nationwide infections in the UK, which have not been sufficiently ruled out (in particular superspreaders).
  • I think he is claiming that the NERVTAG meeting minutes are drawing a causal link between the lower ct value of this variant on PCR tests and its increased transmissibility, and that this is an uncertain inference to draw.

However, personally I think the strongest case for the increased transmissibility of this new variant comes not from indirect evidence as presented above, but from the direct observation of exponential growth in the relative number of cases over multiple weeks/months. See for example the ECDC threat assessment brief or the PHE technical briefing. These seem to strongly imply that, while being agnostic about the mechanism, this new variant is spreading very rapidly. So all things considered the linked video makes me update only very weakly towards a lower probability of this new variant being massively transmissible - a good explanation for the growth shown in both reports is still missing if it is not inherently more transmissible.

Comment by TheMajor on What evidence will tell us about the new strain? How are you updating? · 2020-12-26T12:06:29.507Z · LW · GW

Good point, I'm likely misinterpreting the nextstrain website then.

Comment by TheMajor on What evidence will tell us about the new strain? How are you updating? · 2020-12-26T11:05:03.121Z · LW · GW

I can answer this one, or more specifically the PHE can. The tl;dr of this technical briefing is that the new strain tests positive on two assays (N, ORF1ab) and negative on a third (S), and that up to some noise this is currently the only strain to do so. So the number of PCR tests that are both S-negative and COVID-positive is a good indication of the spread of the new strain, without the need for genome sequencing. This document makes this argument precise, and then produces a painful graph on page 8 showing the 'S dropout' proportion at the Milton Keynes Lighthouse lab (Buckinghamshire). Mid-December they show a proportion of over 60%.

This has led me to update towards the new variant being as aggressive as previously feared, because, unlike genome sequencing, PCR test data does not lag several weeks behind. Combined with the fact that genome sequencing is done sporadically at best (if I understand correctly, nextstrain data shows the UK has sequenced 85 samples since September, with neighbouring countries showing similar numbers) I think it may already be more widely spread/beyond containment in a lot of European countries. Edit: Oskar Mathiasen gives a different source with incompatible numbers, I am no longer confident in this point.

I also share shminux's fears that this more aggressive strain may be difficult to contain with just the measures we have taken so far.

Comment by TheMajor on New SARS-CoV-2 variant · 2020-12-22T11:07:11.225Z · LW · GW

I've been trying to understand this discussion (and I agree that this is one of the central questions for the model of how things will progress from here, in particular if March-style lockdowns will be sufficient or not to halt the spread of this strain). But now I'm mainly confused - isn't such a dramatic increase in Rt incompatible with the slower increase in the graph, as pointed out by CellBioGuy?

Edit: I've read yesterday's PHE investigation report, and they do explicitly confirm it is an increase of over +0.5 to the Rt under the conditions in England in weeks 44-49 of this year. So this seems like the bad possible interpretation, where it really does spread significantly more.

Comment by TheMajor on It turns out that group meetings are mostly a terrible way to make decisions · 2020-12-19T11:44:54.782Z · LW · GW

I certainly expect status games, above and beyond power games. Actually saying 'power games' was the wrong choice of words in my comment. Thank you for pointing this out!

That being said, I don't think the situation you describe is fully accurate. You describe group meetings as an arena for status (in the office), whereas I think instead they are primarily a tool for forcing cooperation. The social aspect still dominates the decision making aspect*, but the meeting is positive sum in that it can unify a group into acting towards a certain solution, even if that is not the best solution available.

*I think this is the main reason so many people are confused by the alleged inefficiency of meetings. If you have a difficult problem and no good candidate solutions it is in my experience basically never optimal to ask a group of people at once and hope they collectively solve it. Recognizing that this is at best a side-effect of group meetings cleared up a lot of confusion for me.

Comment by TheMajor on It turns out that group meetings are mostly a terrible way to make decisions · 2020-12-17T22:16:56.861Z · LW · GW

I'm gonna pull a Hanson here. What makes you think group meetings are about decision making?

I think the primary goal of many group meetings is not to find a solution to a difficult problem, but to force everybody in the meeting to publicly commit to the action that is decided on. This both cuts off the opportunity for future complaining and disobedience ('You should have brought that up in the meeting!') and spreads the blame if the idea doesn't work ('Well, we voted on it/discussed it'). Getting to the most effective solutions to your problems is secondary to achieving cooperation within the office.

Most group meetings are power games. Their main purpose is to, forcibly or not, create long-term cooperation by the people in the meeting. This is why they are often 'dull', or 'long', or 'ineffective' - the very cost you incur by attending is a signal of your loyalty and commitment. Trying to change this would make meetings less effective, not more effective.

Comment by TheMajor on How long does it take to become Gaussian? · 2020-12-08T20:32:11.871Z · LW · GW

Why not the Total Variation norm? KS distance is also a good candidate.
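For what it's worth, here is a rough Monte Carlo sketch of what measuring convergence in KS distance could look like (my own illustration, with arbitrary sample sizes): standardize a sum of n uniforms and compare its empirical CDF against the standard normal CDF.

```python
import math
import random

def ks_to_gaussian(n_summands, samples=20000, seed=0):
    # Empirical Kolmogorov-Smirnov distance between the standardized sum
    # of n_summands uniforms and a standard normal (Monte Carlo estimate).
    rng = random.Random(seed)
    data = sorted(
        (sum(rng.random() for _ in range(n_summands)) - n_summands / 2)
        / math.sqrt(n_summands / 12)  # Uniform(0,1) has mean 1/2, variance 1/12
        for _ in range(samples)
    )

    def normal_cdf(x):
        return 0.5 * (1 + math.erf(x / math.sqrt(2)))

    return max(abs((i + 1) / samples - normal_cdf(x))
               for i, x in enumerate(data))

print(ks_to_gaussian(1), ks_to_gaussian(8))  # distance shrinks as n grows
```

The estimate for a single uniform is visibly far from Gaussian, and the distance shrinks quickly as more summands are added; Total Variation would need a density estimate on top of this, which is one practical argument for KS.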

Comment by TheMajor on Covid 12/3: Land of Confusion · 2020-12-04T07:45:17.801Z · LW · GW

I usually just hope the Twitter links aren't that important/interesting.

Comment by TheMajor on How We Failed COVID-19 · 2020-12-03T17:33:45.868Z · LW · GW

I think your early analysis is accurate, but connecting this to 'reliable information sources about COVID' is completely off the mark. I don't know how to explain properly why I think this is so completely wrong - or at least, not without delving into a few-month sequence based on the material of https://samzdat.com. The 1-minute version goes something like:

There are many possible steps that all need to go right before appropriate collective action is taken to combat a national or global threat. This is especially true if we have shared responsibility, and even more so if the most promising countermeasures involve social changes (i.e. changes in the daily lives of a significant portion of the population). One of these steps is 'having access to proper information about the virus'. A few others are 'having access to rallying points for collective social action', 'willingness to make these social changes, instead of accepting the loss in life and health, in the first place', and I'm sure there are many others. I am not at all convinced that knowledge about the virus is the bottleneck in this process (in fact, I think it is the easiest step of them all). In my opinion the gap between not having accurate information and having accurate information is much much smaller than the gap between having accurate information and collectively acting on it.

Lastly, I think blaming lack of social action on lack of knowledge is a common mistake (maybe even a politically motivated tool), and I thank Lou Keep, linked above, for their wonderful explanation of this point.

Comment by TheMajor on Rationalist Town Hall: Pandemic Edition · 2020-10-23T08:13:58.491Z · LW · GW

I am not able to make it because of a one-off other appointment (a flight, actually). So I don't think this is very informative for the sake of planning. Usually my Sundays are unclaimed.

Comment by TheMajor on Rationalist Town Hall: Pandemic Edition · 2020-10-22T08:00:47.534Z · LW · GW

I really would have loved to attend, but won't be able to make it at that time. Will you (with permission of the participants, I imagine) record the meeting, or maybe write some possibly anonymised summary of the discussion after?

Comment by TheMajor on Rationality and Climate Change · 2020-10-06T08:19:48.086Z · LW · GW

I definitely agree that there is a bias in this community for technological solutions over policy solutions. However, I don't think that this bias is the deciding factor for judging 'trying to induce policy solutions on climate change' to not be cost-effective. You (and others) already said it best: climate change is far more widely recognised than other topics, with a lot of people already contributing. This topic is quite heavily politicized, and it is very difficult to distinguish "I think this policy would, despite the high costs, be a great benefit to humanity as a whole" from "Go go climate change team! This is a serious issue! Look at me being serious!".

Which reminds me: I think the standard counter-argument to applying the "low probability, high impact" argument to political situations applies: how can you be sure that you're backing the right side, or that your call to action won't be met with an equal call to opposite action by your political opponents? I'm not that eager to have an in-depth discussion on this in the comments here (especially since we don't actually have a policy proposal or a method to implement it), but one of the main reasons I am hesitant about policy proposals is the significant chance for large negative externalities, and the strong motivation of the proposers to downplay those.

Emiya said cost-effectiveness will be treated extensively, and I am extremely eager to read the full post. As I said above, if there is a cost-effective way for me to combat climate change this would jump to (near) the top of my priorities instantly.

Comment by TheMajor on Rationality and Climate Change · 2020-10-05T14:08:05.152Z · LW · GW

I completely agree, and would like to add that I personally draw a clear line between "the importance of climate change" and "the importance of me working on/worrying about climate change". All the arguments and evidence I've seen so far suggest solutions that are technological, social(/legal), or some combination of both. I have very little influence on any of these, and they are certainly not my comparative advantage.

If OP has a scheme where my time can be leveraged to have a large (or, at least, more than likely cost-effective) impact on climate change then this scheme would instantly be near the top of my priorities. But as it stands my main options are mostly symbolic.

As an aside, and also to engage with lincoln's points, I am highly sceptical of proposed solutions that require overhauls in policy and public attitude. These may or may not be the way forward, but my personal ability to tip the scales on these matters is slim to none. Wishing for societal change to suit any plans is just that, a wish.

Comment by TheMajor on Covid 10/1: The Long Haul · 2020-10-02T07:21:59.840Z · LW · GW

You want to incentivise people to get positive COVID tests? Ballsy.

On a more serious note, I doubt anybody would be interested in enforcing this. Diners are going out of business due to COVID restrictions, and for many restaurant owners the choice between going out of business or looking the other way when people ask to be seated is clear. Furthermore the goal of all this is to keep the number of people who have contracted COVID as low as possible, your proposed 'fix' would only allow a small minority to work/participate.

Comment by TheMajor on Covid 9/17: It’s Worse · 2020-09-19T07:31:12.131Z · LW · GW

I think Hanlon's razor applies here. Thank you for sharing the 5k/day, I will make a serious effort to obtain similar doses.

Comment by TheMajor on Covid 9/17: It’s Worse · 2020-09-18T07:47:54.633Z · LW · GW

For reference, what dose are you thinking of? Here in EU-land I can only get 5µg (200 IU) supplements easily.

Comment by TheMajor on What's Wrong with Social Science and How to Fix It: Reflections After Reading 2578 Papers · 2020-09-15T07:00:24.856Z · LW · GW

Certainly, but it's not malicious in the sense of deliberately citing bad science. More like negligence.

Comment by TheMajor on What's Wrong with Social Science and How to Fix It: Reflections After Reading 2578 Papers · 2020-09-13T13:48:37.798Z · LW · GW

I think there is an important (and obvious) third alternative to the two options presented at the end (of the snippet, rather early in the full piece), namely that many scientists are not very interested in the truth value of the papers they cite. This is neither malice nor stupidity. There is simply no mechanism to punish scientists who cite bad science (and it is not clear there should be, in my opinion). If a paper passes the initial hurdle of peer review it is officially Good Enough to be cited as well, even if it is later retracted (or, put differently, "I'm not responsible for the mistakes the people I cited make, the review committee should have caught it!").

Comment by TheMajor on New Paper on Herd Immunity Thresholds · 2020-07-31T16:46:38.151Z · LW · GW

It would if those neighbourhoods are very homogeneous in terms of connectivity. Why would their (in)homogeneity be similar to European countries?

Comment by TheMajor on The Goldbach conjecture is probably correct; so was Fermat's last theorem · 2020-07-15T08:40:51.168Z · LW · GW

Since a+b = b+a shouldn't the total number of 'different sums' be half of what you give? Fortunately the rest of the argument works completely analogously.

Comment by TheMajor on The Goldbach conjecture is probably correct; so was Fermat's last theorem · 2020-07-15T08:36:30.939Z · LW · GW

How does the randomness of the digits imply that the statement cannot be proven? Superficially the quote seems to use two different notions of randomness, namely "we cannot detect any patterns" (i.e. a pure random generator is the best predictor we have) and "we have shown that there can be no patterns" (i.e. we have shown no other predictor can do any better). Is this a known result from Ergodic Theory?

Comment by TheMajor on My prediction for Covid-19 · 2020-06-04T07:30:57.150Z · LW · GW

I'm happy to hear that some of these changes have been unexpectedly positive for you! Personally I already did a bunch of these things (shop for one week's worth at a time, have days working from home, order online). To offer a bit of a peek at the flip side: I work in mathematics, which is uniquely suited for working at home (there's a modern joke that to do mathematics all you need is pen, paper, a bottle of water and a supercomputer), yet 2 of the 11 colleagues in my group have suffered burnouts since March from the added stress of having to look after their households. We are considering going back to 20% office capacity soon in staggered shifts, which while nice still means we won't have a chance to talk or work together in practice. Obviously this is exactly the point, but I want to note that this is a far cry from normal. My productivity at the moment is at an all time low, and several of my other friends have already heard that if this situation continues for much longer they will be let go from their jobs. In this sense I think this is unsustainable, or at the very least a serious hit to our global growth and productivity.

I have absolutely no quantitative guess what the impact of superspreaders is, and it would be amazing if we could stop them quickly. I think Zvi also pointed out that superspreaders get eliminated quickly in a pandemic, one way or another (and 'having the disease, surviving and then becoming immune' counts as the other).

I thought that the current spread of Covid-19 in warmer countries (Brazil, India) was evidence against the virus being very susceptible to temperatures, but there are a lot of confounders. If you know of any good summary of the current knowledge on this please let me know, I am very interested in this (and it would likely change my predictions massively).

Comment by TheMajor on My prediction for Covid-19 · 2020-06-03T09:37:01.166Z · LW · GW

Thank you for your comment. I don't think we disagree - Slovakia (and other European countries) have done extremely well by acting quickly. I also fully support washing hands, wearing face masks and being smart about social contact. I think you are responding to my sentence

[...] this suggests to me that 'The Dance' will look a lot more like 'a full quarantine, but with a few restrictions lifted' than like 'restoring social contact, but wash your hands and wear a face mask'.

I'm sorry for the confusion. The situation you describe sounds more like my scenario 3 than my scenario 2, and I think I explained poorly where I draw the line in my quoted statement above. Going by wikipedia, the most recent round of relaxations in Slovakia still sounds far from life as normal to me. The maximum occupancy set at shops is up to 1 person per 15 square meters, which is 25% of what it is in normal times. I imagine similar concerns apply to offices, but I can't find how many people are back to working at the office. The opening of schools happened only this Monday, so it really is too early to tell what the impact of that will be. Zvi voices some concerns.

Lastly, my entire goal was to try and talk about relative impact, and give perspective to the magnitude of the measures we can expect going forwards. I don't see where I have mistakenly given absolute statements on effectiveness of measures (in fact I only mentioned face masks once, without making any statement for or against them in the OP), but if you point them out I would be happy to change them.

Comment by TheMajor on My prediction for Covid-19 · 2020-06-01T13:04:01.115Z · LW · GW

I agree completely. However, I think the amount it has gone up is critical here. A lot of the countermeasures and increased preparation are linear countermeasures against an exponential threat - maybe a region that could previously only handle 1000 ICU patients can now take care of 2000, but if R0 is significantly above 1 (let's say 1.5) this only buys you about a week and a half. I think this topic deserves its own entire post at some point, and I didn't want to get bogged down in details in the section on "What doesn't change", but if the true rule is "if under X circumstances in March it was smart to go into lockdown, it is smart in November to go into lockdown 2 weeks after seeing X" my conclusions are still the same.
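The back-of-the-envelope behind "only buys you about a week and a half" (the 5-day serial interval is my assumption, purely for illustration): with reproduction number R, cases multiply by R every serial interval, so extra capacity is consumed in log(capacity ratio)/log(R) generations.

```python
import math

SERIAL_INTERVAL_DAYS = 5  # assumed time between successive generations of infection

def days_to_fill_extra_capacity(r, capacity_ratio):
    # Days until the caseload grows by capacity_ratio, when cases
    # multiply by r every serial interval.
    generations = math.log(capacity_ratio) / math.log(r)
    return generations * SERIAL_INTERVAL_DAYS

# Doubling ICU capacity (1000 -> 2000 beds) at R = 1.5:
print(f"{days_to_fill_extra_capacity(1.5, 2):.1f} days")  # roughly a week and a half
```

Note that the answer scales only logarithmically in the capacity ratio: quadrupling capacity buys merely twice as much time as doubling it, which is the sense in which linear countermeasures lose to an exponential threat.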

I might write that full post sometime, with this and more back-and-forth, if people are interested. I made serious concessions to brevity above.