New Paper on Herd Immunity Thresholds

post by Zvi · 2020-07-29T20:50:01.242Z · score: 41 (19 votes) · LW · GW · 17 comments
Previously: On R0
This new paper suggests that herd immunity could be achieved with only about 10% infected rather than the typically suggested 60%-70%.
They claim this is due to differences in connectivity and thus exposure, and in susceptibility to infection. They claim the best model fit puts the threshold for four European epidemics at 16%-26% for England, 9.4%-11% for Belgium, 7.1%-9.9% for Portugal, and 7.5%-21% for Spain.
This being accurate would be excellent news.
The 60%-70% threshold commonly thrown around is, of course, utter nonsense. I’ve been over this several times, but will summarize.
The 60%-70% result is based on a fully naive SIR (susceptible, infected, recovered) model in which all of the following are assumed to be true:
- People are identical, and have identical susceptibility to the virus.
- People are identical, and have identical ability to spread the virus.
- People are identical, and have identical exposure to the virus.
- People are identical, and have contacts completely at random.
- The only intervention considered is immunity. No help from behavior adjustments.
All five of these mistakes are large, and all point in the same direction. Immunity matters much more than the ‘naive SIR’ model thinks. Whatever the threshold for immunity might be for any given initial reproduction rate, it’s nowhere near what the naive SIR outputs.
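For concreteness, the commonly quoted figure comes straight from the standard formula: in a homogeneous SIR model, growth stops once the susceptible fraction falls below 1/R0, so the immune fraction needed is 1 − 1/R0. A minimal sketch (the R0 values are my assumption, covering commonly cited estimates):

```python
# Herd immunity threshold under the naive homogeneous SIR model:
# the epidemic stops growing once susceptibles fall below 1/R0,
# so the immune fraction required is 1 - 1/R0.
def naive_threshold(r0: float) -> float:
    return 1.0 - 1.0 / r0

for r0 in (2.5, 3.0, 3.3, 4.0):
    print(f"R0 = {r0}: threshold = {naive_threshold(r0):.0%}")
```

The familiar 60%-70% range corresponds to assuming an R0 between roughly 2.5 and 3.3, plus all five of the assumptions above.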
Often they even take the number of cases with positive tests to be the number of infections, and use that to predict forward or train their model.
This naive model is not a straw man! Such obvious nonsense models are the most common models quoted by the press, the most common models quoted by so-called ‘scientific experts’ and the most common models used to determine policy.
The effective response is some combination of these two very poor arguments:
- “Until you can get a good measurement of this effect we will continue to use zero.”
- “Telling people the threshold is lower will cause them to take fewer precautions.”
Neither of these is how knowledge or science works. It’s motivated cognition. Period.
See On R0 for more details on that.
So when I saw this paper, I was hoping it would provide a better perspective that could be convincing, and a reasonable estimate of the magnitude of the effect.
I think the magnitude they are suggesting is very reasonable. Alas, I do not think the paper is convincing.
The Model
The model involves the use of calculus and many unexplained Greek letters. Thus it is impressive and valid.
If that’s not how science works, I fail to understand why they don’t explain what the hell they are actually doing.
Take the model description on page four. It’s all where this letter is this and that letter is that, with non-explicit assumption upon non-explicit assumption. Why do people write like this?
I tried to read their description on page 4 and their model made zero sense. None. The good news is it made so little sense that it was obvious that I couldn’t possibly be successfully making heads or tails of the situation, so I deleted my attempt to write up what I thought it meant (again, total gibberish) and instead I went in search of an actual explanation later in the paper.
All right, let’s get to their actual assumptions on page 19, where they’re written in English, and assume that the model correctly translates from the assumptions to the result because they have other people to check for that.
They believe the infectivity of ‘exposed individuals’ is half that of infectious ones, and that this period of being an ‘exposed individual’ takes four days to develop into being infectious. Then they are infectious for an average four days, then stop.
That’s not my model. I don’t think someone who caught the virus yesterday is half as infectious as they will be later. I think they’re essentially not infectious at all. This matters a lot! If my model is right, then if you go to a risky event on Sunday, someone seeing you on Monday is still safe. Under this paper’s model, that Monday meeting is dangerous. In fact, given the person has no symptoms yet and the person they caught it from still doesn’t, it’s very dangerous. That’s a big deal for practical planning. It makes it much harder to be relatively safe. It makes it much harder to usefully contact trace. Probably other implications as well.
What it doesn’t change much is the actual result. These are not the maths we are looking for, and their answers don’t much matter.
That’s because they’re controlled for by assuming the original R0, slash whatever assumption you make about the mean level of infectivity and susceptibility.
Technically, yes, there’s a difference. Everything is continuous, and the exact timing of when people are how infectious changes the progression of things a bit. A bit, but only a bit. If what we are doing is calculating the herd immunity threshold, you can pick any curve slope you want for exactly when people infect others. It will affect how long it takes to get to herd immunity. Big changes in average delay times would matter some (but again, over reasonable guesses, I’m thinking not enough to worry about) for how far we can expect to overshoot herd immunity before bringing infections down.
But the number of infected required will barely change. The core equation doesn’t care. Why are we jumping through all these hoops? Who is this going to convince, exactly?
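One way to see why the timing details wash out: in these homogeneous models, both the threshold and the eventual attack rate depend only on R0, not on the shape of the infectiousness curve. The classic final-size relation s∞ = exp(−R0(1 − s∞)) holds for SIR and SEIR alike. A quick sketch of the threshold versus the overshoot, for an unmitigated epidemic:

```python
import math

def threshold(r0: float) -> float:
    # Herd immunity threshold: growth stops when susceptibles < 1/R0.
    return 1.0 - 1.0 / r0

def final_attack_rate(r0: float) -> float:
    # Solve the final-size equation s = exp(-R0 * (1 - s)) by fixed-point
    # iteration; 1 - s is the fraction ever infected if the epidemic runs
    # unchecked past the threshold.
    s = 0.5
    for _ in range(200):
        s = math.exp(-r0 * (1.0 - s))
    return 1.0 - s

r0 = 4.0
print(f"threshold:  {threshold(r0):.1%}")         # 75.0%
print(f"final size: {final_attack_rate(r0):.1%}")  # ~98%, far past the threshold
```

The gap between 75% and ~98% is the overshoot; the infectiousness-timing assumptions move how fast you travel that path, not where the threshold sits.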
This is actually good news. If all the assumptions in that section don’t matter, then none of them being wrong can make the model wrong.
Second assumption is that acquired immunity is absolute. Once you catch Covid-19 and recover, you can’t catch it again. This presumably isn’t strictly true, but as I keep repeating, our continued lack of large scale reinfection makes it more approximately true every day.
Third assumption they suggest is that people with similar levels of connectivity are more likely to connect, relative to random connections between individuals. This seems obviously true on reflection. It’s not a good full picture of how people connect, but it’s a move in the right direction, unless it goes too far. It’s hard to get a good feel for how big this effect is in their model, but I think it’s very reasonable.
Fourth assumption is that there is variance in the degree of connectivity, and social distancing lowers the mean and variance proportionally (so the variance as a proportion of the mean is unchanged). They then note that it is possible that social distancing decreases differentiation in connectivity, which would affect their results. I don’t know why they think about this as a one-way issue. Perhaps because as scientists they have to be concerned with things that if true would make their finding weaker, but ignore things that would make it stronger and are speculative. They suggest a variation where social distancing reduces connectivity variance.
I would ask which directional effect is more likely here. It seems to me more likely that social distancing increases variance. If R0 before distancing was somewhere between 2.6 and 4, and distancing cuts it to something close to 1, that means the average person is cutting out 60% to 75% of their effective connectivity. By contrast, at least half the people I know are cutting more than 90% of their connectivity, and also cutting their physical exposure levels when connecting, on top of that. In many cases, it’s more than 95%, and in some it’s 99%+. If anything, the existing introverts are doing larger percentage cuts while also feeling better about the lifestyle effects. Whereas essential workers and kids who don’t care and those who don’t believe this is a real thing likely are not cutting much connectivity at all.
I’ve talked about it enough I don’t want to get into it beyond that again here, but I’d expect higher variance distributions than before. The real concern is whether the connectivity levels during distancing are no longer that correlated to those without distancing, because that would mean we weren’t getting the full selection effects. The other hidden variable is if people who are immune then seek out higher connectivity. That effectively greatly amplifies social distancing. Immunity passports two months ago.
Fifth, they modeled ‘non-pharmaceutical interventions’ as a gradual lowering of the infection rate. This is supposed to cover masks, distancing, hand washing and such. They said 21 days to implement distancing, then 30 days at max effectiveness, then a gradual lifting whose speed does not impact the model’s results much.
They then take the observed data and use Bayesian inference to find the most likely parameters for their model.
To do that, they made two additional simplifying assumptions.
The first was that the fraction of cases that were identified is a constant throughout the period of data reported. This is false, of course. As time went on, testing everywhere improved, and at higher infection rates testing gets overwhelmed more easily and people are less willing to be tested. They are using European data, which means there might be less impact than this would have in America, but it’s still pretty bad to assume this is a constant and I’m sad they didn’t choose something better. I don’t know if a different assumption changes their answers much.
The second was that local transmission starts when countries/regions report 1 case per 5 million population in one day. An assumption like this seems deeply silly, like flipping a switch, but I presume the model needed it and choosing the wrong date to start with would be mostly harmless. If it would be a substantial impact, then shall we say I have concerns.
They then used the serological survey in Spain to calculate that the reporting rate of infections in Spain was around 6%. That seems to me to be on the low end of realistic. If anything, my guess would be that the serological survey was an undercount, because it seems likely some people don’t show immunity on those tests but are indeed immune, but the resulting number seems relatively low so I’ll accept it.
They then use the rate of PCR testing relative to Spain in the other countries to get reporting rates of 9% for Portugal, 6% for Belgium and 2.4% for England. That 2.4% number is dramatically low given what we know and I’m suspicious of it. I’m curious what their guess would be for the United States.
Then they took the best fit of the data, and produced their model.
Anyone Convinced?
Don’t all yell at once. My model doesn’t think anyone was convinced. Why?
The paper doesn’t add up to more than its key insight, nor does it prove that insight.
Either you’re going to buy the core insight of the paper the moment you hear it and think about it (which I do), in which case you don’t need the paper. Or you don’t buy the core insight of the paper when you hear it, in which case nothing in the paper is going to change that.
The core insight of the paper is that if different people are differently vulnerable to infection, and different people have different amounts of connectivity and exposure, and those differences persist over time, then the people who are more vulnerable and more connected get infected faster, and thus herd immunity’s threshold is much lower.
Well, no shirt, Sherlock.
If the above paragraph isn’t enough to make that point, will the paper help? That seems highly unlikely to me. Anyone willing to think about the physical world will realize that different people have radically different amounts of connectivity. Most who think about the physical world will conclude that they also have importantly different levels of vulnerability to infection and ability to infect, and that those two will be correlated.
Most don’t buy the insight.
Why are so few people buying this seemingly trivial and obvious insight?
I gave my best guess in the first section. It is seen as an argument, and therefore a soldier, for not dealing with the virus. And it is seen as not legitimate to count something that can’t be quantified – who are you to alter the hard numbers and basic math without a better answer you can defend? Thus, modesty, and the choice of an estimate well outside the realm of the plausible.
Add in that most people don’t think about or believe in the physical world in this way, as something made up of gears and cause and effect that one can figure out with logic. They hear an expert say ‘70%’ and think nothing more about it.
Then there are those who do buy the insight. If anything, I am guessing the paper discourages this, because its most prominent effect is to point out that accepting the insight implies a super low immunity threshold, thus causing people to want to recoil.
Once you buy the insight, we’re talking price. The paper suggests one outcome, but the process they use is sufficiently opaque and arbitrary and dependent on its assumptions that it’s more proof of concept than anything else.
It’s mostly permission to say numbers like ‘10% immunity threshold’ out loud and have a paper one can point to so one doesn’t sound crazy. Which is useful, I suppose. I’m happy the paper exists. I just wish it was better.
There’s nothing especially obviously wrong with the model or their final estimate. But that does not mean there’s nothing wrong with their model. Hell if I know. It would take many hours poring over details, and likely implementing the model yourself and tinkering with it, before one could have confidence in the outputs. Only then should it provide much evidence for what that final price should look like.
And it should only have an impact then if the model is in practice doing more than stating the obvious implications of its assumptions.
If this paper did convince you, or failed to convince you for reasons other than the ones I give here, I’m curious to hear about it in the comments.
How Variant is Connectivity Anyway?
I think very, very variant. I hope to not repeat my prior arguments too much, here.
Out of curiosity, I did a Twitter poll on the distribution of connectivity, and got this result with 207 votes:
Divide USA into 50% of non-immune individuals taking relatively less Covid-19 risk and 50% taking relatively more. What % of total risk is being taken by the safer 50%?
Less than 10%: 27.5%
I would have voted for under 10%.
This is an almost exactly even split between more or less than 15%, so let’s say that the bottom 50% account for 15% of the risk, and the other 50% account for 85% of the risk.
If we assumed the nation was only these two pools, and people got infected proportionally to risk taken, what does this make the herd immunity threshold?
Let’s continue to be conservative and assume initial R0 = 4, on the high end of estimates.
For pure SIR, immunity threshold is 75%.
With two classes of people, immunity threshold is around 35%.
Adding even one extra category of people cuts the herd immunity threshold by more than half, all on its own.
If this 85/15 rule is even somewhat fractal, we are going to get to herd immunity very quickly.
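One back-of-the-envelope way to reproduce those numbers (my reconstruction of the arithmetic, not a full two-class model): if the riskier half carries 85% of transmission, the epidemic is effectively running in that half with an effective R of about 0.85 × 4 = 3.4, so herd immunity arrives when roughly 1 − 1/3.4 ≈ 71% of that half, i.e. ~35% of the whole population, is immune.

```python
def naive_threshold(r0: float) -> float:
    # Homogeneous SIR herd immunity threshold.
    return 1.0 - 1.0 / r0

R0 = 4.0                 # conservative, high-end estimate from the text
RISK_SHARE_HIGH = 0.85   # riskier 50% accounts for 85% of total risk
POP_SHARE_HIGH = 0.50

# Pure SIR, everyone identical:
print(f"one class:   {naive_threshold(R0):.0%}")   # 75%

# Two classes: approximate the epidemic as confined to the high-risk
# half, which carries 85% of the transmission.
r_eff_high = R0 * RISK_SHARE_HIGH                  # 3.4
within_group = naive_threshold(r_eff_high)         # ~71% of that half
population_threshold = within_group * POP_SHARE_HIGH
print(f"two classes: {population_threshold:.0%}")  # ~35%
```

This approximation neglects transmission within the safer half entirely, so treat it as an intuition pump rather than a precise threshold; the point is how much even one split moves the number.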
Hopefully this was a good basic intuition pump for how effective such factors can be – and it seems more convincing to me than the paper was.
Can We Quantify This Effect?
Yes. Yes, we can.
We haven’t. And we won’t. But we could!
It would be easy. All you have to do is find a survey method that generates a random sample, and find a distribution of connectivity. Then give everyone antibody tests, then examine the resulting data. For best results on small samples, also give the survey to people who have already tested positive.
This is not a hard problem. It requires no controlled experiments and endangers no test subjects. It has huge implications for policy. Along the way, you’d also be able to quantify risk from different sources.
Then you can use that data to create the model, and see what threshold we’re dealing with.
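A toy simulation of what such a survey could show (every distribution and parameter here is an illustrative assumption, not an estimate): draw connectivity from a heavy-tailed distribution, infect people with probability proportional to connectivity, and compare the mean connectivity of the antibody-positive group to that of the population.

```python
import random

random.seed(0)

# Illustrative assumption: connectivity is lognormally distributed.
population = [random.lognormvariate(0.0, 1.0) for _ in range(100_000)]

# Infect with probability proportional to connectivity, scaled so that
# roughly 10% of the population has been infected.
mean_c = sum(population) / len(population)
scale = 0.10 / mean_c
infected = [c for c in population if random.random() < min(1.0, scale * c)]

mean_infected = sum(infected) / len(infected)
print(f"population mean connectivity: {mean_c:.2f}")
print(f"infected mean connectivity:   {mean_infected:.2f}")
# The infected group's mean connectivity sits well above the population
# mean: high-connectivity people get infected first, which is exactly
# the selection effect that lowers the herd immunity threshold.
```

A real survey would replace the simulated draws with questionnaire responses and the simulated infections with antibody results, then fit the model to the measured gap.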
That’s the study that needs to happen here. It probably won’t happen.
Until then, this is what we have. It’s not convincing. It’s not making me update. But it is a study one can point to that supports an obviously correct directional update, and comes up with a plausible estimate.
So for that, I want to say: Thank you.