New Paper on Herd Immunity Thresholds

post by Zvi · 2020-07-29T20:50:01.242Z · LW · GW · 16 comments

Contents

  The Model
  Anyone Convinced?
  How Variant is Connectivity Anyway?
  Can We Quantify This Effect?

Previously: On R0

This new paper suggests that herd immunity could be achieved with only about 10% infected rather than the typically suggested 60%-70%.

They claim this is due to differences in connectivity and thus exposure, and in susceptibility to infection. They claim the best model fit puts the threshold for four European epidemics at 16%-26% for England, 9.4%-11% for Belgium, 7.1%-9.9% for Portugal, and 7.5%-21% for Spain.

This being accurate would be excellent news.

The 60%-70% threshold commonly thrown around is, of course, utter nonsense. I’ve been over this several times, but will summarize.

The 60%-70% result is based on a fully naive SIR (susceptible, infected, recovered) model in which all of the following are assumed to be true:

  1. People are identical, and have identical susceptibility to the virus.
  2. People are identical, and have identical ability to spread the virus.
  3. People are identical, and have identical exposure to the virus.
  4. People are identical, and have contacts completely at random.
  5. The only intervention considered is immunity. No help from behavior adjustments.

All five of these mistakes are large, and all point in the same direction. Immunity matters much more than the ‘naive SIR’ model thinks. Whatever the threshold for immunity might be for any given initial reproduction rate, it’s nowhere near what the naive SIR outputs.
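For reference, the entire calculation behind the 60%-70% number fits in a few lines. A minimal sketch, with R0 values that are commonly assumed rather than taken from this paper:

```python
# The naive SIR herd immunity threshold: spread turns over once the average
# case infects fewer than one other person, i.e. once the susceptible
# fraction S drops below 1/R0. Immune threshold = 1 - 1/R0.
for r0 in (2.5, 3.0, 3.3):   # commonly assumed R0 values for Covid-19
    print(f"R0 = {r0}: naive threshold = {1 - 1 / r0:.0%}")
# R0 = 2.5 -> 60%, R0 = 3.0 -> 67%, R0 = 3.3 -> 70%
```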

Often they even take the number of cases with positive tests to be the number of infections, and use that to predict forward or train their model.

This naive model is not a straw man! Such obvious nonsense models are the most common models quoted by the press, the most common models quoted by so-called ‘scientific experts’ and the most common models used to determine policy.

The effective response is some combination of these two very poor arguments:

  1. “Until you can get a good measurement of this effect we will continue to use zero.”
  2. “Telling people the threshold is lower will cause them to take fewer precautions.”

Neither of these is how knowledge or science works. It’s motivated cognition. Period.

See On R0 for more details on that.

So when I saw this paper, I was hoping it would provide a better perspective that could be convincing, and a reasonable estimate of the magnitude of the effect.

I think the magnitude they are suggesting is very reasonable. Alas, I do not think the paper is convincing.

The Model

The model involves the use of calculus and many unexplained Greek letters. Thus it is impressive and valid.

If that’s not how science works, I fail to understand why they don’t explain what the hell they are actually doing.

Take the model description on page four. It’s all where this letter is this and that letter is that, with non-explicit assumption upon non-explicit assumption. Why do people write like this?

I tried to read their description on page 4 and their model made zero sense. None. The good news is it made so little sense that it was obvious that I couldn’t possibly be successfully making heads or tails of the situation, so I deleted my attempt to write up what I thought it meant (again, total gibberish) and instead I went in search of an actual explanation later in the paper.

All right, let’s get to their actual assumptions on page 19, where they’re written in English, and assume that the model correctly translates from the assumptions to the result because they have other people to check for that.

They believe the infectivity of ‘exposed individuals’ is half that of fully infectious ones, and that this ‘exposed’ period lasts four days before the person becomes infectious. They are then infectious for an average of four days, then stop.

That’s not my model. I don’t think someone who caught the virus yesterday is half as infectious as they will be later. I think they’re essentially not infectious at all. This matters a lot! If my model is right, then if you go to a risky event on Sunday, someone seeing you on Monday is still safe. Under this paper’s model, that Monday meeting is dangerous. In fact, given the person has no symptoms yet and the person they caught it from still doesn’t, it’s very dangerous. That’s a big deal for practical planning. It makes it much harder to be relatively safe. It makes it much harder to usefully contact trace. Probably other implications as well.

What it doesn’t change much is the actual result. These are not the maths we are looking for, and their answers don’t much matter.

That’s because they’re controlled for by the assumed original R0, or whatever assumption you make about the mean level of infectivity and susceptibility.

Technically, yes, there’s a difference. Everything is continuous, and the exact timing of when people are how infectious changes the progression of things a bit. A bit, but only a bit. If what we are doing is calculating the herd immunity threshold, you can pick any curve slope you want for exactly when people infect others. It will affect how long it takes to get to herd immunity. Big changes in average delay times would matter some (but again, over reasonable guesses, I’m thinking not enough to worry about) for how far we can expect to overshoot herd immunity before bringing infections down.

But the number of infected required will barely change. The core equation doesn’t care. Why are we jumping through all these hoops? Who is this going to convince, exactly?
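To see this concretely, here is a crude simulation of both timing profiles: the paper’s (exposed people half as infectious for four days, then fully infectious for four) and mine (not infectious at all until the infectious stage), with β back-calculated so both match the same R0. This is my sketch, not the paper’s code:

```python
def simulate(beta_E, beta_I, days=400, dt=0.05):
    """Euler-integrated SEIR-ish run with a possibly-infectious 'exposed' stage.
    Returns (fraction no longer susceptible when incidence peaks, final attack rate)."""
    S, E, I = 1 - 1e-5, 1e-5, 0.0
    peak_force, immune_at_peak = 0.0, 0.0
    for _ in range(int(days / dt)):
        force = beta_E * E + beta_I * I      # new infections per susceptible per day
        if force > peak_force:
            peak_force, immune_at_peak = force, 1 - S
        new = force * S
        dE, dI = new - E / 4, E / 4 - I / 4  # 4-day exposed stage, 4-day infectious stage
        S -= new * dt
        E += dE * dt
        I += dI * dt
    return immune_at_peak, 1 - S

R0 = 3.0
# Paper's profile: exposed cases half as infectious, so R0 = beta * (0.5*4 + 4).
paper = simulate(beta_E=0.5 * R0 / 6, beta_I=R0 / 6)
# My profile: exposed cases not infectious at all, so R0 = beta * 4.
mine = simulate(beta_E=0.0, beta_I=R0 / 4)
print(f"no longer susceptible when spread turns over: {paper[0]:.0%} vs {mine[0]:.0%}")
print(f"final attack rate (overshoot differs a little): {paper[1]:.0%} vs {mine[1]:.0%}")
# Both turn over near 1 - 1/R0 ~= 67%; the timing profile changes speed, not the threshold.
```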

This is actually good news. If all the assumptions in that section don’t matter, then none of them being wrong can make the model wrong.

Second assumption is that acquired immunity is absolute. Once you catch Covid-19 and recover, you can’t catch it again. This presumably isn’t strictly true, but as I keep repeating, our continued lack of large scale reinfection makes it more approximately true every day.

Third assumption is that people with similar levels of connectivity are more likely to connect with each other, relative to purely random mixing. This seems obviously true on reflection. It’s not a good full picture of how people connect, but it’s a move in the right direction, unless it goes too far. It’s hard to get a good feel for how big this effect is in their model, but I think it’s very reasonable.

Fourth assumption is that there is variance in the degree of connectivity, and social distancing lowers the mean and variance proportionally (so the variance as a proportion of the mean is unchanged). They then note that it is possible that social distancing decreases differentiation in connectivity, which would affect their results. I don’t know why they treat this as a one-way issue. Perhaps because, as scientists, they have to be concerned with speculative things that would make their finding weaker, but can ignore ones that would make it stronger. They suggest a variation where social distancing reduces connectivity variance.

I would ask which directional effect is more likely here. It seems to me more likely that social distancing increases variance. If R0 before distancing was somewhere between 2.6 and 4, and distancing cuts it to something close to 1, that means the average person is cutting out 60% to 75% of their effective connectivity. By contrast, at least half the people I know are cutting more than 90% of their connectivity, and also cutting their physical exposure levels when connecting, on top of that. In many cases, it’s more than 95%, and in some it’s 99%+. If anything, the existing introverts are doing larger percentage cuts while also feeling better about the lifestyle effects. Whereas essential workers, kids who don’t care, and those who don’t believe this is a real thing likely are not cutting much connectivity at all.

I’ve talked about it enough that I don’t want to get into it again here, but I’d expect higher variance distributions than before. The real concern is whether connectivity levels during distancing are no longer well correlated with those without distancing, because that would mean we weren’t getting the full selection effects. The other hidden variable is whether people who are immune then seek out higher connectivity. That effectively greatly amplifies social distancing. Immunity passports two months ago.

Fifth, they modeled ‘non-pharmaceutical interventions’ as a gradual lowering of the infection rate. This is supposed to cover masks, distancing, hand washing and such. They said 21 days to implement distancing, then 30 days at max effectiveness, then a gradual lifting whose speed does not impact the model’s results much.

They then take the observed data and use Bayesian inference to find the most likely parameters for their model.

To do that, they made two additional simplifying assumptions.

The first was that the fraction of cases that were identified is a constant throughout the period of data reported. This is false, of course. As time went on, testing everywhere improved, and at higher infection rates testing gets overwhelmed more easily and people are less willing to be tested. They are using European data, which means there might be less impact than this would have in America, but it’s still pretty bad to assume this is a constant and I’m sad they didn’t choose something better. I don’t know if a different assumption changes their answers much.

The second was that local transmission starts when countries/regions report 1 case per 5 million population in one day. An assumption like this seems deeply silly, like flipping a switch, but I presume the model needed it and choosing the wrong date to start with would be mostly harmless. If it would be a substantial impact, then shall we say I have concerns.

They then took the serological survey in Spain and used it to calculate that the reporting rate of infections in Spain was around 6%. That seems to me to be on the low end of realistic. If anything, my guess would be that the serological survey was an undercount, because it seems likely some people don’t show immunity on those tests but are indeed immune, but the resulting number seems relatively low so I’ll accept it.

They then use the rate of PCR testing relative to Spain in the other countries to get reporting rates of 9% for Portugal, 6% for Belgium and 2.4% for England. That 2.4% number is dramatically low given what we know and I’m suspicious of it. I’m curious what their guess would be for the United States.

Then they took the best fit of the data, and produced their model.
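For concreteness, the general shape of that procedure looks something like the sketch below. It is a toy stand-in: a crude SIR in place of their transmission model, and a Poisson likelihood with a grid search in place of their Bayesian machinery:

```python
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(0)

def expected_reported(r0, report_rate, days=60, pop=1e7):
    """Toy stand-in for their transmission model: discrete-day SIR,
    4-day infectious period, constant fraction of infections reported."""
    S, I = pop - 100.0, 100.0
    out = []
    for _ in range(days):
        new = (r0 / 4) * I * S / pop
        S, I = S - new, I + new - I / 4
        out.append(new * report_rate)
    return np.array(out)

# Synthetic "observed" case counts generated from known true parameters:
observed = rng.poisson(expected_reported(r0=3.0, report_rate=0.06))

# Crude maximum-likelihood grid search standing in for their Bayesian fit:
grid = [(r0, rr)
        for r0 in np.arange(2.0, 4.05, 0.1)
        for rr in np.arange(0.02, 0.151, 0.01)]
best = max(grid, key=lambda g: poisson.logpmf(observed, expected_reported(*g) + 1e-9).sum())
print(f"recovered: R0 ~= {best[0]:.1f}, reporting rate ~= {best[1]:.0%}")  # should land near 3.0 and 6%
```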

Anyone Convinced?

Don’t all yell at once. My model Doesn’t think anyone was convinced. Why?

The paper doesn’t add up to more than its key insight, nor does it prove that insight.

Either you’re going to buy the core insight of the paper the moment you hear it and think about it (which I do), in which case you don’t need the paper. Or you don’t buy the core insight of the paper when you hear it, in which case nothing in the paper is going to change that.

The core insight of the paper is that if different people are differently vulnerable to infection, and different people have different amounts of connectivity and exposure, and those differences persist over time, then the people who are more vulnerable and more connected get infected faster, and thus herd immunity’s threshold is much lower.

Well, no shirt, Sherlock.

If the above paragraph isn’t enough to make that point, will the paper help? That seems highly unlikely to me. Anyone willing to think about the physical world will realize that different people have radically different amounts of connectivity. Most who think about the physical world will conclude that they also have importantly different levels of vulnerability to infection and ability to infect, and that those two will be correlated.

Most don’t buy the insight.

Why are so few people buying this seemingly trivial and obvious insight?

I gave my best guess in the first section. It is seen as an argument, and therefore a soldier, for not dealing with the virus. And it is seen as not legitimate to count something that can’t be quantified – who are you to alter the hard numbers and basic math without a better answer you can defend? Thus, modesty, and the choice of an estimate well outside the realm of the plausible.

Add in that most people don’t think about or believe in the physical world in this way, as something made up of gears and cause and effect that one can figure out with logic. They hear an expert say ‘70%’ and think nothing more about it.

Then there are those who do buy the insight. If anything, I am guessing the paper discourages this, because its most prominent effect is to point out that accepting the insight implies a super low immunity threshold, thus causing people to want to recoil.

Once you buy the insight, we’re talking price. The paper suggests one outcome, but the process they use is sufficiently opaque and arbitrary and dependent on its assumptions that it’s more proof of concept than anything else.

It’s mostly permission to say numbers like ‘10% immunity threshold’ out loud and have a paper one can point to so one doesn’t sound crazy. Which is useful, I suppose. I’m happy the paper exists. I just wish it was better.

There’s nothing especially obviously wrong with the model or their final estimate. But that does not mean there’s nothing wrong with their model. Hell if I know. It would take many hours poring over details, and likely implementing the model yourself and tinkering with it, before one could have confidence in the outputs. Only then should it provide much evidence for what that final price should look like.

And it should only have an impact then if the model is in practice doing more than stating the obvious implications of its assumptions.

If this paper did convince you, or failed to convince you for reasons other than the ones I give here, I’m curious to hear about it in the comments.

How Variant is Connectivity Anyway?

I think very, very variant. I hope not to repeat my prior arguments too much here.

Out of curiosity, I did a Twitter poll on the distribution of connectivity, and got this result with 207 votes:

Divide USA into 50% of non-immune individuals taking relatively less Covid-19 risk and 50% taking relatively more. What % of total risk is being taken by the safer 50%?

Less than 10%: 27.5%

10%-15%: 22.2%

15%-25%: 19.3%

25%-50%: 30.9%

I would have voted for under 10%.

This is an almost exactly even split between more or less than 15%, so let’s say that the bottom 50% account for 15% of the risk, and the other 50% account for 85% of the risk.

If we assumed the nation was only these two pools, and people got infected proportionally to risk taken, what does this make the herd immunity threshold?

Let’s continue to be conservative and assume initial R0 = 4, on the high end of estimates.

For pure SIR, immunity threshold is 75%.

With two classes of people, immunity threshold is around 35%.

Adding even one extra category of people cuts the herd immunity threshold by more than half, all on its own.
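Here is that two-class calculation made explicit. Fair warning: the answer is sensitive to exactly how ‘risk’ maps into transmission. The sketch below assumes equal-size groups, proportionate mixing, risk that scales both catching and spreading, and immunity accruing in proportion to risk taken; that particular mapping lands in the mid-40s rather than at 35%, other reasonable mappings land lower, and all of them land far below 75%:

```python
import numpy as np

p = np.array([0.5, 0.5])       # two equal-sized groups
a = np.array([0.15, 0.85])     # relative risk taken: safer half carries 15% of it
R0 = 4.0

w = p * a**2                   # next-generation weight: risk scales both
                               # catching the virus and passing it on
share = p * a / (p * a).sum()  # infections accrue in proportion to risk taken

def r_eff(H):
    """Effective R when a fraction H of the whole population is immune."""
    immune = np.minimum(share * H / p, 1.0)        # immune fraction within each group
    return R0 * (w * (1 - immune)).sum() / w.sum()

lo, hi = 0.0, 1.0              # bisect for the H where r_eff crosses 1
for _ in range(50):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if r_eff(mid) > 1.0 else (lo, mid)

print(f"naive SIR threshold: {1 - 1/R0:.0%}")      # 75%
print(f"two-group threshold: {hi:.0%}")            # ~45% under this mapping
```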

If this 85/15 rule is even somewhat fractal, we are going to get to herd immunity very quickly.

Hopefully this was a good basic intuition pump for how effective such factors can be – and it seems more convincing to me than the paper was.

Can We Quantify This Effect?

Yes. Yes, we can.

We haven’t. And we won’t. But we could!

It would be easy. All you have to do is find a survey method that generates a random sample, and use it to measure the distribution of connectivity. Then give everyone antibody tests and examine the resulting data. For best results on small samples, also give the survey to people who have already tested positive.

This is not a hard problem. It requires no controlled experiments and endangers no test subjects. It has huge implications for policy. Along the way, you’d also be able to quantify risk from different sources.

Then you can use that data to create the model, and see what threshold we’re dealing with.
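Concretely, the analysis could be as simple as the following sketch. The data here is a synthetic stand-in, and the lognormal connectivity distribution and the exposure constant are purely my assumptions; real survey rows would replace them:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for the survey: one connectivity score and one antibody result per
# respondent. Real data would replace these two lines; the lognormal shape and
# the 0.08 exposure constant are pure assumptions.
a = rng.lognormal(0.0, 1.0, size=20_000)            # self-reported connectivity
sero = rng.random(a.size) < 1 - np.exp(-0.08 * a)   # antibody-positive?

# If infection risk so far has been proportional to connectivity, the immune are
# connectivity-weighted, and the transmission left for the virus to work with
# scales with the remaining connectivity-squared mass:
remaining = (a**2 * ~sero).sum() / (a**2).sum()
print(f"seroprevalence: {sero.mean():.0%}")
print(f"transmission potential remaining: {remaining:.0%}")
# With an estimated pre-epidemic R0, read off R_eff = R0 * remaining, and the
# threshold is wherever that product drops below 1.
```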

That’s the study that needs to happen here. It probably won’t happen.

Until then, this is what we have. It’s not convincing. It’s not making me update. But it is a study one can point to that supports an obviously correct directional update, and comes up with a plausible estimate.

So for that, I want to say: Thank you.

16 comments

Comments sorted by top scores.

comment by Owain_Evans · 2020-07-30T11:47:26.066Z · LW(p) · GW(p)
This naive model is not a straw man! Such obvious nonsense models are the most common models quoted by the press, the most common models quoted by so-called ‘scientific experts’ and the most common models used to determine policy.

I think you underestimate the sophistication of the top epidemic modelers: Neil Ferguson, Adam Kucharski, Marc Lipsitch, and others. I tend to agree we need urgent empirical work on herd immunity thresholds (see my other comment) but the top epi people are aware of the considerations you raise. Communicating with the public is very challenging under the current circumstances and so it's reasonable these people would choose words carefully.

Your statement is also empirically false. One of the most influential models is the "Imperial Model", which certainly impacted UK policy and probably US and European policy too. Other countries did versions of the model. The lead researcher on the model literally became a household name in the UK. The Imperial Model is an agent-based model (not an SIR model). It has a very detailed representation of how exposure/contact differ among different age groups (work vs. school) and in regions with different population densities. It doesn't assume the only intervention is immunity, and follow up work has tested many different interventions. (AFAIK, it does assume equal susceptibility. But as it's an agent-based model you could experiment with heterogeneity in susceptibility. And I think evidence for variable susceptibility for reasons other than age remains fairly weak: https://twitter.com/OwainEvans_UK/status/1268873649202909185)

Replies from: SDM, Douglas_Knight
comment by Sammy Martin (SDM) · 2020-07-30T13:10:10.233Z · LW(p) · GW(p)

The lesson here may be that the public line about 'there's a fixed 70% herd immunity threshold' is just that - a public line, and isn't (and never was - if I remember rightly, the Imperial model from March estimated a herd immunity threshold of 40% without a lockdown) biasing the output of modelling. It could also be the case that doctors or generic public health people in the US are repeating the 70% line while epidemiologists and modellers with specific expertise (in the US and elsewhere) are being more methodical.

For what it's worth, I haven't heard much mention of a 70% immunity threshold in the UK recently, but I suspect the public conversation is worse in the US. That being said, there is still explicit derision of the concept of herd immunity, based on declining antibody counts that don't give strong evidence for anything, so Zvi's point that a lot of people don't want to hear about herd immunity still clearly applies - see e.g. this:

Prof Jonathan Heeney, a virologist at the University of Cambridge, said the findings had put “another nail in the coffin of the dangerous concept of herd immunity”.

With that as the background, I'd be interested to know your opinion on this UK government report. They go over a bunch of factors that might increase transmission and say that a 'reasonable worst case' scenario is R_t increasing to 1.7 in September and remaining constant, assuming effectively zero government action - total second wave deaths are about double the first, with a similar peak of currently infected individuals and the peak in January (meaning a lot of time to course-correct and reimpose measures). As far as I can tell that's just a guesstimate modelling assumption, not motivated by any kind of complicated transmission model.

(Honestly, this is a fair bit better than I would have guessed for the worst case scenario - a far cry from the sorts of things we discussed here in March [LW · GW].)

They don't say how plausible they think this scenario is or give explicit motivation for R_t=1.7, just model the consequences of that change.

Does this look like a paper that doesn't account for a potentially lower immunity threshold, so is probably overestimating the damage of a winter wave? And what about seasonality - they claim that the degree of seasonality of Covid-19 is highly uncertain. Is this true? I've heard some sources say it's probably not that seasonal and others say it definitely is. What's your read of that question? A winter wave seems to be the most likely route to a damaging second wave in Europe and it would be good to know how plausible that is.

comment by Douglas_Knight · 2020-07-30T19:10:20.115Z · LW(p) · GW(p)

The Imperial model is worse than the SIR model.

It accreted detail for a decade just to prove that they were doing something. It is a good demonstration of the typical failure modes of an agent-based model. A useful model has very few abstract parameters, so that they can be measured from reality. Agent-based models are useful to explore the space of relevant parameters, not to simulate a country. If simulating a country is "sophisticated," then I don't want to be a sophist.

Replies from: Owain_Evans
comment by Owain_Evans · 2020-07-31T09:51:32.710Z · LW(p) · GW(p)

I wasn't saying I'm a fan of the Imperial Model and I agree with most of these points. I think there are epi modelers who are aware of the limitations of models.

comment by Owain_Evans · 2020-07-30T11:37:32.626Z · LW(p) · GW(p)

IMO what's needed here is detailed empirical analysis. There are many places round the world that have had spread that was only weakly controlled. If you get the % seropositive for a bunch of places, you could (to some extent) extrapolate to Europe/US/East Asia, where there's currently more control. Here's where I'd look:

  • Brazil has had a raging epidemic for quite a few months. % positive tests is currently >70%. It seems very likely that some towns have hit herd immunity. Similar story for South Africa and Mexico. (Many other countries have similarly bad epidemics, but these three have relatively good data.)
  • Peru has had a bad epidemic. There's a serology study showing 71% seroprevalence in a town that was known to be very badly hit. It's probably lower than 71% but would be good to investigate. https://twitter.com/isabelrodbar/status/1285456607065681921
  • Some Indian states had >25% seropositive in studies that started in early July and there have been a huge number of new cases since then. Again, some towns have probably hit herd immunity.
  • Could also look at villages near Bergamo in Italy.
  • This study found 16% seropositive in a small town in Germany (i.e. with low population density). This town was locked down after an outbreak and so the 16% almost certainly underestimates the herd immunity threshold. This study was done pretty carefully (though the lead author has an axe to grind).
Replies from: DanielFilan
comment by DanielFilan · 2020-07-31T02:11:08.725Z · LW(p) · GW(p)

In SIR models you can overshoot herd immunity, right? As such, I'm not sure I should take ~30% seroprevalence as strong evidence that herd immunity is greater than ~20%. That being said, it's hard to understand how you could have ~70% seroprevalence if herd immunity is ~20%.

Replies from: Owain_Evans
comment by Owain_Evans · 2020-07-31T09:59:29.134Z · LW(p) · GW(p)

To be clear, I think the 71% result needs more investigation and (on priors) is probably lower. Yes, there is reason to expect overshoot. It seems the amount of overshoot would vary based on (a) NPIs being taken at the time (e.g. are some people never leaving the house) and (b) proportion of people who have cross-immunity or innate reduced susceptibility. (In principle, you could imagine 80% of people in a town live as normal and 20% won't leave the house till the pandemic is over.) Again, I think if we did a lot of studies, we'd get a sense of both the minimum herd immunity threshold and the variability in overshoot.
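A toy illustration of the pure-SIR baseline (made-up numbers, nothing calibrated): R0 = 2 implies a 50% threshold, but the epidemic overshoots to roughly an 80% final attack rate if nothing slows it down.

```python
# Toy illustration: pure SIR with R0 = 2 has a 50% herd immunity threshold
# but, left alone, overshoots to roughly an 80% final attack rate.
S, I, dt = 1 - 1e-4, 1e-4, 0.05
beta, gamma = 2 / 5, 1 / 5          # R0 = beta/gamma = 2, 5-day infectious period
while I > 1e-8:
    new = beta * I * S
    S, I = S - new * dt, I + (new - gamma * I) * dt
print(f"final attack rate: {1 - S:.0%}")   # ~80%, vs the 50% threshold
```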

comment by Douglas_Knight · 2020-07-30T21:18:01.887Z · LW(p) · GW(p)
1. People are identical, and have identical susceptibility to the virus.
2. People are identical, and have identical ability to spread the virus.
3. People are identical, and have identical exposure to the virus.
4. People are identical, and have contacts completely at random.
5. The only intervention considered is immunity. No help from behavior adjustments.
All five of these mis­takes are large, and all point in the same di­rec­tion.

I think you are making an error about 5. There are several questions you could ask the SIR model. If you mix them up, you get the wrong answer, but that's not the fault of the model. The SIR model allows non-immunity changes by just changing R. The question of what would herd immunity be without behavior adjustments is a perfectly reasonable question. It is the question of what level of immunity would allow us to go back to normal without risking an outbreak.

Maybe I don't understand what you mean by 2 and 3, but I don't see how they predict systematic deviation from the SIR model, unless the effects in 2 and 3 are correlated. Probably I would just subsume 2 and 3 into 1 and 4.

I see three main deviations from the SIR model. One is natural immunity. Like Owain, I think that this is overplayed, at least in Europe. The second is the network difference you talk about between the connected and the isolated. But the third is the obvious network structure of cities. Talking about whether Italy has achieved herd immunity is an error: Milan can achieve it without protecting Naples. Talking about a national immunity threshold is a category error and using national PCR and antibody numbers is not so useful. (I'm not sure how badly this paper makes this mistake. It does talk about Madrid and Catalonia, but in other countries I think it uses the only data it can.)

comment by [deleted] · 2020-07-31T07:31:57.713Z · LW(p) · GW(p)

There are neighborhoods in Indian cities that are already over 60% seropositive.

This doesn't empirically hold up.

Replies from: TheMajor
comment by TheMajor · 2020-07-31T16:46:38.151Z · LW(p) · GW(p)

It would if those neighbourhoods are very homogeneous in terms of connectivity. Why would their (in)homogeneity be similar to European countries?

comment by shminux · 2020-07-30T05:09:52.057Z · LW(p) · GW(p)

I thought that the data show that immunity lasts maybe 2-3 months? If so, we will never get to 10%.

Replies from: Zvi, ellardk@gmail.com
comment by Zvi · 2020-07-30T10:15:08.069Z · LW(p) · GW(p)

No, just no. You are being misled. Lots of people were sick in March and almost none of them caught it again in July. We know it's a minimum of 4 months.

I have talked about this many times in my posts so I won't say more here.

comment by Kerry (ellardk@gmail.com) · 2020-07-30T16:49:55.220Z · LW(p) · GW(p)

In some people, antibodies start to wane at that point, but they still have antibodies for some time. So there's definitely at least some immunity for longer than that, plus other types of immunity (T-cell, etc.) Plus, if everyone is losing immunity over different time frames, they're not going to contract it nearly as easily as when we were all at zero, since many others around them will still be immune. The staggering probably helps a lot. I think the same is true for colds, and I don't get a cold every couple of months, though I know some people do. More like once a year, and colds are caused by a bunch of different viruses, so it's not even once a year for each virus.

comment by Purplehermann · 2020-07-31T01:06:32.600Z · LW(p) · GW(p)

Assumption 3: People connect with others of similar connectivity.

This seems obviously wrong to me, at least in part.

There are a few factors I can think of that influence connectivity.

  • Job (cashier, barista, teacher > normal desk job)
  • Number of social circles
  • Size of social circle
  • How much of a given circle an individual actually interacts with

I'm sure there are more. Aside from size of social circle, most humans are more likely to be connected to a random [very connected person] than a random [not very connected person].

(Differences existing in exposure, connectivity etc.. are obvious imo)

comment by Pattern · 2020-07-30T16:00:08.339Z · LW(p) · GW(p)

Flow:

Immunity passports two months ago.

As in we should have had those two months ago?

Don’t all yell at once. My model Doesn’t think anyone was convinced. Why?

Capitalization in the middle of a sentence is an unusual form of emphasis.

 

Nitpick:

fractal

On one hand this should be "Self similar". But the word "fractal" is commonly used this way.

 

Response:

This is a great post, and I've really appreciated this series. Thank you.

comment by Kerry (ellardk@gmail.com) · 2020-07-30T07:07:06.563Z · LW(p) · GW(p)
The 60%-70% result is based on a fully naive SIR (susceptible, infected, recovered) model in which all of the following are assumed to be true:
People are identical, and have identical susceptibility to the virus.
People are identical, and have identical ability to spread the virus.
People are identical, and have identical exposure to the virus.
People are identical, and have contacts completely at random.
The only intervention considered is immunity. No help from behavior adjustments.

Ugh. I just can't believe how ridiculous this all is, and how no one can see through it, and how those who can don't say anything because they'll get yelled at. And I can't believe someone insisted on using such a model for such major decisions and that our leaders went along with it. But I've seen enough of this stuff to know it's not all that shocking.

I think a lot of people really don't grasp the insight. Like, for me, I can just envision a bunch of people in my head and picture them going about their lives in different ways, and it's very easy for me to see how there would be huge variance here. But most people are shockingly bad at replicating how people behave, especially when it involves a bunch of different behaviors at one time for no real reason. Even though they can see this with their own eyes.

In my head, I immediately run through images of a person who is a loud talker and socializer going around spreading it everywhere. Once he or she stops doing that and gets at least some immunity, you are going to have way fewer cases. I picture an essential worker with a lot of public contact going home and infecting his or her family. Picture these types of people x1000 in a community, and picture what happens when all these people are immune, or, sadly, in some cases, dead. You will most likely see a huge drop in infection rates. Not perfect, but a big drop, and makes social distancing measures more effective for vulnerable people, since they will be less exposed overall. Even if immunity wanes, there will be less virus out there for you to pick up again. I know people want black and white answers, but you can definitely see how it would depend on community dynamics as to when someone infected becomes unlikely to come in close contact with someone who isn't immune.