Hi Luke, thanks for your question!
It's not that we want to go the FDA route per se – the tech that we're developing would simply require it. The expertise that we have at PanLabs is mostly in synthetic biology, which lends itself to developing new biologic drugs, which are regulated by the FDA. However, we're not ruling out the possibility of going the drug-free route with future research programs; it would just be a matter of capacity-building first.
(Side note: You wouldn't use CRISPR or HeLa cells, but rather traditional cloning techniques + any of a number of other cell lines traditionally used for recombinant protein production. But that's tangential to your question.)
I'm far from an expert here, but anything involving cell culture is generally thought to be pretty expensive. The media is expensive, other culture conditions can add to the cost as well (e.g. continuous supply of CO2 for mammalian cells), transfection reagents are expensive, and you have to expend a lot of effort keeping out bacterial/fungal/viral contamination. The inherent variability in biological processes means that you have to deal with batch-to-batch variability in your recombinant protein product, which might mean added expenses in monitoring and analysis (and headaches dealing with regulatory agencies). Basically, cells are fickle and require a lot of babysitting and care.
mRNA vaccines don't have any of these issues, because cell culture isn't involved for the most part. Most everything is done in vitro — in that sense, mRNA vaccine production is more like chemistry than biology. And, therefore, it's more similar to peptide-based vaccine production (chemical synthesis) than protein-based vaccine production is — which, again, is why it's weird to contrast peptide- and protein-based vaccines together against mRNA vaccines.
What name would you use to talk about a chain of two or more amino acids?
Yeah I don't know if there is a term that's not super clunky like "amino acid-based polymer." But I think the more fundamental issue is that it's weird to group peptide-based and protein-based vaccines together for the claims that you're making. You can't both fend off the claim that "whole protein domains are better immunogens than peptides because they have more native-like folds" by saying that "proteins ARE peptides, and are included in my category" AND say that "it's irresponsible for us as a society not to have invested more in peptide-based vaccines because they're so cheap and easy to synthesize", when "cheap and easy to synthesize" don't apply to protein-based vaccines.
I would expect that we have experimentally determined the structure of the native fold. I would expect that our computer models might be good enough to predict how an amino acid chain of <20 amino acids folds.
Yes, I would agree with both of those (although less sure about the second one, given that I'd expect a peptide of <20 amino acids to have an ensemble of conformations; it's one thing to be able to predict the lowest energy conformation, another to predict exactly what percentage of the population is in each of 20 possible conformations). But more importantly, it's difficult to bridge from one to the other. If you're starting with the original peptide sequence, the best you can do is probably introduce a disulfide bond or cyclize it to introduce some conformational constraint, but that's not going to constrain it the same way that the native scaffold would've. You could maybe do de novo computational design of the peptide (not starting with the native sequence) to have the desired fold, but that's more cutting-edge stuff and not what RadVac did, and I'm not sure how well that would work anyway.
I suspect that the low efficacy of mRNA vaccines (only 95% - low is relative) is likely because they're only targeting the spike protein, which apparently has 'high mutant escape potential'. We have a LOT more information about the virus now than we did when the mRNA designs were finalized. If companies had been allowed to make mRNA vaccine updates without resetting all the clinical trials, I believe we'd have a substantially better/stronger vaccine than just 95%, with a single shot instead of two.
Eh, I guess I'm skeptical that it's that easy, even now, to make changes to the mRNA vaccines that would bring us from 95% up to 100% protection (against symptomatic infection), without knowing more about what's happening to that 5%. The spike protein does have "high mutant escape potential" as we are seeing, but I'm not sure that's what's behind 95% vs. 100% efficacy, given that the clinical trial data was gathered mostly before mutants really started taking off. It could just be inherent variability in the strength of people's immune systems. Not that it matters a ton, it's not like those 5% are developing severe disease, and given that I think it's weird to call the efficacy low (even in a relative sense — relative to which more effective vaccines?).
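For reference, trial efficacy figures like the 95% above are just a ratio of attack rates between the vaccinated and placebo arms. A minimal sketch in Python — the case counts here are illustrative, loosely based on the published Pfizer/BioNTech phase 3 numbers, and should be treated as assumptions rather than exact figures:

```python
# Vaccine efficacy = 1 - (attack rate in vaccinated / attack rate in placebo).
# Illustrative counts loosely based on the Pfizer/BioNTech trial readout
# (~8 cases among ~18,198 vaccinated vs. ~162 among ~18,325 placebo).

def vaccine_efficacy(cases_vax, n_vax, cases_placebo, n_placebo):
    ar_vax = cases_vax / n_vax            # attack rate, vaccinated arm
    ar_placebo = cases_placebo / n_placebo  # attack rate, placebo arm
    return 1 - ar_vax / ar_placebo

ve = vaccine_efficacy(8, 18198, 162, 18325)
print(f"{ve:.1%}")  # roughly 95%
```

Note that this is efficacy against symptomatic infection in the trial population; it says nothing by itself about what's different about the 5%.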
Proteins are peptides, whether you give whole proteins, protein domains, or short peptides. With "peptide vaccine" I mean a vaccine that contains peptides, as opposed to one that contains mRNA.
It's worth noting that calling proteins peptides is not standard terminology. If you say "peptide-based vaccines," it's generally understood that you're not including protein-based vaccines (also known as subunit vaccines). And in your original post, it does sound like you are making this distinction. If you aren't making this distinction, then it doesn't quite make sense to decry the lack of focus on "peptide-based vaccines" (including protein-based ones) as opposed to mRNA when there are plenty of protein-based candidates (most notably Novavax) in advanced clinical development.
ETA: Read more about Stöcker's vaccine; it's not a peptide-based vaccine, it's a protein subunit vaccine, using the whole RBD (peptides are usually <50 amino acids, the RBD is a couple hundred). So a different approach than RadVac. Just on priors I'd expect this to be more likely to work.
That leaves the question of what "as close to the real thing" means. We have multiple use cases, like a universal flu vaccine, where the goal is to get the body to develop antibodies precisely against those sections of proteins of the flu virus that are conserved and don't change year to year.
Yeah, that's fair — "as close to the real thing" isn't always the best (e.g., see inactivated or live attenuated vaccines, which have plenty of problems). But when comparing vaccines where a whole protein is presented (so subunit and mRNA) to peptide-based vaccines, I think the fact that the whole protein has a more native-like conformation is a huge advantage in favor of the former. In the example of universal flu vaccines, the most advanced candidates are all presenting whole protein domains, not peptides.
MHC proteins don't take native proteins. They take short peptides. If the short peptide doesn't naturally fold in the same way as it folds in the native protein I don't see how the human immune system would manage to build effective antibodies.
Fair enough for MHCs, but this only holds for T cell-based immunity. B cells, on the other hand, absolutely care about the conformation of the protein as it exists. To build effective antibodies to whole proteins, the immune system selects for B cells that display antibodies that bind well to those whole protein targets (MHCs, and therefore peptides, are involved in this process as well, but that doesn't change the fact that the antibody displayed by the B cell still has to bind well to the whole protein in order for that B cell to be selected for).
There are peptides in the RaDVaC vaccine where they created short peptides that fold into the shape that the native protein has, but that's a different shape than you get if you just take the subsequence of the native protein and let it fold.
It's good that they're doing this (I skimmed the white paper and saw an example where they introduced a non-native disulfide bond), it's probably better than just using the native sequence for all their peptides. But I'd say our tools for this are pretty limited, and you're still going to end up with a crude approximation of the native fold rather than exactly the native fold, which is just better.
The mRNA technology seems to provide no benefit over simply giving the peptides directly but the mRNA researchers really wanted to do fancy research on mRNA.
I think you're giving mRNA vaccines too little credit and peptide-based vaccines too much.* I haven't looked into peptide vaccines much at all but my general impression is that they don't work that well as a class. Hundreds of peptide vaccines have entered the clinic with no approvals (albeit, not all against infectious diseases, many against cancer which is more difficult) — without looking into the details, I couldn't tell you exactly why, but my guess would be lack of efficacy. When you present something to the immune system, it's important that it be in the right shape so that it trains the immune system properly. Peptides, removed from the context of the protein scaffold that they are normally a part of, are very floppy, and there's no guarantee that the conformation a peptide takes will be the same as the conformation it takes in the native protein (more precisely, probably some of the conformations the peptide takes will be similar to the conformations in the native protein, but how many other conformations will the peptide be taking that are dissimilar, and will steer the immune system the wrong way?). A good heuristic is that to have an effective vaccine, you want to present something to the immune system that's "as close to the real thing" as possible. mRNA vaccines, by being compatible with presentation of a full viral protein, allow you to do that. Peptide-based vaccines, less so.
*It's worth noting that mRNA vs. peptide is a bit of a weird distinction, since mRNA is just a delivery mechanism, and so is also compatible with delivering peptides if that's the way you wanted to go.
Looks like we have at least one case of reinfection with one of the new variants containing the E484K mutation (this isn't the South African variant, rather from the Brazilian lineage where E484K independently arose).
Potential counterargument against the above: Well, yes, any individual pharma company might only have one shot, but as a society we could have many different untested vaccine candidates going out to people all at once. We already have a risk diversification mechanism by having many different vaccine candidate options, so we don't need to further avoid risk by doing trials on all of them. And we also don't have to give untested vaccines to literally everybody, just small populations at first.
Response to the counterargument: If the proposal is to give a bunch of untested vaccine candidates to people early on in the pandemic before knowing for sure how they work, haven't you just reinvented clinical trials? Maybe a conception of them that is faster and to a larger population, but still fundamentally the same thing?
So I think that you're right to point out the bias that many experts have in overprioritizing wanting to avoid harm from a vaccine at the cost of ongoing deaths from the pandemic. That's likely a very strong consideration playing into their judgments here, and one that should be fought against.
That being said, I think another concern that is underappreciated in these debates is lack of efficacy, rather than lack of safety. What if—and this is probably more likely—a vaccine doesn't result in any harm, but also just doesn't do anything to prevent disease? What if you've spent months getting pharma to invest time and resources into developing a manufacturing process for a particular vaccine, and getting the vaccine distributed to people, only to find out it doesn't work? How many people will have lost trust in the vaccine development process such that they won't take a subsequent vaccine, even if it works much better? Do you really have the option to "start over"? Maybe more so now than ever the answer is "yes" given the relative agility with which you can move from design to manufacturing process with mRNA vaccines, but with anything cell culture-based you've probably lost months, at which point you might be in a worse situation than if you'd started with trials. This scenario doesn't have to play out with literally zero efficacy; the same considerations apply if you have, say, 30% efficacy. You could blow your shot at developing a 90% efficacy vaccine because you've spent all your resources on the 30% efficacy one.
The implicit theory behind this concern is that you have a limited bank of public trust + resources to work with, such that if you spend it all on the first try and it doesn't work out, then there's no going back. So best to make sure your one try definitely goes well, e.g. by setting up trials first.
I think it's unclear exactly to what extent this theory holds up, but I think it warrants serious consideration. Too often I think these debates stop at "well the expected risk of harm to any given individual from even an untested vaccine is much less than the expected risk of harm from being unvaccinated, so we should give untested vaccines as early as possible" rather than considering the true counterfactual at a society-wide level.
Anyway, these are in a domain that constitutes about 20% of human neutralizing antibodies and are quite possibly somewhat immunologically relevant.
Can you point to where I can find more about this estimate? I'm starting to think of the NTD as more immunodominant than that. E.g., out of 19 potently neutralizing antibodies (nAbs) isolated here, about half were RBD-targeted and half were NTD-targeted. Here you see similar or more NTD-targeted antibodies in the convalescent plasma of 2 patients than RBD-targeted antibodies (both are outnumbered by anti-S2 antibodies, but I'm—perhaps naively—assuming those are mostly non-neutralizing). Deletions in the NTD also seem to be selected for in response to immune pressure (e.g., in response to convalescent plasma therapy, in persistently infected patients).
In particular, N501Y has already been identified as a mutation that increases the strength of binding to the ACE2 receptor. It also functions as a known escape mutation
Good news — maybe N501Y doesn't confer escape from neutralization.
I have seen papers exposing large numbers of recovered human serum samples against various escape mutations. When one at a time was presented, only the weakest ~7% of responses (presumably consisting of the fewest antibody types) were affected at all and only a subset of those dropped below likely having an effect. When two escape mutations were added, it went up to about 20%, still biased strongly towards the weaker responses. But again, not all those responses dropped to zero, some just got weaker.
I feel like I've seen the paper you're referencing but can't seem to find it now. In any case, I've also been surprised by other papers showing large impacts of single mutations against polyclonal sera. For example, the 69/70 deletions (in combination with the maybe-irrelevant D796H) drop neutralization activity against 3 high titer convalescent plasma samples by >50% (table 1, figure 4C here). Also, several variants—of note, especially E484K, found in the South African variant—seem to drop neutralization activity of convalescent sera, including some that are higher titer, several-fold (Fig 5A here). I used to have more faith in the polyclonal response but I'm starting to question it a bit.
Just came across this clip from an interview with Paul Offit that is relevant here: He claims that, out of all the serious side effects resulting from vaccines in the past that he could think of, all emerged within 6 weeks, so the fact that vaccine trials are required to look 2 months after the second dose before applying for an EUA should mitigate most safety concerns.
As another commenter suggested, one exception could be antibody-dependent enhancement (ADE), in which antibodies induced by the vaccine could enhance the severity of subsequent infection—indeed, this concern was not widely appreciated with the dengue vaccine Dengvaxia until years of post-licensure safety follow-up. But at least the specific mechanism of ADE that is operative in dengue is unlikely to be relevant with COVID-19 (ADE with dengue involves vaccine-induced antibodies against one serotype not being able to neutralize another serotype; the antibodies will still bind to the virus, though, and bring the virus to immune cells, which the virus then infects. But SARS-CoV-2 doesn't have multiple serotypes, and does not seem to be able to infect immune cells.)
"mRNA vaccines get produced by hela cells which are a cell line based on human cells and as a result there's less reason to expect food allergy development due to mRNA vaccines."
This isn't quite right. One of the major advantages of mRNA vaccines over, say, recombinant protein vaccines, is that you don't need a cell line at all — once injected into your body, the mRNA finds its way into your own cells and your own cells begin producing the encoded protein—the business end of the vaccine—for you!
The process for producing the mRNA in the first place involves growing up a plasmid (circular DNA template) in bacteria, then isolating the DNA from the bacterial cells, linearizing the DNA template using an enzyme, then transcribing (turning DNA to RNA) the linearized template with RNA polymerase in vitro (i.e., not in cells at all — it's just purified polymerase + DNA). But yes, no particular reason to expect food allergy development.
It's worth noting here that coronaviruses are less vulnerable to selective pressure than most RNA viruses, given that they unusually encode proofreading activity, limiting genetic diversity. This article ( https://www.sciencemag.org/news/2020/03/mutations-can-reveal-how-coronavirus-moves-they-re-easy-overinterpret# ) claims that SARS-CoV-2 accumulates 1-2 mutations per month (its genome has 30,000 bases), which is...enormously inadequate to make even the smallest dent anytime this year in the monumental task that is modulating host immunity to modify onset of fever.
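To put those figures in perspective, a quick back-of-envelope calculation in Python, using the 1-2 substitutions/month and 30,000-base genome length from the linked article (the choice of the upper end of the range is my assumption):

```python
# Back-of-envelope: how much of SARS-CoV-2's genome changes per year?
# Inputs from the linked Science article: ~1-2 substitutions per month,
# genome length ~30,000 bases. Taking the upper end of the range.

GENOME_LENGTH = 30_000
MUTS_PER_MONTH = 2

muts_per_year = MUTS_PER_MONTH * 12
fraction_changed = muts_per_year / GENOME_LENGTH

print(muts_per_year)              # 24 substitutions in a year
print(f"{fraction_changed:.2%}")  # 0.08% of the genome
```

Even at the high end, less than a tenth of a percent of the genome turns over in a year, which is the sense in which the mutation rate is "enormously inadequate" for evolving something as complex as fever evasion on short timescales.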
Good to know, thanks!
Hmm, well that book chapter claims measles and mumps vaccines are produced in chick embryo cell culture, which is different from propagation on chicken eggs. My quick Googling revealed that we don't have a licensed herpes vaccine, and that while there might be one or two smallpox vaccines that are produced in chicken eggs, many are done in cell culture.
You might be right about the broader (and more important) point about ease of facilities repurposing, however - I don't know enough to say, although the table in the book chapter makes me doubtful, given that pretty much all steps in the manufacturing process (production, isolation, purification, formulation) seem unique to each vaccine.
This is correct. We have lots of infrastructure and expertise for making new flu vaccines every year. It's not a good model for how long we should expect safety testing to take for a vaccine for a new virus. We don't have any licensed vaccines for any coronavirus, for example.
FWIW, eggs are actually specific to influenza vaccine manufacturing. Page 3 of this book chapter ( https://reader.elsevier.com/reader/sd/pii/B9780128021743000059?token=F492A74B3C4545B108379536769CF93D7F1DB89321DADE859256496F5D85CB6259372D34376809219BBBE2FFFDEF25FB ) has a really nice table showing the production process of a number of different vaccines - they are all very different from one another. This is why we need new vaccine platform technologies - i.e., tech that can be used to produce multiple different vaccines. mRNA vaccines fall into this category, which is a reason why Moderna's mRNA vaccine candidate for COVID-19 would be so exciting if it works.
It was only a matter of time before somebody tried this: https://www.biorxiv.org/content/10.1101/2020.03.13.991307v1.full.pdf
From the abstract: "Here we demonstrate a CRISPR-Cas13-based strategy, PAC-MAN (Prophylactic Antiviral CRISPR in huMAN cells), for viral inhibition that can effectively degrade SARS-CoV-2 sequences and live influenza A virus (IAV) genome in human lung epithelial cells. We designed and screened a group of CRISPR RNAs (crRNAs) targeting conserved viral regions and identified functional crRNAs for cleaving SARS-CoV-2...The PAC-MAN approach is potentially a rapidly implementable pan-coronavirus strategy to deal with emerging pandemic strains."
Here's a good op-ed on this topic: https://www.nytimes.com/2020/03/04/opinion/coronavirus-buildings.html
The author suggests that the lack of attention on building ventilation is due to uncertainty about how important close contact (i.e., close enough that a person's respiratory droplets could directly land on you) is for transmission, vs. more indirect airborne transmission.
(E.g., from CDC website: "Early reports suggest person-to-person transmission most commonly happens during close exposure to a person infected with COVID-19, primarily via respiratory droplets produced when the infected person coughs or sneezes. Droplets can land in the mouths, noses, or eyes of people who are nearby or possibly be inhaled into the lungs of those within close proximity. The contribution of small respirable particles, sometimes called aerosols or droplet nuclei, to close proximity transmission is currently uncertain. However, airborne transmission from person-to-person over long distances is unlikely.")
You might also be interested in the 1976 mass vaccination program in the US for swine flu, which was a case of perceived overreaction (given the anticipated pandemic never materialized) and also hurt the reputation of public health generally: https://www.discovermagazine.com/health/the-public-health-legacy-of-the-1976-swine-flu-outbreak
Or in "The Cutter Incident" in 1955, where a rush to get a polio vaccine out in advance of the next polio season resulted in some batches containing live polio virus, with several children receiving the vaccine actually getting polio instead: https://en.wikipedia.org/wiki/Cutter_Laboratories#The_Cutter_incident
There's definitely a history of incidents in public health of perceived overreaction followed by public backlash, which could potentially be playing into public health officials' heads nowadays. I don't know if becoming more conservative and less-quick-to-take-action is necessarily a wrong lesson, though – even if you think, just simply on the numbers, that taking preventative measures in each of these incidents was correct ex ante given the stakes involved, reputational risks are real and have to be taken into account. As much as "take action to prepare for low probability, high consequence scenarios when the expected cost < expected benefit" applies to personal preparation, it doesn't translate easily to governmental action, at least not when "expected cost" doesn't factor in "everyone will yell at you and trust you less in the future if the low probability scenario doesn't pan out, because people don't do probabilities well."
This does put us in a bit of a bind, since ideally you'd want to have public health authorities be able to take well-calibrated actions against <10%-likely scenarios. But they are, unfortunately, constrained by public perception to some extent.
Hmm, can you think of a plausible biological mechanism by which a virus could evolve to not cause fever, or to cause fever later than usual? My initial reaction is to be skeptical that fever screening would result in the effects you suggest, mainly because whether or not you get a fever is mostly a function of your innate immune system kicking in and not a function of the virus. Whether or not you get a fever is mostly out of the virus's control, so to speak. The virus could perhaps evolve methods of evading innate immunity, but other examples I've seen of viral adaptation to innate immunity seem like they involve complex mechanisms, which I would guess would not evolve on timescales as short as we're concerned with here (although here I'd welcome correction from someone with more experience in these matters).
But even if there were potential evolutionary solutions close at hand for a virus to evolve evasion to host innate immune responses, I'm not sure that fever screening would really accelerate the discovery of those solutions, given that the virus is already under such extreme selection pressure to evade host immunity. After all, the virus has to face host immune systems in literally every host, whereas fever screening only applies to a tiny fraction of them.
I downvoted this comment (as well as your comment below) for strongly pushing misinformation. As others have noted, the CRISPR/Cas9 system has evolved in bacteria precisely to target viral genomes — "CRISPR is not able to target viruses at all" is simply false. "...and also does not destroy the things it targets" is also false, in a sense; a well-targeted Cas9-induced double-stranded break in the DNA/RNA of a viral genome can certainly disable a crucial viral gene and reduce viral replication, even if you don't consider this "destruction" of the genome.
That's not to say that the CRISPR/Cas9 system is quite ready for antiviral therapy in vivo. One problem is that you could rapidly generate viral escape mutants. Not only do you create selection pressure for the virus to mutate such that your bespoke CRISPR/Cas9 system can't target it anymore, the Cas9 cutting itself guides this process along more rapidly, since double-stranded breaks are often accompanied by random insertions and deletions at the cut site (incorporated during attempted cellular repair of the break). This could potentially be addressed by targeting important, conserved regions of the viral genome and/or by multiplexed editing (i.e., targeting multiple sites simultaneously).
Perhaps a bigger challenge is delivery. Systemic delivery (i.e., throughout the body) is risky, since you can get off-target edits in cells that aren't even infected with virus, which could result in increased risk of cancer or other maladies. Targeted delivery to only a certain class of cells of interest is sometimes possible but difficult. There's also the perennial question of whether or not your looks-good-on-paper molecular mechanism of action translates to real clinical benefits, something that can only really be definitively answered in clinical trials. For example, you might see efficient viral genome cutting in vitro but see no clinical benefit in a patient, because maybe your Cas9 protein doesn't stick around long enough in cells, or maybe you can't get it into enough cells to matter, or it's detrimentally immunogenic, or a host of other hard-to-evaluate-in-advance reasons.
All that being said, this is a real direction of interest and many are looking into it — the OP is not "completely on the wrong track" and this idea is not "nowhere close to the sort of thing that would work". The fact that CRISPR/Cas9 is so programmable makes its potential use as an antiviral therapy exciting and at least worth exploring more, in my view. Here's a nice review if anyone would like to learn more (warning: paywalled): https://www.cell.com/trends/microbiology/fulltext/S0966-842X(17)30093-8
(Comment duplicated from the EA Forum.)
I think the central "drawing balls from an urn" metaphor implies a more deterministic situation than that which we are actually in – that is, it implies that if technological progress continues, if we keep drawing balls from the urn, then at some point we will draw a black ball, and so civilizational devastation is basically inevitable. (Note that Nick Bostrom isn't actually saying this, but it's an easy conclusion to draw from the simplified metaphor). I'm worried that taking this metaphor at face value will turn people towards broadly restricting scientific development more than is necessarily warranted.
I offer a modification of the metaphor that relates to differential technological development. (In the middle of the paper, Bostrom already proposes a few modifications of the metaphor based on differential technological development, but not the following one). Whenever we draw a ball out of the urn, it affects the color of the other balls remaining in the urn. Importantly, some of the white balls we draw out of the urn (e.g., defensive technologies) lighten the color of any grey/black balls left in the urn. A concrete example of this would be the summation of the advances in medicine over the past century, which have lowered the risk of a human-caused global pandemic. Therefore, continuing to draw balls out of the urn doesn't inevitably lead to civilizational disaster – as long as we can be sufficiently discriminate towards those white balls which have a risk-lowering effect.