I wanted to tag this post with the "Health" tag, but while tagging it with any other tag worked, trying to apply this one bugged out: the action timed out or something, and the tag wasn't applied.
I've been recommending the Rootclaim article on the Covid origin question for a while as an example of Bayesian reasoning (with likelihood ratios, priors and posteriors, etc.) that features reasoning so transparent that it seems valuable irrespective of whether it's correct or not. That is, if your priors on the various Covid origin hypotheses differ, or your interpretation of various pieces of evidence differs, then your conclusions will also differ, but at least you could argue fruitfully with the authors of the piece.
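As a minimal illustration of the kind of likelihood-ratio bookkeeping the article does (the numbers below are made-up placeholders, not Rootclaim's actual figures):

```python
# Odds-form Bayesian update: posterior odds = prior odds × product of likelihood ratios.
# All numbers here are illustrative placeholders, not Rootclaim's estimates.

def update_odds(prior_odds, likelihood_ratios):
    """Multiply the prior odds by each piece of evidence's likelihood ratio."""
    odds = prior_odds
    for lr in likelihood_ratios:
        odds *= lr
    return odds

def odds_to_probability(odds):
    return odds / (1 + odds)

# Hypothesis A vs. hypothesis B, starting at 1:4 odds.
prior_odds = 1 / 4
# Three pieces of evidence, each expressed as P(evidence | A) / P(evidence | B).
evidence = [5.0, 0.5, 8.0]

posterior_odds = update_odds(prior_odds, evidence)
print(odds_to_probability(posterior_odds))  # 5:1 odds, i.e. ~0.833
```

The transparency comes from this structure: if you disagree with a prior or with one likelihood ratio, you can swap in your own number and rerun the whole calculation.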
There's a whole popular genre of stories about someone from our world being transported to another (isekai), and a subset of those stories involves kickstarting (parts of) an industrial civilization via mostly just your own knowledge. Examples include the manga Dr. Stone and the Chinese webnovel Release That Witch. Throne of Magical Arcana does the same with scientific knowledge, in a world where the power to cast magic spells derives from understanding the world. And more broadly, there are tons of stories with the same theme that focus on one smaller body of knowledge, like agriculture or medicine.
I understand that malaria resists attempts at vaccination, but regarding your 80% prediction by 2040, did you see the news that a malaria vaccine candidate reached 77% efficacy in phase II trials just last month? Quote: "It is the first vaccine that meets the World Health Organization's goal of a malaria vaccine with at least 75% efficacy."
FYI, your post seems to be full of missing pictures; all of them point to Gmail, so presumably they didn't survive being copied & pasted from an email conversation. (If the post isn't missing pictures for you, I recommend logging out of Google and refreshing the page, or viewing the post from an incognito / private browser tab.)
To add some value to this linkpost, here are my notes from reading this long article:
It's an article on an anonymous blog in 2020. The author does cite their research, though, so you can draw your own conclusions.
The section "What I recommend" lists ten lifestyle recommendations (many of which are quite unintuitive) to reduce the effect of bad air quality on your life expectancy.
Every item in the initial table comparing lifestyle / single event to life cost due to bad air quality is explained and accompanied by citations. Specifically, here's how the article quantifies the harm from PM2.5 particles in the air (the math is based on two big papers, but I couldn't tell whether the interpretation or its implications were plausible):
A heuristic to quantify harms
How much do particles hurt you? While it’s hard to be precise, this section will give two simple heuristics:
A life-long exposure of 33.3 PM2.5 costs 1 DALY. This is best for lifestyle changes. For example, moving from somewhere with no particulates to somewhere with a level of 100 costs 3 DALY.
At 2500 PM2.5, you lose disability-adjusted life in real time. This is best for one-off events. For example, if you’re exposed to a level of 5000 for 3 hours, you lose 6 disability-adjusted life hours.
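The two heuristics can be restated as a little arithmetic sketch (the constants come from the quoted heuristics; the function names are mine, not the article's):

```python
# Sketch of the article's two PM2.5 heuristics. The constants come from the
# quoted text above; everything else is my own restatement.

LIFELONG_PM25_PER_DALY = 33.3  # lifelong exposure at 33.3 PM2.5 costs ~1 DALY
REALTIME_PM25 = 2500           # at 2500 PM2.5 you lose disability-adjusted life in real time

def lifestyle_cost_daly(pm25_level):
    """DALYs lost from a life-long exposure at the given PM2.5 level."""
    return pm25_level / LIFELONG_PM25_PER_DALY

def one_off_cost_hours(pm25_level, hours):
    """Disability-adjusted life hours lost from a one-off exposure."""
    return hours * pm25_level / REALTIME_PM25

print(lifestyle_cost_daly(100))     # ~3 DALY, matching the article's moving example
print(one_off_cost_hours(5000, 3))  # 6 disability-adjusted life hours
```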
In any case, you can disregard this specific heuristic and just act based on the article's specifics:
Air quality in trains and underground stations is apparently *extremely bad*. The numbers are truly ridiculous.
Ultrasonic humidifiers and incense are also really bad.
Candles emit most of their particulates when extinguished, so if you must use candles regularly, extinguish them with a lid.
Cooking emits lots of particulates, so opening a window or using a kitchen range hood helps a ton.
and more; see the section "What I recommend" at the top of the article, with elaboration and caveats at the bottom.
In retrospect, my reading of the post (and my reply) were more uncharitable than I would've liked. To clarify where I'm coming from, it pattern-matched to two things I've grown frustrated with over time: Firstly, it gave me the impression of an outside critique of a field without engaging with its strongest arguments, as happens a lot to the rationality community as well (e.g. here's an old SSC post on the general problem).
And secondly, the final sentence pattern-matched to the ubiquitous "we need systemic change" criticism of effective altruism (subjectively, it appears in every single news article on EA), which doesn't seem particularly fair when everyone in the field is aware that of course systemic change would be better in principle, but it's incredibly unclear how to handle such problems in a tractable manner. (Not to mention that tons of interventions intended to effect systemic change actually perform significantly worse than e.g. cash transfers.)
Finally, when I mentioned you hadn't added an arbitrary number of students to the analogy, I meant that in your modified analogy a single individual seemingly has to save the entire world, whereas once you allow for many students, one way to resolve such a world would be to promote altruism more widely or even help build a community of effective altruism, as Peter Singer has done. Isn't that the kind of systemic approach you were calling for?
I don't find this post valuable. It criticizes a simplified (and very dated) model of someone who has written much more extensively (and recently) about his views. Also, it implies that the analogy would be improved by adding an arbitrary number of ponds, but doesn't adjust it in other ways like adding an arbitrary number of students. And it ignores that the analogy obviously cannot be applied as-is nowadays, because the underlying assumption (saving a life is extraordinarily cheap) is no longer correct (if it ever was); for instance GiveWell estimates that their top charities currently save a life per $3,000 to $5,000 donated.
So how does this post add value to the discourse? Who is supposed to benefit from the exhortation that this "is reason enough to stop, think and try to build a better system"?
Some comments:

* Re: blood-clotting, I think you've bolded the wrong section. "it is difficult to estimate a background rate for these events in people who have not had the vaccine. However, based on pre-COVID figures" is the part to bold, and it makes the rest of the sentence rather pointless: you cannot use pre-COVID figures to estimate the expected rate of blood clots during a pandemic involving an illness that specifically causes blood clots.

* Institutions use extensive amounts of caveats and other forms of blame-avoiding language as a matter of course, but this language doesn't contain much information. That is, irrespective of how high the actual risk is, I wouldn't expect the language to change much. For instance, the phrase "patients should be aware of the remote possibility" is a waste of time for me to read, and for them to write, unless it affects the agency's actual public health guidelines.

* The focus on this one particular symptom is arbitrary. It seems implausible that a drug that actually made people sick would do so only via rare blood clots and only in young people, whereas it's commonplace in bad statistics to find arbitrary problems in arbitrary subgroups. Hence the accusation of p-hacking. This xkcd comic is a decent illustration of how such a thing can happen.
Now contrast that with the real harms caused by delaying vaccinations, as Zvi points out in his essay. Orders of magnitude more people will die due to delayed vaccinations, not to mention the second-order effect of harming vaccine acceptance worldwide for the foreseeable future.
Insofar as one accepts the notion that a) the risk of side-effects is not nearly high enough to warrant this response, and b) the harm to the vaccination effort and to vaccine acceptance is orders of magnitude higher, then the actual political response in Europe looks like gross negligence, malfeasance, or outright malice - not of the form "let's intentionally hinder vaccination and get people killed" (which I agree would be implausible comic-book villainy), but rather of the form "as a politician, I only care about avoiding blame; I don't care if my (in)actions kill thousands, as long as I'm not blamed for this", which has to me become an increasingly plausible lens through which to see politics. Here is one Zvi post on the politics of blame-avoidance and inaction.
For me personally, the part during Covid that soured me a ton on EU competence was this: The politicians were so worried about being blamed for wasting taxpayer money on expensive vaccines that they negotiated lower prices in exchange for receiving the vaccines months later. This calculation was so crazy and wrong-headed that you kind of need something like Zvi's blame-avoidance model to make sense of it.
> My takeaway from Vitalik’s journey is that it took $50,000 worth of time and technical expertise to make that $50,000
My key takeaway here, besides the technical expertise required, was that in terms of capital requirements, current prediction market designs make it very easy to push a probability away from 0 or 1, and very hard to push towards it. (Vitalik even responded to this experience with a prediction market design that does not have this problem. Maybe something will come of this.)
Anyway, Vitalik needed ethereum worth ~$1 million to put DAI worth ~$300k into the market, for which he got ~$50k profit, while those betting on the other side only had to put in that ~$50k. Assuming they had pursued the same DAI-based strategy as Vitalik, which seemed to require holding 3x the bet amount in ethereum, they still would've only needed ~$150k in ethereum.
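A rough back-of-the-envelope version of that capital asymmetry (figures from my comment above; the 3x collateral factor is the assumption that the other side uses the same DAI-based strategy, not something Vitalik reported):

```python
# Rough capital requirements for the two sides of Vitalik's bet, using the
# figures quoted above; an illustration of the asymmetry, not his accounting.

bet_size = 50_000           # Vitalik's profit, i.e. the other side's stake
dai_deployed = 300_000      # DAI Vitalik actually put into the market
eth_collateral = 1_000_000  # ETH he needed to hold to back that DAI
collateral_factor = 3       # assumed ETH-to-bet holding ratio (~3x, per the comment)

# Capital needed by the side pushing the probability toward the extreme:
other_side_direct = bet_size                            # $50k if they just bet
other_side_same_strategy = collateral_factor * bet_size  # $150k in ETH with the DAI strategy

print(eth_collateral / other_side_direct)         # 20x more capital on Vitalik's side
print(eth_collateral / other_side_same_strategy)  # still ~6.7x more
```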
It should be counted as granting $154m, though, since the $150m grant was a grant to a third party that then went to the Serum Institute, too. Not that I understand why they did it that way, but I guess that can be chalked up to charity bureaucracy or something.
Though if you mean to say that a grant made in December 2020 doesn't have the same weight as it would've had half a year earlier, that's a point well-taken.
A final note is that no one is denying that Vitamin D deficiency is very highly correlated with bad Covid-19 outcomes. The world in which Vitamin D supplementation doesn’t help is the world in where there holds some combination of (A) [...]
Thus, if you go to the doctor and they measure your Vitamin D levels as sufficient, that definitely is very good Covid-risk news for you personally. If they measure your levels as insufficient, that definitely is very bad Covid-risk news for you personally.
IIRC Scott offers at least one other explanation E), namely that illness might reduce your vitamin D levels. Hence a low vitamin D level would be a symptom of the illness, without implying that starting with a higher vitamin D level would've helped against it. From this perspective, a low vitamin D level is weak Bayesian evidence for having Covid, but presumably you can't just take vitamin D to change that.
This scenario reminds me of the hypothesis in Why We Get Sick (paraphrasing from memory) that low iron levels in pregnant women were not necessarily a problem to be remedied via iron supplements, but could instead be an evolutionary mechanism to deprive bacteria of crucial resources to decrease the risk of illness during pregnancy.
Also worth noting that Ada got the majority of his funding from the Gates Foundation, so they did end up helping build useful capacity in at least one case.
Nitpicking: The original article says "Of the $800m (£579m) we needed, we put in $270m and the rest we raised from the Gates Foundation and various countries.", which implies they got the majority of their money from elsewhere, but not that it was all from the Gates Foundation, which may or may not have provided a majority of the funding.
Digging deeper: The Gates Foundation website lists a direct grant of $4m, and this article mentions a $150m grant which should be this one by the Foundation, but which only happened in December 2020, leaving me confused regarding the timeline in the original interview. Maybe of the $800m they needed, they got other funding first, or they only needed most of the funding for the final step of scaling up production? And I guess they might have contributed more to the Institute in other grants I didn't find; their grant payments database lists 466 potentially Covid-related grants since 2020.
What has scaling up involved, practically speaking?
We committed ourselves to Covid-19 in March. We took a huge risk, because nobody knew then that any vaccine was going to work. Of the $800m (£579m) we needed, we put in $270m and the rest we raised from the Gates Foundation and various countries. We dedicated about 1,000 employees to the programme and deferred all product launches planned for 2020 for two to three years, so that we could requisition the facilities allocated to them. Then it was a question of equipping those facilities and getting them validated for use, which we did in record time. By August we were manufacturing and stockpiling a vaccine that we predicted, correctly, would be approved around December.
If the vaccines need to be adjusted to protect against future emerging variants, how much of a challenge will that be?
It would be simple now that the processes are up and running. We grow the virus in living cells, so we would simply change the master clone – the virus with which we infect those cells – and that then propagates through them. It would take us two to three months to start producing the new vaccine at capacity.
Reiterating a Zvi point:
What I find really disappointing, what has added a few months to vaccine delivery – not just ours – is the lack of global regulatory harmonisation. Over the last seven months, while I’ve been busy making vaccines, what have the US, UK and European regulators been doing? How hard would it have been to get together with the World Health Organization and agree that if a vaccine is approved in the half-dozen or so major manufacturing countries, it is approved to send anywhere on the planet?
Instead we have a patchwork of approvals and I have 70m doses that I can’t ship because they have been purchased but not approved. They have a shelf life of six months; these expire in April.
President Biden says that the U.S. will have enough coronavirus vaccine to inoculate 300 million Americans by this summer. Biden says Moderna and Pfizer will deliver the doses by the end of July, more than a month earlier than initially anticipated.
Re: your speculation regarding future transportation costs, I vaguely recall something by an economist a couple of years ago (maybe on the Krugman blog), stating that economic reasons by themselves were enough to ensure that transportation couldn't get arbitrarily cheap. But I can't recall the specifics.
There's a paragraph that only says "Without vaccinations, they "
There's another paragraph that ends with a comma: "If you don’t want to succeed, there are always plausible ways to not succeed. For example," But maybe that's intentional to lead into the following paragraph with "California has decided to [...]".
It's amazing how many problems were caused by using a political rather than monetary prioritisation scheme for the scarce vaccines.
Lots of head-meets-wall moments this week. Like every week. My "favorite" as an EU citizen was the EU criticizing the UK for trying to secure more than way-too-few vaccines. The notion that an entire first-world continent somehow failed so epically in one of the single most important challenges of 2020 (namely, securing enough vaccine) indicates that our leaders somehow weren't living anywhere close to reality.
And that no leaders worldwide thought of any non-zero-sum solutions to the problem (like paying to increase vaccine production capacity) does not bode well for our ability to solve more difficult coordination problems.
These narratives are frameworks, or models. There's the famous saying that all models are wrong, but some are useful. Here, the narratives take the complex world and try to simplify it by essentially factoring out "what matters". Insofar as such models are correct or useful, they can aid in decision-making, e.g. for career choice, prioritisation, etc.
Even Less Wrong itself was founded on such a narrative, one developed over many years. Here's EY's current Twitter bio, for instance:
Ours is the era of inadequate AI alignment theory. Any other facts about this era are relatively unimportant, but sometimes I tweet about them anyway.
Similarly, a political science professor or historian might conclude a narrative about trends in Western democracies, or something. And the narrative that "Everyone is going to die, the way things stand." (from aging, if nothing else) is as simple as it is underappreciated by the general public. If we took it remotely seriously, we would use our resources differently.
Finally, another use of the narratives in the OP is to provide a contrast to ubiquitous but wrong narratives, e.g. the doomed middle-class narrative that endless one-upmanship towards neighbors and colleagues will somehow make one happy.
In response to your second point re: free speech, a cross-post of a comment I made on Facebook on a related issue:
I'm not from the US, but despite knowing the common counter-arguments, I don't understand how platform censorship is consistent with your 1st amendment.
Technically, the 1st amendment only prevents the government from censoring stuff; in practice, that has IIRC meant that e.g. a recruitment Twitch stream by the US military is arguably not allowed to block spam.
And if that isn't allowed, surely a system where any powerful member of government can pressure any private platform holder to censor arbitrary stuff doesn't make sense. All you've done is to add a level of indirection to the government censorship. Here's a story by Glenn Greenwald on the issue of platform censorship, and he ultimately resigned from The Intercept because he got censored while trying to report on the same story, too.
The notion of specificity may be useful, but to me its presentation in terms of tone (beginning with the title "The Power to Demolish Bad Arguments") and examples seemed rather antithetical to the Less Wrong philosophy of truth-seeking.
For instance, I read the "Uber exploits its drivers" example discussion as follows: the author already disagrees with the claim as their bottom line, then tries to win the discussion by picking their counterpart's arguments apart, all the while insulting this fictitious person with asides like "By sloshing around his mental ball pit and flinging smart-sounding assertions about “capitalism” and “exploitation”, he just might win over a neutral audience of our peers.".
In contrast to e.g. Double Crux, that seems like an unproductive and misguided pursuit - reversed stupidity is not intelligence, and hence even if we "demolish" our counterpart's supposedly bad arguments, at best we discover that they could not shift our priors.
And more generally, the essay gave me a yucky sense of "rationalists try to prove their superiority by creating strawmen and then beating them in arguments", sneer culture, etc. It doesn't help that some of its central examples involve hot-button issues on which many readers will have strong and yet divergent opinions, which imo makes them rather unsuited as examples for teaching most rationality techniques or concepts.
Zvi, my heartfelt thanks once again for the herculean efforts you're going to in creating these Covid posts.
Tiny formatting error: The link on "even though doing that made them look superficially like the complete and total idiots they are:" points to a Google Doc that prompts me with "You need access" instead of to the Twitter image immediately below.
Here is how it works for the MMR vaccine (measles, mumps, and rubella):
Children should get two doses of MMR vaccine, starting with the first dose at 12 to 15 months of age, and the second dose at 4 through 6 years of age.
Or here are the WHO recommendations for various vaccines (from the larger source here) for children. The times in the Booster Dose column span months or years. This doesn't quite answer the question of how soon you need to be revaccinated, but does provide an intuition for the orders of magnitude typically involved, at least for children. Though besides age, I suppose part of the epidemiological reasoning here must involve not just the reaction of the immune system, but also the prevalence of the disease.
Thanks for writing and finishing this story! There's something particularly commendable about the act of finishing any kind of project. Kudos!
"In the First Wizarding War, your mother contributed to the development of a secret weapon intended to neutralize Lord Voldemort." (chapter 4)
Given the incompetence of the wizarding world, it seems very appropriate that the way they'd come up with to defeat a powerful evil wizard would be to... offer him godhood and make him someone else's problem.
Alternative story ending: Lord Voldemort defeats himself by using Legilimency on Luna.
The curse on the position of Hogwarts' Defense Professor still seems to be active. How unfortunate for Mr. Lockhart. Though what was he thinking when he accepted the position in the first place?
Does anyone know what the 7 and 0s thing on the astrolabe is a reference to? Same question regarding the tactical reality anchors.
Okay, so he might still have some of the secrets of Slytherin, but he can't speak, walk, stand, or do anything else a toddler couldn't do - much like Harry when he was an infant in Godric's Hollow, minus the hands. He also lacks a developed personality, and his personas like Quirrell and Voldemort are gone for good. I'll grant you that if amnesiac!Tom were put into the same position as infant!Harry and were adopted by HJPEV's parents, he might again grow up to become a dark wizard. But he doesn't have 10 years' time to do that.
Put differently, what did you think Obliviating Voldemort in chapter 115 was supposed to do?
I'm disappointed at the lack of acknowledgement that Lord Voldemort should be unable to do anything due to being supposedly killed for good by Obliviation, and this includes being of any help in Luna's ritual which requires the secrets of Slytherin. Having HJPEV's main accomplishment from HPMoR undone by a single Finite Incantatem is deeply unsatisfying and entirely implausible.
I'm still not clear on what the astrolabe actually does. Is it a time machine that violates the 6-hour limit, after all?
"According to Luna's notes, they had all three components of the ritual: [...] The power of Gryffindor. [...] The intellect of Ravenclaw. [...] The secrets of Slytherin. [...] The sacrifice of Hufflepuff."
Scary. So besides Luna's atypical personality, supposedly part of what made the story so disjointed was the nargle erasing things that would've tied the scenes together.
The nargle strongly reminds me of Ur in Wildbow's Pact story, or a number of entities in the Antimemetics Division of the SCP. As scary, too.
Lord Voldemort's legillimancy tore through Luna's mind. -> Legilimency (at least it's always capitalized in the official wiki)
"Uh-huh," said Hadmistress McGonagall. -> Headmistress
There had never been third cauldron. -> a third cauldron
"The Forgotten Library was a regular hexagon centered around a giant Pensieve. Seven giant shelves radiated toward the corners. [...] Six desks were interlaced between the seven shelves." -> I don't understand. Where in this symmetrical library with six sides and six corners do you fit seven shelves in a regular pattern?
Yeah, I read the justifications back then, and I'm kind of beating a dead horse here, but this argument implies that Occlumency is useless against an opponent who has both physical force and a mind-affecting ability like Legilimency or Obliviate. And if HPMoR-verse worked like that, then this part from chapter 27 no longer applies:
Even the best Legilimens could be fooled that way. If a perfect Occlumens claimed they were dropping their Occlumency barriers, there was no way to know if they were lying. Worse, you might not know you were dealing with a perfect Occlumens. They were rare, but the fact that they existed meant you couldn't trust Legilimency on anyone.
And instead we have this, from the same chapter:
And so the race between telepathic offense and telepathic defense had been a decisive win for defense. Otherwise the entire magical world, maybe even the whole Earth, would have been a very different place...
Despite my misgivings, in the HPMoR finale Obliviate worked on a Legilimens like Voldemort. So I don't particularly see what scary things he could do at this point without the memories of an adult human, let alone an evil wizard.
That said, Mad-Eye Moody teaches us Constant Vigilance. Given what we know so far, what are some things to worry about in the finale?
a) Finite Incantatem might work on Obliviate. I don't think it does (Obliviate seems like a rather permanent erasure), but if it does, it would restore Lord Voldemort's memory.
b) Gilderoy Lockhart might kill the Obliviated Lord Voldemort with the Sword of Gryffindor. In that case, Voldemort would be resurrected by one of his gazillion horcruxes, and it doesn't seem inconceivable that he'd have a failsafe to restore his memory at that point.
c) There's a nargle on the way, which sounds like an eldritch monster à la SCP. Luna's plan requires the magical secrets of Salazar Slytherin to kill it, and I'm not sure whether either Harry *or* the Obliviated Voldemort have them.
Actually, if "degrees east" was correct, I think maybe the next line should also change:
"An astrolabe displays the universe's location relative to itself. Luna set the latitude dials to -34.277 degrees and the longitude dials to 108.945 degrees." -> -108.945 degrees (Assuming the first sentence implies that one has to negate both coordinates, as happened with the north coordinate.)
The GPT-3 vibes are getting stronger. I don't entirely see the story significance of the second and third part of this chapter.
I guess... Luna needed a place covered by the Fidelius Charm to protect herself from Rowena's Basilisk (like in the Antimemetics Division) while using her mother's astrolabe, and for some reason the one such place anyone in the story knew about was in China?
On a separate note:
"I solemnly swear I am up to no good."
This was an amusing part of canon Harry Potter, but in the Less Wrong / HPMoR meme space, I find this oath very unsettling. It's like the antithesis of Harry's Unbreakable Vow not to destroy the world. And using the Marauder's Map this time indeed results in a terrible misunderstanding.
a) According to Eliezer, the Fidelius Charm was immediately used in 683 CE to hide the charm itself and therefore doesn't show up in HPMoR-verse. So this chapter doesn't seem consistent with HPMoR-canon - Luna certainly couldn't mention the Charm unprompted. Which is fine, though does have to be pointed out, since the HPMoR story couldn't have happened the way it did if the ridiculously overpowered Fidelius Charm existed and was still available in modern times.
b) Regarding the astrolabe: It may not necessarily violate the time travel restrictions in HPMoR, but doesn't it violate or circumvent the Interdict of Merlin, as it would allow its user to spy on powerful wizards of the past? I suppose the Interdict could still kick in and blur the representation in the astrolabe.
According to this, you have to be a Parselmouth to open the secret entrance in the bathroom. Since you IIRC can't just randomly copy the sounds, this implies that Luna is a Parselmouth. However, this is seemingly contradicted by her making random snake sounds in front of HJPEV, who did not immediately react by treating her as a potential Voldemort clone.
I'm fascinated by how the story only implies many things without stating them outright, and somewhat frustrated when I don't entirely understand the implications. (But I do appreciate how that makes for a narrator who's simultaneously untypical and unreliable, plus it's neat how my experience in this regard somewhat mirrors the bewilderment of other characters who interact with Luna.)
What was the brilliant idea that briefly occurred to Lockhart, before his face fell? To take advantage of his student?
How did Luna come to represent Ravenclaw at the dueling tournament? Did she sleepwalk even during Lockhart's class, and somehow win the spot by casting spells while sleepwalking?
Over the years I've become increasingly more negative about bureaucracy and regulations, so I appreciate this essay that contextualizes such issues and attempts to explain how such social institutions evolve. That said, I only read the essay during the Nomination process, not in 2019, so I can't say whether it will affect my thinking long-term.