Somehow for me lots of images in this post don't load and can't be reloaded in-page, but do load if I load the individual images in a new tab. How is it even possible that the web works like this?_? (If it matters, I'm on Firefox.)
This question seems susceptible to base rate problems.
The linked document in the quote is by the Bureau of Labor Statistics. It covers fatal occupational injuries, and according to the graph on page 2, the most common category of fatal injuries, by far, is Transportation incidents, which presumably include car and truck accidents.
And page 3 of the report (with a corresponding table on page 9) compares fatal injuries by select occupations, and has fatality rates for "Structural iron and steel workers" at the same rate as "Driver/sales workers and truck drivers", and below e.g. Roofers (2x the fatality rate), Aircraft pilots and flight engineers (2.4x the rate), and especially fishing and hunting workers (5.5x the rate).
Unfortunately the report didn't provide fatality rates for e.g. desk jobs, and I don't have the energy to look those up. Suffice it to say that once you've reached the point where your physically demanding job has a fatality rate on par with fatality rates in car-based professions, it may still be possible to improve, but it probably won't be easy.
Welcome from a fellow German here! IIRC I also stumbled on Less Wrong via HPMoR, though back then the story wasn't even finished yet.
I must say, I'm impressed with the quality of your English writing at that age!
If you're ambitious and driven to choose a career to make the world a better place, check out the resources at 80,000 Hours from the Less-Wrong-adjacent Effective Altruism community. They've done lots of research and thinking about various career paths, their expected impacts, requirements, etc. They're not perfect, in that they e.g. expect a lot from their readers, and below a certain level of ambition and conscientiousness much of their advice might not be particularly applicable. But now might be a good time to check whether their resources could be useful to you.
If you think you could benefit from chatting with someone to get a rough overview of the landscapes of Less Wrong or effective altruism, I'm available to chat. I'm mostly a longtime lurker in the community, but I do have enough familiarity with it that I can at least point towards further resources on most topics.
Question on LW norms: When do you strongly upvote your own comments? Never? Always? If you're very confident in the comment? If you think the comment is particularly valuable? If the comment was time-consuming to write?
I'm not sure whether your sequence will touch on this, but the things that make me hopeful in this space are not techniques and strategies for individuals (which might require training, or willpower, or shared values), but rather suggestions for novel institutions and mechanisms for coordination.
For instance, when you take a public goods problem (like pollution or something), expecting tons of people to agree or negotiate on how to resolve the problem seems utterly intractable, whereas if you could have started with a market design which internalizes such negative externalities, the problem might mostly resolve itself.
Since successful longstanding institutions (like nations) necessarily have a strong bias towards self-preservation, however, I don't really see how most novel mechanisms could possibly be implemented (e.g. charter cities are a great idea, but pretty much all nations are deeply skeptical of them due to highly valuing their sovereignty).
One avenue that seems like it could have a bit more hope is in the cryptocurrency sphere, if only because it's still quite new, plus it's also inherently weird enough that people might not immediately balk at bizarre-sounding concepts like quadratic voting.
Summary of the essay "Alternatives to selling at below-market-clearing prices for achieving fairness (or community sentiment, or fun)": Why do people sell concert tickets below market-clearing prices? This has big negative consequences like incentivizing scalpers, but also some advantages like following some intuitive principles of fairness (e.g. not locking poor people out of the market), as well as more cynical reasons like "products selling out and having long lines creates a perception of popularity and prestige"; etc. So the post suggests a market design that allows selling at mostly market-clearing prices while still preserving e.g. fairness, and concludes with: "In all of these cases, the core of the solution is simple: if you want to be reliably fair to people, then your mechanism should have some input that explicitly measures people. Proof of personhood protocols do this (and if desired can be combined with zero knowledge proofs to ensure privacy). Ergo, we should take the efficiency benefits of market and auction-based pricing, and the egalitarian benefits of proof of personhood mechanics, and combine them together."
The essay Moving beyond coin voting governance points out in an aside that cryptocurrencies invest ridiculous sums in network security (proof of work), e.g. here's a chart of spending on proof of work vs. research & development. The difference is that network security was considered a public good during the design of the cryptocurrency protocols, while e.g. research wasn't. So the former gets huge amounts of funding; the latter doesn't. (Though the essay also points out that rewarding R&D explicitly would compromise its independence etc.)
Other mechanisms include quadratic voting, which was IIRC also used here on Less Wrong for the 2018 Review; as well as the related concept of quadratic funding.
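For concreteness, the standard quadratic funding matching rule mentioned above (match a project with the square of the sum of square roots of its individual contributions) can be sketched in a few lines; the dollar amounts below are invented for illustration:

```python
import math

def quadratic_funding_match(contributions):
    """Quadratic funding (CLR) formula: a project's matched amount is
    the square of the sum of square roots of individual contributions.
    The subsidy is this match minus the raw sum of contributions."""
    return sum(math.sqrt(c) for c in contributions) ** 2

# Many small donors beat one large donor with the same total:
broad = quadratic_funding_match([1] * 100)  # 100 donors giving $1 each
narrow = quadratic_funding_match([100])     # 1 donor giving $100
print(broad, narrow)  # 10000.0 100.0
```

The design choice this illustrates: the mechanism rewards breadth of support over depth of any single pocket, which is exactly the "explicitly measure people, not just money" idea from the essay summarized above.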
To what extent these benefits will actually materialize is of course still an open question, but conceptually, this sounds like the right approach: align incentives, internalize externalities, consider public goods from the start, etc. Try to improve systems, rather than people.
What's the basis for your assumption that this proposal would be more politically palatable than open borders? That seems nigh-inconceivable to me. ACX has written a few posts on charter cities, including on their role in the U.S., and I did not get the idea that they would be remotely politically feasible there.
(1) Prior to covid, I was underrating how risky it is to get sick, because I was not accounting for the risk of chronic illness. I needed to update that prior, and take more general precautions against getting sick, period.
(2) Because chronic illness is not a unique or even (apparently) particularly special risk of COVID, fear of chronic COVID specifically should not change my risk calculus or precautions overall.
So I am simultaneously being more careful than I was before the pandemic, and less careful than my friends who still think "long COVID" poses a unique and novel threat that requires extra-special risk avoidance.
I don't have the expertise or training to evaluate detailed medical claims myself. I wasn't even able to find sources for the blood-brain-barrier thing (neither claims nor rebuttals), except for this thread on askreddit which I was too exhausted to peruse. In any case, at this point the discussion is not about medicine but about epistemology.
I have not yet been convinced that the vaccine causes brain damage. I think that at the very least, that argument requires sources for both a link between mRNA vaccines and an inflamed brain, and for the claim that this is an exceptional occurrence / that this is something worse than what happens in e.g. an average fever.
I guess my prior is that bodies are pretty robust, and that most contrarian claims are wrong. Identifying correct contrarians is hard.
My god-of-the-gaps comment was directed at what I perceived as a complex hypothesis which looked like it was (over?)fitted to the available evidence. In such a situation, one can't falsify the hypothesis without new evidence, even though one figures there should be plenty of evidence regarding most conceivable side effects by now.
I do agree about the issues with doctors, though. I have had several suboptimal encounters with the medical system, which have left me rather unimpressed with medical care (diagnosis in particular). I have an essay draft on this topic, but it's going to be a long while until I get to it.
Epistemically, this kind of argument reminds me of god-of-the-gaps or shrinking parameter spaces in string theory. That doesn't make the argument wrong, but it means that I don't really see a fruitful way to engage with it.
I suppose that if one's prior is that this kind of risk is negligible, the argument will sound unconvincing, whereas if it sounds plausible a priori, then lack of such studies seems concerning? Let's leave it at that. Though I could be convinced otherwise if I learned that this concern was taken seriously by a significant fraction of doctors or other public health professionals.
(Meta comment: Formatting is in rich text by default; selecting text displays a hover with formatting options. You can switch to markdown formatting in your user settings ("Activate Markdown Editor").)
If you had infinite time and resources, you'd ideally test for all conceivable outcome variables when designing clinical trials for anything. Of course there's always a chance that something was missed in the trials, but it certainly matters what that chance is. Do we have reason to believe it to be non-negligible, now that more than enough people have been vaccinated for even the tiniest of risks to manifest themselves?
In any case, if someone is specifically worried about the novel mRNA vaccines, they can take one of the classically produced vaccines instead.
There's no good reason to use a new and risky process like the mRNA process.
... there's no reason to take that risk with a speed up approval process.
... What about the higher efficacy of the mRNA vaccines?
(I also tried to look up a timeline of manufacturing volume by vaccine type, but unfortunately couldn't find anything useful. I had had the impression that the mRNA vaccines had been quicker to manufacture.)
An omniscient being could make a full cost-benefit analysis on this kind of stuff, but we have to reason under uncertainty, and things certainly don't look so clear-cut to me.
Suggestion: Pre-register what you imagine a truly safe approval process for a vaccine would look like. Take notes on a piece of paper: How many clinical trials would you expect? How long should they take? If a vaccine successfully passes all trials, how much time should official institutions like the FDA take to review the evidence before making a decision on whether to authorize use of the vaccine?
Finally, does the process you've imagined account for the opportunity cost of delay during an emergency?
Only afterwards, look up how the vaccines were actually approved. For instance, this study (section "Results") provides an overview of how various vaccines were authorized in the USA, EU, and Canada.
Then compare the actual approval process with what you imagined it should look like. Where do they differ?
The first part is just a jab at politics. IIRC the second part comments on some kinds of new proposed cryptocurrency regulations. When politicians regulate new technology, they compare it to old technology, and sometimes these comparisons make no sense and put an undue regulatory burden on the new technology. From what I understand, the proposed regulations could treat cryptocurrency miners as brokers, in which case miners would presumably fall under the purview of some kinds of financial regulation.
If this article is to be believed, the legislators tried to fix that problem. The article's last section implies that this attempt failed, however:
Updated to add
The $1tr infrastructure bill has passed its Senate vote by 60-33, but the new terminology for cryptocurrency miners and researchers didn't make it into the final legislation.
As so often happens in the Senate these days the bi-partisan amendment was blocked. Unanimous consent for the amendment was required and Senator Richard Shelby (R-AL) objected, effectively killing the changes that had taken weeks of careful negotiating.
It's now up to the House of Representatives to come up with its own version of the language in its bill, so it's not over yet.
I used to think the same way, and I still google things a lot, but at some point I had the vivid impression that the "Google Oracle" had been compromised or seriously deteriorated in quality.
If I want to find a particular product on Amazon, or a particular game on Steam, Google will pretty much always find it. But if there's no straightforward way to make money off of answering my question (or at least that's my impression), then Google will usually try to answer a similar-sounding question instead, one somebody could make money off of.
Some consequences: Idiosyncratic questions get banal answers to similar-sounding but uninteresting questions. Any questions about product comparisons are answered by pages upon pages of auto-generated product comparisons (which are usually crap) full of affiliate links to Amazon, irrespective of which product features I asked about and whether they're even mentioned in the "answers". Any medical questions (like "Is X unhealthy?") are routed towards useless websites like WebMD. Lifestyle questions are answered by pages upon pages of essentially the same article written by different writers, all providing the same answer based on Institutional Common Sense or the same source material; often there's no diversity of opinion in sight. And so on.
I'm curious why we experience search engines so differently - maybe we ask different questions, or we have different expectations or something, but my personal impression is that Google used to be the Oracle you mentioned, but that it lost its powers to Goodharting years ago.
I agree in principle that We Can Do Better, but would caution that these kinds of discussions should either explicitly ban (pseudo) island nations like Australia, South Korea, Taiwan etc., or argue how their (sometimes temporary) superior performance isn't entirely dependent on lucky geography which can't be replicated by most nations.
Are there any non-island-ish nations that have had similarly successful early Covid policy?
For another essay from the community about the fish oil story, ACX wrote a post on it in 2021. Though it's been a while since I read this chapter from Inadequate Equilibria, and so I don't remember how Eliezer's treatment of the story differs from Scott's.
It's a competition, i.e. only one of the services they eventually intend to offer.
The tokens have value because there are prizes: "But of course, since it’s a competition, there are prizes to be won — users can trade their way onto the ROI Leaderboard for a share of the competition prize pool."
Prizes are financed via the interest gained by lending away the USDC for the duration of the competition (just as if Hedgehog were acting as a classical bank, I suppose): "Thanks to DeFi composability, Hedgehog can direct the USDC staked by users towards a Solana lending protocol for the duration of the competition. All of the yield generated from these deposits goes back to users in the form of competition prize pools."
The linked post ends with a full page of disclaimers.
With regards to locking up USDC: I've only just started reading up on Ethereum (e.g. this post), but from my very rudimentary understanding, I suppose the point here is that USDC is a stablecoin pegged to the price of 1 USD, so locking up USDC does not expose you to the same volatility as would happen if you locked up the equivalent amount of ETH instead.
Thanks for writing this! I got lots of food for thought.
With regards to r/calledit, I looked through the top 30+ posts, but I wasn't as impressed.
The Betty in 2021 thing is obviously great in multiple ways, and unfortunately so was this Onion satire.
But most of the Mark My Words statements seem like obvious cases of survivorship bias, i.e. if enough people make enough predictions, eventually some will turn out true. And I'm averse to even giving those credit, since they lack both confidence intervals and the context of these redditors' other predictions.
The Ruth Bader Ginsburg prediction looks very specific, but old people have a non-negligible chance to die each year, and the followup was inevitable in an increasingly partisan congress.
The 2nd-most upvoted post on the subreddit is this, supposedly predicting the Among Us game craze of 2020, but it's actually a photoshopped fake of this post.
At the lefthand side, we have about 3 cases per 100k among the vaccinated and 9 among the unvaccinated, a ratio of 3:1. That’s a surprisingly small ratio.
The San Diego ratio is presumably "surprisingly small" because they're doing the dumb thing where they compare the "fully vaccinated" (defined as anyone who has had all doses of their vaccine for >=14 days) with the "not fully vaccinated" (defined as the remainder of the population, i.e. no vaccine, or partly vaccinated for any vaccine that gets multiple doses, or fully vaccinated but <14 days since final dose). Since those categories don't cleanly carve reality at its joints, the resulting 3:1 ratio isn't particularly meaningful without doing more math.
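To see why lumping groups together muddies the ratio, here's a toy illustration, with all numbers invented (nothing here is San Diego data):

```python
# Invented case rates per 100k in the three true groups:
rate_unvaccinated = 12.0
rate_partial = 4.0   # partially vaccinated, lumped into "not fully vaccinated"
rate_full = 3.0

# Invented population sizes:
pop_unvaccinated = 300_000
pop_partial = 200_000
pop_full = 500_000

# The reported "not fully vaccinated" rate mixes the first two groups:
cases_not_full = (rate_unvaccinated * pop_unvaccinated
                  + rate_partial * pop_partial) / 100_000
rate_not_full = 100_000 * cases_not_full / (pop_unvaccinated + pop_partial)

print(rate_unvaccinated / rate_full)  # true ratio: 4.0
print(rate_not_full / rate_full)      # reported ratio: ~2.93
```

Here the true unvaccinated-to-fully-vaccinated ratio is 4:1, but the reported ratio shrinks toward 2.9:1 simply because the partially vaccinated pad out the "not fully vaccinated" group.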
I don't know to what extent that problem affects the conclusions in the rest of that section, though.
I wanted to tag this post with the "Health" tag, but while tagging it with any other tag was possible, trying to use this tag bugged out, i.e. the action timed out or something, and the tag wasn't applied.
I've been recommending the Rootclaim article on the Covid origin question for a while as an example of Bayesian reasoning (with likelihood ratios, priors and posteriors, etc.) that features reasoning so transparent that it seems valuable irrespective of whether it's correct or not. That is, if your priors on the various Covid origin hypotheses differ, or your interpretation of various pieces of evidence differs, then your conclusions will also differ, but at least you could argue fruitfully with the authors of the piece.
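For anyone unfamiliar with the style of reasoning meant here, the odds form of Bayes' rule is a one-liner; the numbers below are invented for illustration, not Rootclaim's:

```python
def posterior_odds(prior_odds, likelihood_ratios):
    """Odds form of Bayes' rule: multiply the prior odds by each
    likelihood ratio P(evidence | H1) / P(evidence | H2)."""
    odds = prior_odds
    for lr in likelihood_ratios:
        odds *= lr
    return odds

def odds_to_probability(odds):
    return odds / (1 + odds)

# Invented example: prior odds of 1:4 for H1, then three pieces of
# evidence with likelihood ratios 5, 2, and 0.5.
odds = posterior_odds(0.25, [5, 2, 0.5])
print(odds, odds_to_probability(odds))  # 1.25 ~0.556
```

This is what makes such an analysis arguable in the good sense: a reader who disagrees with the prior or with any single likelihood ratio can swap in their own number and see exactly how the posterior changes.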
There's a whole popular genre of stories about someone from our world being transported to another (isekai), and a subset of those stories involves kickstarting (parts of) an industrial civilization via mostly just your own knowledge. Examples include the manga Dr. Stone and the Chinese webnovel Release That Witch. Throne of Magical Arcana does the same with scientific knowledge, in a world where the power to cast magic spells derives from understanding the world. And more broadly, there are tons of stories with the same theme that focus on one smaller body of knowledge, like agriculture or medicine.
I understand that Malaria resists attempts at vaccination, but regarding your 80% prediction by 2040, did you see the news that a Malaria vaccine candidate did reach 77% effectiveness in phase II trials just last month? Quote: "It is the first vaccine that meets the World Health Organization's goal of a malaria vaccine with at least 75% efficacy."
FYI, your post seems to be full of missing pictures; all the missing pictures seem to point to Gmail, so presumably they didn't survive being copy & pasted from an email conversation. (If the post isn't missing pictures for you, I recommend logging out of google and refreshing the comment, or looking at the comment from an incognito / private browser tab.)
To add some value to this linkpost, here are my notes from reading this long article:
It's an article on an anonymous blog in 2020. The author does cite their research, though, so you can draw your own conclusions.
The section "What I recommend" lists ten lifestyle recommendations (many of which are quite unintuitive) to reduce the effect of bad air quality on your life expectancy.
Every item in the initial table comparing lifestyle / single event to life cost due to bad air quality is explained and accompanied by citations. Specifically, here's how the article quantifies harm by PM 2.5 particles in the air (the math is based on two big papers, but I couldn't tell whether the interpretation or implication were plausible):
A heuristic to quantify harms
How much do particles hurt you? While it’s hard to be precise, this section will give two simple heuristics:
A life-long exposure of 33.3 PM2.5 costs 1 DALY. This is best for lifestyle changes. For example, moving from somewhere with no particulates to somewhere with a level of 100 costs 3 DALY.
At 2500 PM2.5, you lose disability-adjusted life in real time. This is best for one-off events. For example, if you’re exposed to a level of 5000 for 3 hours, you lose 6 disability-adjusted life hours.
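Written out as formulas, the article's two heuristics are trivial to apply (the numbers are the article's; the code and function names are mine):

```python
def lifestyle_daly(pm25_level):
    """Heuristic 1: life-long exposure to 33.3 ug/m3 of PM2.5 costs
    1 disability-adjusted life year (DALY), scaling linearly."""
    return pm25_level / 33.3

def event_daly_hours(pm25_level, hours):
    """Heuristic 2: at 2500 ug/m3 you lose disability-adjusted life
    in real time, so losses scale as (level / 2500) * duration."""
    return pm25_level / 2500 * hours

print(lifestyle_daly(100))        # ~3 DALY, matching the article's example
print(event_daly_hours(5000, 3))  # 6.0 disability-adjusted life hours
```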
In any case, you can disregard this specific heuristic and just act based on the article's specifics:
Air quality in trains and underground stations is apparently *extremely bad*. The numbers are truly ridiculous.
Ultrasonic humidifiers and incense are also really bad.
Candles emit most of their particulates when extinguished, so if you must use candles regularly, extinguish them with a lid.
Cooking emits lots of particulates, so opening a window or using a kitchen range hood helps a ton.
and more; see the section "What I recommend" at the top of the article, with elaboration and caveats at the bottom.
In retrospect, my reading of the post (and my reply) were more uncharitable than I would've liked. To clarify where I'm coming from, it pattern-matched to two things I've grown frustrated with over time: Firstly, it gave me the impression of an outside critique of a field without engaging with its strongest arguments, as happens a lot to the rationality community as well (e.g. here's an old SSC post on the general problem).
And secondly, the final sentence pattern-matched to the ubiquitous "we need systemic change" criticism of effective altruism (subjectively, it appears in every single news article on EA), which doesn't seem particularly fair when everyone in the field is aware that of course systemic change would be better in principle, but it's incredibly unclear how to handle such problems in a tractable manner. (Not to mention that tons of interventions intended to effect systemic change actually perform significantly worse than e.g. cash transfers.)
Finally, when I mentioned you hadn't added an arbitrary number of students to the analogy, I meant that in your modified analogy a single individual seemingly has to save the entire world, whereas once you allow for many students, one way to resolve such a world would be to promote altruism more widely or even help build a community of effective altruism, as Peter Singer has done. Isn't that the kind of systemic approach you were calling for?
I don't find this post valuable. It criticizes a simplified (and very dated) model of someone who has written much more extensively (and recently) about his views. Also, it implies that the analogy would be improved by adding an arbitrary number of ponds, but doesn't adjust it in other ways like adding an arbitrary number of students. And it ignores that the analogy obviously cannot be applied as-is nowadays, because the underlying assumption (saving a life is extraordinarily cheap) is no longer correct (if it ever was); for instance GiveWell estimates that their top charities currently save a life per $3,000 to $5,000 donated.
So how does this post add value to the discourse? Who is supposed to benefit from the exhortation that this "is reason enough to stop, think and try to build a better system"?
Some comments:

* Re: blood-clotting, I think you've bolded the wrong section. "it is difficult to estimate a background rate for these events in people who have not had the vaccine. However, based on pre-COVID figures" is the part to bold, which makes the rest of the sentence rather pointless. You cannot use pre-COVID figures to estimate expected blood-clotting when we're in the middle of a pandemic involving an illness that specifically causes blood-clotting.
* Institutions use extensive amounts of caveats and other forms of blame-avoiding language as a matter of course, but this language doesn't contain much information. That is, irrespective of how high the actual risk is, I wouldn't expect the language to change much. For instance, the phrase "patients should be aware of the remote possibility" is a waste of time for me to read, and for them to write, unless it affects the agency's actual public health guidelines.
* The focus on this one particular symptom is arbitrary. It seems implausible that a drug that actually made people sick would do so only via rare blood clots and only in young people, whereas it's commonplace in bad statistics to find arbitrary problems in arbitrary subgroups. Hence the accusation of p-hacking. This xkcd comic is a decent illustration of how such a thing can happen.
Now contrast that with the real harms caused by delaying vaccinations, as Zvi points out in his essay. Orders of magnitude more people will die due to delayed vaccinations, not to mention the second-order effect of harming vaccine acceptance worldwide for the foreseeable future.
Insofar as one accepts the notion that a) the risk of side-effects is not nearly high enough to warrant this response, and b) the harm to the vaccination effort and to vaccine acceptance is orders of magnitude higher, then the actual political response in Europe looks like gross negligence, malfeasance, or outright malice - not of the form "let's intentionally hinder vaccination and get people killed" (which I agree would be implausible comic-book villainy), but rather of the form "as a politician, I only care about avoiding blame; I don't care if my (in)actions kill thousands, as long as I'm not blamed for this", which has become, to me, an increasingly plausible lens through which to see politics. Here is one Zvi post on the politics of blame-avoidance and inaction.
For me personally, the part during Covid that soured me a ton on EU competence was this: The politicians were so worried about being blamed for wasting tax-payer money on expensive vaccines that they negotiated lower prices in exchange for receiving vaccines months later. This calculation was so crazy and wrong-headed that you kind of need something like Zvi's blame-avoidance model to make sense of it.
> My takeaway from Vitalik’s journey is that it took $50,000 worth of time and technical expertise to make that $50,000
My key takeaway here, besides the technical expertise required, was that in terms of capital requirements, current prediction market designs make it very easy to push a probability away from 0 or 1, and very hard to push towards it. (Vitalik even responded to this experience with a prediction market design that does not have this problem. Maybe something will come of this.)
Anyway, Vitalik needed ethereum worth ~$1 million as capital to put DAI worth ~$300k into the market, for which he got ~$50k profit, while those bidding on the other side only had to put in those ~$50k. Assuming they had pursued the same DAI-based strategy as Vitalik, which seemed to require holding 3x the bet amount in ethereum, they still would've only needed ~$150k in ethereum.
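A back-of-the-envelope restatement of those rough numbers; the 3x collateral ratio on both sides is my extrapolation from the DAI strategy, not something from Vitalik's writeup:

```python
# Rough, rounded numbers from the comment above; assumed, not exact.
collateral_ratio = 3.0   # dollars of ETH held per dollar of DAI bet

no_bet = 300_000         # DAI bet that the market probability was too high
no_profit = 50_000       # profit if the "no" side wins
no_capital = no_bet * collateral_ratio     # ETH locked up: ~$900k

yes_bet = no_profit      # the "yes" side only needs to stake the counterparty amount
yes_capital = yes_bet * collateral_ratio   # even hedged the same way: ~$150k

print(no_capital / yes_capital)  # 6.0: 6x more capital to push the price down
```

The asymmetry is the point: under these assumptions, correcting a probability that is too close to 1 ties up several times more capital than defending it, which is exactly the design flaw the comment describes.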
It should be counted as granting $154m, though, since the $150m grant to a third party also ended up at the Serum Institute. Not that I understand why they did it that way, but I guess that can be chalked up to charity bureaucracy or something.
Though if you mean to say that making grants in December 2020 don't have the same weight as they would've had half a year earlier, that's a point well-taken.
A final note is that no one is denying that Vitamin D deficiency is very highly correlated with bad Covid-19 outcomes. The world in which Vitamin D supplementation doesn’t help is the world in where there holds some combination of (A) [...]
Thus, if you go to the doctor and they measure your Vitamin D levels as sufficient, that definitely is very good Covid-risk news for you personally. If they measure your levels as insufficient, that definitely is very bad Covid-risk news for you personally.
IIRC Scott offers at least one other explanation E), namely that illness might reduce your Vitamin D levels. Hence low Vitamin D level would be a symptom of the illness, without implying that starting with a higher vitamin D level would've helped against it. From this perspective, low vitamin D level is weak Bayesian evidence for having Covid, but presumably you can't just take vitamin D to change that.
This scenario reminds me of the hypothesis in Why We Get Sick (paraphrasing from memory) that low iron levels in pregnant women were not necessarily a problem to be remedied via iron supplements, but could instead be an evolutionary mechanism to deprive bacteria of crucial resources to decrease risk of illness during pregnancy.
Also worth noting that Adar got the majority of his funding from the Gates Foundation, so they did end up helping build useful capacity in at least one case.
Nitpicking: The original article says "Of the $800m (£579m) we needed, we put in $270m and the rest we raised from the Gates Foundation and various countries.", which implies they got the majority of their money from elsewhere, but not that it was all from the Gates Foundation, which may or may not have provided a majority of the funding.
Digging deeper: The Gates Foundation website lists a direct grant of $4m, and this article mentions a $150m grant which should be this one by the Foundation, but which only happened in December 2020, leaving me confused regarding the timeline in the original interview. Maybe of the $800m they needed, they got other funding first, or they only needed most of the funding for the final step of scaling up production? And I guess they might have contributed more to the Institute in other grants I didn't find; their grant payments database lists 466 potentially Covid-related grants since 2020.
What has scaling up involved, practically speaking?
We committed ourselves to Covid-19 in March. We took a huge risk, because nobody knew then that any vaccine was going to work. Of the $800m (£579m) we needed, we put in $270m and the rest we raised from the Gates Foundation and various countries. We dedicated about 1,000 employees to the programme and deferred all product launches planned for 2020 for two to three years, so that we could requisition the facilities allocated to them. Then it was a question of equipping those facilities and getting them validated for use, which we did in record time. By August we were manufacturing and stockpiling a vaccine that we predicted, correctly, would be approved around December.
If the vaccines need to be adjusted to protect against future emerging variants, how much of a challenge will that be?
It would be simple now that the processes are up and running. We grow the virus in living cells, so we would simply change the master clone – the virus with which we infect those cells – and that then propagates through them. It would take us two to three months to start producing the new vaccine at capacity.
Reiterating a Zvi point:
What I find really disappointing, what has added a few months to vaccine delivery – not just ours – is the lack of global regulatory harmonisation. Over the last seven months, while I’ve been busy making vaccines, what have the US, UK and European regulators been doing? How hard would it have been to get together with the World Health Organization and agree that if a vaccine is approved in the half-dozen or so major manufacturing countries, it is approved to send anywhere on the planet?
Instead we have a patchwork of approvals and I have 70m doses that I can’t ship because they have been purchased but not approved. They have a shelf life of six months; these expire in April.
President Biden says that the U.S. will have enough coronavirus vaccine to inoculate 300 million Americans by this summer. Biden says Moderna and Pfizer will deliver the doses by the end of July, more than a month earlier than initially anticipated.
Re: your speculation regarding future transportation costs, I vaguely recall an economist arguing a couple of years ago (maybe on Krugman's blog) that economic reasons alone were enough to ensure that transportation couldn't get arbitrarily cheap. But I can't recall the specifics.
There's a paragraph that only says "Without vaccinations, they "
There's another paragraph that ends with a comma: "If you don’t want to succeed, there are always plausible ways to not succeed. For example," But maybe that's intentional to lead into the following paragraph with "California has decided to [...]".
It's amazing how many problems were caused by using a political rather than monetary prioritisation scheme for the scarce vaccines.
Lots of head-meets-wall moments this week. Like every week. My "favorite" as an EU citizen was the EU criticizing the UK for trying to secure more than way-too-few vaccines. That an entire first-world continent failed so epically at one of the single most important challenges of 2020 (namely, securing enough vaccine) indicates that our leaders somehow weren't living anywhere close to reality.
And that no leaders worldwide thought of any non-zero-sum solutions to the problem (like paying to increase vaccine production capacity) does not bode well for our ability to solve more difficult coordination problems.
These narratives are frameworks, or models. There's the famous saying that all models are wrong, but some are useful. Here, the narratives take the complex world and try to simplify it by essentially factoring out "what matters". Insofar as such models are correct or useful, they can aid in decision-making, e.g. for career choice, prioritisation, etc.
Even Less Wrong itself was founded on such a narrative, one developed over many years. Here's EY's current Twitter bio, for instance:
Ours is the era of inadequate AI alignment theory. Any other facts about this era are relatively unimportant, but sometimes I tweet about them anyway.
Similarly, a political science professor or historian might construct a narrative about trends in Western democracies, or something. And the narrative that "Everyone is going to die, the way things stand." (from aging, if nothing else) is as simple as it is underappreciated by the general public. If we took it remotely seriously, we would use our resources differently.
Finally, another use of the narratives in the OP is to provide a contrast to ubiquitous but wrong narratives, e.g. the doomed middle-class narrative that endless one-upmanship towards neighbors and colleagues will somehow make one happy.
In response to your second point re: free speech, a cross-post of a comment I made on Facebook on a related issue:
I'm not from the US, but despite knowing the common counter-arguments, I don't understand how platform censorship is consistent with your 1st amendment.
Technically, the 1st amendment only prevents the government from censoring speech; in practice, that has IIRC meant that e.g. a recruitment Twitch stream run by the US military is arguably not allowed to block spam.
And if that isn't allowed, surely a system where any powerful member of government can pressure any private platform holder to censor arbitrary content doesn't make sense. All you've done is add a level of indirection to the government censorship. Here's a story by Glenn Greenwald on the issue of platform censorship; he ultimately resigned from The Intercept after being censored while trying to report on that same story himself.
The notion of specificity may be useful, but to me its presentation in terms of tone (beginning with the title "The Power to Demolish Bad Arguments") and examples seemed rather antithetical to the Less Wrong philosophy of truth-seeking.
For instance, I read the "Uber exploits its drivers" example discussion as follows: the author already disagrees with the claim as their bottom line, then tries to win the discussion by picking their counterpart's arguments apart, all the while insulting this fictitious person with asides like "By sloshing around his mental ball pit and flinging smart-sounding assertions about “capitalism” and “exploitation”, he just might win over a neutral audience of our peers."
In contrast to e.g. Double Crux, that seems like an unproductive and misguided pursuit - reversed stupidity is not intelligence, and hence even if we "demolish" our counterpart's supposedly bad arguments, at best we discover that they could not shift our priors.
And more generally, the essay gave me a yucky sense of "rationalists try to prove their superiority by creating strawmen and then beating them in arguments", sneer culture, etc. It doesn't help that some of its central examples involve hot-button issues on which many readers will have strong and yet divergent opinions, which imo makes them rather unsuited as examples for teaching most rationality techniques or concepts.
Zvi, my heartfelt thanks once again for the herculean efforts you're putting into creating these Covid posts.
Tiny formatting error: The link on "even though doing that made them look superficially like the complete and total idiots they are:" points to a Google Doc that prompts me with "You need access" instead of to the Twitter image immediately below.
Here is how it works for the MMR vaccine (measles, mumps, and rubella):
Children should get two doses of MMR vaccine, starting with the first dose at 12 to 15 months of age, and the second dose at 4 through 6 years of age.
Or here are the WHO recommendations for various vaccines (from the larger source here) for children. The times in the Booster Dose column span months or years. This doesn't quite answer the question of how soon you need to get the later doses, but it does provide an intuition for the orders of magnitude which are typically involved, at least for children. Though besides age, I suppose part of the epidemiological reasoning here must involve not just the reaction of the immune system, but also the prevalence of the disease.