I don't have the energy to write a 5000 word blog post explaining my reasoning, but I think ≤10% chance HCQ has clinically significant effects against COVID, chances of really impressive effects even lower.
Rob B: I gather Ra is to a first approximation just 'the sense that things are impersonally respectable / objective / authoritative / credible / prestigious / etc. based only on superficial indirect indicators of excellence.'
Ruby B: I too feel like I do not understand Ra. [...] Moloch, in my mind, was very clearly defined. For any given thing, I could tell you confidently whether it was Moloch or not. I can't do that with Ra. Also, Moloch is a single clear concept while Ra seems to be a vague cluster if it's anything. [...]
Rob B: Is there anything confusing or off about the idea that Ra is 'respectability and prestige maintained via surface-level correlates of useful/valuable things that are not themselves useful/valuable (in the context at hand)'? Either for making sense of Sarah's post or for applying the concept to real-world phenomena?
Ruby B: Yes, there is something off about that summary since the original post seems to contain a lot more than "seeking prestige via optimizing for correlates of value than actual value". [...] If your summary is at the heart of it, there are some links missing to the "hates introspection", "defends itself with vagueness, confusion, incoherence." [...]
Rob B: There are two ideas here:
(1) "a drive to seek prestige by optimizing for correlates of value that aren't themselves valuable"
(2) "a tendency toward fuzzy, inconsistent thinking that hates introspection and defends itself with vagueness, confusion, and incoherence"
The connection between these two ideas is this paragraph in Sarah's essay:
"'Respectability' turns out to be incoherent quite often — i.e. if you have any consistent model of the world you often have to take extreme or novel positions as a logical conclusion from your assumptions. To Ra, disrespectability is damnation, and thus consistent thought is suspect."
1 is the core idea that Sarah wants to point to when she says "Ra". 2 is a particular phenomenon that Sarah claims Ra tends to cause (though obviously lots of other things can cause fuzzy/inconsistent thinking too, and a drive toward such). Specifically, Sarah is defining Ra as 1, and then making the empirical claim that this is a commonplace drive, that pursuing any practical or intellectual project sufficiently consistently will at least occasionally require one to either sacrifice epistemics or sacrifice prestige, and that the drive is powerful enough that a lot of people do end up sacrificing epistemics when that conflict arises.
Ruby B: Okay, yeah, I can start to see that. Thanks for making it clearer to me, Rob!
Rob B: I think Sarah's essay is useful and coherent, but weirdly structured: she writes a bunch of poetry and mentions a bunch of accidental (and metaphorical, synesthetic, etc.) properties of Ra before she starts to delve into Ra's essential properties. I think part of why I didn't find it confusing was that I skimmed the early sections and got to the later parts of the essay that were more speaking-to-the-heart-of-the-issue, then read it back in reverse order. :P So I got to relatively clear things like the Horus (/ manifest usefulness / value / prestige-for-good-reasons) vs. Ra (empty respectability / shallow indicators of value / prestige-based-on-superficial-correlates-of-excellence) contrast first:
"Horus likes organization, clarity, intelligence, money, excellence, and power — and these things are genuinely valuable. If you want to accomplish big goals, it is perfectly rational to seek them, because they’re force multipliers. Pursuit of force multipliers — that is, pursuit of power — is not inherently Ra. There is nothing Ra-like, for instance, about noticing that software is a fully general force multiplier and trying to invest in or make better software. Ra comes in when you start admiring force multipliers for no specific goal, just because they’re shiny."
"When someone is willing to work for prestige, but not merely for money or intrinsic interest, they’re being influenced by Ra. The love of prestige is not only about seeking 'status' (as it cashes out to things like high quality of life, admiration, sex), but about trying to be an insider within a prestigious institution."
(One of the key claims Sarah makes about respectability and prestige maintained via surface-level correlates of useful/valuable things that are not themselves useful/valuable (/ Ra) is that this kind of respectability accrues much more readily to institutions, organizations, and abstractions than to individuals. Thus a lot of the post is about how idealized abstractions and austere institutions trigger this lost-purposes-of-prestige mindset more readily, which I gather is because it's harder to idealize something concrete and tangible and weak, like an individual person. Or maybe it has to do with the fact that it's harder to concretely visualize the proper function and work of something that's more abstract and large-scale, so it's easier to lose sight of the rationale for what you're seeing?)
"Seen through Ra-goggles, giving money to some particular man to spend on the causes he thinks best is weird and disturbing; putting money into a foundation, to exist in perpetuity, is respectable and appropriate. The impression that it is run collectively, by 'the institution' rather than any individual persons, makes it seem more Ra-like, and therefore more appealing."
All of that stuff makes sense. The earlier stuff from the first 2 sections of the post doesn't illuminate much, I think, unless you already have a more specific sense of what Sarah means by "Ra" from the later sections.
Ruby B: Your restructuring and rephrasing is vastly more comprehensible. That said, poetry and poetic imagery is nice and I don't begrudge Sarah her attempt.
And given your explanation, perhaps your summary description could be made slightly more comprehensive (though less comprehensible) like so:
"Ra is a drive to seek prestige by optimizing for correlates of value that aren't themselves valuable because you have forgotten the point of the correlates was to attain actual value." [...]
Rob B: Maybe "Ra is a drive to seek prestige by optimizing for correlates of value, in contexts where the correlates are not themselves valuable but this fact is made non-obvious by the correlate's abstract/impersonal/far-mode-evoking nature"?
Mark Norris Lance: [...] There is a long history of differential evaluation of actions taken by grassroots groups and similar actions taken by elites or those in power. This is evident when we discuss violence. If a low-power group places someone under their control it is kidnapping. If they assess their crimes or punish them for it, it is mob justice or vigilanteism. [...]
John Maxwell: Does the low power group in question have a democratic process for appointing judges who then issue arrest warrants?
That's a key issue for me... "Mob rule" is bad because the processes mobs use to make their judgements are bad. Doubly so if the mob attacks anyone who points that out.
A common crime that modern mobs accuse people of is defending bad people. But if people can be convicted of defending bad people, that corrupts the entire justice process, because the only way we can figure out if someone really is bad is by hearing what can be said in their defense.
Yet almost everyone agrees the world will likely be importantly different by the time advanced AGI arrives.
Why do you think this? My default assumption is generally that the world won't be super different from how it looks today in strategically relevant ways. (Maybe it will be, but I don't see a strong reason to assume that, though I strongly endorse thinking about big possible changes!)
I think there's a strong argument for deception being simpler than corrigibility. Corrigibility has some fundamental difficulties in terms of... If you're imagining a gradient descent process which is looking at a proxy-aligned model and trying to modify it so that it makes use of this rich input data, it has to do some really weird things to make corrigibility work.
It has to first make a very robust pointer. With corrigibility, if it's pointing at all incorrectly (to the wrong thing in the input data, the wrong thing in the world model), the corrigible optimizer won't correct that pointer. It'll just be like, "Well, I have this pointer. I'm just trying to optimize for what this thing is pointing at," and if that pointer is pointing at a proxy instead, you'll just optimize that proxy. And so you have this very difficult problem of building robust pointers. With deception, you don't have this problem. A deceptive model, if it realizes the loss function is different than what it thought, will just change to doing the new loss function. It's actually much more robust to new information because it's trying to do this instrumentally. And so in a new situation, if it realizes that the loss function is different, it's just going to automatically change, because it'll realize that's the better thing to do instrumentally.
The last time I saw it mentioned that COVID-19 can cause pulmonary fibrosis, it was in the context of autopsies. Do we have any more evidence about whether fibrosis is occurring in survivors, and if so about how common it is?
Lucas Perry: I guess I imagine the coordination here is that information on relative training competitiveness and performance competitiveness in systems is evaluated within AI companies and then possibly fed to high power decision makers who exist in strategy and governance for coming up with the correct strategy, given the landscape of companies and AI systems which exist?
Evan Hubinger: Yeah, that’s right.
I asked Evan about this and he said he misheard Lucas as asking roughly 'Are training competitiveness and performance competitiveness important for researchers at groups like GovAI to think about?'; Evan wasn't intending to call for "high power decision makers who exist in strategy and governance" to set strategy for AI companies.
I think it’s useful to sort of have in the back of your mind this analogy to evolution, but I would also be careful not to take it too far. I imagine that everything is going to generalize to the case of machine learning because it is a different process.
Should be "I think it’s useful to sort of have in the back of your mind this analogy to evolution, but I would also be careful not to take it too far and imagine that everything is going to generalize to the case of machine learning, because it is a different process."
If there’s deployment episodes than training episodes, and it just cares about how many times it goes through the blue door or the green arrow, or whatever, the green arrow is a proxy objective, and so if it gives up the fact that it’s optimizing for the green arrow, it’s going to get modified to not do that anymore.
Should be "If there's more deployment episodes than training episodes, ...".
[Epistemic status: Piecemeal wild speculation; not the kind of reasoning you should gamble the future on.]
Some things that make me think suffering (or 'pain-style suffering' specifically) might be surprisingly neurologically conditional and/or complex, and therefore more likely to be rare in non-human animals (and in subsystems of human brains, in AGI subsystems that aren't highly optimized to function as high-fidelity models of humans, etc.):
1. Degen and Finlay's social account of suffering above.
2. Pain management is one of the main things hypnosis appears to be useful for. The ability to cognitively regulate suffering is also one of the main claims of meditators, and seems related to existential psychotherapy's claim that narratives are more important for well-being than material circumstances.
Even if suffering isn't highly social (pace Degen and Finlay), its dependence on higher cognition suggests that it is much more complex and conditional than it might appear on initial introspection, which on its own reduces the probability of its showing up elsewhere: complex things are relatively unlikely a priori, are especially hard to evolve, and demand especially strong selection pressure if they're to evolve and if they're to be maintained.
(Note that suffering introspectively feels relatively basic, simple, and out of our control, even though it's not. Note also that what things introspectively feel like is itself under selection pressure. If suffering felt complicated, derived, and dependent on our choices, then the whole suite of social thoughts and emotions related to deception and manipulation would be much more salient, both to sufferers and to people trying to evaluate others' displays of suffering. This would muddle and complicate attempts by sufferers to consistently socially signal that their distress is important and real.)
3. When humans experience large sudden neurological changes and are able to remember and report on them, their later reports generally suggest positive states more often than negative ones. This seems true of near-death experiences and drug states, though the case of drugs is obviously filtered: the more pleasant and/or reinforcing drugs will generally be the ones that get used more.
Sometimes people report remembering that a state change was scary or disorienting. But they rarely report feeling agonizing pain, and they often either endorse having had the experience (with the benefit of hindsight), or report having enjoyed it at the time, or both.
This suggests that humans' capacity for suffering (especially more 'pain-like' suffering, as opposed to fear or anxiety) may be fragile and complex. Many different ways of disrupting brain function seem to prevent suffering, suggesting suffering is the more difficult and conjunctive state for a brain to get itself into; you need more of the brain's machinery to be in working order in order to pull it off.
4. Similarly, I frequently hear about dreams that are scary or disorienting, but I don't think I've ever heard of someone recalling having experienced severe pain from a dream, even when they remember dreaming that they were being physically damaged.
This may be for reasons of selection: if dreams were more unpleasant, people would be less inclined to go to sleep and their health would suffer. But it's interesting that scary dreams are nonetheless common. This again seems to point toward 'states that are further from the typical human state are much more likely to be capable of things like fear or distress, than to be capable of suffering-laden physical agony.'
Thanks! :) I'm currently not planning to polish it; part of the appeal of cross-posting from Facebook for me is that I can keep it timeboxed by treating it as an artifact of something I already said. I guess someone else could cannibalize it into a prettier stand-alone post.
A second question is why horror films and games seem to be increasingly converging on the creepy/uncanny/mysterious cluster of things, rather than on the overtly physically threatening cluster -- assuming this is a real trend. Some hypotheses about the second question:
A: Horror games and movies are increasingly optimizing for dread instead of terror these days, maybe because it's novel -- pure terror feels overdone and out-of-fashion. Or because dread just lends itself to a more fun multi-hour viewing/playing experience, because it's more of a 'slow burn'. Or something else.
B: Horror games aren't optimizing for dread to the exclusion of terror; rather, they've discovered that dread is a better way to maximize terror.
Why would B be true?
One just-so story you could tell is that humans have multiple responses to possible dangers, ranging from 'do some Machiavellian scheming to undermine a political rival' to 'avoid eating that weird-smelling food' to 'be cautious near that precipice' to 'attack' to 'flee'. Different emotions correspond to different priors on 'what reaction is likeliest to be warranted here?', and different movie genres optimize for different sets of emotions. And optimizing for a particular emotion usually involves steering clear of things that prime a person to experience a different emotion -- people want a 'purer' experience.
So one possibility is: big muscular agents, lion-like agents, etc. are likelier to be dangerous (in reality) than a decrepit corpse or a creepy child or a mysterious frail woman; but the correct response to hulking masculine agents is much more mixed between 'fight / confront' and 'run away / avoid', whereas the correct response to situations that evoke disgust, anxiety, uncertainty, and dread is a lot more skewed toward 'run away / avoid'. And an excess of jumpscare-ish, heart-pounding terror does tend to incline people more toward running away than toward fighting back, so it might be that both terror and dread are better optimized in tandem, while 'fight back' partly competes with terror.
On this view, 'ratchet up the intensity of danger' matters less for fear intensity than 'eliminate likely responses to the danger other than being extra-alert or fleeing'.
... Maybe because movie/game-makers these days just find it really easy to max out our danger-intensity detectors regardless? Pretty much everything in horror movies is pretty deadly relative to the kind of thing you'd regularly encounter in the ancestral environment, and group sizes in horror contexts tend to be smaller than ancestral group sizes.
People who want to enjoy the emotions corresponding purely to the 'fight' response might be likelier to watch things like action movies. And indeed, action movies don't make much use of jumpscares or terror (though they do like tension and adrenaline-pumping intensity).
Or perhaps there's something more general going on, like:
Hypothesis C: Dread increases 'general arousal / sensitivity to environmental stimuli', and then terror can piggy-back off of that and get bigger scares.
Perhaps emotions like 'disgust' and 'uncertainty' also have this property, which may be why horror movies often combine dread, disgust, and uncertainty with conventional terror. In contrast, hypothesis B seems to suggest that we should expect disgust and terror to mostly show up in disjoint sets of movies/games, because the correct response to 'disease-ish things' and the correct response to 'physical attackers' are very different.
[Epistemic status: Thinking out loud, just for fun, without having done any scholarship on the topic at all.]
It seems like a lot of horror games/movies are converging on things like 'old people', 'diseased-looking people', 'psychologically ill people', 'women', 'children', 'dolls', etc. as particularly scary.
Why would that be, from an evolutionary perspective? If horror is about fear, and fear is about protecting the fearful from threats, why would weird / uncanny / out-of-evolutionary-distribution threats have a bigger impact than e.g. 'lots of human warriors coming to attack you' or 'a big predator-looking thing stalking you', which are closer to the biggest things you'd want to worry about in our environment of evolutionary adaptedness? Why are shambling, decrepit things more of a horror staple than big bulky things with claws?
(I mean, both are popular, so maybe this isn't a real phenomenon. I at least subjectively feel as though those uncanny things are scarier than super-lions or super-snakes.)
Maybe we should distinguish between two clusters of fear-ish emotions:
Terror. This is closer to the fight-or-flight response of 'act quick because you're in imminent danger'. It's a panicky 'go go go go go!!' type of feeling, like when a jumpscare happens or when you're running from a monster in a game.
Dread. This is more like feeling freaked out or creeped out, and it can occur alongside terror, or it can occur separately. It seems to be triggered less by 'imminent danger' than by ambiguous warning signs of danger.
So, a first question is why uncanny, mysterious, 'unnatural' phenomena often cause the most dread, even though they thereby become less similar to phenomena that actually posed the largest dangers to us ancestrally. (E.g., big hulking people with giant spears or snakes/dragons or werewolves correlate more with 'things dangerous to our ancestors' than decrepit zombies. Sure, creepiness maybe requires that the threat be 'ambiguous', but then why isn't an ambiguous shadow of a maybe-snake or maybe-hulking-monster creepier than an obviously-frail/sickly monster?)
Plausibly part of the answer is that more mysterious, inexplicable phenomena are harder to control, and dread is the brain's way of saying something like 'this situation looks hard to control in a way that makes me want to avoid situations like this'.
Terror-inspiring things like jumpscares have relatively simple triggers corresponding to a relatively simple response -- usually fleeing. Dread-inspiring things like 'the local wildlife has gotten eerily quiet' have subtler and more context-sensitive triggers corresponding to responses like 'don't necessarily rush into any hasty action, but do pay extra close attention to your environment, and if you can do something to get away from the stimuli that are giving you these unpleasant uneasy feelings, maybe prioritize doing that'.
Why are online political discussions perceived to contain elevated levels of hostility compared to offline discussions? In this manuscript, we leverage cross-national representative surveys and online behavioral experiments to [test] the mismatch hypothesis regarding this hostility gap. The mismatch hypothesis entails that novel features of online communication technology induce biased behavior and perceptions such that ordinary people are, e.g., less able to regulate negative emotions in online contexts. We test several versions of the mismatch hypothesis and find little to no evidence in all cases. Instead, online political hostility is committed by individuals who are predisposed to be hostile in all contexts. The perception that online discussions are more hostile seemingly emerges because other people are more likely to witness the actions of these individuals in the large, public network structure of online platforms compared to more private offline settings.
It seems to me like guilt and shame function surprisingly poorly as motivators for good work in the modern world. Not only do they often fail to get people to do things at the time, they can create a self-reinforcing feedback loop that makes people depressed and unproductive for months.
But then why have we evolved to feel them so strongly?
One possibility is that guilt and shame do work well, but their function is to stop us from doing bad things. In a world where there's only a few things you can do, it's clear how to do them all, and the priority is to stay away from a few especially bad options, that's helpful.
But to do good skilled work, it's not enough to know what you shouldn't do — e.g. procrastinate. The main problem is figuring out what out of the million things you might do is the right one, and staying focussed on it. And for that curiosity or excitement or pride are much more effective. You need to be pulled in the right direction, not merely pushed away from doing nothing, or severely violating a social norm.
A second variation on the same theme would be that modern work differs from the tasks our hunter-gatherer ancestors did in all sorts of ways that can make it less motivating. In the past, just feeling guilt about e.g. being lazy was enough to get us to go gather some berries, but now, for most of us, it isn't. So guilt fails, and then we feel even more guilty, and then we're even less energetic, so it fails again, etc.
A third possibility is that shame and guilt are primarily about motivating you to fit into a group and go along with its peculiar norms. But in a modern workplace that's not the main thing most of us are lacking. Rather we need to be inspired by something we're working on and give it enough focussed attention long enough to produce an interesting product.
Any other theories? Or maybe you think guilt and shame do work well?
I think one strong argument in favor of eating meat is that beef cattle (esp. grass-fed) might have net positive lives. If this is true, then the utilitarian line is to 1) eat more beef to increase demand, 2) continue advocating for welfare reforms that will make cows' lives even more positive.
Beef cattle differ from e.g. factory-farmed chickens in that they live a long time (around 3 years on average vs. 6-7 weeks for broilers), and they spend much of their lives grazing as stockers, where they might have natural-ish lives.
Another argument in favor of eating beef is that it tends to lead to deforestation, which decreases total wild-animal habitat; and one might think wild animals' lives are worse than the lives of cattle on beef farms.
... I love how EA does veganism / animal welfare things. It's really good.
[... Note that in posting this I'm not intending] to advocate for a specific intervention; it's more that it makes me happy to see thorough and outside-the-box reasoning from folks who are trying to help others, whether or not they have the same background views as me.
Jonathan Salter: Even if this line of reasoning might technically be correct in a narrow, first-order-effects type of way, my intuition tells me that that sort of behaviour would lessen EA's credibility when pushing animal welfare messages, and that spreading general anti-speciesist norms and values is more important in the long run. Just my two cents though.
Rob Bensinger: My model of what EA should be shooting for is that it should establish its reputation as
'that group of people that engages in wonkish analyses and debates of moral issues at great length, and then actually acts on the conclusions they reach'
'that group of people that does lots of cost-benefit analyses and is willing to consider really counter-intuitive concerns rather than rejecting unusual ideas out of hand'
'that group of people that seems to be super concerned about its actual impact and nailing down all the details, rather than being content with good PR or moral signaling'
I think that's the niche EA would occupy if it were going to have the biggest positive impact in the future. And given how diverse EA is and how many disagreements there already are, the ship may have already sailed on us being able to coordinate and converge on moral interventions without any public discussion of things like wild animal suffering.
This is similar to a respect in which my views have changed about whether EAs and rationalists should become vegan en masse. In the past, I've given arguments like [in Inhuman Altruism and Revenge of the Meat People]:
A lot more EAs and rationalists should go vegan, because it really does seem like future generations will view 21st-century factory farming much as we view 19th-century slavery today. It would be great to be "ahead of the curve" for once, and to clearly show that we're not just 'unusually good on some moral questions' but actually morally exemplary in all the important ways that we can achieve.
I think this is really standing in for two different arguments:
First, a reputational argument saying 'veganism is an unusually clear signal that we're willing to take big, costly steps to do the right thing, and that we're not just armchair theorists or insular contrarians; so we should put more value on paying that signal in order to convince other people that we're really serious about this save-the-world, help-others, actually-act-based-on-the-abstract-arguments thing'.
Second, an ideal-advisor-style argument saying 'meat-eating is probably worse than it seems, because the analytic arguments strongly support veganism but social pressure and social intuitions don't back up those arguments, so we probably won't emotionally feel their full moral force'.
One objection I got to the first argument is that it seems like the marginal effort and attention of a lot of EAs could save a lot more lives if they went to things that can have global effects, rather than small-scale personal effects. The reputational argument weighs against this, but there's a reputational argument going the other way (I believe due to Katja Grace [update: Katja Grace wrote an in-depth response at the time, but this particular argument seems to be due to Paul Christiano and Oliver Habryka]):
'What makes EA's brand special and distinctive, and puts us in an unusual position to have an outsized impact on the world, is that we're the group that gets really finicky and wonkish about EV and puts its energy into the things that seem highest-EV for the world. Prioritizing personal dietary choices over other, better uses of our time, and especially doing so for reputational or signaling reasons, seems like it actively goes against that unique aspect of EA, which makes it very questionable as a PR venture in the first place.'
This still left me feeling, on a gut level, like the 'history will view us as participants in an atrocity' argument is a strong one -- not as a reputational argument, just as an actual argument (by historical analogy) that there's something morally wrong with participating in factory farming at all, even if we're (in other aspects of our life) trying to actively oppose factory farming or even-larger atrocities.
Since then, a few things have made me feel like the latter argument's force isn't so strong. First, I've updated some on the object level about the probability that different species are conscious and that different species in particular circumstances have net-negative lives (though I still think there's a high-enough-to-be-worth-massively-worrying-about probability that farmed chicken, beef, etc. are all causing immense amounts of suffering).
Second, I've realized that when I've done my 'what would future generations think?' ideal-advisor test in the past, I've actually been doing something weird. I'm taking 21st-century intuitions about which things matter most and are most emotionally salient, and projecting them forward to imagine a society where salience works the same way on the meta level, but the object-level social pressures/dynamics are a bit different. But it seems like that heuristic might have been the wrong one for past generations to use, if they wanted to make proper use of this ideal-advisor heuristic.
Jeremy Bentham's exemplary forward-thinking moral views, for example, seem like a thing you'd achieve by going 'imagine future generations that are just super reasonable and analytical about all these things, and view things as atrocities in proportion to what the strongest arguments say', rather than by drawing analogies to things that present-day people find especially atrocious about the past.
(People who have read Bentham: did Bentham ever use intuition pumps like either of these? If so, did either line of thinking seem like it actually played a role in how he reached his conclusions, as opposed to being arguments for persuading others?)
Imagine instead a future society that's most horrified, above all else, by failures of reasoning process like 'foreseeably allocating attention and effort to something other than the thing that looks highest-EV to you'. Imagine a visceral gut-level reaction to systematic decision-making errors (that foreseeably have very negative EV) even more severe than modernity's most negative gut-level reactions to world events. Those failures of reasoning process, after all, are much more directly in your control (and much more influencable by moral praise and condemnation) than action outcomes. That seems like a hypothetical that pushes in a pretty different direction in a lot of these cases. (And one that converges more with the obvious direct 'just do the best thing' argument, which doesn't need any defense.)
Julia Galef: Another one of your posts that has stayed with me is a post in which you were responding to someone's question -- I think the question was, “What are your favorite virtues?” And you described three. They were compassion for yourself; creating conditions where you'll learn the truth; and sovereignty. [...] Can you explain briefly what sovereignty means?
Kelsey Piper: Yeah, so I characterize sovereignty as the virtue of believing yourself qualified to reason about your life, and to reason about the world, and to act based on your understanding of it.
I think it is surprisingly common to feel fundamentally unqualified even to reason about what you like, what makes you happy, which of several activities in front of you you want to do, which of your priorities are really important to you.
I think a lot of people feel the need to answer those questions by asking society what the objectively correct answer is, or trying to understand which answer won't get them in trouble. And so I think it's just really important to learn to answer those questions with what you actually want and what you actually care about. [...]
Julia Galef: One insight that I had from reading your post in particular was that maybe a lot of debates over whether you should "trust your gut” are actually about sovereignty. [...]
Kelsey Piper: Yeah, I definitely think -- maybe replace “trust your gut” with --
Julia Galef: Consult?
Kelsey Piper: Yeah, check in with your gut. Treat your gut as some information.
Julia Galef: Yeah.
Kelsey Piper: And treat making your gut more informative as an important part of your growth as a person. [...] I’ve stewed over lots of hard questions. And I got a sense of when I've tended to be right, and when I tended to be wrong, and that informs my gut and the extent to which I feel able to trust it now.
Spencer Mulesky: Why is this good content? I'm not getting it.
Rob Bensinger: That seems hard to summarize!
The "trust your gut" portion maybe obscures the thing I think is important, because it seems more banal and specific. The important thing I think is being pointed at with "sovereignty" is more general than just "notice how you feel about things, and hone your intuitions through experience", though that's certainly a core thing people need to do.
One way of pointing at the more basic thing I have in mind is: by default, humans are pretty bad at being honest with themselves and others; are pretty bad at thinking clearly; are pretty bad at expressing and resolving disagreements, as opposed to conformity/mimicry or unproductive brawls; are pretty bad at taking risks and trying new things; are pretty bad at attending to argument structure, as opposed to status/authority/respectability.
We can build habits and group norms that make it a lot easier to avoid those problems, and to catch ourselves when we slip up. But this generally requires that people see past abstractions like "what I should do" and "what's normal to do" and "what's correct to do" and be able to observe and articulate what concrete things are going on in their head. A common thing that blocks this is that people feel like there's something silly or illegitimate or un-objective about reporting what's really going on inside their heads, so they feel a need to grasp at fake reasons that sound more normal/objective/impartial. Giving EAs/rationalists/etc. more social credit for something like "sovereignty", and giving them language for articulating this ideal, is one way of trying to fight back against this epistemically and instrumentally bad set of norms and mental habits.
Spencer Mulesky: Thanks!
Rob Bensinger: It might also help to give some random examples (with interesting interconnections) where I've found this helpful.
'I'm in a longstanding relationship that's turned sour. But I feel like I can't just leave (or make other changes to my life) because I'm not having enough fun / my life isn't satisfying as many values as I'd like; I feel like I need to find something objectively Bad my partner has done, so that I can feel Justified and Legitimate in leaving.' People often feel like they're not "allowed" to take radical action to improve their lives, because of others' seeming claims on their life.
A lot of the distinct issues raised on https://www.facebook.com/robbensinger/posts/10160749026995447, like Jessica Taylor's worry about using moral debt as a social lever to push people around. In my experience, this is not so dissimilar from the relationship case above; people think about their obligations in fuzzy ways that make it hard to see what they actually want and easy to get trapped by others' claims on their output.
People feel like they're being looked down on or shamed or insufficiently socially rewarded/incentivized/respected for things about how they're trying to do EA. Examples might include 'starting risky projects', 'applying to EA jobs', 'applying to non-EA jobs', 'earning to give', 'not earning to give', 'producing ideas that aren't perfectly vetted or write-ups that aren't perfectly polished'. (See e.g. the comments on https://www.facebook.com/robbensinger/posts/10161249846505447; or for the latter point, https://www.lesswrong.com/posts/7YG9zknYE8di9PuFg/epistemic-tenure and a bunch of other recent writings share a theme of 'highly productive intellectuals are feeling pressure to not say things publicly until they're super super confident of them').
Initial (naive) estimates of CFR are always overstated because of selection bias for the most serious cases. So our Bayesian prior should include a non-trivial proportion of asymptomatic cases. And ignoring this is why we routinely overestimate the severity of new outbreaks.
That's not to say that we don't need to be very concerned, but policymakers and public health officials need to be cautious about damaging credibility by repeatedly crying wolf. But the balance between avoiding alarm and ensuring sufficient response is a very difficult one.
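The selection-bias point above can be sketched with a toy simulation. All numbers here are invented purely for illustration, not real COVID-19 parameters: if severe cases are much more likely to be detected than mild ones, the naive CFR computed from detected cases alone comes out several times higher than the true fatality rate.

```python
import random

random.seed(0)

# Invented illustrative parameters, not real COVID-19 numbers:
N = 100_000             # total infections
TRUE_IFR = 0.01         # true infection fatality rate (1%)
P_DETECT_SEVERE = 0.9   # fatal/severe cases usually get counted...
P_DETECT_MILD = 0.05    # ...mild/asymptomatic cases rarely do

deaths = detected = detected_deaths = 0
for _ in range(N):
    dies = random.random() < TRUE_IFR
    p_detect = P_DETECT_SEVERE if dies else P_DETECT_MILD
    if random.random() < p_detect:
        detected += 1
        detected_deaths += dies
    deaths += dies

true_ifr = deaths / N
naive_cfr = detected_deaths / detected
print(f"true IFR: {true_ifr:.3f}")    # ~0.01
print(f"naive CFR: {naive_cfr:.3f}")  # ~0.15, several-fold overstated
```

The analytic version of the same point: with these made-up detection rates, naive CFR ≈ (0.01 × 0.9) / (0.01 × 0.9 + 0.99 × 0.05) ≈ 15%, roughly fifteen times the true rate, purely from which cases get counted.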
I definitely would not have read this as saying "we should be very concerned", if that's one of the things you meant to communicate.
I also followed the herd too much from expert circles, and my twitter feed from infectious disease epidemiology circles was behind even my slow self in recognizing that this was an incipient disaster back in March.
Woah, this is interesting and really alarming.
Because I was slow to move from the base-rate, I underestimated the severity of COVID-19 for too long. I'm unsure how to fix that, since most of the time it's the right move, and paying attention to every new event is very expensive in terms of mental energy. (Suggestions welcome!)
Zeynep Tufekci is an even clearer example. She’s a sociologist and journalist who was writing about how it was “our civic duty” to prepare for coronavirus as early as February. She was also the first mainstream media figure to spread the word that masks were probably helpful.
Totally at random today, reading a blog post on the Mongol Empire like all normal people do during a crisis, I stumbled across a different reference to Zeynep. In a 2014 article, she was sounding a warning about the Ebola epidemic that was going on at the time. She was saying the exact same things everyone is saying now – global institutions are failing, nobody understands exponential growth, travel restrictions could work early but won’t be enough if it breaks out. She quoted a CDC prediction that there could be a million cases by the end of 2014. “Let that sink in,” she wrote. “A million Ebola victims in just a few months.”
In fact, this didn’t happen. There were only about 30,000 cases. The virus never really made it out of Liberia, Sierra Leone, and Guinea.
I don’t count this as a failed prediction on Zeynep’s part. First of all, because it could have been precisely because of people like her sounding the alarm that the epidemic was successfully contained. But more important, it wasn’t really a prediction at all. Her point wasn’t that she definitely knew this Ebola pandemic was the one that would be really bad. Her point was that it might be, so we needed to prepare. She said the same thing when the coronavirus was just starting. If this were a game, her batting average would be 50%, but that’s the wrong framework.
Zeynep Tufekci is admirable. But her admirable skill isn’t looking at various epidemics and successfully predicting which ones will be bad and which ones will fizzle out. She can’t do that any better than anyone else. Her superpower is her ability to treat something as important even before she has incontrovertible evidence that it has to be.
The whole article seems worth reading, especially if it's true that epidemiologists under-reacted to this. It's clearly correct that most people shouldn't follow every pandemic closely -- even most epidemiologists shouldn't follow every pandemic closely. But it's important that we get the base level of alarm correct -- it might be correct to overreact somewhat to the vast majority of pandemics, if that's what it takes to avoid underreacting to the big one. And it's important that people be very explicit about how carefully they've been looking into this or that specific pandemic, so that we can collectively know which epidemiologists and other observers to pay the most attention to.
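The 'overreact to most pandemics' logic can be made concrete with a toy expected-value comparison. The probabilities and costs below are invented purely for illustration:

```python
# Invented numbers: suppose 1 in 10 scary-looking outbreaks turns
# catastrophic, preparing costs 1 unit each time, and being caught
# unprepared by the catastrophic one costs 100 units.
p_catastrophe = 0.1
cost_prepare = 1.0
cost_unprepared = 100.0

ev_always_prepare = -cost_prepare                    # pay 1 unit every outbreak
ev_never_prepare = -p_catastrophe * cost_unprepared  # lose 10 units on average

# "Always prepare" looks wrong 9 out of 10 times, yet dominates in expectation.
print(ev_always_prepare, ev_never_prepare)
```

On these numbers, the alarm-sounder 'fails' 90% of the time and is still making the right call every time; a batting-average scoreboard hides exactly the asymmetry that matters.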
Why should this have been obvious? Invasive mechanical ventilation is much more helpful for typical ARDS than for COVID-19-style ARDS and other COVID-19 dysfunction. What's the earliest evidence that should have strongly updated us in that direction?
I think on the whole the US' lockdown was pretty weak and had low compliance; I think compliance was to a large extent dependent on things already looking bad enough locally to feel dangerous, at which point it's too late to get case levels low with the duration and severity of lockdown people have been up for. ('Compliance' might even be the wrong word for it, since I think people were mostly just avoiding things based on how dangerous things looked to them personally, not based on any top-down rules or guidelines.)
World 3 doesn't strike me as a thing you can get in the critical period when AGI is a new technology. Worlds 1 and 2 sound approximately right to me, though the way I would say it is roughly: We can use math to better understand reasoning, and the process of doing this will likely improve our informal and heuristic descriptions of reasoning too, and will likely involve us recognizing that we were in some ways using the wrong high-level concepts to think about reasoning.
I haven't run the characterization above by any MIRI researchers, and different MIRI researchers have different models of how the world is likeliest to achieve aligned AGI. Also, I think it's generally hard to say what a process of getting less confused is likely to look like when you're still confused.
Commenting on the general case, rather than GPT-7 in particular: my background view on this kind of thing is that there are many different ways of reaching AGI in principle, and the vast majority of paths to AGI don't result in early-generation AGI systems being alignable in a reasonable amount of time. (Or they're too slow/limited/etc. and end up being irrelevant.)
The most likely (and also the most conservative) view is that (efficient, effective) alignability is a rare feature -- not necessarily a hard-to-achieve feature if you have a broad-strokes idea of what you're looking for and you spent the years leading up to AGI deliberately steering toward the alignable subspace of AI approaches, but still not one that you get for free.
I think your original Q is a good prompt to think about and discuss, but if we're meant to assume alignability, I want to emphasize that this is the kind of assumption that should probably always get explicitly flagged. Otherwise, for most approaches to AGI that weren't strongly filtered for alignability, answering 'How would you reduce risk (without destroying it)?' in real life will probably mostly be about convincing the project to never deploy, finding ways to redirect resources to other approaches, and reaching total confidence that the underlying ideas and code won't end up stolen or posted to arXiv.
We aren't working on decision theory in order to make sure that AGI systems are decision-theoretic, whatever that would involve. We're working on decision theory because there's a cluster of confusing issues here (e.g., counterfactuals, updatelessness, coordination) that represent a lot of holes or anomalies in our current best understanding of what high-quality reasoning is and how it works.
[...] The idea behind looking at (e.g.) counterfactual reasoning is that counterfactual reasoning is central to what we're talking about when we talk about "AGI," and going into the development process without a decent understanding of what counterfactual reasoning is and how it works means you'll be flying blind to a significantly greater extent when it comes to designing, inspecting, repairing, etc. your system. The goal is to be able to put AGI developers in a position where they can make advance plans and predictions, shoot for narrow design targets, and understand what they're doing well enough to avoid the kinds of kludgey, opaque, non-modular, etc. approaches that aren't really compatible with how secure or robust software is developed.
"The reason why I care about logical uncertainty and decision theory problems is something more like this: The whole AI problem can be thought of as a particular logical uncertainty problem, namely, the problem of taking a certain function f : Q → R and finding an input that makes the output large. To see this, let f be the function that takes the AI agent’s next action (encoded in Q) and determines how 'good' the universe is if the agent takes that action. The reason we need a principled theory of logical uncertainty is so that we can do function optimization, and the reason we need a principled decision theory is so we can pick the right version of the 'if the AI system takes that action...' function."
The work you use to get to AGI presumably won't look like probability theory, but it's still the case that you're building a system to do probabilistic reasoning, and understanding what probabilistic reasoning is is likely to be very valuable for doing that without relying on brute force and trial-and-error.
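A minimal sketch of the 'AI as function optimization' frame from the quoted passage, with a stand-in goodness function that I've made up purely for illustration:

```python
# f maps candidate actions to how "good" the outcome is if that action is
# taken. This stand-in f is purely illustrative; the quoted point is that
# real AGI design needs a principled account of evaluating and optimizing
# such a function under logical uncertainty, not brute force.
def f(action: int) -> float:
    return -(action - 3) ** 2  # peaks at action = 3

# Brute-force argmax over a tiny action space -- exactly the thing that
# doesn't scale, hence the need for theory.
best_action = max(range(10), key=f)
print(best_action)  # 3
```

The interesting part of the problem is everything this sketch assumes away: in the real case you can't enumerate the action space, and you can't evaluate f exactly, so you need principled logical uncertainty (to estimate f) and decision theory (to define the right f in the first place).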
[...] Eliezer adds: "I do also remark that there are multiple fixpoints in decision theory. CDT does not evolve into FDT but into a weirder system Son-of-CDT. So, as with utility functions, there are bits we want that the AI does not necessarily generate from self-improvement or local competence gains."
When I think about key distinctions and branching points in alignment, I usually think about things like:
Does the approach require human modeling? Lots of risks can be avoided if the system doesn't do human modeling, or if it only does small amounts of human modeling; but this constrains the options for value learning and learning-in-general.
Is the goal to make a task-directed AGI system, vs. an open-ended optimizer? When you say "there's a natural and historical relationship here with what was in the past termed 'seed AI', even if this is not an approach anyone is actively pursuing", it calls to mind for me the transition from MIRI thinking about open-ended optimizers to instead treating task AGI as the place to start.
Error-reductionism: the idea that errors are reducible, i.e., explainable in terms of causes and parts such as cognitive biases and bad micro-habits.
Error-reductionism: philosophical reductionism (the world is physical, decomposable, lawful, understandable, not inherently mysterious or magical) combined with an error theory about non-reductionist ideas. We have Bayesianism as a principled (reductive) account of science; we don't need to call Thor mean names like "meaningless" or say he's in a separate magisterium from science. We're allowed to say those ideas were just wrong. We learn about the world by looking at the world and seeing what stuff happens and what methods work — not by applying a priori definitions of "what hypotheses sound sciencey to me".
You say "the political sentiment of lesswrong" and "persuade on politics"; if we replace "politics" with "a model of world affairs" or "a view about the state of the world's main decisionmaking institutions" or the like, that changes my intuitive response to your comment a fair bit.
There are risks to talking about world affairs or the state of the US government on LW, and the risks may outweigh the benefits. But in a relatively utopian version of LW, at least, in a world where it was possible to do so without a bunch of bad side-effects, I think there would be a lot of curated "politics" content in the sense of "content that aids in understanding the current state of the world and its institutions", even though there are other interpretations of "politics" according to which politics doesn't belong on the LW front page.
In this utopian version of LW, I think some curated posts would focus on defending models, while others would focus on presenting new models for evaluation or summarizing previously-defended models.
(This abstract point seems more important to me than the question of whether Zvi's post in particular would be curated in utopian-LW.)
I think it's relevant that this isn't Zvi's first COVID-19 post on LW; it's his seventeenth, and he's been showing his work to an exceptional degree.
Since some important stuff isn't hyperlinked in the post itself, putting it all in a sequence (or the mods just picking out a few of the relevant past discussions of these topics and linking them in a pinned comment) might be helpful.
Sorry, those two weren't the only answers I imagined you might give, I just didn't want to make the comment longer before letting you respond.
My next guess was going to be that your objection was stylistic — that Zvi was using a lot of hyperbole and heated rhetoric that's a poor fit for curated LW content, even if a more boring and neutral version of the same core claims would be fine.
I think that's part of what's going on (in terms of why the two of us disagree). I think another part of what's going on is that I feel like I have good context for ~all the high-level generalizations and institutional criticisms Zvi is bringing in, and why one might hold such views, from reading previous Zvi-posts, reading lots of discussion of COVID-19 over the last few months, and generally being exposed to lots of rationalist and tech-contrarian-libertarian arguments over the years, such that it doesn't feel super confusing or novel as a package and I can focus more on particular parts I found interesting and novel. (Like critiques of particular organizations, or new lenses I can try out and see whether it causes a different set of actions and beliefs to feel reasonable/'natural', and if so whether those actions and beliefs seem good.)
This isn't to say that Zvi's necessarily right on all counts and you're wrong, and I think a discussion like this is exactly the way to properly bridge different people's contexts and priors about the world. And given the mix of 'this seems super wrong' and 'the style seems bad' and 'there aren't even hyperlinks I can use to figure out what Zvi means or where he's coming from', I get why you'd think this isn't curation-worthy content. I don't want to go down all the object-level discussion paths necessarily to reach consensus about this myself, though if someone else wants to, I'll be happy about that.
Jai Dhyani: This seems like an extremely overconfident prediction and I don't think it accurately reflects popular opinion regarding pandemic response.
Rob Bensinger: What are the main things you think Zvi's wrong about? What do you think will happen?
Jai Dhyani: A series of predictions to which I assign each individually 75%+ chance: Social distancing is going to remain popular. Reopening will continue at a slow and steady pace. Large indoor gatherings will continue to be mostly avoided. Continued increases in testing capacity will slow spread and dramatic outbreaks, if they happen, will trigger the return of more aggressive measures with popular support. Masks will range from commonplace to mandatory in potentially risky contexts, and this will significantly slow spread. Coronavirus will return to being the dominant news story by the fall. US infections will continue to increase, but slowly. We will not approach herd immunity in the US in 2020.
Would you feel similarly concerned about a hypothetical curated essay that instead said 'the WHO has done a reasonably good job and should have its funding increased' (in the course of a longer discussion almost entirely focused on other points) while providing just as little evidence?
If so, then I disagree: in a dialogue about world affairs with people I respect, where someone has thirty important and relevant beliefs but only has time to properly defend five of them, I'd usually rather that they at least mention a bunch of the other beliefs than that they stay silent. I think it's good for those unargued beliefs to then be critiqued and hashed out in the comments, but curation-level content shouldn't require that the author conceal their actual epistemic state just because they don't expect to be able to convince a skeptical reader.
If not, then I think I'd disagree a lot more strongly, and I'm a bit confused. Suppose we have a scale of General Institutional Virtue, where a given institution might deserve an 8/10, or a 5/10, or a 1/10. I don't see a reason to concentrate our prior on, say, 'most institutions are 8/10' or 'most institutions are 3/10'; claiming that something falls anywhere on the scale should warrant similar skepticism.
Perhaps the average person on the street thinks the WHO's Virtue is 7/10; but by default I don't think LessWrong should put much weight on popular opinion in evaluating arguments about institutional efficacy, so I don't think we should demand less evidence from someone echoing the popular view than from someone with a different view. (Though it does make sense to be more curious about an unpopular view than a popular one, because unpopular views are likely to reflect less widely known evidence.)
> SUMMARY: Scientists don't know whether high vitamin D intake is harmful when vitamin K intake is inadequate. Evidence suggests it might be a concern, but a definite conclusion cannot be reached at this point.
Of course, Covid may not care about this, and I don't know how to quantify the relative risks, but they should probably be acknowledged.
In response to this, I've deleted the section on Vitamin K, which was already really speculative. The deleted section is copied below:
Per 3G below, COVID-19 seems to be causing blood clots in many people. Vitamin K apparently has a large pro-clotting effect, so I recommend reducing intake of foods rich in vitamin K now, and cutting them out altogether for a few weeks if you think you may have recently contracted COVID-19. Cleveland Clinic says high-vitamin-K foods include:
Other vegetables are generally fine and great.
Eating healthy, exercising regularly, avoiding smoking, etc. also normally reduces clotting risk. (But excessive clotting caused by COVID-19 may not work like normal: the Washington Post says COVID-19 is causing strokes in young people "mostly without risk factors", and CNN suggests they often have "no past medical history" and have mild or otherwise-asymptomatic cases of COVID-19, all of which suggest the clotting issues may be disproportionately affecting healthy people, even as COVID-19's more typical symptoms disproportionately affect less healthy people.)
Mainly I want to make it easier to track when I retracted or modified something potentially decision-relevant. If something was deleted, I both note when that happened and (so people don't miss it) italicize all mentions below and put the original addition in angle brackets < >.
Feb. 27 — First deprecated post. Added: stocking up on supplies like 2+ weeks of food and water, surgical masks, and personal medications; printing out health records; washing hands more often, <for 20 seconds>, using medical protocol; avoiding touching eyes, nose, or mouth; not adjusting mask while wearing it; <never re-using masks>; going to the hospital only if really necessary; <signing up for cryonics> [but I do recommend this for many people]; <informing friends and family and making sure everyone can contact someone if they need help> [but I do recommend this]; minimizing exposure to crowded places and places like offices and grocery stores; using pedialyte powder, acetaminophen, <aspirin>, over-the-counter inhalers if you get sick; using finger pulse oximeter and going to hospital <if it gives a number below 92% at sea level>; using thermometer <as an earlier warning sign than the oximeter>; <worrying a lot about overwhelmed hospitals in the US>
Deleted: washing hands for 20 seconds
Mar. 15 — Second deprecated post. Added: Americans who haven't self-quarantined doing so ASAP; trying to minimize initial viral load if exposed; being wary of transmission even from people who aren't showing symptoms; being wary of people coughing, sneezing, talking<, or breathing>; <being wary of people ≤ 6 feet away> [but I do recommend this]; being wary of indoor interactions; wearing home-made masks or scarves if you don't have surgical masks; <using sleeve instead of hand if your eye itches>; <carrying a handkerchief for touching your nose or mouth without using your hands>; minimizing exposure to surfaces lots of people touch; not using hand sanitizer as a substitute for hand-washing; stocking up on 1+ months of non-perishable food; probably sanitizing package (with sunlight, 70% isopropyl alcohol, or many household cleaners) if it's easy or if you're unusually at risk; copper-taping commonly touched surfaces; <not using NSAIDs>; supplementing Vitamin D; eating well, sleeping well, exercising; using zinc if you may be getting sick; maybe acquiring chloroquine, though this is more speculative and risky and you should be sure to read https://docs.google.com/document/d/160RKDODAa-MTORfAqbuc25V8WDkLjqj4itMDyzBTpcc/edit; <noting average time from infection to symptom onset is 5 days>; short symptom overview; treating fever with fluids, baths (but not ice bath or cold bath), cool washcloths under armpits and in the groin area (not icepacks); going to hospital if finger pulse oximeter repeatedly gives numbers below ~90-94% at sea level
Deleted: using aspirin if you get sick; going to hospital if finger pulse oximeter gives a number below 92% at sea level; signing up for cryonics [but I do recommend this for many people]; informing friends and family and making sure everyone can contact someone if they need help [but I do recommend this]
Deleted but then re-added later: stocking up on water
Mar. 16 — Added: <not buying medical-grade masks since health care providers need as many as possible>
Deleted but then re-added later: buying medical-grade masks
Mar. 20 — Added: using mucinex and humidifier if you get sick
Mar. 28 — Google Doc and partial LessWrong mirror. Added: wearing face coverings in general; sanitizing face coverings as an alternative to throwing them away; disinfecting surfaces you touch a lot; running an air filter; <running an air purifier>; <quitting smoking> [but I do recommend this]; taking pseudoephedrine for sinus pressure; using oral thermometers (rather than skin-surface thermometers); maybe considering hydroxychloroquine as an alternative to chloroquine (but read https://docs.google.com/document/d/160RKDODAa-MTORfAqbuc25V8WDkLjqj4itMDyzBTpcc/edit first); maybe getting a home oxygen concentrator; testing thermometer and oximeter while healthy to get baseline numbers; figuring out who can help take care of you if you get sick; detailed symptom overviews; <using trouble breathing as an early warning sign for hypoxia>; <targeting ~94-96% PaO2 if using home oxygen concentrator>; being ready to go to hospital on very short notice, even if first-week symptoms seem mild; talking to a doctor over phone/video first; taking things to soothe throat and prevent coughing if sick; not trying to lower your fever unless it gets to 103°F or higher; considering postural drainage for severe symptoms; if sick and in an at-risk group, considering signing up for nearby clinical trials
Deleted: being wary of people breathing; never re-using masks
Mar. 31 — Added: lying on your front if you get sick
Deleted: using sleeve instead of hand if your eye itches; carrying a handkerchief for touching your nose or mouth without using your hands
Apr. 1 — Added: using UVC light to disinfect surfaces
Deleted: running an air purifier
Apr. 5 — Added: noting average time from infection to symptom onset is 7 days
Deleted: noting average time from infection to symptom onset is 5 days
Apr. 26 — Added: tips for homemade masks; stocking up on 1+ months of water; watching out for (possibly unconscious) unusually fast and deep breathing as an extremely serious warning COVID-19 sign, even if not accompanied by discomfort or other symptoms; using oximeter every few days even if you feel fine; using oximeter a lot and otherwise monitoring symptoms carefully if might have recently been exposed, or if you start showing lower oximeter readings, other "silent hypoxia" signs, or cold/flu symptoms, rather than relying on fever as the first warning sign
Deleted: using thermometer as an earlier warning sign than the oximeter; using trouble breathing as an early warning sign for hypoxia
Apr. 27 — Added: <keeping Vitamin K intake low/moderate>; avoiding motorcycle races and reproduction; watching for signs of heart attack, stroke, or pulmonary embolism even if you have no other symptoms and no known risk factors; <taking aspirin prophylactically if you get sick or may get sick soon>; maybe using home coagulation tests if sick
Apr. 29 — Added: noting Häagen-Dazs refreezes better than other ice cream brands
May 3 —
Deleted: taking aspirin prophylactically if you get sick or may get sick soon
May 8 — Added: noting Erin Bromage's "Successful Infection = Exposure to Virus x Time" guidelines; avoiding public restrooms; avoiding rooms where someone might have recently coughed, sneezed, or yelled
May 10 —
Deleted: worrying a lot about overwhelmed hospitals in the US
May 12 — Added: noting other forms of oxygen supplementation may be much better than invasive mechanical ventilation
Deleted: targeting ~94-96% PaO2 if using home oxygen concentrator
Jun. 2 — Added: focusing more on large-droplet transmission, less on aerosol or surface transmission; avoiding talking in public; being quiet and aiming down if forced to talk in public; avoiding facing nearby people in public; relying on physical models rather than the "6-Foot Rule"; prioritizing small reductions to larger risks over large reductions to very small risks; meal delivery being much safer than going grocery shopping; disinfecting delivered meals
Deleted: being wary of people ≤ 6 feet away [but I do recommend this]; not buying medical-grade masks since health care providers need as many as possible; not using NSAIDs; quitting smoking [but I do recommend this]
[Copy of the Apr. 26 section "Maybe stop taking ibuprofen/advil?", deleted from the main text Jun. 2. A still earlier version just said that some sources were warning about NSAIDs and to therefore avoid them out of an abundance of caution.]
Human pathogenic coronaviruses (severe acute respiratory syndrome coronavirus [SARS-CoV] and SARS-CoV-2) bind to their target cells through angiotensin-converting enzyme 2 (ACE2), which is expressed by epithelial cells of the lung, intestine, kidney, and blood vessels. The expression of ACE2 is substantially increased in patients with type 1 or type 2 diabetes, who are treated with ACE inhibitors and angiotensin II type-I receptor blockers (ARBs). Hypertension is also treated with ACE inhibitors and ARBs, which results in an upregulation of ACE2. ACE2 can also be increased by thiazolidinediones and ibuprofen. These data suggest that ACE2 expression is increased in diabetes and treatment with ACE inhibitors and ARBs increases ACE2 expression. Consequently, the increased expression of ACE2 would facilitate infection with COVID-19. We therefore hypothesise that diabetes and hypertension treatment with ACE2-stimulating drugs increases the risk of developing severe and fatal COVID-19.
Qiao et al. conclude that ibuprofen enhances ACE2 in diabetic rats. Qiao et al. is the only study I've seen on 'ibuprofen increases ACE2', and this claim is uncited in Fang et al. The ibuprofen-ACE2-COVID link doesn't seem to be widely known / accepted / cared about, based on the discussion on Science Translational Medicine (which argues increased ACE2 might reduce COVID-19 severity; see also the comment section) and Snopes. I also don't know whether I should expect other NSAIDs to interact with ACE2 in the same way as ibuprofen.
On Mar. 14, Samira Jeimy wrote: "In Germany and France, ICU physicians have noticed that the common thread amongst young patients needing #COVIDー19 related ICU admission is that they had been using NSAIDS (Advil, Motrin, Aleve, Aspirin)." She cites the Lancet paper and Day:
Scientists and senior doctors have backed claims by France’s health minister that people showing symptoms of covid-19 should use paracetamol (acetaminophen) rather than ibuprofen, a drug they said might exacerbate the condition.
The minister, Olivier Véran, tweeted on Saturday 14 March that people with suspected covid-19 should avoid anti-inflammatory drugs. “Taking anti-inflammatory drugs (ibuprofen, cortisone . . .) could be an aggravating factor for the infection. If you have a fever, take paracetamol,” he said.
His comments seem to have stemmed in part from remarks attributed to an infectious diseases doctor in south west France. She was reported to have cited four cases of young patients with covid-19 and no underlying health problems who went on to develop serious symptoms after using non-steroidal anti-inflammatory drugs (NSAIDs) in the early stage of their symptoms. The hospital posted a comment saying that public discussion of individual cases was inappropriate.
But Jean-Louis Montastruc, a professor of medical and clinical pharmacology at the Central University Hospital in Toulouse, said that such deleterious effects from NSAIDS would not be a surprise given that since 2019, on the advice of the National Agency for the Safety of Medicines and Health Products, French health workers have been told not to treat fever or infections with ibuprofen.
Experts in the UK backed this sentiment. Paul Little, a professor of primary care research at the University of Southampton, said that there was good evidence “that prolonged illness or the complications of respiratory infections may be more common when NSAIDs are used—both respiratory or septic complications and cardiovascular complications.”
He added, “The finding in two randomised trials that advice to use ibuprofen results in more severe illness or complications helps confirm that the association seen in observational studies is indeed likely to be causal. Advice to use paracetamol is also less likely to result in complications.”
Ian Jones, a professor of virology at the University of Reading, said that ibuprofen’s anti-inflammatory properties could “dampen down” the immune system, which could slow the recovery process. He added that it was likely, based on similarities between the new virus (SARS-CoV-2) and SARS I, that covid-19 reduces a key enzyme that part regulates the water and salt concentration in the blood and could contribute to the pneumonia seen in extreme cases. “Ibuprofen aggravates this, while paracetamol does not,” he said.
Charlotte Warren-Gash, associate professor of epidemiology at the London School of Hygiene and Tropical Medicine, said: “For covid-19, research is needed into the effects of specific NSAIDs among people with different underlying health conditions. In the meantime, for treating symptoms such as fever and sore throat, it seems sensible to stick to paracetamol as first choice.” [...]
In the UK, paracetamol would generally be preferred over non-steroidal anti-inflammatory drugs (“NSAIDS”) such as ibuprofen to relieve symptoms caused by infection such as fever. This is because, when taken according to the manufacturer’s and/or a health professional’s instructions in terms of timing and maximum dosage, it is less likely to cause side effects. Side effects associated with NSAIDs such as ibuprofen, especially if taken regularly for a prolonged period, are stomach irritation and stress on the kidneys, which can be more severe in people who already have stomach or kidney issues. It is not clear from the French Minister’s comments whether the advice given is generic ‘good practice’ guidance or specifically related to data emerging from cases of Covid-19 but this might become clear in due course.
So most sources seem to agree that acetaminophen is at least a bit better than ibuprofen for treating fever-causing illnesses in general; but there's confusion and disagreement about whether ibuprofen is unusually good or bad for COVID-19 in particular, and I don't get the sense using ibuprofen is widely seen as a terrible idea. Elizabeth van Nostrand writes, "France is recommending against NSAIDs and against ibuprofen in particular. I will be very surprised if that ends up being borne out (and WHO agrees with me)".
Overall, the evidence is such that I'm avoiding ibuprofen right now, but I wouldn't recommend going to huge lengths to avoid ibuprofen.
See 3E below on whether and when it's a good idea to manually reduce fevers at all.
According to the binary model established in the 1930s, droplets typically are classified as either (1) large globules of the Flüggian variety—arcing through the air like a tennis ball until gravity brings them down to Earth; or (2) smaller particles, less than five to 10 micrometers in diameter (roughly a 10th the width of a human hair), which drift lazily through the air as fine aerosols.
[...] Despite the passage of four months since the first known human cases of COVID-19, our public-health officials remain committed to policies that reflect no clear understanding as to whether it is one-off ballistic droplet payloads or clouds of fine aerosols that pose the greatest risk—or even how these two modes compare to the possibility of indirect infection through contaminated surfaces (known as “fomites”).
Gaining such an understanding is absolutely critical to the task of tailoring emerging public-health measures and workplace policies, because the process of policy optimization depends entirely on which mechanism (if any) is dominant:
1. If large droplets are found to be a dominant mode of transmission, then the expanded use of masks and social distancing is critical, because the threat will be understood as emerging from the ballistic droplet flight connected to sneezing, coughing, and laboured breathing. We would also be urged to speak softly, avoid “coughing, blowing and sneezing,” or exhibiting any kind of agitated respiratory state in public, and angle our mouths downward when speaking.
2. If lingering clouds of tiny aerosol droplets are found to be a dominant mode of transmission, on the other hand, then the focus on sneeze ballistics and the precise geometric delineation of social distancing protocols become somewhat less important—since particles that remain indefinitely suspended in an airborne state can travel over large distances through the normal processes of natural convection and gas diffusion. In this case, we would need to prioritize the use of outdoor spaces (where aerosols are more quickly swept away) and improve the ventilation of indoor spaces.
3. If contaminated surfaces are found to be a dominant mode of transmission, then we would need to continue, and even expand, our current practice of fastidiously washing hands following contact with store-bought items and other outside surfaces; as well as wiping down delivered items with bleach solution or other disinfectants.
Identified Super Spreader Events are Primarily Large Droplet Transmission
The article makes a strong case that in identified super spreader events [SSEs] the primary mode of transmission is large droplets. And that large droplets are spread in close proximity, by people talking (basically everything) or singing (several choir/singing practices) frequently or loudly, or laughing (many parties) and crying (funerals), or otherwise exhaling rapidly (e.g. the curling match) and so on.
There is a highly noticeable absence of SSEs that would suggest other transmission mechanisms. Subways and other public transit aren’t present, airplanes mostly aren’t present. Performances and showings of all kinds also aren’t present. Quiet work spaces aren’t present, loud ones (where you have to yell in people’s faces) do show up. University SSEs are not linked to classes (where essentially only the professor talks, mostly) but rather to socializing. [...]
Zvi argues that surfaces and small aerosolized droplets are unlikely to be major infection vectors for COVID-19. He discusses methods for avoiding large droplet transmission:
Large Droplets: Six Foot Rule is Understandable, But Also Obvious Nonsense
For large droplets, there is essentially zero messaging about angling downwards or avoiding physical actions that would expel more droplets, or avoiding being in the direct path of other people’s potential droplets.
Instead, we have been told to keep a distance of six feet from other people. We’ve told them that six feet apart is safe, and five feet apart is unsafe. Because the virus can only travel six feet.
That’s obvious nonsense. It is very clear that droplets can go much farther than six feet. Even more than that, the concept of a boolean risk function [i.e., one that sharply divides everything into either "risky" or "risk-free", with no shades of grey] is insane. People expel virus at different velocities, from different heights, under different wind conditions and so on. The physics of each situation will differ. The closer you are, the more risk.
Intuitively it makes sense to think about something like an inverse square law until proven otherwise, so six feet away is about 3% of the risk of one foot away. That’s definitely not right, but it’s the guess I feel comfortable operating with.
Alas, that’s not the message. The message is 72 inches safe, 71 inches unsafe.
Unlike the previous case of obvious nonsense, there is a reasonable justification for this one. I am sympathetic. You get about five words. “Always stay six feet apart” is a pretty good five words. There might not be a better one. Six feet is a distance that you can plausibly mandate and still allow conversations and lines that are moderately sane, so it’s a reasonable compromise.
It’s a lie. It’s not real. As a pragmatic choice, it’s not bad.
The problem is it is being treated as literally real.
Joe Biden and Bernie Sanders met on a debate stage. The diagram plans had them exactly six feet apart.
In an article, someone invites the author, a reporter, to their house to chat. Says he’s prepared two chairs, six feet apart. “I measured them myself,” he says. [...]
And so on. People really are trying to make the distance exactly six feet as often as possible.
[...] This is society sacrificing bandwidth to get a message across. Again, I get it. The problem is we are also sacrificing any ability to convey nuance. We are incapable, after making this sacrifice, of telling people there is a physical world they might want to think about how to optimize. There is only a rule from on high, The Rule of Six Feet.
Thus, we may never be able to get people to talk softly and into the ground, rather than looking directly at each other and speaking loudly and forcefully to ‘make up for’ the exact six-foot distance, which happens to be the worst possible orientation that isn’t closer than six feet.
In theory, we can go beyond this. You get infected because droplets from an infected person travel out of their face and touch your face.
Thus, a line is remarkably safe if everyone faces the same way, modulo any strong winds. The person behind you has no vector to get to your face. And we can extend that. We can have one sidewalk where people walk north, and another on the other side of the street where people walk south. If you see someone approaching from the other direction, turn around and walk backwards while they ensure the two of you don’t collide. If necessary, stand in place for that reason. Either way, it should help – if this is the mechanism we are worried about.
[...] Yes, it’s annoying to not face other people, but you absolutely can have a conversation while facing away from each other. It’s a small price to pay.
In similar fashion, it seems a small price to pay to shut the hell up whenever possible, while out in public. Talking at all, when around those outside your household, can be considered harmful and kept to a bare minimum outright (and also it should be done while facing no one).
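Zvi’s inverse-square guess above can be made concrete in a few lines of Python. This is a back-of-the-envelope heuristic per the quoted text, not a validated epidemiological model, and the function name and one-foot reference point are my own framing:

```python
def relative_risk(distance_ft: float, reference_ft: float = 1.0) -> float:
    """Rough inverse-square heuristic for large-droplet risk:
    risk scales with 1 / distance^2 relative to a reference distance.
    A guess to operate with, per the quoted text, not a physical model."""
    return (reference_ft / distance_ft) ** 2

print(relative_risk(6.0))  # ~0.028: six feet away is ~3% of the one-foot risk
print(relative_risk(5.0))  # 0.04: five feet is somewhat riskier, not categorically "unsafe"
```

The point of the sketch is the shape of the function: risk falls off smoothly with distance, with no cliff at 72 inches.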
Zvi emphasizes that there’s much more benefit to slightly reducing the risk from the largest infection sources (including large droplets as a category), than from hugely reducing the risk from fairly unlikely infection sources:
Focus Only On What Matters
[...] Within those big risks, small changes matter. They matter more than avoiding small risks entirely.
A single social event, like a funeral, birthday party or wedding, might well by default give any given person a 30%+ rate to infect any given other person at that event if the event is small, and a reasonably big one even if large. You only need one. Keeping slightly more distance, speaking slightly less loudly, and so on, at one such event, is a big risk reduction. [...]
Whereas a ‘close contact’ that doesn’t involve talking or close interaction probably gives more like (spitballing a guess, but based on various things) an 0.03% rate of infection if the other person is positive, and likely with a lower resulting viral load. Certainly those contacts add up, but not that fast. Thus, a subway car full of “close contact” might give you 10 of them per day, most of whom are not, at any given time, infectious. If this model is correct.
[...] Slight reductions in the frequency and severity of your very risky actions is much more important than reducing the frequency of nominally risky actions.
The few times you end up talking directly with someone in the course of business, the one social gathering you attend, the one overly crowded store you had to walk through, will dominate your risk profile. Be paranoid about that, and think how to make it less risky, or ideally avoid it. Don’t sweat the small stuff.
And think about the physical world and what’s actually happening around you!
My best guess is there is something like 5-10 times as much risk indoors versus the same activity outdoors. [...]
The combination of quick and outdoors and not-in-your-face probably effectively adds up to safe, especially if you add in masks. During the peak epidemic in New York things were so intense that it would have been reasonable to worry about miasma. Now, I would do my best to keep my distance and avoid talking at each other, but mostly not worry about incidental interactions.
I do expect there to be a spike in cases as the result of protests and civil unrest. To not see one would be surprising, and would update me in favor of outdoor activities being almost entirely harmless.
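The arithmetic behind Zvi’s “focus on what matters” point can be sketched with a standard independent-events calculation. The function name is mine, and the inputs are Zvi’s spitballed guesses (a ~0.03% per-contact risk, a ~30% per-event risk), so treat the outputs as illustrations of relative magnitudes only:

```python
def p_any_infection(per_event_risk: float, n_events: int) -> float:
    """Chance of at least one infection across n independent exposures,
    each carrying per_event_risk. Independence is a simplification."""
    return 1.0 - (1.0 - per_event_risk) ** n_events

# Ten subway-style "close contacts" a day at ~0.03% each, versus
# one small social gathering at ~30% per attendee:
print(p_any_infection(0.0003, 10))  # ~0.003: roughly 0.3% per day
print(p_any_infection(0.30, 1))     # 0.30: the single event dominates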
2C. If you do need to be around people, wear something over your mouth and nose.
[Deleted the advice for individuals not to buy medical-grade masks because it doesn't currently seem like good advice (and I'm not particularly convinced it was ever good advice).]
Zvi Mowshowitz writes, “[E]ven cloth masks on both ends of an interaction are almost certainly good for a 25% reduction in risk and probably 50%-75%.”
It’s been months. We don’t have concrete examples of infection via surfaces. At all. It increasingly seems like while such a route is possible, and must occasionally happen, getting enough virus to cause an infection, in a live state, via this route, is very hard. When you wash your hands and don’t touch your face, it’s even harder than that.
Meanwhile, those who refuse to touch surfaces like a pizza delivery box end up in more crowded locations like grocery stores, resulting in orders of magnitude more overall risk.
[...] Until I get very unexpected evidence, surfaces are mostly not a thing anymore. If lots of people touch stuff and then you touch it, sure, wash your hands after and be extra careful to not touch your face in the interim. Otherwise, stop worrying about it.
[... Food] is at most minimally risky, even if it doesn’t get heated enough to reliably and fully kill the virus. You don’t have to ruin all your food. People are often avoiding foods that seem risky. Once again, it makes sense that it could be risky, but in practice it’s been months and it does not seem to work that way. The precautions people are taking will incidentally be more than good enough to guard against contamination of food at sufficient levels to be worth worrying about. I mean, sure, don’t eat at a buffet, but it’s not like any of them are going to be open, and even then the (also mostly safe) surfaces are likely scarier than the food.
[...] Your risk is from the waiter, or from the other diners, being in that room with you for a while. Thus, takeout, delivery and/or eating outdoors.
I agree with Zvi that it seems increasingly likely that surface transmission is rare, though he seems to be wrong that there are no examples (see comments), and I haven’t seen a clear argument for whether the number of COVID-19 cases caused by surface transmission is closer to 1/10 of all cases versus, say, 1/10,000. Given my own circumstances, I’m likely to do things like “order delivery pizza” more often in the weeks to come, but I’ll also likely make use of Yao Lu’s tips while infections are still commonplace in my part of the US:
I’m a chemo nurse, this is what I tell my high-risk patients:
I personally don’t trust takeout that much because I think a lot of restaurant workers don’t have sick leave, so it’s more likely your food was prepared by someone symptomatic. But you can cut the risk to near zero by doing this:
1. Wash your hands well
2. Put your own bowl on your kitchen counter
3. Pick up the restaurant container, and pour the food into your own bowl
4. Throw away the restaurant container
5. Wash your hands well
6. Thoroughly heat up the food. (at least 70C for a minute, or whatever the best current guideline says)
If you do this, in this order, you are extremely safe even if someone coughed viruses all over the food and the container. Heat would kill the virus, and handwashing would prevent indirect transmission from the bag/container.
[Deleted this section because the post is getting long and this seems messy and uncertain. I'll reproduce the Apr. 26 version of the section in a reply to this comment for when I want to link directly to it.]
2J. Run an air filter.
[Provisionally deleted "Also, quit smoking." because I haven't been keeping up with the debate about cigarettes or nicotine.]
3A. Prepare in advance.
Plan in advance what hospital you’ll go to if necessary, and be ready to call a doctor if you have troubling symptoms. Zvi Mowshowitz writes:
Medical care matters [for fatality rates]. Total breakdown of medical care in practice leads to several times the fatality rate under regular circumstances. High quality treatment at current knowledge levels can probably drive death rates down further, so the ratio between full success and complete breakdown can be rather large – something like an order-of-magnitude difference between 0.2% and 2%.
I don’t have a strong opinion at this point about particular medical treatments beyond the above.
3G. Monitor for clotting problems.
Jim Babcock said on Apr. 28,
My first-pass literature review turned up some claimed mechanisms by which platelets and clotting may serve an immune purpose. I don't know if that's what happening here, but there's a possibility that this works like fever reduction: helpful in extreme cases, bad in minor cases and early in the progression.
Low-dose heparin seems to be common hospital protocol now, so data should be forthcoming for that scenario. I don't know what recommendation to give to minor cases self-treating at home, though.
[I meant to add the above before, but forgot about it. This is part of why I withdrew the recommendation that people take aspirin at home whenever they start showing COVID-19 symptoms.]
Yeah, I've seen that photo before; I'm glad we have a record of this kind of thing! It doesn't cause me to think that the thing I said in 2017 was false, though it suggests to me that most FHI staff overall in 2014 (like most 80K staff in 2017) probably would have assigned <10% probability to AGI-caused extinction (assuming there weren't a bunch of FHI staff thinking "AGI is a lot more likely to cause non-extinction existential catastrophes" and/or "AGI has a decent chance of destroying the world, but we definitely won't reach AGI this century").
In September 2017, based on some conversations with MIRI and non-MIRI folks, I wrote:
I think that at least 80% of the AI safety researchers at MIRI, FHI, CHAI, OpenAI, and DeepMind would currently assign a >10% probability to this claim: "The research community will fail to solve one or more technical AI safety problems, and as a consequence there will be a permanent and drastic reduction in the amount of value in our future."
People may have become more optimistic since then, but most people falling in the 1-10% range would still surprise me a lot. (Even excluding MIRI people, whose current probabilities I don't know but who I think of as skewing pessimistic compared to other orgs.)
80,000 Hours, based on interviews with multiple AI risk researchers, said "We estimate that the risk of a serious catastrophe caused by machine intelligence within the next 100 years is between 1 and 10%."
I complained to 80K about this back in 2017 too! :) I think 1-10% here was primarily meant to represent the median view of the 80,000 Hours team (or something to that effect), not the median view of AGI safety researchers. (Though obviously 80,000 Hours spent tons of time talking to safety researchers and taking those views into account. I just want to distinguish "this is our median view after talking to experts" from "this is our attempt to summarize experts' median view after talking to experts".)
1. Bay Area lockdown (eg restaurants closed) will be extended beyond June 15: 60%
I sold to 40%.
Since this section is called "Prediction Updates", I initially thought you were saying you'd predicted 60% on May 1, and are now predicting 40%. You may want to clarify that 60% was Scott's April 29 prediction, and 40% was your May 1 prediction.
It’s been months. We don’t have concrete examples of infection via surfaces. At all. It increasingly seems like while such a route is possible, and must occasionally happen, getting enough virus to cause an infection, in a live state, via this route, is very hard.
On April 23 I wrote on Facebook:
For most things, I'm just leaving it to air out for 3+ days. If I need to put it in the fridge or freezer, I disinfect it first.
My gestalt amateur sense is that surface transmission from people you aren't spending a bunch of in-person time with seems really low. (Do we even know of any clear examples of this, for COVID-19?) But I'm in a dusty/moldy house in a remote area with no nearby ICUs, so I'd rather err on the side of caution.
(The dust thing seems less important to me now that I'm thinking of COVID-19 as disproportionately endangering people who have cardiovascular issues, not so much people who have respiratory issues.)
Eloise Rosen replied:
> Do we even know of any clear examples of this, for COVID-19?
"A woman aged 55 years (patient A1) and a man aged 56 years (patient A2) were tourists from Wuhan, China, who arrived in Singapore on January 19. They visited a local church the same day and had symptom onset on January 22 (patient A1) and January 24 (patient A2). Three other persons, a man aged 53 years (patient A3), a woman aged 39 years (patient A4), and a woman aged 52 years (patient A5) attended the same church that day and subsequently developed symptoms on January 23, January 30, and February 3, respectively. *Patient A5 occupied the same seat in the church that patients A1 and A2 had occupied earlier that day (captured by closed-circuit camera)* (5). Investigations of other attendees did not reveal any other symptomatic persons who attended the church that day."
I haven't seen other examples, though, so I remain skeptical that surface transmission is a big deal. Erin Bromage previously claimed that the South Korean call center outbreak occurred "roughly 6% from fomite transfer" but then retracted the claim; not sure what happened there.
How socially isolated was your new housemate in their previous living arrangement? And how hard are you trying not to catch it? I'm thinking about how to minimize risk while swapping housemates in the Boston area myself, and I notice I interpret your example pretty differently if (e.g.) I imagine 'the housemate was super isolated vs. the housemate was living with an essential worker' or 'we're living with a diabetic and are trying to be ultra-cautious, vs. we're all low-risk and aren't personally worried about long-term complications if we get sick'.
To get infected, you plausibly need to be exposed to ~1000+ SARS-CoV-2 viral particles (source) — either all at once, or over minutes or hours. The more viral particles are present, and the more time you spend exposed to them, the greater your infection risk.
“[T]he droplets in a single cough or sneeze may contain as many as 200,000,000 virus particles.” A single cough releases ~3000 (mostly large) droplets traveling at 50 mph (source). A single sneeze releases ~30,000 (mostly small) droplets traveling up to 200 mph (source). Smaller particles hang in the air longer. If someone sneezes or coughs, "even if that cough or sneeze was not directed at you, some infected droplets — the smallest of small — can hang in the air for a few minutes, filling every corner of a modest sized room with infectious viral particles. All you have to do is enter that room within a few minutes of the cough/sneeze and take a few breaths and you have potentially received enough virus to establish an infection."
In contrast, "a single breath releases 50 - 5000 droplets", most of which fall to the ground quickly; nose-breathing releases even fewer droplets (source). "We don't have a number for SARS-CoV-2 yet, but we [...] know that a person infected with influenza releases about 3-20 virus RNA copies per minute of breathing" (source). This suggests that if your only exposure to an infected person is via them silently breathing on the other side of a room, it will probably take an hour or longer for them to infect you.
Speaking releases “~200 copies of virus per minute. Again, [pessimistically] assuming every virus is inhaled, it would take ~5 minutes of speaking face-to-face to receive the required dose” (source).
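The exposure-time arithmetic in the last few paragraphs reduces to a single division. This sketch uses my own function name; the ~1000-particle dose is an order-of-magnitude guess per the sources quoted, and the "every particle inhaled" assumption is deliberately pessimistic:

```python
def minutes_to_infectious_dose(particles_per_minute: float,
                               infectious_dose: float = 1000.0) -> float:
    """Pessimistic time-to-infection estimate: assumes every emitted
    particle is inhaled. The default 1000-particle dose is an
    order-of-magnitude guess, not an established number."""
    return infectious_dose / particles_per_minute

print(minutes_to_infectious_dose(200))  # speaking face-to-face: ~5 minutes
print(minutes_to_infectious_dose(20))   # breathing, high-end estimate: ~50 minutes
```

Since the dose estimate spans an order of magnitude, these times should be read as "minutes versus an hour," not as precise thresholds.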
A toilet flush aerosolizes droplets (which might contain viable virus), so “treat public bathrooms with extra caution (surface and air)” (source).
“[P]lease don't forget surfaces. Those infected respiratory droplets land somewhere. Wash your hands often and stop touching your face!”
“We know that at least 44% of all infections — and the majority of community-acquired transmissions — occur from people without any symptoms (asymptomatic or pre-symptomatic people) (source). You can be shedding the virus into the environment for up to 5 days before symptoms begin. [...] Viral load generally builds up to the point where the person becomes symptomatic. So just prior to symptoms showing, you are releasing the most virus into the environment.”
“The main sources for infection are home, workplace, public transport, social gatherings, and restaurants. This accounts for 90% of all transmission events. In contrast, outbreaks spread from shopping appear to be responsible for a small percentage of traced infections.” (source). “The biggest outbreaks are in [nursing homes,] prisons, religious ceremonies, and workplaces, such [as] meat packing facilities and call centers.” Outbreaks seem to happen disproportionately often in colder indoor environments, and at larger and more social gatherings like weddings, funerals, birthdays, and networking events.
“Indoor spaces, with limited air exchange or recycled air and lots of people, are concerning from a transmission standpoint. We know that 60 people in a volleyball court-sized room (choir) results in massive infections. Same situation with the restaurant and the call center. Social distancing guidelines don't hold in indoor spaces where you spend a lot of time, as people on the opposite side of the room were infected.
“The principle is viral exposure over an extended period of time. In all these cases, people were exposed to the virus in the air for a prolonged period (hours). Even if they were 50 feet away (choir or call center), even a low dose of the virus in the air reaching them, over a sustained period, was enough to cause infection and in some cases, death.
“Social distancing rules are really to protect you with brief exposures or outdoor exposures. In these situations there is not enough time to achieve the infectious viral load when you are standing 6 feet apart or where wind and the infinite outdoor space for viral dilution reduces viral load. The effects of sunlight, heat, and humidity on viral survival, all serve to minimize the risk to everyone when outside.”
You shouldn’t worry especially about “[brief visits to] grocery stores, bike rides, inconsiderate runners who are not wearing masks”. “[F]or a person shopping: the low density, high air volume of the store, along with the restricted time you spend in the store, means that the opportunity to receive an infectious dose is low.” If you have to work in a grocery store, or spend lots of time in an office or classroom — especially one with more people sharing the same space and/or air, or one that requires "face-to-face talking or even worse, yelling" — you should be much more worried.
Bromage says grocery stores aren't "places of concern", but I gather he means they’re relatively safe if you’re keeping a good distance from everyone, going when the store is pretty empty, etc. If a single cough from someone a few feet away who isn’t facing in my direction can give me COVID-19 within a few seconds, that still seems “concerning” to me!
Additionally, Jim Babcock comments on “infections while shopping appear to be responsible for 3-5% of infections”:
The source for this covers Ningbo from January 21 to March 6. My main worry, when looking at this number, is that Ningbo's mitigations may have been more effective for stores than they were for other places, in ways that don't generalize to the US. For example, I'm pretty sure they would have been screening people for fever on entry, and enforcing mask usage. I haven't been hearing of fever-screens in Berkeley (though I haven't really been out of the house), and while we do now have a mask ordinance, it's mostly cloth masks (which are less effective) and compliance doesn't seem to be very good.
The specific number for the ~1000 virus-particle claim is cited to a pretty sketchy source; it leads to a couple epidemiologists speculating with no data, and what they actually say is:
> "The actual minimum number varies between different viruses and we don’t yet know what that ‘minimum infectious dose’ is for COVID-19, but we might presume it’s around a hundred virus particles."
> "For many bacterial and viral pathogens we have a general idea of the minimal infective dose but because SARS-CoV-2 is a new pathogen we lack data. For SARS, the infective dose in mouse models was only a few hundred viral particles. It thus seems likely that we need to breathe in something like a few hundred or thousands of SARS-CoV-2 particles to develop symptoms. This would be a relatively low infective dose and could explain why the virus is spreading relatively efficiently."
So there's about an order of magnitude of uncertainty here. On the other hand, the broader claim (that exposure size matters) is almost certainly true, and the implications of the specific number 1000 are mostly screened off by more specific observations of where people are actually getting infected.
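The reasoning here, that infection requires accumulating an infectious dose, so risk scales roughly with exposure rate times time, can be sketched numerically. The numbers below, including the per-minute exposure rate and the ~1000-particle threshold, are illustrative assumptions for the sake of the sketch, not measurements:

```python
# Illustrative sketch of the "dose ~ exposure rate x time" reasoning.
# All numbers here are assumptions for illustration, not measured values.

ASSUMED_INFECTIOUS_DOSE = 1000  # particles; uncertain by roughly 10x either way

def inhaled_dose(particles_per_minute: float, minutes: float) -> float:
    """Total particles inhaled at a constant exposure rate."""
    return particles_per_minute * minutes

# A short, distanced errand vs. a long shift sharing air with an infected person.
trip = inhaled_dose(particles_per_minute=5, minutes=10)    # 50 particles
shift = inhaled_dose(particles_per_minute=5, minutes=480)  # 2400 particles

for label, dose in [("10-minute grocery trip", trip), ("8-hour shared-air shift", shift)]:
    status = "over" if dose >= ASSUMED_INFECTIOUS_DOSE else "under"
    print(f"{label}: ~{dose:.0f} particles ({status} the assumed threshold)")
```

Even if the threshold is off by a factor of ten in either direction, the time term dominates: the long-duration scenario exceeds the short one by 48x, which is why the exact number matters less than where people spend sustained time.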
Bromage has since softened the grocery store claim to “In contrast, outbreaks spread from shopping appear to be responsible for a small percentage of traced infections”, and now cites two additional studies for the 1000-particle claim: 1, 2.
2I. Consume 2,000-6,000 IU of Vitamin D daily, in the morning.
From Jim Babcock:
Wikipedia summarizes https://pubmed.ncbi.nlm.nih.gov/21419266/ as "vitamin D functions to activate the innate and dampen the adaptive immune systems". Assuming that's true (I haven't verified it, and vitamin D is a subject known for attracting sketchy claims), then deficiency would lower the minimum infectious dose. On a population scale, this would be a better explanation for the infections-latitude correlation than temperature is, and would suggest that mass-distributing vitamin D supplements would be a good R-lowering strategy.
3D. Start monitoring your oxygen more often at the smallest warning sign.
[Removed or de-emphasized information suggesting hospitals will be overrun, which never really happened in the US and probably never will. This includes cutting the long Connor Flexman comment, though I still link it in the context of home oxygen concentrators.]
I now think “respiratory involvement... is the pathway by which it kills” is wrong, or at least very incomplete. I’m updating toward thinking of COVID-19 as a vascular or clotting disease at least as much as a respiratory one. Asthma isn’t a major risk factor for COVID-19 death; age, obesity, diabetes, heart disease, and hypertension are.
In the initial days of the outbreak, most efforts focused on the lungs. SARS-CoV-2 infects both the upper and lower respiratory tracts, eventually working its way deep into the lungs, filling tiny air sacs with cells and fluid that choke off the flow of oxygen.
But many scientists have come to believe that much of the disease’s devastation comes from two intertwined causes. The first is the harm the virus wreaks on blood vessels, leading to clots that can range from microscopic to sizable. [...] The second is an exaggerated response from the body’s own immune system, a storm of killer “cytokines” that attack the body’s own cells along with the virus as it seeks to defend the body from an invader.
[...] “What this virus does is it starts as a viral infection and becomes a more global disturbance to the immune system and blood vessels — and what kills is exactly that,” Mehra said. “Our hypothesis is that covid-19 begins as a respiratory virus and kills as a cardiovascular virus.”
[...] ACE2 receptors, which help regulate blood pressure, are plentiful in the lungs, kidneys and intestines — organs hit hard by the pathogen in many patients. That also may be why high blood pressure has emerged as one of the most common preexisting conditions in people who become severely ill with covid-19.
[...] pathologists who did autopsies on these 21 people who died of #COVID19 think lung damage & blood clots in the smallest blood vessels (capillaries) of the lungs were the major cause of death. They found clots even in [patients] on blood thinners, which should’ve prevented them.
ACE2 is expressed on endothelial cells lining blood vessels. If you get bad viremia the inner sheath of blood vessels, especially in heavily infected organs, probably just gets all messed up.
[...] The virus may be causing abnormal inflammation and a whole-body, but especially concentrated in the lungs, hyper-coagulable state that is triggering microscopic blood clots in the lungs that are one of the main contributors to morbidity and mortality and ineffectiveness of ventilation.
[...] This hyper-coagulable state might explain the reports of anomalously low oxygen measurements in people that would ordinarily indicate death or [unconsciousness]. They might have small clots in the finger the sensor is on triggering temporary sporadic low blood flow. It also could explain more of the fact that ventilators are less useful than they thought - some people going on them probably didn't actually need them.
[...] Additionally, there are two bits of immunology that explain parts of this virus's behavior and suggest ways of hurting it. First, the virus evolved in bats in which the interferon response is on an absolute hair trigger, and accordingly in human cells it almost completely escapes the interferon response. This allows it to replicate to absurd viral loads before the immune system notices it, explaining the extreme infectiousness shortly before symptoms develop. Then when the immune system notices it, it goes all out on a huge viral infection, triggering an inflammatory response that is all out of whack and can do a lot of damage. This means that it is vulnerable to inhaled interferon pretreatment (https://www.biorxiv.org/content/10.1101/2020.03.07.982264v1). On top of this, it may be that anything that reduces the replication of the virus in this period before the adaptive immune system mounts a robust response could reduce the probability of progression to severe disease. If antivirals work out or if chloroquine is effective (given the biochemistry I am very hopeful!), they will probably be most effective early via reducing the fraction of patients that progress to severe disease.
Second, there is evidence that the virus is able to enter and destroy (but not replicate within) T-cells using the same receptor it uses everywhere else, triggering immune suppression and altering the inflammatory profile (https://www.nature.com/articles/s41423-020-0424-9). It lacks HIV's obscene dirty tricks and isn't actually replicating within them, so this would be a temporary thing until recovery.
[...] The new analysis [... suggests that] unusual features of the disease can make mechanical ventilation harmful to the lungs.
[...] “In our personal experience, hypoxemia … is often remarkably well tolerated by Covid-19 patients,” the researchers wrote, in particular by those under 60. “The trigger for intubation should, within certain limits, probably not be based on hypoxemia but more on respiratory distress and fatigue.”
Absent clear distress, they say, blood oxygen levels of coronavirus patients don’t need to be raised above 88%, a much lower goal than in other causes of pneumonia.
[...] Covid-19 affects the lungs differently than other causes of severe pneumonia or acute respiratory distress syndrome, the researchers point out, confirming what physicians around the world are starting to realize.
For one thing, the thick mucus-like coating on the lungs developed by many Covid-19 patients impedes the lungs from taking up the delivered oxygen.
For another, unlike in other pneumonias the areas of lung damage in Covid-19 can sit right next to healthy tissue, which is elastic. Forcing oxygen-enriched air (in some cases, 100% oxygen) into elastic tissue at high pressure and in large volumes can cause leaks, pulmonary edema (swelling), and inflammation, among other damage, contributing to “ventilator-induced injury and increased mortality” in Covid-19, the researchers wrote.
[...] There is a growing recognition that some Covid-19 patients, even those with severe disease as shown by the extent of lung infection, can be safely treated with simple nose prongs or face masks that deliver oxygen. The latter include CPAP (continuous positive airway pressure) masks used for sleep apnea, or BiPAP (bi-phasic positive airway pressure) masks used for congestive heart failure and other serious conditions. CPAP can also be delivered via hoods or helmets, reducing the risk that patients will expel large quantities of virus into the air and endanger health care workers.
[...] “We use CPAP a lot, and it works well, especially in combination with having patients lie prone,” Schultz said.
[...] Rotating memories and other electromechanical computing devices have been fully replaced with electronic devices. Three-dimensional nanotube lattices are now a prevalent form of computing circuitry.
The majority of "computes" of computers are now devoted to massively parallel neural nets and genetic algorithms.
Significant progress has been made in the scanning-based reverse engineering of the human brain. It is now fully recognized that the brain comprises many specialized regions, each with its own topology and architecture of interneuronal connections. The massively parallel algorithms are beginning to be understood, and these results have been applied to the design of machine-based neural nets. It is recognized that the human genetic code does not specify the precise interneuronal wiring of any of the regions, but rather sets up a rapid evolutionary process in which connections are established and fight for survival. The standard process for wiring machine-based neural nets uses a similar genetic evolutionary algorithm. [...]