Covid 6/17: One Last Scare
post by Zvi · 2021-06-17T18:40:01.502Z · LW · GW · 42 comments
The one last scare, from America’s perspective, is the Delta variant. If we can remain stable or improving once Delta takes over, then barring another even more infectious variant, we’ve won. If and where we can’t do that, it’s not over.
That’s the question. How much Delta is out there already, how much worse is it, and will that be enough to undo our work? If it is, how long until further vaccinations can turn things around again?
Before I look into that, let’s run the numbers.
The Numbers
Predictions
Prediction from last week: Positivity rate of 1.8% (down 0.2%), deaths fall by 9%.
Results: Positivity rate of 1.9% (up 0.1%), deaths fall by 20%.
Prediction: Positivity rate of 1.8% (down 0.1%), deaths fall by 9%.
I think the deaths drop likely reflects variance in reporting, so while it is definitely good news I do not want to predict a further large drop from a number that is likely to be somewhat artificially low. For positivity rate, we seem to have shifted to a regime of rapidly declining numbers of tests, especially in areas with declining infection rates. Thus, it’s quite possible that the positive test rate will stop reflecting the state of the pandemic. I still expect the number to keep dropping a bit, but it wouldn’t be that surprising if it stabilized around 2% from here on in with improvements reflected mainly elsewhere.
Deaths
Date | WEST | MIDWEST | SOUTH | NORTHEAST | TOTAL |
May 6-May 12 | 826 | 1069 | 1392 | 855 | 4142 |
May 13-May 19 | 592 | 1194 | 1277 | 811 | 3874 |
May 20-May 26 | 615 | 948 | 1279 | 631 | 3473 |
May 27-June 2 | 527 | 838 | 1170 | 456 | 2991 |
June 3-June 9 | 720 | 817 | 915 | 431 | 2883 |
Jun 10-Jun 16 | 368 | 611 | 961 | 314 | 2254 |
It seems like we can safely say that last week’s measured death count was too high and should have been more in the 2500-2700 range, and we remain roughly on the previous pace of decline from May. This week’s number is likely slightly lower than its true value, with some deaths shifted from this week into last week versus when they would usually be measured. Note that the raw total was even lower: California reported negative death counts on multiple days, which I adjusted back up to zero, adding those deaths back into the total.
Cases
Date | WEST | MIDWEST | SOUTH | NORTHEAST | TOTAL |
Apr 29-May 5 | 52,984 | 78,778 | 85,641 | 68,299 | 285,702 |
May 6-May 12 | 46,045 | 59,945 | 70,740 | 46,782 | 223,512 |
May 13-May 19 | 39,601 | 45,030 | 63,529 | 34,309 | 182,469 |
May 20-May 26 | 33,890 | 34,694 | 48,973 | 24,849 | 142,406 |
May 27-June 2 | 31,172 | 20,044 | 33,293 | 14,660 | 99,169 |
Jun 3-Jun 9 | 25,987 | 18,267 | 32,545 | 11,540 | 88,339 |
Jun 10-Jun 16 | 23,700 | 14,472 | 25,752 | 8,177 | 72,101 |
[Chart note: Given our current situation, while they provide good perspective on where we’ve been, the full charts are not that enlightening anymore and are growing unreadable for all the right reasons. Unless people think it’s a bad idea, I’m thinking the charts should be narrowed to only show movement after some reasonable time. Maybe start on 1 April 2021?]
These are very good numbers. We see declines across the board. It’s not back on the old pace, but given the lifting of restrictions and the rise of Delta, that should be expected, and we still made more progress than last week by a large amount.
They reflect declining numbers of tests rather than a big decline in the positive test percentage, but at some point that makes sense: at equilibrium you don’t need as many tests. There’s an argument that a 2% or so positivity rate is reasonable, in the sense that it means that a marginal additional test would have a much lower chance than that of detecting an infection that was likely asymptomatic, so it’s not clear more testing would be worth the trouble.
It also likely means we are doing substantially less ‘surveillance testing’ where we check people without any reason to think they are positive, either out of an abundance of caution or to give them entry into events or travel. And we’re doing less testing in the places that are winning, versus the areas that are still struggling. Thus, we are taking the least likely to be positive tests out of the pool, which should raise the positive test rate. If we’re stable on that front, we’re making great progress.
Vaccinations
Compared to what I expected, this continues to be good news. First doses do not appear to be declining much, which is impressive given how many of those eager to be vaccinated have already had their shots, and much of the remaining population is under 12.
If we can sustain this pace indefinitely, with an additional 1% vaccinated every week, we will be home free soon in most places, regardless of how bad Delta is. The effects compound increasingly quickly over time.
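To see why the effects compound, here is a minimal sketch. It assumes effective R scales with the share of the population still unprotected, and the starting numbers (a base R of 2.0, 55% currently protected) are purely illustrative, not measurements:

```python
# Illustrative sketch: why each additional 1% of the population vaccinated
# matters more as the unprotected pool shrinks. All inputs are assumptions.

def effective_r(base_r: float, protected_share: float) -> float:
    """Effective R if transmission scales with the unprotected share."""
    return base_r * (1 - protected_share)

base_r = 2.0        # assumed R with nobody protected
protected = 0.55    # assumed share protected today (vaccines plus prior infection)

for week in range(8):
    r = effective_r(base_r, protected)
    print(f"week {week}: protected {protected:.0%}, effective R = {r:.2f}")
    protected += 0.01  # an additional 1% of the population protected per week
```

Each extra percentage point cuts R by the same absolute amount, but the proportional cut grows as the unprotected share shrinks, and case counts respond exponentially to R, which is where the compounding comes from.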
Delta Variant
Delta is rapidly taking over. How worried should we be?
First of all, if you are fully vaccinated, you should not be personally worried. Here’s the latest data (link to paper).
The 94% number for one dose of Pfizer is almost certainly higher than the real value due to random error, but the other numbers are very consistently saying that the vaccinations are at least as effective against Delta as they were against Alpha. The symptomatic disease numbers are more worrisome, but remain what I’d consider acceptable, especially for mRNA.
I don’t agree with every individual point, but this thread is mostly the reasonable bear case that Delta is sufficiently bad that we should be worried restrictions may come back and the pandemic might not be over in America.
As I noted last week, statements like ‘watch the Delta percentage in your region’ carry the implication that it will stay below 99% for all that long, which it won’t unless there’s another variant we don’t know about that’s even worse – at this point, seeing a larger Delta percentage, conditional on knowing the growth rate of cases, is good news. Last week the report was that we were at 6% Delta cases overall, so there was still uncertainty about what level of cases we could stabilize at against it.
The consensus seems to roughly be that Delta is twice as infectious as the pre-London strain. This thread, for example, puts its likely R0 at around 7, versus 3.3 for the original strain, an ~50-60% increase over the currently dominant London strain. There are places in the country where that may be enough to cancel out the current vaccination rate, despite vaccinations ‘overperforming’ their headline numbers due to previous infections and the existence of young children.
Where are we right now in terms of Delta? Here’s the best data point I could find. Delta is the light green area in the first graph.
English strain peaked here around 70% of the pool and is now down around 45%. That implies that the average strain is more infectious than the English strain already. Which makes sense, if you assume the ‘Other’ group includes strains similar to P1 or Delta that have now crowded out the others, and that both P1 and Delta are more infectious than the English strain. The major legacy pre-Alpha strains are gone, so it stands to reason similar minor ones were also wiped out.
This leaves us with a current mix of roughly (these are all approximations):
45% Alpha
25% Delta
15% P1
15% Other (that I will treat as P1)
How much more infectious will this pool be when it’s 100% Delta? Let’s accept the above thread’s estimates for now and assign points as follows:
1.4 Alpha
1.5 P1/Other
2.2 Delta
Total current infectiousness of the pool, as of collection of these sequences: 30%*1.5 + 45%*1.4 + 25%*2.2 = 1.63, or 63% more infectious than pre-Alpha values
Future infectiousness of pool = 2.2
Final outcome (assuming Delta is the worst variant) = 2.2 / 1.63 = 1.35, or 35% additional infectiousness
On April 1, what was the pool like? At that point, we can put it at something like
Epsilon% Delta
55% Alpha
3% P1
3% Other small things similar to P1
39% Other similar to old baseline (1.0 infectiousness)
Total infectiousness of April 1 pool of sequences = 55%*1.4 + 3%*1.5 + 3%*1.5 + 39%*1.0 = 1.25, or 25% more infectious than the baseline from mid-2020.
Total increase in infectiousness since the April 1 pool = 1.63 / 1.25 = 1.30, or 30% additional infectiousness.
Thus, if we ‘believe the hype’ here, we have to survive another 35% increase, after a previous 63% total increase, the majority of which has happened since April 1.
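If you want to check the arithmetic, here is a minimal sketch of the calculation above; the strain shares and the 1.4 / 1.5 / 2.2 multipliers are the rough approximations from this section, not measured values:

```python
# Weighted infectiousness of a strain mix, relative to the pre-Alpha baseline (1.0).
# Shares and multipliers are the approximations used above.

def pool_infectiousness(mix):
    """mix maps strain name -> (share of sequences, infectiousness multiplier)."""
    return sum(share * multiplier for share, multiplier in mix.values())

current = {
    "Alpha":    (0.45, 1.4),
    "Delta":    (0.25, 2.2),
    "P1/Other": (0.30, 1.5),  # P1 plus the Other group, treated as P1
}
april_1 = {
    "Alpha":    (0.55, 1.4),
    "P1-like":  (0.06, 1.5),  # P1 plus similar small strains
    "Baseline": (0.39, 1.0),  # strains similar to the mid-2020 baseline
}

now = pool_infectiousness(current)    # ~1.63
then = pool_infectiousness(april_1)   # ~1.25
print(f"current pool vs pre-Alpha baseline: {now:.2f}")
print(f"remaining increase if Delta (2.2) fully takes over: {2.2 / now:.2f}")  # ~1.35
print(f"increase since the April 1 pool: {now / then:.2f}")                    # ~1.30
```

Swapping in different share estimates, say from more recent sequencing, changes the remaining-increase figure directly.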
Currently we are vaccinating about 1% of people each week. Given we are already 50%+ vaccinated if you discount children, and which vaccines we are using, that’s something like a 2% weekly reduction in transmission. If we can keep that up as a share of the remaining population, the effect will compound.
Right now, we are cutting cases in half every 3 weeks or so, which is about 4 cycles, so I’d estimate R0 at 0.84. Increasing that by 35% puts us at 1.14. That’s presumably high, because sequencing is delayed and thus the current Delta share is higher than in the above calculation. I don’t consider 35% a strict upper bound, but my mean estimate is more like 30%, and every little bit helps.
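A quick sketch of that estimate, using the assumptions stated here (cases halving every ~3 weeks over about 4 transmission cycles, and a 30-35% remaining increase from Delta):

```python
# Back-of-the-envelope: cases halve every ~3 weeks, i.e. over ~4 transmission
# cycles, so current R ~ 0.5 ** (1/4). Then apply the assumed extra
# infectiousness once Delta fully takes over.

current_r = 0.5 ** (1 / 4)  # ~0.84

for delta_multiplier in (1.30, 1.35):  # mean estimate vs the rougher upper figure
    r_with_delta = current_r * delta_multiplier
    print(f"x{delta_multiplier:.2f}: R goes from {current_r:.2f} to {r_with_delta:.2f}")
```

That is roughly 0.84 rising to somewhere around 1.09-1.14, which is the gap that further vaccination (plus the fact that some of the Delta increase is already baked into today’s numbers) has to close.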
Thus, it looks clear to me that most places in America are going to make it given the additional vaccinations that will take place, but some places with low vaccination rates will fall short.
Alas, that does mean that it might be a while before the final restrictions can be lifted, and life returns fully back to normal. We will be stuck in a kind of hybrid limbo, mostly involving performative mask wearing and other such annoyances. But that’s actually pretty close to fully normal, in terms of practical consequences for adults. For the kids, things could stay rough for a while, because people are really stupid about such things. I hope we can get the vaccines approved for the under-12 crowd soon.
Please Stop Asking Me About That Guy
That guy’s name is Bret Weinstein. If you already weren’t asking me about that guy, you can and probably should skip this section.
Enough people keep asking me about him, and there’s been enough discussion in which the things he’s claiming have been taken seriously, that I need to write this section anyway, for the explicit purpose of Please Stop Asking Me About That Guy.
Bret has made a huge number of overlapping extraordinary claims, with an extraordinary degree of both confidence and magnitude. It seems like most of the time that I get asked about ‘hey there’s this theory that sounds plausible what do you think?’ the source of that theory is exactly Bret Weinstein. He’s become the go-to almost monopolistic guy for presenting these things in a way that seems superficially plausible, which of course means he’s here for all of it.
It’s a variety of different theories about how everyone’s covering up The Truth Which Is Out There (if you think that link is unfair, his core claims include UFOs, a broad based conspiracy to censor and cover up The Truth, and many monsters of the week, so I dunno what to tell you). Some of his claims, such as the lab leak, are plausibly correct, but are stated with absurd levels of confidence. Others, stated with similarly absurd confidence, are… less plausible. This includes claims I do not think it would be responsible for me to repeat, such as his so-called “Crime of the Century.” Then he cries censorship and further conspiracy when he gets (entirely predictably and entirely according to clear established policies) censored on platforms like YouTube and Facebook.
Look. I totally get it. After everything that’s happened, in the words of one commenter, if it’s a choice between the people who claimed masks didn’t work handing verdicts out from on high or ‘those three guys on that podcast’ and the podcast guys seem to have models containing gears, why wouldn’t you go with the podcast guys? At this point, Fox Mulder would have the world’s most popular podcast and be a regular on Joe Rogan, he wouldn’t have Scully as a co-host, and damn if he still wouldn’t be making a lot of good points. Seriously, it makes total sense.
Except that one can decline to take either side’s word for it: think about the proposed gears, including the other gears and claims coming from the same sources, derive a guess as to what algorithms both are using to decide what to claim, and decide that neither of these options is going to be much of a source.
I can’t even at this point. I finally unfollowed him when I noticed that every time I saw his tweets my mood got worse in anticipation, yet I wasn’t learning anything useful except how to answer people asking about his claims. I have wasted far too much of my life trying to parse ludicrous stuff buried in hours-long videos and figure out how to deal with all the questions people ask about such things, or to convince people that he’s spouting obvious nonsense when he’s clearly spouting obvious nonsense, and I will neither be paid for that time nor will I be getting any of that time back.
That does not mean that all of Bret’s claims are implausible or wrong. Some aren’t. It simply means that the whole thing is exhausting and exasperating, many of the claims being made are obvious nonsense, and the whole exercise of engaging has for me been entirely unfruitful. One is not obligated to explain exactly why any given thing is Wrong on the Internet, and my life is ending one minute at a time.
It definitely does not mean I agree with the decision by some platforms to censor his claims. While I do think many of his censored claims are wrong, it would be a better world if such claims were not censored.
If you want to engage with Bret’s claims after getting the information above, and feel that is a good use of your time, by all means engage with his claims and build up your own physical model of the world and what is happening. The right number of people doing that isn’t zero. The set of such people simply is not going to include me.
If you still want to ask me about that guy, my cheerful price [LW · GW] for further time spent on ‘investigating, writing about and/or discussing claims by Bret Weinstein or his podcast guests’ is $500/hour, which also is my generic cheerful price for non-commercial intellectual work. If you’re paying, I’ll check it out. Otherwise, Please Stop Asking Me About That Guy.
In Other News
Perspective thread on Covid from Sam Bankman-Fried, EA billionaire founder of crypto exchange FTX. Includes this bit of excellent practical advice:
And this slice of post-vaccination life: waiting an extra day for a flight because his negative Covid test had his SSN and his middle initial, but not his full middle name, on it:
Headline seems not to require further comment (WaPo).
California lifts restrictions on workers as of today. Will the masks actually come off?
New York lifts most remaining Covid restrictions.
Things are slow, but my experience so far is that nothing has changed. As usual, children get the shaft. Public transit (in particular, my ride to/from the city) continues to be much less pleasant in ways that make no physical sense. That’s the way it goes.
NFL isn’t forcing its players to get vaccinated, but it’s also not not forcing them to get vaccinated (WaPo). They’re going to make unvaccinated life quite inconvenient.
Mastercard pledges $1.3 billion to buy vaccines for 50 million Africans. In response, America calls it a ‘significant step,’ indicating we indeed remain funding constrained, then makes a smaller pledge:
Another person does math on smaller and delayed doses, and reaches the same conclusion that actual calculations always do, that they would save many lives.
Zeynep thread about how vaccinating the vulnerable changes things; you should know this already.
Not Covid
Words of wisdom thread from Sarah Constantin.
Front runner in NYC mayor’s race Eric Adams comes out in favor of multi-hundred person virtual classrooms, since in-person learning is unnecessary. Prediction markets and polls did not budge. He’s more likely to win than ever. Nothing matters.
Dominic Cummings wants to know the probability China will take over Taiwan in the next five years, because private conversations suggest 50%+, and asks what he should read. A relevant Metaculus market is here, but the right answer is of course a prediction market with actual money involved. I’ve asked Polymarket to throw up a market, hopefully we’ll have something soon, but it’s always tough getting interest in markets that don’t resolve for years.
This week in how we all actually die news, two apt observations:
And:
This matches my recent experiences. Among smart people who have looked seriously at the problem and are acting in good faith, I see broad near-universal agreement that this is a huge problem, with the main differences being between the ‘this is a huge problem but we can probably figure something out that holds onto the majority of the future’s value, and it’s unlikely that we all die’ camp and the ‘this is a huge problem and we are all very doomed with no apparent way out, we’re all very likely going to die’ camp, with a small not-that-reassuring third camp of ‘our civilization is sufficiently inadequate that the tech won’t get that far, so we’re all going to die but not from this’ and perhaps a fourth even less reassuring camp of ‘we are all going to die but have you considered that humanity dying out is good actually?’
For what it’s worth, I am in the second camp, and think the probability of doom is currently high, partly for the reason explained in this thread: Not only do we have to do a hard thing, we have to do the hard thing correctly on the first try, or there won’t be a second try.
Here are some survey results asking workers in the field how likely we are to be doomed [LW · GW], and how often we are doomed due to ‘we didn’t get it to do what we intended and thus were doomed’ versus ‘we did get it to do what we intended and we were doomed anyway.’ There’s a wide range of probabilities.
The thing that all the camps have in common is that no one has great ideas as to what to do about this. If I had great ideas that I could implement, I’d be pivoting to work on them, and there are people eager to help me with that if I did find great ideas, but I don’t have any so far, hence I’m not doing that.
One reason to be very concerned is that when we look at our failures in Covid, including especially our inability to not do obviously terrible Gain of Function research, we see the exact kind of failure modes that are likely to get us all killed (Eliezer Yudkowsky), as the local incentives push people towards highly unsafe actions for such mundane reasons as getting a paper published. Thus, one way to work on the problem could be to do so indirectly, by changing such incentives and cultures more generally, and providing proof-of-concept examples of how such things can be done. For example, by getting Gain of Function research banned, and noting what it took to do that so we can do it again.
This dive into the weeds [LW · GW] gives an up-to-date picture of what is generally considered the most central and scary challenge. Our best current AI techniques are radically opaque, current interpretability work is making very very little progress relative to the size of the challenge, and things like deception and mesa-optimizers are hopeless to address if you can’t understand your AGI’s internal cognition at all. Work on interpretability is urgently needed and is one of the things one could usefully do.
Anthropic was founded recently by former members of OpenAI with interpretability work as an explicit purpose, and we need as much such work as possible. The fear is that such an organization ends up doing what OpenAI did, turning mostly into an engine for creating more AI capabilities while giving us one more player to worry about in terms of avoiding a race situation where everyone builds their AI as fast as possible, without time for safety work, lest someone else do the same and get there first, thus making the problem doubly worse.
Here are some other ideas, not related to MIRI, if you’d like to look through more of the field. Whatever the solution, it’s likely going to require understanding the AIs a lot better than we understand the ones we have now.
Supporting organizations such as MIRI in the hopes that such work will figure out something worth doing, or doing one’s own study of the problems involved, or helping more people understand the situation, all seems net useful, by all means do that, but none of it constitutes the kind of plan we would like. Figuring out a good plan, if you are capable of it, would be immensely valuable, and there is a lot of support standing by to help with a good plan if one is found. Working on the problem directly, even without starting with a plan, also seems worthwhile.
42 comments
Comments sorted by top scores.
comment by Pattern · 2021-06-18T00:34:31.743Z · LW(p) · GW(p)
Supporting organizations such as MIRI
Is there a longer list somewhere?
↑ comment by Larks · 2021-06-18T04:36:29.940Z · LW(p) · GW(p)
↑ comment by Pattern · 2021-06-18T20:27:59.588Z · LW(p) · GW(p)
(From the link above, to 2020 AI Alignment Literature Review and Charity Comparison [LW · GW], by Larks.)
Research Organisations
- FHI: The Future of Humanity Institute
- CHAI: The Center for Human-Compatible AI
- MIRI: The Machine Intelligence Research Institute
- GCRI: The Global Catastrophic Risks Institute
- CSER: The Center for the Study of Existential Risk
- OpenAI
- Google Deepmind
- BERI: The Berkeley Existential Risk Initiative
- Ought
- GPI: The Global Priorities Institute
- CLR: The Center on Long Term Risk
- CSET: The Center for Security and Emerging Technology
- AI Impacts
- Leverhulme Center for the Future of Intelligence
- AI Safety camp
- FLI: The Future of Life Institute
- Convergence
- Median Group
- AI Pulse
Other Research [LW · GW]
Capital Allocators
- LTFF: Long-term future fund
- OpenPhil: The Open Philanthropy Project
- SFF: The Survival and Flourishing Fund
Other Organisations
- 80,000 Hours
comment by countingtoten · 2021-06-26T19:30:24.937Z · LW(p) · GW(p)
we can probably figure something out that holds onto the majority of the future’s value, and it’s unlikely that we all die’ camp
This disturbs me the most. I don't trust their ability to distinguish "the majority of the future's value," from "the Thing you just made thinks Thamiel is an amateur."
Hopefully, similar reasoning accounts for the bulk of the fourth camp.
comment by Annapurna (jorge-velez) · 2021-06-17T22:15:10.667Z · LW(p) · GW(p)
Zvi, what do you think of this market, given the rise in the delta variant?
https://polymarket.com/market/will-the-us-have-fewer-than-1000-covid-19-cases-on-any-day-before-september-1
comment by Lukas_Gloor · 2021-06-17T20:29:22.128Z · LW(p) · GW(p)
Thus, it looks clear to me that most places in America are going to make it given the additional vaccinations that will take place, but some places with low vaccination rates will fall short.
I found myself intuitively skeptical about this claim and tried evaluating it via a different line of reasoning than the one you used (but relying on some of your figures). After going through this, I mostly updated that it will be a close race with the vaccinations. Overall, I find it 65% likely that places with a roughly average vaccination coverage in the US won't be able to avoid large surges in case numbers (defined by either lockdowns or really strong new restrictions, or 3% of unvaccinated people infected at the same time.) (This could be compatible with your estimates, because the death rate in well-vaccinated areas would still be relatively low if vaccination uptake is high amongst the elderly.) What seems very clear is that locations with below-average vaccination coverage will be in trouble.
My approach and estimates:
I think a crude lower bound for when you get Delta variant under control is when you have a substantially larger percentage of the population vaccinated than the UK currently has. (Because R is 1.1-1.35 in the UK now and that's before the full reopening.)
Current vaccination percentages for the UK (all age groups):
63.3% first dose
46.0% second dose
Current vaccination percentages for the US (all age groups, I think):
52.7% first dose
44.1% second dose
You say there's about 25% Delta variant in the US now.
5 weeks ago, I commented [LW(p) · GW(p)] that the UK had >50% Delta variant in some areas. With a doubling time of roughly 11 days in the UK, it must have been at 25% roughly 7 weeks ago. Meaning, assuming that the infection levels in the US currently are comparable to what they were in the UK 7 weeks ago, then the US is roughly 7 weeks behind the UK timeline.
7 weeks ago, the UK was reporting around 2k Covid cases (with a population of 66 million). The US population is 5x larger. The US is presently reporting around 13k cases. That's similar enough! Therefore, I'm going to operate under the assumption that the US is "7 weeks behind the UK timeline."
The situation in the UK is concerning and getting worse still, but the case numbers are substantially below the previous peaks. I'd say the UK is about 3-4 weeks away from things getting very bad.
By that reasoning, the US has roughly 10 weeks to get R below 1 for the Delta variant.
You say, "Currently [the US] are vaccinating about 1% of people each week."
I'm assuming that's both doses?
Continuing with that, in 10 weeks, the US should have the following vaccination percentage:
62.7% first dose
54.1% second dose
And here the present UK numbers again:
63.3% first dose
46.0% second dose
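A minimal sketch of that projection, assuming the pace stays at 1 percentage point per week for each dose (the figures are the ones quoted above):

```python
# Projected US vaccination coverage in 10 weeks at an assumed steady pace of
# 1 percentage point per week for each dose, compared with the UK today.

us_first, us_second = 52.7, 44.1  # current US coverage, percent
uk_first, uk_second = 63.3, 46.0  # current UK coverage, percent
weeks, pace = 10, 1.0             # horizon and assumed points per week

projected_first = us_first + weeks * pace    # 62.7
projected_second = us_second + weeks * pace  # 54.1
print(f"US in {weeks} weeks: {projected_first:.1f}% first dose, {projected_second:.1f}% second dose")
print(f"UK today:          {uk_first:.1f}% first dose, {uk_second:.1f}% second dose")
```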
The UK is not fully reopened yet, and R is at 1.1-1.35. Most UK experts are pessimistic about things getting better anytime soon, despite vaccinations progressing quite quickly.
That said, the second dose may matter more than the first dose, especially if the first dose is AstraZeneca. So, 54% second dose instead of 46% should make quite a large difference. I think (?) the US also relies slightly more on Pfizer and Moderna than the UK, which should add a bit of extra protection. Summer temperatures also help out. But is all of this enough to put R below 1 (for the Delta variant, specifically) early enough?
The UK isn't even fully opened yet. Some US states may go ahead with the full reopening now, in which case they'll have less than the projected 10 weeks until they catch up with the UK timeline.
Then again, there's room for the vaccinations to speed up (the vaccination rate used to be higher at points in the past).
Note that my definition of "large surges in case numbers" isn't necessarily that bad. 3% of unvaccinated people infected – the UK is almost there already, and deaths are extremely low because the unvaccinated people are mostly really young.
Update: I'm realizing that country-wide infection counts are driven mostly by the places with the worst vaccination uptake, so a location with an average vaccination rate wouldn't be hit that badly compared to the country average infection rate. This means I'd now change my operationalization to something like "worst 25th percentile." And maybe make it 60% instead of 65%.
↑ comment by Zvi · 2021-06-17T21:53:20.169Z · LW(p) · GW(p)
I think you're not factoring in that a lot of UK vaccinations are AZ, which is a lot less effective at stopping spread?
↑ comment by Lukas_Gloor · 2021-06-17T22:08:09.913Z · LW(p) · GW(p)
I mentioned it as a consideration, but yeah, I'm probably underestimating the effect of that by a lot, now that I think about it. I wasn't sure how much the US has so far relied on the J&J vaccine, which is also less effective. But it looks like it's a low amount of it.
↑ comment by Lukas_Gloor · 2021-06-17T20:46:39.153Z · LW(p) · GW(p)
Regarding the estimate that Delta is 40% more infectious than Alpha: I've seen 50-60% mentioned a lot in the last couple of days from UK expert sources. If true, this would probably make a big difference to your calculations.
comment by TAG · 2021-06-18T11:54:20.592Z · LW(p) · GW(p)
For what it’s worth, I am in the second camp, and think the probability of doom is currently high, partly for the reason explained in this thread: N
Unstated assumptions: ASI will be achieved by a sudden jump, not incremental improvement. Corrigibility won't work. ASI will be agentive.
↑ comment by Daniel Kokotajlo (daniel-kokotajlo) · 2021-06-18T15:27:12.953Z · LW(p) · GW(p)
I think it's not that extreme. More like "The various non-agenty AIs won't be enough to make aligning the agenty ones substantially easier" and "Alignment failures won't become obvious and scary at stages prior to N before they happen at stage N, where N is the first stage that we have to get right or else." (Analogy: We got humans to the moon safely on the first try, but this was because we had various tests beforehand to iron out the kinks, including ones that in fact blew up catastrophically. The assumption is that there won't be good opportunities to test things out beforehand. Though I guess you could say that's not an assumption, it's the claim itself.) As for corrigibility... I mean it might work, but the claim is that we shouldn't expect it to work on the first try.
↑ comment by TAG · 2021-06-18T16:20:24.603Z · LW(p) · GW(p)
“The various non-agenty AIs won’t be enough to make aligning the agenty ones substantially easier”
Fortunately, it doesn't have to, so long as the agenty ones aren't the most powerful.
corrigibility… I mean it might work, but the claim is that we shouldn’t expect it to work on the first try.
Fortunately, it doesn't have to. You just need to get it working in AIs that aren't superintelligent.
↑ comment by evhub · 2021-06-20T21:12:43.203Z · LW(p) · GW(p)
There are other models than the discontinuous/fast takeoff model under which alignment of the first advanced AI is critical, e.g. a continuous/slow but homogenous [AF · GW] takeoff.
comment by danohu · 2021-06-18T09:46:28.805Z · LW(p) · GW(p)
What do we know about the nature of infection of vaccinated people, or re-infection of people who have recovered from Covid? It seems to me, this will have a big impact on Covid epidemiology in a mostly-vaccinated country.
I have three mental models:
- Immunocompromise: some people will never build up Covid resistance, no matter how much they get vaccinated or infected
- Exposure: a person has resistance, but is exposed to Covid in a way that overwhelms them. High viral load, maybe, or their immune system is somehow having a bad day
- Vaccination failure: for some reason the vaccination doesn't 'take'. If the sufferer catches Covid, they will build up more resistance and be better protected next time round
[I'm leaving out the impact of variants and the decline of immunity over time]
(1) means that we'll permanently have some segment of the population where Covid circulates, but could conceivably identify and protect those people
(2) is the same, but with less we can do in reaction.
(3) would imply things getting gradually better over time, as those people are exposed to endemic covid. Booster shots might help speed this along
Does anybody have a sense of which of these models (or something else) is closer to reality?
comment by p.s.carter · 2021-06-18T01:47:49.990Z · LW(p) · GW(p)
I don't believe supporting MIRI is a good use of money.
↑ comment by Kenny · 2021-06-18T23:35:13.476Z · LW(p) · GW(p)
Why do you think supporting MIRI isn't a good use of money?
↑ comment by p.s.carter · 2021-06-21T03:50:39.361Z · LW(p) · GW(p)
My comment is based on this post, which seems to cover the matter thoroughly.
↑ comment by Kenny · 2021-06-22T17:54:41.868Z · LW(p) · GW(p)
Thanks!
Do you know the person that wrote that post? Or anyone else supposedly involved in the events it describes? I'm not sure I could adjudicate the claims in that post, for my own judgement, given my remove from everyone supposedly involved.
I'm also still unsure how any of that, assuming it's true, should be weighed against the 'official' work MIRI has done or is doing. Surely AI safety has to be balanced against those (unrelated) claims somehow, as terrible as they are (or might be), and as terrible as it is to think about 'balancing' these 'costs' and (potential) 'benefits'.
Some of the claims in that post also aren't obviously terrible to me, e.g. MIRI reaching a legal settlement with someone that 'blackmailed' them.
And if the person "Ziz" mentioned in the post is the same person I'm thinking of, I'm really confused as to what to think about the other claims, given the conflicting info about them I've read.
The post quotes something (about some kind of recollection of a conversation) about "a drama thing" and all of this seems very much like "a drama thing" (or several such 'drama things') and it's really hard to think of any way for me, or anyone not involved, or even anyone that is or was involved, to determine with any confidence what's actually true about whatever it is that (may have) happened.
↑ comment by p.s.carter · 2021-06-24T17:47:52.886Z · LW(p) · GW(p)
I know a few people involved, and I trust that they're not lying, especially given that some of my own experiences overlap. I lived in the Bay for a couple years, and saw how people acted, so I'm fairly confident that the main claims in the open letter are true.
I've written myself a bit about why the payout was so bad here, which the author of the open letter appears to reference.
MIRI wrote this paper: https://arxiv.org/abs/1710.05060. The paper is pretty clear that it's bad decision theory to pay out to extortion. I agree with the paper's reasoning, and independently came to a similar conclusion myself. MIRI paying out means MIRI isn't willing to put their money where their mouth is. Your ability to actually follow through on what you believe is necessary when doing high-stakes work.
Like, a lot of MIRI's research is built around this claim about decision theory. It's fundamental to MIRI's approach. If one buys that FDT is correct, then MIRI's failure to consistently implement it here undermines one's trust in them as an institution. They folded like wet cardboard. If one doesn't buy FDT, or if one generally thinks paying out to extortionists isn't a big deal, then it wouldn't appear to be a big deal that they did. But a big part of the draw towards rationalist spaces and MIRI is that they claim to take ideas seriously. This behaviour indicates (to me) that they don't, not where it counts.
As for Ziz, from what I understand she's been the victim of a rather vicious defamation campaign chiefly organized by a determined stalker who is angry with her for not sleeping with him. If you reach out to some rationalist discord mods, you should be able to get a hold of sufficient evidence to back the claims in that post.
↑ comment by Kenny · 2021-06-24T22:09:13.834Z · LW(p) · GW(p)
Thanks!
I'm still not sure what to think as an outsider, but I appreciate the details you shared.
With respect to the "extortion" specifically, I'd (charitably) expect that MIRI is somewhat constrained by their funders and advisors with respect to settling a (potential) lawsuit, i.e. making a "pay out to extortion".
I still think all of this, even if it's true (to any significant extent), isn't an overwhelming reason not to support MIRI (at all), given that they do seem to be doing good technical work.
Is there another organization that you think is doing similarly good work without being involved in the same kind of alleged bad behavior?
↑ comment by p.s.carter · 2021-07-06T16:36:33.140Z · LW(p) · GW(p)
From what I understand, some of their funders were convinced MIRI would never pay out, and were quite upset to learn they did. For example, one of the people quoted in that open letter was Paul Crowley, a long time supporter who has donated almost $50k. Several donors were so upset they staged a protest.
I still think all of this, even if it's true (to any significant extent), isn't an overwhelming reason not to support MIRI (at all), given that they do seem to be doing good technical work.
I disagree. I've written a bit about why here.
↑ comment by Mitchell_Porter · 2021-07-07T02:18:26.039Z · LW(p) · GW(p)
You write
MIRI should’ve been an attempt to keep AGI out of the hands of the state
Eliezer several times expressed the view that it's a mistake to focus too much on whether "good" or "bad" people are in charge of AGI development. Good people with a mistaken methodology can still produce a "bad" AI, and a sufficiently robust methodology (e.g. by aligning with an idealized abstract human rather than a concrete individual) would still produce a "good" AI from otherwise unpromising circumstances.
↑ comment by Pattern · 2021-08-15T18:38:18.644Z · LW(p) · GW(p)
Eliezer several times expressed the view
Can you link to 3 times?
↑ comment by Mitchell_Porter · 2021-08-16T06:46:47.701Z · LW(p) · GW(p)
Unequivocal example from 2015: "You can’t take for granted that good people build good AIs and bad people build bad AIs."
A position paper from 2004. See the whole section "Avoid creating a motive for modern-day humans to fight over the initial dynamic."
↑ comment by p.s.carter · 2021-07-25T07:16:53.275Z · LW(p) · GW(p)
That's an artificially narrow example. You can have...
a good person with good methodology
a good person with bad methodology
a bad person with good methodology
a bad person with bad methodology
A question to ask is, when someone aligns an AGI with some approximation of "good values," whose approximation are we using?
↑ comment by matejsuchy · 2021-06-18T02:51:47.812Z · LW(p) · GW(p)
Can you refer me to a textbook or paper written by the AGI crowd which establishes how we get from GPT-n to an actual AGI? I am very skeptical of AI safety but want to give it a second hearing.
↑ comment by Daniel Kokotajlo (daniel-kokotajlo) · 2021-06-18T07:48:19.119Z · LW(p) · GW(p)
It sounds like you think that actual AGI won't happen? And your argument is that we don't have a convincing roadmap for how to get there?
↑ comment by matejsuchy · 2021-06-18T15:35:27.819Z · LW(p) · GW(p)
Well, I didn't mean to propose an argument.
My impression is that there is not a convincing roadmap. I certainly haven't seen one. However, I recognize that there is a healthy possibility that there is one, and I just haven't seen it.
Which is why I'm asking for the white paper / textbook chapter that presumably has convinced everyone that we can expect AGI in the coming decades. I would be very grateful for anyone who could provide it.
Obviously, AGI is feasible (more than could be said for things like nanotech or quantum computing). However, it's feasible in the sense that a rocket ship was feasible in the time of the ancient Greeks. Obviously a lot of knowledge was necessary to get from Greeks to Armstrong.
Right now all we have are DL models that are basically kernel machines which convert noise into text, right? My intuition is that there is no path from that to AGI, and that AGI would need to come from some sort of dynamic system, and that we're nowhere near creating such. I would like to be proven wrong though!
↑ comment by Donald Hobson (donald-hobson) · 2021-06-18T20:00:16.465Z · LW(p) · GW(p)
I think this post sums up the situation.
https://www.lesswrong.com/posts/BEtzRE2M5m9YEAQpX/there-s-no-fire-alarm-for-artificial-general-intelligence [LW · GW]
If you know how to make an AGI, you are only a little bit of coding away from making it. We have limited AIs that can do some things, and it isn't clear what we are missing. Experts are inventing all sorts of algorithms.
There are various approaches like mind uploading, evolutionary algorithms etc that fairly clearly would work if we threw enough effort at them. Current reinforcement learning approaches seem like they might get smart, with enough compute and the right environment.
Unless you personally end up helping make the first AGI, then you personally will probably not be able to see how to do it until after it is done (if at all). The fact that you personally can't think of any path to AGI does not tell us where we are on the tech path. Someone else might be putting the finishing touches on their AI right now. Once you know how to do it, you've done it.
Replies from: jaspax, matejsuchy↑ comment by jaspax · 2021-06-19T09:22:05.883Z · LW(p) · GW(p)
FWIW, I think that mind uploading is much less likely to work than a purely synthetic AI, at least in reasonably near-term scenarios. I have never read any description of how mind uploading is going to work which doesn't begin by assuming that the hard part (capturing all of the necessary state from an existing mind) is already done.
↑ comment by Donald Hobson (donald-hobson) · 2021-06-20T10:44:19.049Z · LW(p) · GW(p)
I agree that purely synthetic AI will probably happen sooner.
↑ comment by matejsuchy · 2021-06-18T22:06:49.229Z · LW(p) · GW(p)
Of course mind uploading would work hypothetically. The question is, how much of the mind must be uploaded? A directed graph and an update rule? Or an atomic-level simulation of the entire human body? The same principle applies to evolutionary algorithms, reinforcement learning (not the DL sort imo tho, it's a dead end), etc. I actually don't think it would be impossible to at least get a decent lower bound on the complexity needed by each of these approaches. Do the AI safety people do anything like this? That would be a paper I'd like to read.
I don't know whether to respond to the "Once you know how to do it, you've done it" bit. Should I claim that this is not the case in other fields? Or will AI be "different"? What is the standard under which this statement could be falsified?
↑ comment by Donald Hobson (donald-hobson) · 2021-06-20T11:12:39.374Z · LW(p) · GW(p)
For the goal of getting humans to Mars, we can do the calculations and see that we need quite a bit of rocket fuel. You could reasonably be in a situation where you had all the design work done, but you still needed to get atoms into the right places, and that took a while. Big infrastructure projects can be easier to design. For a giant dam, most of the effort is in actually getting all the raw materials in place. This means you can know what it takes to build a dam, and be confident it will take at least 5 years given the current rate of concrete production.
Mathematics is near the other end of the scale. If you know how to prove theorem X, you've proved it. This stops us from being confident that a theorem won't be proved soon. It's more like the radioactive decay of a fairly long-lived atom: more likely to happen next week than in any other particular week.
I think AI is fairly close to the maths end of the scale; most of the effort is figuring out what to do.
Ways my statement could be false.
If we knew the algorithm, and the compute needed, but couldn't get that compute.
If AI development was an accumulation of many little tricks, and we knew how many tricks were needed.
But at the moment, I think we can rule out confident long termism on AI. We have no way of knowing that we aren't just one clever idea away from AGI.
↑ comment by ChristianKl · 2021-06-18T22:38:32.829Z · LW(p) · GW(p)
The question is not just "how much is needed" but also "what's a reasonable difference between the new digital mind and the biological substance".
↑ comment by Donald Hobson (donald-hobson) · 2021-06-20T11:19:29.829Z · LW(p) · GW(p)
We can get a rough idea of this by considering how much physical changes have a mental effect. Psychoactive chemicals, brain damage, etc. Look at how much ethanol changes the behaviour of a single neuron in a lab dish. How much it changes human behaviour. And that gives a rough indication of how sensitively dependent human behaviour is on the exact behaviour of its constituent neurons.