Posts

COVID and the holidays 2021-12-08T23:13:56.097Z
Sci-Hub sued in India 2021-11-13T23:12:40.559Z
Choice Writings of Dominic Cummings 2021-10-13T02:41:44.291Z
Weird models of country development? 2021-09-22T17:39:57.596Z
Delta Strain: Fact Dump and Some Policy Takeaways 2021-07-28T03:38:34.455Z
Causes of a Debt Crisis—Economic 2021-07-01T22:46:37.571Z
Intro to Debt Crises 2021-06-28T23:50:15.634Z
Let's Go Back To Normal 2021-05-06T03:10:02.447Z
Treatments correlated with harm 2020-04-16T21:02:58.126Z
Ways you can get sick without human contact 2020-04-08T22:41:07.384Z
Self-priming 2020-04-07T00:54:40.469Z
Connor_Flexman's Shortform 2019-10-16T21:31:55.112Z

Comments

Comment by Connor_Flexman on Thomas Kwa's MIRI research experience · 2023-10-06T08:00:39.352Z · LW · GW

Re rockets, I might be misunderstanding, but I’m not sure why you’re imagining doubling the number of molecules. Isn’t the idea that you hold molecules constant and covalent energy constant, then reduce mass to increase velocity? Might be worth disambiguating your comparator here: I imagine we agree that light hydrogen would be better than heavy hydrogen, but perhaps you’re wondering about kerosene?
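For what it's worth, here's a minimal sketch of the physics I have in mind, assuming idealized conversion of all chemical energy into exhaust kinetic energy (real engines involve nozzle/thermodynamic effects this ignores; the numbers are hypothetical):

```python
import math

# Idealized sketch: if all chemical energy goes into exhaust kinetic energy, then
# v = sqrt(2 * E / m). Holding energy per exhaust molecule fixed and halving the
# molecular mass raises exhaust velocity by sqrt(2).

def exhaust_velocity(energy_per_molecule, molecule_mass):
    return math.sqrt(2 * energy_per_molecule / molecule_mass)

E = 1e-19                # hypothetical energy per exhaust molecule, joules
m_heavy = 3.0e-26        # roughly the mass of a water molecule, kg
m_light = m_heavy / 2    # hypothetical lighter exhaust molecule, same energy

print(exhaust_velocity(E, m_heavy))  # ~2.6e3 m/s
print(exhaust_velocity(E, m_light))  # ~3.7e3 m/s, i.e. sqrt(2) higher
```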

Comment by Connor_Flexman on Noting an error in Inadequate Equilibria · 2023-06-14T02:38:48.298Z · LW · GW

I'm aware this is kind of hard to justify, but I'm basically an apologist on this one. I think he was mostly right and just exaggerated the measurable magnitude. It's just so damn hard to come up with examples that are not only true and illustrative and compelling and not confusing, but also very measurably true. I do wish he had provided a more justifiable core example and overstated the result less, but I do basically the same when I'm trying to make a point. On my list of metrics I think he satisficed basically fine—I can't think of any better examples off the top of my head from pre-COVID. 

[ETA: someone asked why I thought the effect size was more than 0, which is a good question that I was trying to dodge... Here's my attempt at some justification.

First: the “reality drives straight lines on graphs” thing. The line of your economy growing stays straight because you keep doing things to make the economy keep growing. Every time someone does something to boost the economy, that line gets a little straighter. If they didn’t fix the money supply they probably would have started growing less, but they did fix it because it was the next bottleneck and that’s how lines stay straight. I’ve seen a lot of times where an intervention that has to have a clear effect by any model of the world just doesn’t show a clear effect on graphs. So at this point I’m not that convinced by being unable to pick signal out of the noise.

Second, as someone else pointed out, they once again didn’t print enough money. So while Eliezer did exaggerate to say that they had actually fixed the problem and seen his preferred result, I think he was still directionally right: they did a small intervention, it helped some, and doesn’t really show up because they didn’t do enough. It wasn’t a Volcker situation where he really drilled it into people.

Third, least convincingly, I’m just schizo enough to be able to eyeball those graphs and say sure, does look a bit like prime LFPR was up faster. And after a crunch you’re supposed to see catch-up growth, and in this case it does seem like the catch-up growth of Japan was slower than I’d expect and the post-catch-up was equally fast but relatively faster (compare Germany’s RGDP, the first non-US country I looked at). Also, I hear there was some sort of economic problem around 2014-2015 maybe? Anyways this is of course in the context of point 1, where normally it’s hard to see anything on a graph.

Fourth, again in the context of point 1, in cases like this I'd lean more on models and historical context for what we know must be going on, rather than actually expecting to see the results clearly in any given case. And I feel pretty confident that increasing the liquidity available in a faltering economy HAS to increase GDP—like, decreasing it surely decreases GDP, right? So by the reversal test…]

Comment by Connor_Flexman on Connor_Flexman's Shortform · 2023-01-14T18:45:56.177Z · LW · GW

Clinton's campaign was against Bush, so they were throwing these words back at him.

Comment by Connor_Flexman on Connor_Flexman's Shortform · 2023-01-08T02:55:08.925Z · LW · GW

A Few Lessons from Dominic Cummings on Politics

Barbell model of voters (or "delusion of the centre"), where many in the electorate are far to the left of politicians on white collar crime and higher taxes on the rich but far to the right of politicians on violent crime, anti-terrorism, and immigration. 

You want to be empirical in a way almost all in politics aren't: run tons of focus groups and really listen to how your voters think, not just what policies they want.

Use a best-in-class data model. Polls naturally swing all over, much polling is bad; if you use these, make them Bayesian and get great people who really know what they're doing to figure them out. Then use these models to focus relentlessly on whatever has the largest effect size, which is swing voters. [Some other tricks here that seem worth not being as explicit about.]
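As a toy illustration of the "make them Bayesian" point, here's a minimal normal-normal update with made-up numbers (real campaign models also handle house effects, time trends, and many polls at once):

```python
# Toy sketch: combine a prior (e.g. from a fundamentals model) with one noisy poll.
# All numbers are hypothetical and purely for illustration.

prior_mean, prior_sd = 0.48, 0.04      # hypothetical prior on vote share
poll_mean, poll_n = 0.52, 800          # hypothetical poll result and sample size
poll_sd = (poll_mean * (1 - poll_mean) / poll_n) ** 0.5   # ~0.018 sampling error

w_prior, w_poll = 1 / prior_sd**2, 1 / poll_sd**2         # precision weights
post_mean = (w_prior * prior_mean + w_poll * poll_mean) / (w_prior + w_poll)
post_sd = (w_prior + w_poll) ** -0.5
print(f"posterior vote share ~ {post_mean:.3f} +/- {post_sd:.3f}")  # roughly 0.513 +/- 0.016
```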

Don't be patronizing, do have integrity—very hard in politics. 

Stay on message. Bill Clinton's campaign had 3 talking points, each phrased to maximize punch. "It's the economy, stupid", "read my lips", and another that I forget. Carville was incredible at focusing relentlessly on turning every interview question into a response on one of these three. People won't care about most of the stuff you could talk about, and you can't optimize everything, so just choose the few best messages that are most powerful to people and drive everything back to them. Watch The War Room about the Clinton campaign if you haven't yet.

Comment by Connor_Flexman on Visible Homelessness in SF: A Quick Breakdown of Causes · 2022-06-09T09:35:50.621Z · LW · GW

Note that
A) zooming in on most city hubs will find you monetary concentrations like this, e.g. Manhattan has a GDP per capita of $370k
B) despite living there for a long time, I have never actually heard anyone argue that making the city richer is the path to solving homelessness, so I suspect this might be an error—are you conflating this with deregulating the housing market? Or do people actually argue somewhere that more money would solve homelessness?

Comment by Connor_Flexman on Experience on Meloxicam · 2022-05-08T11:37:03.415Z · LW · GW

~same. I use a Kinesis Freestyle with a 20" cord, which finally ~fixed my wrists after 4 years, and I'm extremely excited for the Kinesis Advantage360 coming out sometime this year.

Comment by Connor_Flexman on Home Antigen Tests Aren’t Useful For Covid Screening · 2022-05-08T11:23:05.772Z · LW · GW

I think my current expectation of risk reduction from antigen tests is more like 20-60% than <10%, but I'll also note that it matters a lot what your population is. In Elizabeth's social circle my guess is that most people aren't coming to parties if they've had any suspected positive contact, have any weak symptoms, etc, such that there's a strong selection effect screening out the clearly-positive people. (Or like, imagine everyone with these risk factors takes an antigen test anyways—then requiring tests doesn't add anything.)
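To illustrate the selection-effect point with a toy model (all numbers made up): antigen tests are best at catching exactly the obvious, high-viral-load cases that conscientious self-screening already removes, so their effective sensitivity on the people who actually show up is lower.

```python
# Toy model: infectious would-be attendees are either "obvious" (symptoms, known exposure,
# high viral load -- easy for an antigen test to catch) or "subtle" (low viral load -- often
# missed). Self-screening removes mostly the obvious ones, so the test's effective
# sensitivity on actual attendees drops. All numbers are made up for illustration.

def effective_sensitivity(frac_obvious, sens_obvious, sens_subtle, obvious_staying_home):
    obvious = frac_obvious * (1 - obvious_staying_home)  # obvious cases that still show up
    subtle = 1 - frac_obvious
    caught = obvious * sens_obvious + subtle * sens_subtle
    return caught / (obvious + subtle)

print(effective_sensitivity(0.6, 0.9, 0.3, 0.0))  # no self-screening: ~0.66
print(effective_sensitivity(0.6, 0.9, 0.3, 0.9))  # strong self-screening: ~0.38
```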

I haven't read this whole thread but for the record, I often agree with Michael Mina and think he does great original thinking about these topics, yet think in this case he's just wrong with his extremely high estimates of antigen test sensitivity during contagion. I think his model on antigen tests specifically is theoretically great and a good extrapolation from a few decent assumptions, but just doesn't match what we see on the ground.

For example, I've written before about how even PCRs seem to have 5-10% FNR in the hospitalized, and how PCR tests look even worse from anecdata. Antigen tests get baselined against PCR so will be at least this bad.

We also see things like a clinical trial on QuickVue tests that shows only ~83% sensitivity. Admittedly other studies of antigen tests show ~98% sensitivity, but I think publication bias and results-desirability bias here mean that if the clinical trial only shows 83%, then that's decent evidence that studies finding higher are a bit flawed. I would not have guessed they could get to 98%, though, so there's something that doesn't make sense here.
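A quick sketch of that compounding, using the 5-10% PCR miss rate and the ~83% QuickVue figure above (and assuming antigen tests essentially never catch cases that PCR misses):

```python
# Sensitivity vs. true infection ~= sensitivity vs. PCR * PCR's own sensitivity,
# under the assumption that antigen tests essentially never catch cases PCR misses.
antigen_vs_pcr = 0.83
for pcr_sensitivity in (0.95, 0.90):   # i.e. a 5-10% PCR false negative rate
    antigen_vs_truth = antigen_vs_pcr * pcr_sensitivity
    print(f"PCR sensitivity {pcr_sensitivity:.0%} -> antigen vs. true infection ~ {antigen_vs_truth:.0%}")
# prints ~79% and ~75%
```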

I know the standard heuristic is to trust scientific findings over anecdata, but I think in this case that should be reversed if you're extremely scientifically literate and closely tracking things on the ground. Knowing all the things that can go wrong with even very careful scientific findings, I just don't trust these studies claiming very high sensitivity much—I think they also contradict FDA data on Cue tests, data/anecdata about nasal+saliva tests working better than just nasal, etc. 

(Maybe I'm preaching to the choir and you know most of this, given your range was 25-90%. But I guess I see pretty good evidence it can't possibly be at the high end of that range.)

Comment by Connor_Flexman on Connor_Flexman's Shortform · 2022-03-26T00:25:43.259Z · LW · GW

Reminder that the US is crossing 50% BA.2 in the next few days, and CA and NY have started to uptick, so probably in 4 weeks it will be a serious wave, peaking in like 6-8ish weeks. Plan accordingly!

(So ~4 weeks where things are fineish, then ~7 weeks where rates are higher, then 4 weeks to come back down. I.e. plan for May and June to have lots of COVID, and potential restrictions to continue into July.)

Comment by Connor_Flexman on Connor_Flexman's Shortform · 2022-03-25T20:11:22.308Z · LW · GW

I at least partially agree with this. I'm less interested in virtue signaling per se than I am in using it as a brief exploration to highlight a common misconception about how signaling works. Plausibly virtue signaling isn't the clearest example of this, but I do think it's a pretty good case of the broader point: people tend to talk about signals mostly when they are deficient in various ways, but then that tarnish rubs off onto all signaling universally. I think it's really important to recognize that signals are extremely good in general, except ones that are dumb because they're costly to implement or goodharted or what-have-you. This really does not come through when people talk about signaling.

Comment by Connor_Flexman on Connor_Flexman's Shortform · 2022-03-23T23:02:07.337Z · LW · GW

Remember remember remember, costly signaling is supposed to be about cost-to-fake, not cost-burnt-to-signal. It is not like Bitcoin. If you own an original Picasso, it is costless to show that you own it, but very costly for someone to fake owning it (have to commission an elaborate copy).

“Virtue signaling” should be thought of with this in mind. If you or someone else is frowning upon a virtue signal, that’s not because of the inherent structure of signaling. It means either it’s a corrupted signal, they’re being annoying with their signal, or it’s not a signal to begin with. For example, if someone can post a bunch of tweets about Latest Crisis costlessly, that’s not really a costly signal to begin with. If someone volunteers for many hours at soup kitchens to be a politician even though they hate it, that’s a corrupted signal. If you casually drop all your volunteering accolades in conversation apropos of nothing, that’s a real signal but really annoying.

In many ways this structure mirrors force projection! Cf Luttwak's Grand Strategy of the Roman Empire. In the same way that good force projection doesn’t require costly forces to be applied, good signaling doesn’t require cost to be burnt on a signal. The adept will signal perfectly fine through various proofs provided, without breaking social norms or splurging resources.

Comment by Connor_Flexman on Connor_Flexman's Shortform · 2022-03-21T07:38:16.085Z · LW · GW

Mimesis has re-revealed its awesome and godly power to me over the last few months. Not Girardian mimesis, but hominid mimesis. The best way to do almost anything is to literally copy others, especially the best people, but really the triangulation between any few people will do. Don't know how to write an email? Copy it from an email you received. Don't know how to do any chore, cooking, dance, etc.? Just look it up on YouTube. This is a long way from the Connor of 2018, who fastidiously avoided watching YouTube videos of poi so he could explore it all on his own for months.

Mimesis has a bad rap in my local culture. But, huge postulate: mimesis is ONLY bad when coupled with such tight need for approval that it is a hard constraint on what you can do. That's the combination that results in whole segments of society that can't innovate, can't fix basic problems, general cheems mindset. In our scene of non-conformists, there is essentially no downside, I postulate!

You can make arguments like “thinking things through for yourself first can help avoid anchoring”, or “you can genuinely learn better if you take a first stab yourself and then see the diff”. Sure, but I think these are the exception that proves the rule. Holding off on mimesis is very useful in a few contexts, and all the time for a few occupations; for most people, 99% of stuff is best to do starting from the shoulders of giants. If you like thinking for yourself, trust me that you will do that just the same while cooking from a recipe compared to trying to derive it yourself. If I had just started learning poi as the experts do it, I would have much more quickly gotten to a place where  creative energy and first principles yielded interesting new results, rather than just new results. 

Comment by Connor_Flexman on Six economics misconceptions of mine which I've resolved over the last few years · 2022-01-05T22:45:36.823Z · LW · GW

I think I basically mean it straightforwardly. In my mind it is pretty similar to other moral injunctions like "tell the truth" or "speak up for the bullied"—it is important to resolve to do it ahead of time, because in the moment it might be quite hard and costly to do so. So if someone were to start talking about how actually the bullied need to learn to stick up for themselves, etc etc, I would want to remind myself and others that while this is true, it shouldn't change my moral resolution to stand up to bullying. (It's perfectly fine for people to discuss whether maybe we shouldn't stand up for them, but if someone gives an argument that doesn't apply, or evidence that later turns out to be false, I want to again reiterate the resolution.)

Maybe this is overkill or something but I think it feels pretty straightforward to me. I think sometimes my moral resolutions do in fact get eroded by people questioning them, and not "re-committing" afterward.

Comment by Connor_Flexman on Six economics misconceptions of mine which I've resolved over the last few years · 2022-01-03T20:33:30.267Z · LW · GW

There are plenty of things you should still resolve to do. You don't want to throw the baby out with the bathwater by maintaining maximum irresolution so you'll never have difficulty changing your mind. Just change your mind when important evidence comes in—and in this case, I'm trying to point out that it is not important evidence against internalizing externalities. (It is evidence against levying the full externality cost and failing to try mitigating trades that reduce that externality cost.)

Comment by Connor_Flexman on Omicron Post #7 · 2021-12-27T05:50:34.340Z · LW · GW

Given that a shorter serial interval is a very effective way to amplify an already-high R0 into a great evolutionary advantage, and that common colds, flu, etc. have all developed serial intervals of more like 1-2 days (showing there's plenty of room for COVID to climb in this regard), I would expect lots of new dominant strains to show shortened serial intervals regardless of heritage.

Not sure if there's publication bias or measurement bias here but the first link on google shows a study estimating Omicron serial interval at 2.2 days in South Korea.

Comment by Connor_Flexman on COVID and the holidays · 2021-12-20T06:49:28.743Z · LW · GW

Ah, good point.

The main reason I don't automatically make a huge adjustment for this is that it seemed that there were still ~25% false negatives at peak contagiousness by PCR according to some studies. And 40% false negatives a few days later. All sorts of things are possible, like that these studies included many asymptomatic and non-contagious cases, but having seen anecdotal corroboration of this phenomenon, I'm inclined to think something weird is going on.

But I should give some weight to it—perhaps the tests miss roughly half as many cases once you weight by contagiousness, so maybe 60% efficacy for rapid and 75% for PCR?

Comment by Connor_Flexman on COVID and the holidays · 2021-12-20T04:57:25.480Z · LW · GW

Good point re the urban centers, was pretty dumb of me to forget to adjust for that. I've added two ETAs to the post to account for this.

Comment by Connor_Flexman on Omicron Post #7 · 2021-12-20T04:42:45.967Z · LW · GW

I can't confirm you didn't talk about this, but the only thing of importance I haven't seen you mention in these is that the serial interval might be much smaller (eg as mentioned here), so that R0 and the number of secondary infections are not so vastly higher even as we see incredibly fast doubling times. I think people don't realize how much this effect played a part in Delta's containability, even while Delta overtook other variants very quickly and it seemed pretty concerning at the beginning.
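To put rough numbers on this (a minimal sketch using the simple exponential-growth approximation r = ln(R)/serial interval, with hypothetical figures): the same observed doubling time implies a much lower R when the serial interval is shorter.

```python
import math

# Simple exponential-growth approximation: growth rate r = ln(R) / serial_interval,
# so doubling time = serial_interval * ln(2) / ln(R). All numbers below are hypothetical.

def implied_R(doubling_days, serial_interval_days):
    """R implied by an observed doubling time for a given serial interval."""
    return math.exp(serial_interval_days * math.log(2) / doubling_days)

for t_serial in (5.0, 3.3, 2.2):  # serial interval in days
    print(f"serial interval {t_serial} d -> implied R ~ {implied_R(3.0, t_serial):.2f}")
# serial interval 5.0 d -> implied R ~ 3.17
# serial interval 3.3 d -> implied R ~ 2.14
# serial interval 2.2 d -> implied R ~ 1.66   (all for the same ~3-day doubling time)
```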

Specifically, if Omicron is tearing through vaccinated populations because it has 5x higher immune escape than Delta, 2/3 the serial interval, and 1.5x the transmission (which I think roughly fits the SA data), then we might see the Omicron wave ending up surprisingly containable—only needing behavior to curb the 1.5x transmission and to go back to pre-vaccine (Jan 2021) measures to fix the immune escape.

I don't put much probability on this since I don't know a ton about Omicron and I haven't heard people talking about it much, but it seems like it's still an overlooked phenomenon re Delta and so I wouldn't be surprised if it made Omicron a lot more containable than I've seen implied. Admittedly people seem very done with countermeasures at this point, so we still might be screwed (and screwed quickly if serial interval is in fact a big driver), but I wouldn't actually be all that surprised if the control system kicked in in a few weeks and the wave peaked fairly quickly.

Comment by Connor_Flexman on What Caplan’s "Missing Mood" Heuristic Is Really For · 2021-12-20T03:08:43.069Z · LW · GW

I think you're right that the initial blog post is repeatedly making a big mistake: running into a values difference and using the mood argument as evidence against the position.

I think to turn this into a useful heuristic, the patch is to reverse the order of operations. If you go from Noticing Missing Mood -> Be Suspicious, you can easily use this to tar your political opposition. But if you go from Intuition of Suspicion -> Look for Missing Mood, it can be a highly productive heuristic for determining what seems to be wrong with someone's argument, whether or not that actually makes it wrong. You can immediately find cruxier issues between you and your political opponent (like the values differences above). This post seems to do that well.

Doing this on your own beliefs is arguably the most productive! "Huh, shouldn't I also feel sympathy towards the people my proposed policy hurts?"

Comment by Connor_Flexman on COVID and the holidays · 2021-12-19T10:20:19.869Z · LW · GW

Yes, it definitely will, and yes that will be unacceptable. Will that be because of vaccinated scrupulous LessWrong-reading mask-wearing 30yos during the holidays? No. That will contribute much less harm-to-benefit than many, many other actions.

Comment by Connor_Flexman on COVID and the holidays · 2021-12-19T10:14:38.120Z · LW · GW

First, I think we are all still pretty far from living as normal. Many things in our past lives would have been more than 1k microcovids.

Second, even the most informal versions of test and trace (telling your friends if you develop any symptoms, so they can tell their friends) can significantly reduce transmission rate.

Third, all this is in the context of the holidays. Fourth, 30yos are not the only segment of society. Fifth, the health care system is not yet close to capacity in almost all places (if your local hospitals are overwhelmed, obviously do not act normally). Etc

Comment by Connor_Flexman on COVID and the holidays · 2021-12-17T21:34:37.574Z · LW · GW

Ah, stocking up on $2 tests would be awesome! That I would certainly endorse.

My reasoning on antigen false negatives is coming from a few lines of evidence. Perhaps I can share some later. But in short: 1) lots of studies have found much higher than average false negative rates, so results are high-variance/heterogeneous; 2) my anecdotal counts of people around me concord with the above studies; 3) my prior is fairly high on studies overestimating the efficacy of tests, based on BOTH lab conditions being extra controlled and on scientists being biased toward finding higher efficacy (and this affecting studies in a real way that is hard to control for). Thus my preferred resolution of the mystery between anecdotal efficacy and average study efficacy is that studies overestimate efficacy.

Comment by Connor_Flexman on COVID and the holidays · 2021-12-17T00:17:34.661Z · LW · GW

Regarding timing, I think you want to test the day of. The day before is probably fine too. But Delta seemed like it was progressing fast enough that a 1-day lag would lose you a large chunk of the effectiveness.

Regarding mentioning testing in general—I think it helps a little but not enough to matter in most cases. I'm under the impression that PCR tests have a false negative rate of about 50% and antigen tests 70%, which basically translates into risk of .5x and .7x for an event. But if you're home for the holidays, you'd have to keep testing repeatedly if you wanted this risk multiplier to extend, otherwise you might just develop sickness later. 

So a $30 test for people only matters if you are going to an event where the average person is losing more than an hour of life, which is maybe 1000+ microcovids. I don't think this comes up super often. However, I guess it's not crazy to take 7 tests over a week to save people several days of life over the holidays. Maybe I should have added this actually. I guess it just seems like it won't be cost-effective for the overcautious people (and will be overused!) and won't be attention-effective for the undercautious people. But probably I should have thought through and been clear about this.
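Here's the back-of-the-envelope version of that, as a rough sketch using the ~0.5x/0.7x risk multipliers above and the rough "1000+ microcovids is about an hour of expected life lost" conversion (all numbers approximate):

```python
# Back-of-the-envelope: is a $30 pre-event test worth it? Uses the rough risk multipliers
# above (PCR ~0.5x, antigen ~0.7x) and the rough conversion that ~1000 microcovids at an
# event corresponds to ~1 hour of expected life lost for the average attendee.

def hours_saved(event_microcovids, risk_multiplier, hours_per_1000_ucov=1.0):
    expected_hours_lost = event_microcovids / 1000 * hours_per_1000_ucov
    return expected_hours_lost * (1 - risk_multiplier)

for test, mult in [("PCR", 0.5), ("antigen", 0.7)]:
    print(f"{test}: ~{hours_saved(1000, mult):.1f} hours saved at a 1000-microcovid event")
# PCR: ~0.5 hours, antigen: ~0.3 hours -- so a $30 test only clearly pays off at events
# carrying roughly an hour or more of expected loss.
```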

Comment by Connor_Flexman on COVID and the holidays · 2021-12-16T01:48:47.703Z · LW · GW

Using a counterfactual of "getting COVID a few years later and you balance out" is certainly tempting, but I don't think that's really how it would go down. Based on how vaccine efficacy wanes, reinfections occur, and new variants are introduced, my guess is that you lose all your immunity and more within 2 years, plus in the next decade we probably will develop increasingly effective drugs against it. Hard to sum everything up, but my guess is that getting COVID now provides a benefit that is less than half its badness. Probably I should make a best guess here and add it at some point, but this is the type of factor of <2 that occasionally pops up on either side that I typically ignore.

Comment by Connor_Flexman on COVID and the holidays · 2021-12-15T01:31:29.385Z · LW · GW

But also you might lose less than two days. I think two days in expectation is actually quite conservative; it's probably more like 1 day lost in expectation.

Comment by Connor_Flexman on COVID and the holidays · 2021-12-09T22:42:03.560Z · LW · GW

Oh no, I only meant to recommend masks in the lead-up to the gathering, not the actual gathering itself. You are absolutely right and I've edited to make this clear.

Comment by Connor_Flexman on Let's buy out Cyc, for use in AGI interpretability systems? · 2021-12-08T01:26:01.204Z · LW · GW

(In a P/E ratio, the "earnings" is profit, which in Cyc's case is probably negative. Gwern is using a P/S ratio (price to sales, where sales = revenue), which is usually used for startups because they're scaling and earnings are still negative. 5 seems reasonable because, while P/S can go much higher for rapidly scaling startups, Cyc doesn't seem to be rapidly scaling.)
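As a toy illustration of the multiple (the revenue figure is hypothetical, not Cyc's actual number):

```python
# Toy price-to-sales valuation. The revenue figure is hypothetical, purely for illustration.
annual_revenue = 5_000_000   # hypothetical $5M/yr of sales
ps_ratio = 5                 # the multiple discussed above
print(f"implied price ~ ${annual_revenue * ps_ratio:,}")   # ~$25,000,000
```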

Comment by Connor_Flexman on Sci-Hub sued in India · 2021-11-17T03:14:14.055Z · LW · GW

Hopefully my comment above makes this more clear now, but the 37% is supposed to imply extremely strong pricing power / an oligopoly position / a lack of competition, and that the true cost of production is more like 1-10% of their revenue than 63%. Perhaps I should have made this more clear.

Anyways, I think this is in fact a major aspect of my true objection; if there were a bunch of small journals of academics competing and universities couldn't afford them, it would be less obvious the first step to take.

Hopefully also clear now: I'm not trying to use arguments as soldiers here, but I am trying to quickly summarize the state of things from my perspective, and am not being incredibly careful with all my wordings. E.g. that whole section is not even accurately "why Sci-Hub exists"—Sci-Hub exists because Alexandra believes in open science. But it's trying to gesture at background reasons why Sci-Hub still exists. Many other mistakes like this—if I had a year I would have taken the time to write a better post. 

Comment by Connor_Flexman on Sci-Hub sued in India · 2021-11-17T03:03:08.857Z · LW · GW

What I meant here was not that the problem was a 37% surcharge; it was that the problems were all the ones associated with a 37% OPM (operating profit margin) oligopoly in science.

First, Viliam had the right idea in the comment below—the costs rise to meet the revenue, and much of the "expenses" are going to be useless administrative bloat in a thousand different ways. The non-profit version could be run at about literally 1000x less cost: https://twitter.com/jeremyphoward/status/1219365213201264640.

But again, the problem isn't so much the money wasted as the practices implied. To maintain 37% OPM as a large company with bloative force means you have some serious pricing power from lack of competition, which means inefficient monopoly pricing. And beyond the econ 101 case, their attempts at bundling mean even more monopoly inefficiency.

I'm not especially clear on what should be done in the academic publishing world as a whole because I haven't been on the ground in the many attempts to change things. But I think most other options involve costs coming down by more like 90% than like 20%. 

Comment by Connor_Flexman on Sci-Hub sued in India · 2021-11-15T06:19:14.566Z · LW · GW

This is good, thanks!

Comment by Connor_Flexman on Sci-Hub sued in India · 2021-11-15T06:17:59.139Z · LW · GW

That issue is a good point; I think one variant that gets around it is one focused on pre-prints. As I understand it, some journals allow pre-prints and others don't. This basically fixes the problem for all fields with a pre-print server.

Comment by Connor_Flexman on Sci-Hub sued in India · 2021-11-15T06:08:50.531Z · LW · GW

Money just isn't really a priority/bottleneck on this so nowhere is set up to take donations, except generic Sci-Hub. And that actually might be strategically bad at the moment because Elsevier, like Wormtongue himself, is claiming in the lawsuit that Sci-Hub has commercialized its works through the donations it accepts. Best to have that number stay low.

Comment by Connor_Flexman on My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage) · 2021-10-24T01:07:47.943Z · LW · GW

Yeah, ideally would have lampshaded this more. My bad.

The part that gets extra complex is that I personally think ~2/3+ of people who say totalization is fine for them are in fact wrong and are missing out on tons of subtle things that you don't notice until longer-term. But obviously the most likely thing is that I'm wrong about this. Hard to tell either way. I'd like to point this out more somehow so I can find out, but I'd sort of hoped my original comment would make things click for people without further time. I suppose I'll have to think about how to broach this further.

Comment by Connor_Flexman on My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage) · 2021-10-23T22:46:36.209Z · LW · GW

I agree with most of this point. I've added an ETA to the original to reflect this. My quibble (that I think is actually important) is that I think it should be less of a tradeoff and more of an {each person does the thing that is right for them}. 

Comment by Connor_Flexman on My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage) · 2021-10-23T22:42:41.442Z · LW · GW

(I would not take this modus tollens, I don't think the "community" is even close to fundamentally bad, I just think some serious reforms are in order for some of the culture that we let younger people build here.)

Comment by Connor_Flexman on My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage) · 2021-10-23T22:16:54.086Z · LW · GW

But the "community" should not be totalizing.

(Also, I think rationality should still be less totalizing than many people take it to be, because a lot of people replace common sense with rationality. Instead one should totalize themselves very slowly, over years, watching for all sorts of mis-steps and mistakes, and merge their past life with their new life. Sure, rationality will eventually pervade your thinking, but that doesn't mean at age 22 you throw out all of society's wisdom and roll your own.)

Comment by Connor_Flexman on Choice Writings of Dominic Cummings · 2021-10-23T10:09:36.810Z · LW · GW

Ah yeah, I should have thought more about what you meant there. Sorry. I'm still not sure I agree though—I feel like the public can be convinced of all sorts of things. 

I do think growth may end up being decent evidence. I guess I'm trying to point at why I might be so agnostic without going through a 10-paragraph essay explicitly stating a bunch of scenarios.

So for example, I think people are fairly unconcerned about whether they have a 20% versus a 30% GDP growth over the next 15 years, but rightly concerned about whether there's then a pandemic that kills a bunch of people and curtails quality of life drastically (just outside the bounds of our growth measurement, arguendo). So, especially as the world gets more and more crazy and plausibly near end-game, I'm willing to trade off increasingly more GDP growth for other things like liberties, nimble government, less-partisan politics, literal political experimentation, etc, that increase quality of life and general or political sanity and decrease likelihood of disasters. I could imagine a world where those things also cashed out immediately in enough economic growth to pay for themselves, but I could also imagine a world where there were ways to get some of these that required real economic sacrifices.

Comment by Connor_Flexman on My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage) · 2021-10-23T09:45:38.760Z · LW · GW

I want to bring up a concept I found very useful for thinking about how to become less susceptible to these sorts of things.

(NB that while I don't agree with much of the criticism here, I do think "the community" does modestly increase psychosis risk, and the Ziz and Vassar bubbles do so to extraordinary degrees. I also think there's a bunch of low-hanging fruit here, so I'd like us to take this seriously and get psychosis risk lower than baseline.)

(ETA because people bring this up in the comments: law of equal and opposite advice applies. Many people seem to not have the problems that I've seen many other people really struggle with. That's fine. Also I state these strongly—if you took all this advice strongly, you would swing way too far in the opposite direction. I do not anticipate anyone will do that but other people seem to be concerned about it so I will note that here. Please adjust the tone and strength-of-claim until it feels right to you, unless you are young and new to the "community" and then take it more strongly than feels right to you.)

Anyways, the concept: I heard the word “totalizing” on Twitter at some point (h/t to somebody). It now seems fundamental to my understanding of these dynamics. “Totalizing” was used in the sense of a “totalizing ideology”. This may just be a subculture term without a realer definition, but it means something like “an ideology that claims to affect/define meaning for all parts of your life, rather than just some”—and implicitly also that this ideology has a major effect and causes some behaviors at odds with default behavior.

This definition heavily overlaps with the stuff people typically associate with cults. For example, discouraging contact with family/outside, or having a whole lot hanging on whether the leaders approve of you. Both of these clearly affect how much you can have going on in your "outside" life.

Note that obviously totalization is on an axis. It's not just about time spent on an ideology, but how much mental space that ideology takes up.

I think some of the biggest negative influences on me in the rationality community also had the trait of pushing towards totalization, though were unalike in many other ways. One was ideological and peer pressure to turn socializing/parties/entertainment into networking/learning, which meant that part of my life also could become about the ideology. Another was the idea of turning my thoughts/thinking/being into more fodder to think about thinking processes and self-improve, which cannibalized more of my default state.

I think engaging with new, more totalizing versions of the ideology or culture is a major way that people get more psychotic. Consider the maximum-entropy model of psychosis, so named because you aren't specifying any of the neural or psychological mechanisms: you're taking strictly what you can verify and being maximally agnostic about it. In this model, you might define psychosis as when "thought gets too far away from normal, and your new mental state is devoid of many of the guardrails/protections/negative-feedback-loops/sanity-checks that your normal mental states have." (This model gels nicely with the fact that psychosis can be treated so well via drinking water, doing dishes, not thinking for a while, tranquilizers, socializing, etc. (h/t anon).) In this max-ent model of psychosis, it is pretty obvious how totalization leads to psychosis: changing more state, reducing more guardrails, rolling your own psychological protections that are guaranteed to have flaws, and cutting out all the normal stuff in your life that resets state. (Changing a bunch of psychological stuff at once is generally a terrible idea for the same reason, though that's a general psychosis tip rather than a totalization-related one.)

I still don't have a concise or great theoretical explanation for why totalization seems so predictive of ideological damage. I have a lot of reasons for why it seems clearly bad regarding your belief-structure, and some other reasons why it may just be strongly correlated with overreach in ways that aren't perfectly causal. But without getting into precisely why, I think it's an important lens to view the rationalist "community" in.

So I think one of the main things I want to see less of in the rationalist/EA "communities" is totalization.

This has a billion object-level points, most of which will be left as an exercise to the reader:

  • Don’t proselytize EA to high schoolers. Don’t proselytize other crazy ideologies without guardrails to young people. Only do that after your ideology has proven to make a healthy community with normal levels of burnout/psychosis. I think we can get there in a few years, but I don't think we're there yet. It just actually takes time to evolve the right memes, unfortunately.
  • To repeat the perennial criticism... it makes sense that the rationality community ends up pretty insular, but it seems good for loads of reasons to have more outside contact and ties. I think at the very least, encouraging people to hire outside the community and do hobbies outside the community are good starting points.
  • I've long felt that at parties and social events (in the Bay Area anyways) less time should be spent on model-building and networking and learning, and more time should be spent on anything else. Spending your time networking or learning at parties is fine if those are pretty different than your normal life, but we don't really have that luxury.
  • Someone recently tried to tell me they wanted to put all their charitable money into AI safety specifically, because it was their comparative advantage. I disagree with this even on a personal basis with small amounts. Making donations to other causes helps you take them seriously, in the way that trading with real-but-trivial amounts of money instead of paper trading moves you strongly from Far Mode into Near Mode. I think paying 10% overhead of charitable money to lower-EV causes is going to be much better for AI safety in the long-run due to seriousness-in-exploration, AND I shouldn’t even have to justify it as such—I should be able to say something like “it’s just unvirtuous to put all eggs in one basket, don’t do it”. I think the old arguments about obviously putting all your money into the highest-EV charity at a given time are similarly wrong.
  • I love that Lightcone has a bunch of books outside the standard rationalist literature, about Jobs, Bezos, LKY, etc etc.
  • In general, I don’t like when people try to re-write social mechanisms (I’m fine with tinkering, small experiments, etc). This feels to me like one of the fastest ways to de-stabilize people, as well as the dumbest Chesterton’s fence to take down because of how socializing is in the wheelhouse of cultural gradient descent and not at all remotely in the wheelhouse of theorizing.
  • I’m much more wary of psychological theorizing, x-rationality, etc due to basically the exact points in the bullet above—your mind is in the wheelhouse of gradient descent, not guided theorizing. I walk this one—I quit my last project in part because of this. Other forms of tinkering-style psychological experimentation or growth are likely more ok. But even “lots of debugging” seems bad here, basically because it gives you too much episteme of how your brain works and not enough techne or metis to balance it out. You end up subtly or not-subtly pushing in all sorts of directions that don’t work, and it causes problems. I think the single biggest improvement to debugging (both for ROI and for health) is if there was a culture of often saying “this one’s hopeless, leave it be” much earlier and explicitly, or saying “yeah almost all of this is effectively-unchangeable”. Going multiple levels down the tree to solve a bug is going too far. It’s too easy to get totalized by the bug-fixing spirit if you regard everything as mutable.
  • As dumb as jobs are, I’m much more pro-job than I used to be for a bunch of reasons. The core reasons are obv not because of psychosis, but other facets of totalization-escape seems like a major deal.
  • As dumb as regular schedules are, ditto. Having things that you repeatedly have to succeed in doing genuinely leaves you much less room for going psychotic. Being nocturnal and such are also offenders in this category.
  • I'd like to see Bay Area rationalist culture put some emphasis on real holidays rather than only rolling their own. E.g. Solstice instead of Christmas seems fine, but also we should have a lot of emphasis on Christmas too? I had a housemate who ran amazing Easter celebrations in Event Horizon that were extremely positive, and I loved that they captured the real spirit of Easter rather than trying to inject the spirit of Rationality into the corpse of Easter to create some animated zombie holiday. In this vein I also love Petrov Day but slightly worry that we focus much less on July 4th or Thanksgiving or other holidays that are more shared with others. I guess maybe I should just be glad we haven't rationalized those...
  • Co-dependency and totalizing relationships seem relevant here although not much new to say.

Anna's crusade for hobbies over the last several years has seemed extremely useful on this topic directly and indirectly.

I got one comment on a draft of this about how someone basically still endorsed years later their totalization after their CFAR workshop. I think this is sort of fine—very excitable and [other characterizations] people can easily become fairly totalized when entering a new world. However, I still think that a culture which totalized them somewhat less would have been better.

Also, lots of people totalize themselves—I was one of those people who got very excited about rationality and wanted to push it to new heights and such, unendorsed by anyone in the "community" (and even disendorsed). So this isn't a question of "leadership" of some kind asking too much from people (except Vassar)—it's more a question of building a healthy culture. Let us not confuse blame with seeking to become better.

Comment by Connor_Flexman on My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage) · 2021-10-23T05:46:22.950Z · LW · GW

Regarding Eliezer's tweets, I think the issue is that he is joking about the "never stop screaming". He is using humor to point at a true fact, that it's really unfortunate how unreliable neural nets are, but he's not actually saying that if you study neural nets until you understand them then you will have a psychotic break and never stop screaming.

Comment by Connor_Flexman on Choice Writings of Dominic Cummings · 2021-10-20T04:00:12.091Z · LW · GW

Not sure why you think domestic pressure / public agreement is strong evidence. Public pressure for all sorts of things seems hardly correlated with whether they're beneficial.

I think the strongest arguments for Brexit are pretty orthogonal to the economy. Things like "can the government react to crises on the order of weeks instead of months". I do think enough crises would give us data on this but I'm not even sure it will be reasonable to extract counterfactuals from several. Other reasons to do Brexit seem similarly hard to measure compared to myopic economic impact.

Comment by Connor_Flexman on Choice Writings of Dominic Cummings · 2021-10-18T22:01:33.893Z · LW · GW

FWIW, I personally don't have much evidence to determine whether Brexit was good. Seems plausible to me that you're right that they now just have different bureaucratic downsides. I've read a few things about being able to make ARIA (UK version of ARPA) and some other things from 2019 that make me lean somewhat positive, but I'm extremely agnostic. I have a bunch of thoughts on quality of evidence here, but suffice it to say I am not sure whether we will ever get much Bayesian evidence on goodness or badness. So my interest in DC is relatively orthogonal to whether Brexit turns out to be object-level good or bad (even though ideally I would know this and be able to include it in my model of how much to believe his beliefs).

Comment by Connor_Flexman on Choice Writings of Dominic Cummings · 2021-10-15T21:47:41.357Z · LW · GW

Oh awesome, you already made the important argument here. Thanks. I'll leave up my comment above saying similarly, though.

Comment by Connor_Flexman on Choice Writings of Dominic Cummings · 2021-10-15T21:46:22.614Z · LW · GW

First, I already agreed this was true. But if you write about the urgent need for planning for biosecurity a year before a pandemic, quote a biosecurity report that mentions 8ish diseases, cut a few from your block quote for concision, and then one of the 8 that you didn't specifically use in your block quote (but which you were definitely writing about!) occurs in a global pandemic... I just think it's pretty reasonable to say "I wrote about this". I might not do it per se, but if a friend of mine did it, I wouldn't bat an eyelid. If a random acquaintance did it, I would stop for a second, think about it, decide it seemed fine. If you write a fair amount about a report warning of some things, and then one of those things happens, you get to say "you wrote about that". 

Second, I think there's a very important distinction between truth-seeking and truth-telling, as comes up regarding Cummings. I understand this is a pretty apologist stance but I think it's super important here. Normally people have neither, and sometimes people have both. But I think it's pretty consistent to have a model of him where he is truth-seeking but not always truth-telling.

For example, he talks about reading a lot of history in a truth-seeking way and how he's literally trying to piece together what actually happened around Bismarck because everything is so untrustworthy, and he waxes extensively on how one should interact in government in a way that continuously excises the oft-repeated political narratives and seeks truth instead. BUT he also runs campaigns that make public slogans that are slightly misleading in an unimportant way (350M), or publicly says he warned of X when instead he wrote a warning about everything in a report that included X.

These statements are a little fishy, I agree, and you should flag him as someone who you might not want to directly believe all public statements from. It also seems fine if you go more extreme, and say he's the type of person who you can't trust, though others might personally still trust him and different people should have different takes on this.

BUT I still think we should describe him as truth-seeking, especially if I flag it as "from a different perspective". I imagine you won't want to read much of his writing, but I think probably if you read a few posts of it you would see that he's making a weird distinction between {how you yourself think, and how you talk to your friends and colleagues while hugging the truth} and {communicating publicly, where everybody is constantly lying, in part because you can only get across a snippet of information to people}. I don't really know how to reconcile these personally, because I don't know much about public communication. And I personally am not happy that he does it—in fact, it's super annoying, because it makes it so much harder for his allies to claim moral high ground, and in fact causes a bunch of people to distrust him. But I still think pretty strongly that we should describe him as truth-seeking, because A) it's true, and B) hardly any people in politics are truth-seeking and I think we can learn a ton from him.

ETA: ryan_b makes the same point below more concisely, and provides a better example: that "the VoteLeave campaign applied basic epistemics, at his direction". I think this is a great example, and there are other similar anecdotes, like their success in the referendum on the Northwest Regional Assembly. 

Comment by Connor_Flexman on Choice Writings of Dominic Cummings · 2021-10-15T21:08:59.659Z · LW · GW

Sorry, I deliberated for a while on whether to include it, but for a number of reasons decided I wanted to just ignore the politics-as-mindkiller and focus on everything else. Ideally I would have mentioned something about this, I just felt like addressing it in any respect would immediately lead to discussion about politics-as-mindkiller and not help. Also I didn't think this post would get much publicity. Still don't really regret it.

I will say though, here, I think >90% of the value I got from his writings was orthogonal to ideology-level politics. I think operational-level politics is super interesting and with some effort we should be able to talk about it orthogonally to ideology-level politics, even if we are not yet at the level of being able to talk about ideology-level politics without being mind-killed.

Comment by Connor_Flexman on Choice Writings of Dominic Cummings · 2021-10-15T21:02:50.978Z · LW · GW

(Gove and Boris agreed in 2016 that Boris would be their push for PM, then at the last minute Gove withdrew his support and announced his own candidacy, splitting support, causing Boris to withdraw, and neither got PM. [1, 2] By a few years later, they seem to have mended things significantly.)

Comment by Connor_Flexman on Choice Writings of Dominic Cummings · 2021-10-14T04:44:37.963Z · LW · GW

Many people have brought this up to me and I think it's extremely misleading. Basically, he wrote this blog post about the dangers of possible pandemics that governments weren't taking seriously, and heavily rested on giant block quotes from a good source, as he often does. In the block quote he included sections on like 4/8 of the pathogens they warned about, separated by ellipses. After the pandemic he went back and added to his block quote the section on coronaviruses specifically, to show that bio-risk people were already warning about this BEFORE it happened and the government was completely failing to act on it.

This seems like an extremely reasonable action to me—he probably should have used ETA or something, which is the only "dishonesty" I fault him for, but even that phrase is a little weird in a block quote. I can see how some people would be like "you changed it!" but absent political anger, I don't really imagine getting mad at a friend or acquaintance for this. If I myself had a block quote that cut some things for length but was warning of essentially the exact thing that happened, I probably wouldn't just add the section without an ETA, but I expect I would just say in an interview "I specifically warned about coronaviruses amongst other things".

Comment by Connor_Flexman on Weird models of country development? · 2021-09-22T18:02:05.523Z · LW · GW

Also, any recs on dev econ textbooks?

Comment by Connor_Flexman on Connor_Flexman's Shortform · 2021-08-24T23:20:05.604Z · LW · GW

Yes, they've made it very clear that that's the reasoning, and I am saying I disagree.

A) I still think they are not correct (long evidence below)
B) Ct values are clearly somewhat useful, and the question is how much—and I do not think the public health comms apparatus should stifle somewhat-useful medical information reaching patients or doctors just because I might be misled. That's just way too paternalistic.

As to why I think they're wrong, I'll cross-post from my fb thread against the specific pdf linked in op, though all other arguments seem isomorphic afaict. If you don't trust my reasoning but want the reasoning of medical professionals, skip to the bottom.

Basically, the pdf just highlights a bunch of ways that Ct values aren’t perfectly precise and reliable. It says nothing about the relative size of the error bars and the signal, and whether the error bars can drown it out—and, they can’t. To use a very exaggerated metaphor, it’s like the people saying we need to pull J&J because it’s not “perfectly safe” without at all looking at the relative cost/benefit.

So, they give a laundry list of factors that will produce variability in Ct values for different measurements of the same sample. But toward the end of the doc, they proclaim that these sources of variability change the result by up to 2-3 logs, as if this is a damning argument against reporting them. The scale of Ct values is 10 logs. Hospitalized patients vary by 5 logs. That's so much more signal than their claimed noise! So their one real critique falls very flat.

However, they do understate the noise significantly, so we can strengthen their argument. Within-patient variability is already like 2-3 logs, as you can see from data here for example: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7151491/. So variability of viral loads across different patients, different collection methods, and different analysis methods will have more like 4-6 logs of variation. That’s the stronger argument.

But even this is ultimately too weak. 

Most of the variation is on the negative side: there are lots more ways to fail to get a good sample than there are to accidentally find the virus is more concentrated than in reality. So, low Ct values indicating high viral load are still very good signals! I don't know the exact numbers here because they won't report them in many places, but your reasoning would hypothetically go: If you get a Ct value of under 20, you better start canceling meetings and preparing for a possible hospital visit. If you get a Ct value of 38, maybe it'll end up getting much worse, or maybe not. Not much information there. This is simple reasoning—doctors do it all the time with other tests with high false-negative rates, saying "if you test positive on this you probably have X, but getting a negative doesn't rule it out."

And aside from this asymmetry, just the correlation is also really useful! I am not the first person to say this: googling turns up a bunch of instances of medical professionals saying similar things.

Comment by Connor_Flexman on Connor_Flexman's Shortform · 2021-08-23T21:25:41.490Z · LW · GW

Another sad regulation-induced (and likely public health comms-reinforced) inadequacy: we don't report Ct values on PCR tests. Ct stands for cycle threshold: how many amplification cycles a PCR machine has to run before the virus is detected. So it directly measures viral load. But it isn't reported to us on tests for some reason: here's an example document saying why it shouldn't be reported to patients or used to help them forecast progression. Imo a very bad and unnecessary decision.
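To make the Ct-to-viral-load relationship concrete, here's a minimal sketch (idealized so that each cycle exactly doubles the target; real amplification efficiency is a bit lower, and the Ct 20 vs 38 samples are hypothetical):

```python
import math

# Each PCR cycle roughly doubles the target, so a lower Ct means exponentially more
# starting virus. Idealized 2x-per-cycle amplification.

def log10_fold_difference(ct_low, ct_high):
    """Approximate log10 difference in viral load implied by two Ct values."""
    return (ct_high - ct_low) * math.log10(2)

# e.g. two hypothetical samples at Ct 20 and Ct 38:
print(round(log10_fold_difference(20, 38), 1), "logs more virus in the Ct-20 sample")  # ~5.4
```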

Comment by Connor_Flexman on Delta Strain: Fact Dump and Some Policy Takeaways · 2021-08-23T21:04:48.829Z · LW · GW

Another cool data point! I found a paper from Singapore, Jul 2020, testing tear swabs but incidentally giving a bunch of PCR tests too. I'm much more likely to trust a paper that gives PCR tests incidentally rather than one directly testing their effectiveness with researcher bias toward better results. Counting up the squares by hand, the paper shows 24/108 PCR tests came back negative if I counted correctly: that's a 22% false negative rate (FNR).

Now, for adjustments: 

  • First, these patients were recruited from a hospital. So they obviously have much higher viral loads than the average person, and we'd expect a higher FNR for the general population. (And we see the expected relationship between viral load and positive results: people with low average Ct values (meaning high viral load) rarely test negative, but those who test negative a lot have very high Ct values on their positive tests.)
  • On the other hand, only 2/17 patients test negative >50% of the time; a lot of the negatives come near the end of a patient's sickness or hospital stay. So we don't see great empirical evidence for the hypothesis that some people are consistent false-negatives. If you take out the negatives-at-the-end effect, there are far fewer false negatives, maybe 5-10%. However, this is basically moot because of the selection effect for the hospitalized as mentioned above. Of course you'll see hardly any consistent-false-negative-patients in the hospitalized!—the fact you see any macroscopic number of false negatives in the middle of progression is a terrible sign (and, if there were any fully-false-negative patients, we wouldn't see them anyways! Bad filter).
  • And we do see the requisite theoretical evidence. Because of the two patients with repeated false negatives and low viral load when positive, we can easily extrapolate that some patients just have slightly lower viral load and test negative consistently.

Overall, there isn't an easy way to convert this study into "FNR on asymptomatic individuals who get tested". However, I think that if 5-10% of tests on the hospitalized came back negative, that strongly implies more than a 20% FNR on the asymptomatic. I would personally guess that this lends credence toward 10-40% FNR on the symptomatic and 20-80% FNR on the asymptomatic. (Lest I double-count evidence, let it be known I'm basing these numbers in part on the above analysis of my personally-known symptomatic individuals with ~40% FNR.)

Comment by Connor_Flexman on ($1000 bounty) How effective are marginal vaccine doses against the covid delta variant? · 2021-08-07T21:30:13.344Z · LW · GW

I have a tentative answer! Some cursory googling makes me think that J&J also just gets your cells to produce the spike protein, the same way Pfizer/Moderna do. This means it's just strictly less effective. Then you'd want to just do the Pfizer/Moderna one that you haven't yet—unless Elizabeth's comment about limited mRNA vaccine doses is decision-relevant, which I still haven't looked into.