Posts

Using smart thermometer data to estimate the number of coronavirus cases 2020-03-23T04:26:32.890Z · score: 30 (9 votes)
Case Studies Highlighting CFAR’s Impact on Existential Risk 2017-01-10T18:51:53.178Z · score: 4 (5 votes)
Results of a One-Year Longitudinal Study of CFAR Alumni 2015-12-12T04:39:46.399Z · score: 35 (35 votes)
The effect of effectiveness information on charitable giving 2014-04-15T16:43:24.702Z · score: 15 (16 votes)
Practical Benefits of Rationality (LW Census Results) 2014-01-31T17:24:38.810Z · score: 16 (17 votes)
Participation in the LW Community Associated with Less Bias 2012-12-09T12:15:42.385Z · score: 34 (34 votes)
[Link] Singularity Summit Talks 2012-10-28T04:28:54.157Z · score: 8 (11 votes)
Take Part in CFAR Rationality Surveys 2012-07-18T23:57:52.193Z · score: 18 (19 votes)
Meetup : Chicago games at Harold Washington Library (Sun 6/17) 2012-06-13T04:25:05.856Z · score: 0 (1 votes)
Meetup : Weekly Chicago Meetups Resume 5/26 2012-05-16T17:53:54.836Z · score: 0 (1 votes)
Meetup : Weekly Chicago Meetups 2012-04-12T06:14:54.526Z · score: 2 (3 votes)
[LINK] Being proven wrong is like winning the lottery 2011-10-29T22:40:12.609Z · score: 29 (30 votes)
Harry Potter and the Methods of Rationality discussion thread, part 8 2011-08-25T02:17:00.455Z · score: 8 (13 votes)
[SEQ RERUN] Failing to Learn from History 2011-08-09T04:42:37.325Z · score: 4 (5 votes)
[SEQ RERUN] The Modesty Argument 2011-04-23T22:48:04.458Z · score: 6 (7 votes)
[SEQ RERUN] The Martial Art of Rationality 2011-04-19T19:41:19.699Z · score: 7 (8 votes)
Introduction to the Sequence Reruns 2011-04-19T19:39:41.706Z · score: 6 (9 votes)
New Less Wrong Feature: Rerunning The Sequences 2011-04-11T17:01:59.047Z · score: 33 (36 votes)
Preschoolers learning to guess the teacher's password [link] 2011-03-18T04:13:23.945Z · score: 23 (26 votes)
Harry Potter and the Methods of Rationality discussion thread, part 7 2011-01-14T06:49:46.793Z · score: 7 (10 votes)
Harry Potter and the Methods of Rationality discussion thread, part 6 2010-11-27T08:25:52.446Z · score: 6 (9 votes)
Harry Potter and the Methods of Rationality discussion thread, part 3 2010-08-30T05:37:32.615Z · score: 5 (8 votes)
Harry Potter and the Methods of Rationality discussion thread 2010-05-27T00:10:57.279Z · score: 34 (35 votes)
Open Thread: April 2010, Part 2 2010-04-08T03:09:18.648Z · score: 3 (4 votes)
Open Thread: April 2010 2010-04-01T15:21:03.777Z · score: 4 (5 votes)

Comments

Comment by unnamed on Comparative Advantage is Not About Trade · 2020-09-22T20:03:28.370Z · score: 2 (1 votes) · LW · GW

I think of comparative advantage & specialization as features of production. When people produce the goods at which they have a comparative advantage, society ends up on the Pareto frontier in terms of the amount of each good that is produced.

I haven't been thinking of this as a theorem, but I think it could go something like: there are n people and m goods, and person i will produce p*f(i,j) units of good j if they devote a fraction p of their time to producing good j, and each person uses 100% of their time producing goods. Then if you want to describe the Pareto frontier that maximizes the amount of goods produced, it involves each person producing a good where they have a favorable ratio of how much of that good they can produce vs. how much of the other goods-being-produced they can produce.
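As a toy illustration (my own sketch, not from the original discussion - the productivities f(i,j) are made up, and it brute-forces the frontier rather than proving anything):

```python
# Person i produces p * f(i, j) units of good j when spending a fraction p
# of their time on it. With two people and two goods, brute-force the
# production possibilities and keep the frontier (max good 1 per good 0 level).
import numpy as np

f = np.array([[4.0, 2.0],   # person 0's output/day of goods 0 and 1 (hypothetical)
              [1.0, 3.0]])  # person 1's output/day (hypothetical)

frontier = {}
for p0 in np.linspace(0, 1, 201):       # person 0's time share on good 0
    for p1 in np.linspace(0, 1, 201):   # person 1's time share on good 0
        g0 = p0 * f[0, 0] + p1 * f[1, 0]
        g1 = (1 - p0) * f[0, 1] + (1 - p1) * f[1, 1]
        key = round(g0, 2)
        frontier[key] = max(frontier.get(key, -1.0), g1)

# Person 0's ratio f(0,0)/f(0,1) = 2 beats person 1's ratio 1/3 at good 0,
# so full specialization gives (4, 3) - and no mixed allocation does better:
print(frontier[4.0])  # 3.0
```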

Comment by unnamed on What's the CFAR position on how the workbook can be used? · 2020-09-12T03:11:13.934Z · score: 14 (4 votes) · LW · GW

(This is Dan from CFAR)

Yep, you're definitely free to run a reading group on the handbook.

You can basically just treat it like any other book. CFAR made the handbook as a supplement to our workshops, and we put it out there so that other people can see what's in it and make their own calls about what else to do with it.

Comment by unnamed on The Four Children of the Seder as the Simulacra Levels · 2020-09-09T00:59:02.090Z · score: 4 (2 votes) · LW · GW

I guess I'm still confused about the basics of simulacrum levels, because I'm not sure what level those sentences are on. e.g., "Please pass the potatoes" is intended to have the consequence of causing someone to pass the potatoes, rather than attempting to accurately describe the world, which (I think) matches how people have been describing level 2. But also it seems concrete and grounded, rather than involving a distortion of reality. So maybe it is level 1? Or not in the hierarchy at all?

Comment by unnamed on Escalation Outside the System · 2020-09-09T00:52:07.852Z · score: 16 (5 votes) · LW · GW

Related post by hilzoy.

Its opening section is the part that's least related, so you could skip it and begin with this part:

Back in 1983, I sat in on a conference on women and social change. There were fascinating people from all over the world, women who had been doing extraordinary things in their own countries, and who had gathered together to talk it through; and I got to be a fly on the wall.
During this conference, there was a recurring disagreement about the role of violence in fighting deeply unjust regimes.

Comment by unnamed on Can Social Dynamics Explain Conjunction Fallacy Experimental Results? · 2020-08-14T20:24:53.040Z · score: 2 (1 votes) · LW · GW

The social dynamics that you point to in your John-Linda anecdote seem to depend on the fact that John knows what happened with Linda. This suggests that these social dynamics would not apply to questions about the future, where the question was coming from someone who couldn't know what was going to happen.

Some studies have looked for the conjunction fallacy in predictions about the future, and they've found it there too. One example which was mentioned in the post that you linked is the forecast about a breakdown of US-Soviet relations. Here's a more detailed description of the study from an earlier post in that sequence:

Another experiment from Tversky and Kahneman (1983) was conducted at the Second International Congress on Forecasting in July of 1982.  The experimental subjects were 115 professional analysts, employed by industry, universities, or research institutes.  Two different experimental groups were respectively asked to rate the probability of two different statements, each group seeing only one statement:
1. "A complete suspension of diplomatic relations between the USA and the Soviet Union, sometime in 1983."
2. "A Russian invasion of Poland, and a complete suspension of diplomatic relations between the USA and the Soviet Union, sometime in 1983."
Estimates of probability were low for both statements, but significantly lower for the first group than the second (p < .01 by Mann-Whitney).  Since each experimental group only saw one statement, there is no possibility that the first group interpreted (1) to mean "suspension but no invasion".

Comment by unnamed on A Personal (Interim) COVID-19 Postmortem · 2020-06-26T20:30:50.835Z · score: 11 (4 votes) · LW · GW
It seems clear that mask wearing reduces spread somewhat, but note that this is because of reducing spread from infectious individuals, especially pre-symptomatic and asymptomatic people, not protecting mask wearers. The early skepticism was in part based on the assumption, which in March seemed to have been shared by both promoters and skeptics, that the benefits were that masks were individually protective, rather than that they helped population-level spread reduction.

The early *arguments* I saw were mainly about whether masks meaningfully reduced the wearer's chances of getting infected. But it was already conventional wisdom that masks did meaningfully reduce the wearer's chances of infecting others, people just weren't taking the next step of arguing for general mask use on these grounds. For example, the early March CDC recommendation (linked in the anti-CDC LW post) was:

CDC does not recommend that people who are well wear a facemask to protect themselves from respiratory diseases, including COVID-19.
Facemasks should be used by people who show symptoms of COVID-19 to help prevent the spread of the disease to others. The use of facemasks is also crucial for health workers and people who are taking care of someone in close settings (at home or in a health care facility).

By mid-March, there were organized efforts to increase mask use on the grounds that it reduced the wearer's chances of infecting others. The Czech government (which mandated mask use on March 19) and the #Masks4All campaign were the most prominent ones that I saw - both encouraged people to make their own cloth masks and used the slogan "My mask protects you, your mask protects me" (they may also have talked about some risk-reduction benefits for the wearer). A quick search turns up this March 14 video (in Czech, with English closed captioning available) as the earliest source I could find that clearly makes this case for widespread mask use.

Comment by unnamed on SlateStarCodex deleted because NYT wants to dox Scott · 2020-06-23T23:44:45.441Z · score: 19 (11 votes) · LW · GW

This reminds me of the time that Slate published hilzoy's real name, in 2009.

I think what happened there is that the Slate author was following journalistic customs of using real names and didn't realize that hilzoy wanted to stay pseudonymous online, and hilzoy had been even less vigilant than Scott about keeping her real name unfindable. And then once the article had been published, hilzoy's request to remove her name ran into Slate's policy of never changing published articles unless they contain a factual error, and this was not a factual error. (It's possible that the author also had some adversarial motives for publishing the name - it did happen in the context of a disagreement between her and hilzoy - but I don't know of any clear or direct evidence for that.)

So the main storyline here might be about the media having its own customs and not much caring about what happens to the people that they cover. The press does not hate you, nor does it love you, but you are made out of stories which it can tell to its audience. I'm not sure what implications (if any) this has about what to do now.

Comment by unnamed on Using the Quantified Self paradigma for COVID-19 · 2020-06-18T04:37:30.731Z · score: 11 (2 votes) · LW · GW

May 28: WVU Rockefeller Neuroscience Institute announces capability to predict COVID-19 related symptoms up to three days in advance using Oura rings

June 16: NBA restart plan includes using Oura rings to catch COVID-19 symptoms

Comment by unnamed on Everyday Lessons from High-Dimensional Optimization · 2020-06-14T21:37:59.440Z · score: 2 (1 votes) · LW · GW

The distance between the n-dimensional points (0,0,...,0) and (1,1,...,1) is sqrt(n). So if you move sqrt(n) units along that diagonal, you move 1 unit along the dimension that matters. Or if you move 1 unit along the diagonal, you move 1/sqrt(n) units along that dimension. 1/sqrt(n) efficiency.

If you instead move 1 unit in a random direction then sometimes you'll move more than that and sometimes you'll move less, but I figured that was unimportant enough on net to leave it O(1/sqrt(n)).
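Here's a quick numerical check of that claim (my sketch, not anything from the post):

```python
# Average size of a unit step's component along one fixed axis, for random
# directions in n dimensions - it shrinks like 1/sqrt(n).
import numpy as np

rng = np.random.default_rng(0)
for n in [10, 100, 1000]:
    steps = rng.normal(size=(100_000, n))
    steps /= np.linalg.norm(steps, axis=1, keepdims=True)  # random unit directions
    mean_component = np.abs(steps[:, 0]).mean()            # progress along the axis that matters
    print(n, round(mean_component, 4), round(mean_component * np.sqrt(n), 3))
    # the last column stays near sqrt(2/pi) ~ 0.8, i.e. O(1/sqrt(n)) per unit step
```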

Comment by unnamed on Everyday Lessons from High-Dimensional Optimization · 2020-06-08T01:16:52.552Z · score: 2 (1 votes) · LW · GW

Seems like some changes are more like Euclidean distance while others are more like turning a single knob. If I go visit my cousin for a week and a bunch of aspects of my lifestyle shift towards his, that is more Euclidean than if I change my lifestyle by adding a new habit of jogging each morning. (Although both are in between the extremes of purely Euclidean or purely a single knob - you could think of it in terms of the dimensionality of the subspace that you're moving in.)

And something similar can apply to work habits, thinking styles, etc.

Comment by unnamed on Everyday Lessons from High-Dimensional Optimization · 2020-06-07T02:02:46.343Z · score: 10 (5 votes) · LW · GW
On the other hand, if we’re designing a bridge and there’s one particular strut which fails under load, then we’d randomly try changing hundreds or thousands of other struts before we tried changing the one which is failing.

This bridge example seems to be using a different algorithm than the E. coli movement. The E. coli moves in a random direction, while the bridge adjustments always happen in the direction of a basis vector.

If you were altering the bridge in the same way that E. coli moves, then every change to the bridge would alter that one particular strut by a little bit (in addition to altering every other aspect of the bridge).

Whereas if E. coli moved in the same way that you describe the hypothetical bridge design, then it would only move purely along a single coordinate (such as from (0,0,0,0,0) to (0,0,0,1,0)) rather than in a random direction.

My intuition is that the efficiency of the bridge algorithm is O(1/n) while the E. coli algorithm is O(1/sqrt(n)).

This suggests that if you're doing randomish exploration, you should try to shake things up and move in a bunch of dimensions at once rather than just moving along a single identified dimension.
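A toy simulation of the two styles (my sketch - the loss function and all the parameters are made up for illustration):

```python
# Greedy local search on a problem where only coordinate 0 matters: step along
# a random basis vector (bridge-style) vs. a random direction (E. coli-style),
# keeping a move only if it helps.
import numpy as np

rng = np.random.default_rng(0)
n, n_steps, step = 100, 500, 0.05

def loss(x):
    return (x[0] - 1.0) ** 2  # only one "strut" matters

for mode in ["basis (bridge-style)", "random direction (E. coli-style)"]:
    x = np.zeros(n)
    for _ in range(n_steps):
        if mode.startswith("basis"):
            d = np.zeros(n)
            d[rng.integers(n)] = 1.0   # tweak one randomly chosen strut
        else:
            d = rng.normal(size=n)
            d /= np.linalg.norm(d)     # tweak everything a little
        best = min((x + step * d, x - step * d), key=loss)
        if loss(best) < loss(x):
            x = best
    print(mode, round(loss(x), 4))
# Basis steps help only ~1/n of the time, so progress is O(step/n) per step;
# random directions make O(step/sqrt(n)) progress every step and finish far closer.
```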

Comment by unnamed on What are objects that have made your life better? · 2020-05-21T22:08:56.925Z · score: 24 (9 votes) · LW · GW

Some lists that people have made of products that they use & recommend:

Sam Bowman, 2017

Sam Bowman, 2019

Robert Wiblin, 2019

Arden Koehler, 2019

Rosie Campbell, 2019

Comment by unnamed on What are objects that have made your life better? · 2020-05-21T21:46:39.500Z · score: 4 (3 votes) · LW · GW

The Time Timer Audible Countdown Timer.

This is the timer that I like to use when working, e.g. if I decide "alright, I'm going to spend the next half hour working on this thing." It is a visual timer, where the fraction of the circle that is red tells you what fraction of an hour is left. Ignore its bizarre name - its best feature is that it is completely inaudible.

Features that I like:
- it counts down silently, without any ticking
- I can (and do) set it to end silently, without any alarm sound
- it is easy to tell at a glance about how much time is left
- it is quick & straightforward to set the timer, without any button pressing
- it is a physical object rather than a program on a computing device

Features that it lacks which some people might miss:
- you can't choose a nice sound for the alarm; either it's silent or there's the one kinda annoying alarm sound
- it is not a program on your computing device, but rather a separate object you need to have with you
- it can't be set to more than an hour
- it can't be set precisely

Comment by unnamed on Book Review: Narconomics · 2020-05-03T05:13:58.398Z · score: 12 (8 votes) · LW · GW

The economic argument seems wrong in the "Burning coca leaves won’t win the war" section.

The total amount of a good that consumers buy must be less than or equal to the amount that is produced (and not destroyed). So if enough of the crop gets destroyed, then less of it will get consumed. And that'll happen regardless of whether the suppliers are in a competitive market or monopsony or threaten people with guns.

I framed this in terms of quantities rather than prices because the argument seems more straightforward this way. Also, it seems like reducing the quantity sold is more directly related to what anti-drug folks care about than raising the price. Also, the street price for US consumers would presumably go up if the availability went down, since the people who sell drugs to consumers would be able to make more profit by raising their prices.

If there are problems with the economic argument in the post, that doesn't necessarily mean the conclusion is wrong. "Burning lots of coca crops will have little to no effect on the price or quantity of cocaine in the US" does seem plausible, mainly because producers can just grow a lot more coca leaves than they need. Producers can predict in advance that lots of their crop might get destroyed (or their product lost in transit or similar), and growing coca leaves is not that expensive relative to their operation, so they can add a lot of slack by growing more than they need. (This doesn't depend on monopsony or violence.)

Comment by unnamed on The One Mistake Rule · 2020-04-14T09:08:38.033Z · score: 4 (2 votes) · LW · GW

One obviously mistaken model that I got a lot of use out of during a stretch of Feb-Mar is the one where the cumulative number of coronavirus infections in a region doubles every n days (for some globally fixed, unknown value of n).
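For concreteness, the whole model is one line of exponential growth (a quick sketch):

```python
def projected_infections(current: float, doubling_days: float, t_days: float) -> float:
    # cumulative infections double every doubling_days
    return current * 2 ** (t_days / doubling_days)

# e.g. 1,000 infections doubling every 4 days:
print(projected_infections(1_000, 4, 30))   # ~181,000 after a month
print(projected_infections(1_000, 4, 110))  # ~1.9e11 - more people than exist
```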

This model has ridiculous implications if you extend it forward for a few months, as well as various other flaws. I was aware of those ridiculous implications and some of those other flaws, and used it anyways for several days before trying to find less flawed models.

I'm glad that I did, since it helped me have a better grasp of the situation and be more prepared for what was coming. And I don't think it would've made much difference at the time if I'd learned more about SEIR models and so on.

It's unclear how examples like this are supposed to fit with the One Mistake Rule or the exceptions in the last paragraph.

Comment by unnamed on The One Mistake Rule · 2020-04-10T21:20:12.055Z · score: 12 (7 votes) · LW · GW

This seems important.

Another feature of competitive markets is that "not betting" is always available as a safe default option. Maybe that means waiting to bet until some unknown future date when your models are good enough, maybe it means never betting in that market. In many other contexts (like responding to covid-19) there is no safe default option.

Comment by unnamed on Has LessWrong been a good early alarm bell for the pandemic? · 2020-04-04T03:55:49.340Z · score: 5 (3 votes) · LW · GW

In the broader rationality/EA community there was also a Siderea post on Jan 30 and an 80K podcast on Feb 3 (along with a followup podcast on Feb 14).

These two, plus Matthew Barnett's late Jan EA Forum post (which you linked), are the three examples I recall which look most like early visible public alarms from the rationality/EA community.

Other writing was less visible (e.g., on Twitter, Facebook, or Metaculus), less alarm-like (discussions of some aspect of what was happening rather than a call to attention), or later (like the putanumonit Seeing the Smoke post on Feb 27).

Comment by unnamed on Has LessWrong been a good early alarm bell for the pandemic? · 2020-04-03T22:34:03.635Z · score: 7 (4 votes) · LW · GW

I think this post is giving the stock market too much credit.

I'd date the start of the stock market fall as February 24 rather than February 20. The S&P close on Feb 20 & Feb 21 was roughly the same as it had been over the previous couple weeks, and higher than the close on Feb 7, 5, 4, or 3. The first notable dip happened on February 24th; that was the first day that set a low for the month of Feb 2020 (and Feb 25 was the first day that set a low for calendar year 2020).

Also, that was just the start of the crash. The stock market continued falling sharply and erratically for a couple more weeks, and didn't get within 10% of its current level until March 12th (2.5 weeks after it started its fall on Feb 24).

Comment by unnamed on April Fools: Announcing LessWrong 3.0 – Now in VR! · 2020-04-01T08:50:51.188Z · score: 38 (17 votes) · LW · GW

This is now my favorite way to read HPMOR. I love the Star Wars feel.

Comment by unnamed on mind viruses about body viruses · 2020-03-29T05:15:26.059Z · score: 2 (1 votes) · LW · GW

I think Scott linked to Pueyo's essay as an illustration of the ideas, not as the source from which the smart people got the ideas.

Which means that this post's attempt to track & evaluate the information flows is working off of an inaccurate map of how information has flowed.

Comment by unnamed on March Coronavirus Open Thread · 2020-03-25T22:34:30.279Z · score: 4 (2 votes) · LW · GW

Keep in mind that the trend in the number of confirmed cases only provides hints about the trend in new infections. The number of confirmed cases is highly dependent on the amount of testing, and increases in testing capacity will tend to lead to more confirmed cases. Also, there is a substantial delay between when a person is infected and when they test positive, typically somewhere in the range of 1-2 weeks (with the length of the delay also depending on the testing regime).

Comment by unnamed on Using smart thermometer data to estimate the number of coronavirus cases · 2020-03-23T19:59:20.799Z · score: 2 (1 votes) · LW · GW

I think that's right. Although the data can still tell us something after we get into that ambiguous range where it's hard to distinguish increasing covid from decreasing flu.

One nice thing about this pattern is that it provides some evidence that the anti-covid interventions are reducing the spread of fever-inducing diseases. And the size of the drop in total fevers tells us something about how well they're working on the whole, even if it doesn't tell us the precise trend in covid cases.

Another thing that might be possible is to find other sources of data on the actual prevalence of flu, and use that to come up with a better "baseline" which reflects actual current conditions rather than an estimate of the trendline in the counterfactual world where there was no coronavirus pandemic.

A third thing is that 0 is a lower bound on the number of non-covid fevers, so the trend in total fevers is an upper bound on the number of covid cases.

This third thing already tells us something about Seattle (King County). Their peak in excess fevers happened March 9 at 1.76 scale points (observed minus expected), and the March 22 data show the total fevers at 2.77 scale points. As an upper bound, if those are all covid fevers, that is 1.6x as many new daily cases on March 22 compared to March 9. That's 13 days, and not even a full doubling in the number of daily new fevers. Which suggests that suppression there is either working or coming very close to working (even though the number of confirmed cases has kept curving upward, at least through March 21).
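Spelling out that arithmetic (a quick check):

```python
import math

excess_mar9 = 1.76   # observed minus expected fevers, scale points
total_mar22 = 2.77   # all of it treated as covid, as an upper bound
ratio = total_mar22 / excess_mar9
doubling_time = 13 * math.log(2) / math.log(ratio)
print(round(ratio, 2), round(doubling_time, 1))  # ~1.57x over 13 days => ~19.9-day doubling
```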

Comment by unnamed on Using smart thermometer data to estimate the number of coronavirus cases · 2020-03-23T19:38:56.459Z · score: 2 (1 votes) · LW · GW

If you look at the time series for King County (Seattle area), it shows a spike peaking on March 9 with the upward trend beginning sometime around Feb 28 - Mar 2.

I think the pattern of a spike and then flattening & maybe decline (which has happened at different times in different regions) reflects a drop in the number of influenza cases, as people's anti-covid precautions also prevent flu transmission. So the baseline estimate of how many new fevers there would be if there wasn't a coronavirus pandemic doesn't actually represent the number of non-covid fevers, because there are fewer non-covid fevers than there would've been without this pandemic.

Elizabeth's comment also describes this.

Comment by unnamed on How can we estimate how many people are C19 infected in an area? · 2020-03-23T04:34:34.877Z · score: 12 (4 votes) · LW · GW

Kinsa, a company that sells smart thermometers, has a dashboard that shows which regions of the US have an unusually high number of fevers. They have previously used these methods to track regional flu trends in the US. (Fitbit has done something similar.)

I wrote a post here describing my attempt to turn their data into a rough estimate of the total number of coronavirus infections in the United States. Something similar could be done for smaller regions.

Comment by unnamed on Using the Quantified Self paradigma for COVID-19 · 2020-03-23T02:42:29.678Z · score: 11 (6 votes) · LW · GW

I agree that a lot could be done with those sorts of data.

One company that is already making some use of a similar dataset is Kinsa, which sells smart thermometers. They started a few years ago, tracking trends in the flu in the US based on the temperature readings of the people using their thermometers (along with location, age, and gender). Now they have a coronavirus tracking website up. It looks like the biggest useful thing that they've been able to do so far with their data is to quickly identify hotspots - parts of the country where there has been a spike in the number of people with a fever. That used to be a sign of a local flu outbreak; now it's a sign of a local coronavirus outbreak. From the NYTimes:

Just last Saturday, Kinsa’s data indicated an unusual rise in fevers in South Florida, even though it was not known to be a Covid-19 epicenter. Within days, testing showed that South Florida had indeed become an epicenter.

Companies like Fitbit could make a similar pivot, looking to see if they can find atypical trends in their data in the Seattle area Feb 28 - Mar 9, the Miami area Mar 2-19, etc. And they might be able to take the extra step of finding new indicators that help identify individuals who may have coronavirus (a step Kinsa didn't need, since high body temperature was already a known indicator).

There are potentially a bunch more useful things that could be done with all of these datasets, if more researchers had access to them. For example, it might be possible to get much more accurate estimates of the number of people who have been infected with coronavirus. I may make another post about this soon.

Comment by unnamed on COVID-19's Household Secondary Attack Rate Is Unknown · 2020-03-17T00:51:08.429Z · score: 9 (5 votes) · LW · GW

Has there been research from other similarish diseases breaking down the household secondary attack rate by relevant variables? It seems like there could be large differences between:

romantic partners who sleep in the same bed vs. housemates who sleep in different rooms

circumstances where the household has heightened concerns and is taking precautions vs. unsuspecting households

situations where people are removed from the household shortly after they're infected vs. households where people continue to live after infection


Group houses are mostly in the safer of the two possibilities for the first 2 of these 3.

Comment by unnamed on A Significant Portion of COVID-19 Transmission Is Presymptomatic · 2020-03-16T05:13:24.053Z · score: 5 (3 votes) · LW · GW

I was looking at this paper (for other reasons) and saw that it estimated a mean serial interval of 6.3 days in Shenzhen while there was aggressive testing, contact tracing, and isolating. They report that the mean serial interval was 3.6 days among patients who were infected by someone who was isolated within 2 days of symptom onset, and 8.1 days among patients who were infected by someone who wasn't isolated until 3+ days after symptom onset, for an overall average serial interval of 6.3 in their population. They found R=0.4 - an average of 0.4 known transmissions from each infected person.

Comment by unnamed on March Coronavirus Open Thread · 2020-03-16T02:29:44.752Z · score: 2 (1 votes) · LW · GW

This paper looks at cases which were confirmed in Shenzhen (Guangdong, China) Jan 14 - Feb 12, which is while coronavirus was being brought under control there (by the end of the study the cases had fallen to less than 1/3 of their peak). I suspect that they qualify for point 1, a place with an unusually good testing regime.

The paper reports that "Cases detected through symptom-based surveillance were confirmed on average 5.5 days (95% CI 5.0, 5.9) after symptom onset (Figure 3, Table S2); compared to 3.2 days (95% CI 2.6,3.7) in those detected by contact-based surveillance", and also that the median incubation period was 4.8 days from infection to symptom onset (in the smaller sample where both of those dates were known).

Adding 5.5+4.8, that implies that an average of 10.3 days passed between when a person became infected and when they tested positive for cases detected based on symptoms, and 8.0 days for those detected by contact tracing. Since the paper reports that 77% of cases were detected through symptom-based surveillance, that gives an overall average of 9.8 days. (And this is only for the cases that were detected; it's not adjusting at all for people who were infected but never got a positive test.)

That means that in places where testing is as good as it was in Shenzhen, the number of positive tests is telling us about the number of infections 9.8 days ago. If the number of cases in that region is doubling every 4 days, then that's 2.4 doublings, so the number of confirmed cases would only be 18% of the actual number of cases due to the delay in testing (again, without factoring in people who never got tested). (With a 3-day doubling period it would be 10%; with a 5-day doubling period, 26%.)
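Here's the same calculation in code form (a sketch using the paper's numbers):

```python
# Average delay from infection to positive test, weighted by surveillance type.
avg_delay = 0.77 * (5.5 + 4.8) + 0.23 * (3.2 + 4.8)
print(round(avg_delay, 1))  # ~9.8 days

# Fraction of actual current infections visible as confirmed cases,
# assuming exponential growth with the given doubling time.
for doubling_days in [3, 4, 5]:
    print(doubling_days, round(0.5 ** (avg_delay / doubling_days), 2))  # 0.10, 0.18, 0.26
```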

So in places that don't have a good testing regime it would be significantly less than that.

Comment by unnamed on A Significant Portion of COVID-19 Transmission Is Presymptomatic · 2020-03-14T08:18:38.858Z · score: 3 (2 votes) · LW · GW

Yeah, I agree that contact tracing & testing/quarantining contacts is good, and that presymptomatic transmission is possible.

It looked to me like you were claiming that the hypothesis "stopping all symptomatic transmission is sufficient to prevent the number of COVID-19 cases from curving upwards" has been tested by some countries' measures and found to be false, and I am questioning that apparent assertion.

Comment by unnamed on A Significant Portion of COVID-19 Transmission Is Presymptomatic · 2020-03-14T07:08:21.487Z · score: 3 (2 votes) · LW · GW

I notice that the estimates of serial interval (almost?) all come from places that had pretty aggressive & successful containment measures in place, such as identifying & isolating potential carriers (including people who show symptoms, traced contacts, and high-risk travelers). That would tend to shorten the serial interval, since people who are identified early in their infection lose the opportunity to transmit during the later portion of their illness.

Are there estimates of what R was for these populations? If it's a lot less than the 2-3 that other studies have found that would be some evidence that a lot of later-stage transmissions were prevented.

Comment by unnamed on A Significant Portion of COVID-19 Transmission Is Presymptomatic · 2020-03-14T06:57:11.547Z · score: 7 (3 votes) · LW · GW
COVID-19 is successfully spreading in countries which have taken these measures ["tell people to stay home if they have those symptoms"] and other more extreme measures

How true is this? I haven't delved in that closely, but my impression is that a big part of what's been successful in containing the spread in places like Hong Kong and mainland China has involved identifying & isolating people as soon as they show symptoms.

Comment by unnamed on March Coronavirus Open Thread · 2020-03-12T21:59:42.730Z · score: 4 (2 votes) · LW · GW

Here's a method to try to estimate the number of cases in a region which I haven't seen calculations of:

1. Identify the places which have the best testing regimes

2. Try to estimate what fraction of cases are identified in those places, potentially along with other variables like how long from infection until the case is identified

3. Use those numbers to extrapolate to other places, based on other similarities between those places besides # of confirmed cases (e.g., number of deaths, or rate of infection in travelers coming from that place, or hospital utilization rate)

I have made some initial attempts to do this, which I'll try to post later today. I'm wondering if anyone has thoughts or sources on any of these 3 points (e.g., which places have the best testing regimes?), or on the method as a whole.
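As a rough illustration of step 3 (my sketch - all the numbers are made up, and it assumes the two places share the same fatality rate and infection-to-death lag):

```python
# Reference region with a good testing regime (outputs of steps 1-2, hypothetical):
ref_detected_fraction = 0.18           # estimated share of true cases confirmed
ref_deaths, ref_confirmed = 30, 1_000

# Region with unknown testing quality:
other_deaths, other_confirmed = 60, 800

ref_true_cases = ref_confirmed / ref_detected_fraction
deaths_per_true_case = ref_deaths / ref_true_cases
other_true_cases = other_deaths / deaths_per_true_case
print(round(other_true_cases))                       # ~11,111 estimated true cases
print(round(other_confirmed / other_true_cases, 2))  # implied detection fraction ~0.07
```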

Comment by unnamed on March Coronavirus Open Thread · 2020-03-12T08:35:35.499Z · score: 5 (3 votes) · LW · GW

I think each little bit of curve flattening makes things a little less bad (since a smaller number of cases are beyond capacity, and a little more time is created to prepare), but the graphs tend to draw the "capacity" line unrealistically high. This graph is more realistic than many since the flattened curve still peaks above the capacity line, but it still paints too rosy a picture.

Comment by unnamed on Growth rate of COVID-19 outbreaks · 2020-03-10T08:25:22.042Z · score: 5 (3 votes) · LW · GW

Agreed that #2 could be a big issue. Rapid increase in confirmed cases could easily be due to rapid increase in testing rather than (such) rapid spread of the virus.

What would the graphs look like if they plotted the number of deaths attributed to COVID-19 rather than the number of confirmed cases? In theory the number of deaths should mostly be a lagged & noisier reflection of the number of cases, with less dependence on testing regimes.

Comment by unnamed on Model estimating the number of infected persons in the bay area · 2020-03-09T10:11:40.978Z · score: 17 (3 votes) · LW · GW

I also made an estimate of the number of cases in the bay area, based on deaths and estimated death rate. My calculations are in this spreadsheet.

Comment by unnamed on 2018 Review: Voting Results! · 2020-01-26T21:55:32.744Z · score: 9 (4 votes) · LW · GW
Pearson correlation between karma and vote count is 0.355

And it's even larger in magnitude (r = -0.46) between amount of karma and ranking in the vote.

Comment by unnamed on Modest Superintelligences · 2020-01-26T00:04:10.067Z · score: 2 (1 votes) · LW · GW

Oh, you're right.

With A & B iid normal variables, if you take someone who is 1 in a billion at A+B, then in the median case they will be 1 in 90,000 at A. Then if you take someone who is 1 in 90,000 at A and give them the median level of B, they will be 1 in 750 at A+B.

(You can get to rarer levels by reintroducing some of the variation rather than taking the median case twice.)
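A quick verification of those numbers (a sketch using scipy's normal distribution functions):

```python
from scipy.stats import norm
import math

z_sum = norm.isf(1e-9) * math.sqrt(2)  # A+B value of a 1-in-a-billion person (A+B has sd sqrt(2))
a_median = z_sum / 2                   # median A given A+B: iid A and B split the sum evenly
print(round(1 / norm.sf(a_median)))    # ~90,000

z_back = (a_median + 0) / math.sqrt(2) # add back the median level of B, rescale to A+B's sd
print(round(1 / norm.sf(z_back)))      # ~740, i.e. roughly the "1 in 750" above
```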

Comment by unnamed on Modest Superintelligences · 2020-01-25T06:50:13.689Z · score: 4 (2 votes) · LW · GW

The component should have a smaller standard deviation, though. If A and B each have stdev=1 & are independent then A+B has stdev=sqrt(2).

I think that means that we'd expect someone who is +6 sigma on A+B to be about +3*sqrt(2) sigma on A in the median case. That's +4.24 sigma, or 1 in 90,000.

Comment by unnamed on Modest Superintelligences · 2020-01-25T06:43:37.322Z · score: 2 (1 votes) · LW · GW

500 seems too small. If someone is 1 in 30,000 on A and 1 in 30,000 on B, then about 1 in a billion will be at least as extreme as them on both A and B. That's not exactly the number that we're looking for but it seems like it should give the right order of magnitude (30,000 rather than 500).

And it seems like the answer we're looking for should be larger than 30,000, since people who are more extreme than them on A+B includes everyone who is more extreme than them on both A and B, plus some people who are more extreme on only either A or B. That would make extreme scores on A+B more common, so we need a larger number than 30,000 to keep it as rare as 1 in a billion.

Comment by Unnamed on [deleted post] 2020-01-20T03:17:24.562Z

The popular conception of Dunning-Kruger has strayed from what's in Kruger & Dunning's research. Their empirical results look like this, not like the "Mt. Stupid" graph.

Comment by unnamed on The Tails Coming Apart As Metaphor For Life · 2020-01-16T00:26:58.730Z · score: 2 (1 votes) · LW · GW
the most interesting takeaway here is not the part where predictor regressed to the mean, but that extreme things tend to be differently extreme on different axis.

Even though the two variables are strongly correlated, things that are extreme on one variable are somewhat closer to the mean on the other variable.

Comment by unnamed on The Tails Coming Apart As Metaphor For Life · 2020-01-16T00:24:51.988Z · score: 2 (1 votes) · LW · GW

I think they're close to identical. "The tails come apart", "regression to the mean", "regressional Goodhart", "the winner's curse", "the optimizer's curse", and "the unilateralist's curse" are all talking about essentially the same statistical phenomenon. They come at it from different angles, and highlight different implications, and are evocative of different contexts where it is relevant to account for the phenomenon.

Comment by unnamed on How would we check if "Mathematicians are generally more Law Abiding?" · 2020-01-13T02:03:48.525Z · score: 16 (7 votes) · LW · GW

Eric Schwitzgebel has done studies on whether moral philosophers behave more ethically (e.g., here). Some of the measures from that research seem to match reasonably well with law-abidingness (e.g., returning library books, paying conference registration fees, survey response honesty) and could be used in studies of mathematicians.

Comment by unnamed on Are "superforecasters" a real phenomenon? · 2020-01-09T23:15:15.412Z · score: 10 (5 votes) · LW · GW
A better sentence should give the impression that, by way of analogy, some basketball players are NBA players.

This analogy seems like a good way of explaining it. Saying (about forecasting ability) that some people are superforecasters is similar to saying (about basketball ability) that some people are NBA players or saying (about chess ability) that some people are Grandmasters. If you understand in detail the meaning of any one of these claims (or a similar claim about another domain besides forecasting/basketball/chess), then most of what you could say about that claim would port over pretty straightforwardly to the other claims.

Comment by unnamed on Are "superforecasters" a real phenomenon? · 2020-01-09T03:45:37.707Z · score: 13 (5 votes) · LW · GW

I don't see much disagreement between the two sources. The Vox article doesn't claim that there is much reason for selecting the top 2% rather than the top 1% or the top 4% or whatever. And the SSC article doesn't deny that the people who scored in the top 2% (and are thereby labeled "Superforecasters") systematically do better than most at forecasting.

I'm puzzled by the use of the term "power law distribution". I think that the GJP measured forecasting performance using Brier scores, and Brier scores are always between 0 and 1, which is the wrong shape for a fat-tailed distribution. And the next sentence (which begins "that is") isn't describing anything specific to power law distributions. So probably the Vox article is just misusing the term.

Comment by unnamed on We run the Center for Applied Rationality, AMA · 2019-12-22T09:34:23.973Z · score: 21 (6 votes) · LW · GW

(This is Dan, from CFAR since 2012)

Working at CFAR (especially in the early years) was a pretty intense experience, which involved a workflow that regularly threw you into these immersive workshops, regularly had you digging deeply into your thinking and how your mind works and what you could do better, and also had you trying to make this fledgling organization survive & function. I think the basic thing that happened is that, even for people who were initially really excited about taking this on, things looked different for them a few years later. Part of that is personal, with things like burnout, or feeling like they'd gotten their fill and had learned a large chunk of what they could from this experience, or wanting a life full of experiences that were hard to fit into this (probably these 3 things overlap). And part of it was professional, where they got excited about other projects for doing good in the world while CFAR wanted to stay pretty narrowly focused on rationality workshops.

I’m tempted to try to go into more detail, but it feels like that would require starting to talk about particular individuals rather than about the set of people who were involved in early CFAR, and I feel weird about that.

Comment by unnamed on We run the Center for Applied Rationality, AMA · 2019-12-22T09:24:44.113Z · score: 26 (8 votes) · LW · GW

(This is Dan from CFAR)

In terms of what happened that day, the article covers it about as well as I could. There’s also a report from the sheriff’s office which goes into a bit more detail about some parts.

For context, all four of the main people involved live in the Bay Area and interact with the rationality community. Three of them had been to a CFAR workshop. Two of them are close to each other, and CFAR had banned them prior to the reunion based on a bunch of concerning things they'd done. I'm not sure how the other two got involved.

They have made a bunch of complaints about CFAR and other parts of the community (the bulk of which are false or hard to follow), and it seems like they were trying to create a big dramatic event to attract attention. I’m not sure quite how they expected it to go.

This doesn’t seem like the right venue to go into details to try to sort out the concerns about them or the complaints they’ve raised; there are some people looking into each of those things.

Comment by unnamed on We run the Center for Applied Rationality, AMA · 2019-12-22T08:39:04.234Z · score: 22 (7 votes) · LW · GW

Not precise at all. The confidence interval is HUGE.

stdev = 5.9 (without Bessel's correction)

std error = 2.6

95% CI = (0.5, 10.7)

The confidence interval should not need to go that low. Maybe there's a better way to do the statistics here.
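For reference, here's how those numbers fit together (a sketch - the mean is inferred from the interval's endpoints, and n from the stdev/stderr ratio):

```python
stdev, stderr = 5.9, 2.6
n = (stdev / stderr) ** 2  # implies about 5 data points
mean = (0.5 + 10.7) / 2    # ~5.6, inferred from the reported interval
print(round(n), round(mean - 1.96 * stderr, 1), round(mean + 1.96 * stderr, 1))  # 5 0.5 10.7
```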

Comment by unnamed on We run the Center for Applied Rationality, AMA · 2019-12-22T08:30:41.401Z · score: 24 (9 votes) · LW · GW

(This is Dan from CFAR)

Warning: this sampling method contains selection effects.

Comment by unnamed on We run the Center for Applied Rationality, AMA · 2019-12-22T08:28:32.024Z · score: 22 (6 votes) · LW · GW

(This is Dan, from CFAR since June 2012)

These are more like “thoughts sparked by Duncan’s post” rather than “thoughts on Duncan’s post”. Thinking about the question of how well you can predict what a workshop experience will be like if you’ve been at a workshop under different circumstances, and looking back over the years...

In terms of what it’s like to be at a mainline CFAR workshop, as a first approximation I’d say that it has been broadly similar since 2013. Obviously there have been a bunch of changes since January 2013 in terms of our curriculum, our level of experience, our staff, and so on, but if you’ve been to a mainline workshop since 2013 (and to some extent even before then), and you’ve also had a lifetime full of other experiences, your experience at that mainline workshop seems like a pretty good guide to what a workshop is like these days. And if you haven’t been to a workshop and are wondering what it’s like, then talking to people who have been to workshops since 2013 seems like a good way to learn about it.

More recent workshops are more similar to the current workshop than older ones. The most prominent cutoff that comes to mind for more vs. less similar workshops is the one I already mentioned (Jan 2013), which is the first time that we basically understood how to run a workshop. The next cutoff that comes to mind is January 2015, which is when the current workshop arc & structure clicked into place. The next is July 2019, which is the second workshop run by something like the current team and the first one where we hit our stride (it was also the first one after we started this year's instructor training, which I think helped with hitting our stride). And after that is sometime in 2016, I think, when the main classes reached something resembling their current form.

Besides recency, it’s also definitely true that the people at the workshop bring a different feel to it. European workshops have a different feel than US workshops because so many of the people there are from somewhat different cultures. Each staff member brings a different flavor - we try to have staff who approach things in different ways, partly in order to span more of the space of possible ways that it can look like to be engaging with this rationality stuff. The workshop MC (which was generally Duncan’s role while he was involved) does impart more of their flavor on the workshop than most people do, although a single participant’s experience is probably shaped more by whichever people they wind up connecting with the most, and that can vary a lot even between participants at the same workshop.