Aspergers Survey Re-results
post by Roko · 2010-05-29T16:58:34.925Z · LW · GW · Legacy · 7 comments
Followup to: Aspergers Poll results
Since posting my little survey on the degree to which the Less Wrong community has a preponderance of people with systematizing personality types, I've been collecting responses only from people who saw the original post and considered taking the survey then, but didn't, in order to combat nonresponse bias.
82 people responded to the initial survey, and another 186 responded after the request for non-responders to respond. In the initial survey, 26% of respondents scored 32+ (which is considered a "high" score; in a group of Cambridge mathematics students, 7 of the 11 who scored over 32 were said to fit the full diagnostic criteria for Asperger's syndrome after being interviewed).
In the combined sample of 82 initial respondents and 186 second-round respondents, this increased to 28%. In the original survey, 5% of respondents said they had already been diagnosed with Asperger's syndrome; in the combined sample this increased to 7.5%.
Overall, this indicates that response bias is probably not significantly skewing our picture of the LW audience, though, as always, it is possible that there is a more sophisticated bias at work and that these 268 people are not representative of LW.
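As a rough sanity check on these figures, here is a minimal Python sketch that backs out the implied round-two-only numbers from the rounded percentages above and runs a simple two-proportion z-test. The integer counts are inferred from the rounded percentages, not taken from the raw survey data, so treat the result as illustrative only.

```python
# Rough sanity check on the figures above. The integer counts are inferred
# from the rounded percentages (an assumption), not taken from the raw data.
from math import sqrt, erfc

n1, n2 = 82, 186                          # round-1 and round-2 respondents
high1 = round(0.26 * n1)                  # ~21 round-1 respondents scoring 32+
high_combined = round(0.28 * (n1 + n2))   # ~75 in the combined sample
high2 = high_combined - high1             # ~54 implied round-2 respondents scoring 32+

p1, p2 = high1 / n1, high2 / n2
print(f"round 1: {p1:.1%} scored 32+; round 2 alone: {p2:.1%}")

# Pooled two-proportion z-test: a small |z| (large p) means the two rounds
# look alike, i.e. no detectable shift in AQ between the initial responders
# and the initial non-responders who answered later.
p_pool = (high1 + high2) / (n1 + n2)
z = (p2 - p1) / sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
p_value = erfc(abs(z) / sqrt(2))          # two-sided normal tail probability
print(f"z = {z:.2f}, p = {p_value:.2f}")  # roughly z ≈ 0.6, p ≈ 0.6
```

On these inferred counts the difference between rounds is well within noise, which is consistent with the conclusion above, though it says nothing about biases shared by both rounds.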
7 comments
Comments sorted by top scores.
comment by ParadisePete · 2010-05-30T02:55:18.275Z · LW(p) · GW(p)
I scored 37, but nobody I know would consider me that far from normal. A friend (who scored 10 himself) took it as if he were me and scored 27.
I did not respond to either poll, as I did not see either of the original articles.
comment by Unnamed · 2010-05-29T17:23:19.252Z · LW(p) · GW(p)
Overall, this indicates that response bias is probably not significantly skewing our picture of the LW audience...
As was mentioned in the comments to the previous post, the second round of the survey could have the same response bias as the first round, since you're still only sampling from people who read a post about Aspergers and then chose to click through to a poll about Aspergers.
Replies from: Roko
↑ comment by Roko · 2010-05-29T17:50:39.887Z · LW(p) · GW(p)
Imagine that out of every 10 people who regularly read LW, only 3 clicked through to the original article, and out of those 3, only 1 filled out the survey.
That's two sequential filters. Now, it is reasonable to suppose that there's a bias where people with high AQ scores are both more likely to click through to the article and, once they're on the page, more likely to bother to fill out the survey. However, the data seem to rule that out.
The only hypothesis that still remains is that there is a big bias for clicking through to the article, but no bias at all for actually taking the (considerable) time to fill out the survey.
This seems somewhat contrived to me. Why would we expect that?
Replies from: Unnamed
↑ comment by Unnamed · 2010-05-29T19:39:16.175Z · LW(p) · GW(p)
That is exactly what Psychohistorian expected.
Also, we don't know that there was "no bias at all" for taking the survey, just that the net change in bias between the first group and the second group was relatively small. I expected there to be a big bias in who made it through the first filter (reading the original article and finding out about the poll in its last paragraph) and multiple additional biases which would partially cancel out, and predicted that the net effect of these additional biases would be for the second group's AQ to be a bit lower than the first group's (but still substantially higher than the true population AQ). For instance, only some of the nonresponders from the first round actually saw your request for them to take the second-round survey, and they might tend to be high-AQ because they needed to look at your second Aspergers post to see the request.
I made this prediction about the AQ scores with low confidence, since it's hard to guess the relative sizes of all of these potential biases, or even to identify every relevant bias. For instance, the data are showing that there were slightly more people who have been diagnosed with Aspergers in the second round than the first, and this could reflect a genuine difference (rather than random variation) caused by another bias: people who have been clinically tested for Aspergers might have been more likely to read the post but less likely to go on to take the survey, since they had less to learn from it.
My main point in this discussion is that, for future surveys, it's better to try to avoid selection effects in the first place than to try to account for them after the fact, since they can introduce a lot of uncertainty which is hard to get rid of.
Replies from: Roko
↑ comment by Roko · 2010-05-29T20:37:58.969Z · LW(p) · GW(p)
Yeah, you actually predicted that the second-round responders would be slightly less AS-ish than the first-round responders, but in fact they are slightly more.
To be honest, this result surprised me too; I expected the AQ scores to go down, so I'm updating towards the "no large net bias" hypothesis.
Replies from: Unnamed
↑ comment by Unnamed · 2010-05-30T21:05:59.869Z · LW(p) · GW(p)
One way to think of it is that there were three filters:
- only some LW visitors became aware of the survey when you first posted it (by reading about it in the last paragraph of your article)
- of those who made it through filter 1, only some took the survey right then (in round 1)
- of those who made it through filter 1 and did not make it through filter 2, only some took the survey later in round 2
With all three filters, it seemed like high AQ people would be more likely to make it through, but it was hard to estimate how strongly each filter would select for high AQ. I expected filter 1 to be the strongest (at selecting for high AQ), and filter 2 to be a bit stronger than filter 3. The data suggest that filter 3 was (if anything) very slightly stronger than filter 2, which requires some updating. But since one group went through filters 1 & 2 and the other went through filters 1 & 3, the data don't speak directly to the strength of filter 1. You're inferring that all three filters are probably relatively weak, but I don't see a good reason to conclude that about filter 1.
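As a concrete illustration of this three-filter picture, here is a toy simulation. All of the parameters are made up for the sake of the example (the Gaussian AQ distribution and the logistic pass-probabilities are assumptions, not estimates of the real LW readership); the point is only to show how rounds 1 and 2 can come out close together even while filter 1 is strongly selecting for high AQ.

```python
# Toy simulation of the three-filter model. All parameters are made-up
# assumptions for illustration, not estimates of the real LW readership.
import math
import random
from statistics import mean

random.seed(0)

def pass_prob(aq, strength, midpoint=20):
    """Logistic pass-probability: higher AQ -> more likely to pass the filter."""
    return 1 / (1 + math.exp(-strength * (aq - midpoint)))

# Hypothetical population of LW readers (mean AQ 17, sd 8 -- invented numbers).
population = [random.gauss(17, 8) for _ in range(100_000)]

round1, round2 = [], []
for aq in population:
    if random.random() > pass_prob(aq, strength=0.20):   # filter 1: saw the poll
        continue
    if random.random() < pass_prob(aq, strength=0.05):   # filter 2: took it in round 1
        round1.append(aq)
    elif random.random() < pass_prob(aq, strength=0.05): # filter 3: took it in round 2
        round2.append(aq)

print(f"population mean AQ: {mean(population):5.1f}")
print(f"round 1 mean AQ:    {mean(round1):5.1f}")
print(f"round 2 mean AQ:    {mean(round2):5.1f}")
# Filters 2 and 3 are equally strong here, so rounds 1 and 2 come out close
# together -- even though filter 1 has shifted both well above the population.
```

Because both observed groups sit downstream of filter 1, their similarity constrains only the relative strength of filters 2 and 3, not how much filter 1 has already skewed the sample.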
comment by algekalipso · 2015-07-28T10:01:01.942Z · LW(p) · GW(p)
I'm currently running a study on personality and consciousness in the transhumanist community. The questionnaire also inquires into the possible effects of Asperger's on memetic affiliations.
Of course, LessWrongers are an important piece of the puzzle. Please help me by answering this survey:
qualiacomputing.com/2015/07/18/state-space-of-background-assumptions/