HOWTO: Screw Up The LessWrong Survey and Bring Great Shame To Your Family
post by namespace (ingres) · 2017-10-08T03:53:15.749Z · LW · GW · 3 comments
This is a link post for http://lesswrong.com/r/discussion/lw/ph4/howto_screw_up_the_lesswrong_survey_and_bring/
Comments sorted by top scores.
comment by habryka (habryka4) · 2017-10-08T05:11:44.576Z · LW(p) · GW(p)
(Warning, very long comment incoming)
So, I've been thinking a bit about this, because I obviously care a lot about the survey data. I was super busy during this year's survey time, so I didn't end up promoting it at all, but I think in the long run it makes a lot of sense for me and the LW 2.0 team in general to think about survey data.
I think in the long run, if LW 2.0 gets past the vote and everything, it makes sense for the survey to be hosted directly here on LessWrong, and to be integrated into the normal site experience. This allows us to get the data on a more continuous basis, gives us a lot of control over what questions to ask of which users, and also helps us figure out much more detailed things about usage patterns and preferences, etc. (This does not mean you would necessarily need an LW account to take the survey, just that if you had one you would be reminded on the page to take the survey.)
I've been working full-time in EA and Rationality community building for the last 2.5 years, and am probably in the top 20 of people who have used the survey data to make actual decisions. In trying to use it, I found that while a lot of the data was useful, most of it actually turned out to be really hard to interpret.
I think this was partially because the process of coming up with the survey questions and gathering the data was disconnected from the decisions that the organizations I worked with needed to make. I can only speak from the perspective of organizations in the EA and Rationality communities, but every year I found that the key questions I had hoped the survey would be able to answer weren't answerable with the data available.
(Examples of this include: "Where should we locate the next EA Global, based on people's geographic location?" and "How much funding could we raise for something like the Pareto fellowship?", both of which were basically unanswerable because we had no way of measuring people's total exposure to the ideas in the community, and so were basically unable to differentiate between someone who had just heard of Peter Singer in the news somewhere and someone whose complete circle of friends consisted of other rationalists and EAs.)
In the long run, I think it makes more sense to have some kind of broader survey platform, or some set of reliable procedures, that allows organizations and stakeholders who want to answer certain questions about the community to get the data to answer them. And at the same time to capture that data in some more generalizable format that doesn't prevent more census- and survey-style analysis. Things that go in this direction are:
Have a database of all LW accounts that is accessible to organizations in the community and selected individuals, and allow users when taking surveys on LessWrong to associate some of their responses uniquely with their LW account as well as their social accounts (optionally, of course).
Examples of such data could be: "How did you first hear about the EA/Rationality community?", "Has engagement with <piece of writing> significantly changed your career/life?", etc.
Have a list of all the group houses and how many people are in them, as well as some basic metadata about them.
Now, the question is: What to do in the short run?
I think the survey is quite valuable, and I would be sad to see it disappear completely this year. However, the 300 responses we got are definitely far below average, and so I am not sure how representative that data will be. My initial gut reaction is to just do a marketing push for it, the lack of which is what I expect caused the low turnout this year, but I do want to make sure we are not just wasting people's time with it.
I did also find the survey experience somewhat worse this year, and was really intimidated by the length of it. I also had a good amount of survey fatigue from a lot of surveys CFAR sent out recently, so that probably contributed to it. I really think the big problem with the survey is that I've been filling in the same information every year for 4 years or so, and that does just seem really unnecessary and wasteful.
I think the thing I would like to do before we make another push for the survey is to concretely write down a list of questions we actually want answered about this community, ones that are decision-relevant in some way. And I expect that if we have that list and boil it down to a bullet list of 3-4 core questions, that list should be at the center of the communication and promotion of the survey. I feel that this year's question list in particular was filled with a lot of "nice to have" questions, and not with that many "we really need this info" questions.
If we have such a list, I am happy to commit to spending at least 2-3 hours promoting the survey and trying to get other people to do the same.
comment by habryka (habryka4) · 2017-10-08T05:13:20.915Z · LW(p) · GW(p)
Also: Thanks a lot for writing this. It's very hard to deal with situations like this, and I greatly appreciate you being so open and straightforward about this.
comment by iceman · 2017-10-09T21:12:34.067Z · LW(p) · GW(p)
(Comment copied from the old site; where are we supposed to be commenting during this transitory period?)
if you took the survey and hit 'submit', your information was saved and you don't have to take it again.
I'm not sure this is true.
I took the survey over two sessions: I filled out most of the multiple-choice questions in the first session, and most of the long-form questions in the second. When I did my final submit, I also downloaded a copy of my answers. I was annoyed to find that it didn't contain my long-form responses. At the time, I assumed that this was just an export error, but you might want to verify that long-form responses from sessions after the first actually get saved.