What is required to run a psychology study?

post by Raemon · 2019-05-29T06:38:13.727Z · LW · GW · 6 comments

This is a question post.

Contents

  Legal Stuff
  Doing Science Good Enough
  Particular Things I was interested in:
  Answers
    23 sdr
    19 jessicata
    4 Dagon
6 comments

I periodically find myself wishing someone had run an experiment on a particular topic. But they haven't.

Often, it seems like there's a relatively easy experiment I could run that would give me some evidence about it. Maybe it wouldn't be perfect evidence, but it would be better than making stuff up from my armchair.

The two clusters of things-I-can-imagine-being-quite-hard are:

Legal Stuff

Scott's IRB Nightmare suggests that the legal requirements can be quite awful. I'm unclear on what those requirements are if I don't want to publish anywhere and don't plan to interface with any hospital bureaucracies; I just want to ask a bunch of people to try something and see how it goes (and maybe write a LessWrong blogpost so people on LessWrong can know too).

Am I allowed to just go out and ask a whole bunch of people stuff? If I want to know things about 8-year-olds or 16-year-olds, and their parents sign a form and I'm not doing anything especially weird or traumatic, would that be fine, or would terrible things happen to me? (Could I replicate the Marshmallow Test?)

Doing Science Good Enough

I have a sense that there are a lot of ways to deceive yourself if you have a pet psychology theory. But... I dunno, it also seems like LessWrong collectively should be pretty good at this. The thing I might want to do (or get others to do sometimes) is post a plan for a study here, get critiqued until it seems like an actually good plan, do the plan, and write up the results.

Particular Things I was interested in:

(not meant to be exhaustive, important, or particularly achievable plans, just the things that generated this question)

Answers

answer by sdr · 2019-05-29T07:57:53.780Z · LW(p) · GW(p)

In the name of supporting people actually doing stuff:

  • Scott's IRB Nightmare comes from polling taking place within the context of a privileged patient-provider interaction, which is covered by HIPAA, which requires somewhat stringent data handling. If you are not a doctor, and you're not asking your patients in the hospital, this does not apply to you.
  • Yes, you are allowed to "just go out and ask a whole bunch of people stuff". People can, actually, give away whatever information they feel like giving away. People are allowed to enter (mostly) any trade. People are free to do stuff.
  • For people <18, you need parental consent.
  • There are, like, hundreds of tools to do this, both for finding people and for nailing down the questions. Google Surveys currently samples best across the US (specifically, it predicted the 2016 election results successfully). This is good if you have a specific hypothesis that you want to put to 1000+ people.
  • The more quantitative you get, the less signal each answer carries, though at higher precision. Survey & stats criticism generally comes from attempting to determine "things about humanity in general", which is also (somewhat) useful, but requires a very large N and _very_ methodical sampling, experiment formulation, etc. (see the margin-of-error sketch after this list).
  • Generate qualitatively, validate quantitatively. Vast majority of effort goes into actually locating the hypothesis. Before building a research thesis in your room, go out and do the simplest thing first. Talk with people, like, in-person. There's a learning curve prior to being able to formulate meaningful hypotheses.
  • Ask yourself what rent the answer to a specific question pays. What does it say about reality if it turns out to be A vs B? How does that interact with neighbouring things?
    • And: what, specifically, do you wish to achieve here? For instance, some qualitative answers to some of the questions above from Bay Area people, along with some synthesis, would be extremely informative (to me at least).
  • A good starting point for this might be cultural anthropology, but instead of getting a book, here's an MVP: get a tape recorder, ask 50 of your friends the questions above, then put the answers into a spreadsheet and a synthesis into an LW post. This is extremely informative for, e.g., measuring local shifts in the Overton window and finding common ground (and grounds shifting); and it is sorely missing.
    • Why in-person? People who persistently fill out textareas on web pages are heavily biased in income and mental illness; generally, people don't do that. Being in person, you're raising the interview against personal reputation, which bridges the addressability gap and makes a much wider variety of people's voices accessible.
    • To avoid the pet-theory issue: ask open-ended questions (e.g. the ones above relating to jobs/ambition are good). Don't lead; capture the raw stuff.
  • Do this simple thing first, prior to embarking on specific hypothesis formulation; and post the results!
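
To put rough numbers on the N-vs-precision point above: a minimal sketch, in plain Python, of the 95% margin of error a simple random sample buys you. The formula is the standard normal approximation and the sample sizes are arbitrary illustrations, not anything sdr prescribed.

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for a proportion p estimated from a
    simple random sample of size n (normal approximation)."""
    return z * math.sqrt(p * (1 - p) / n)

# Worst case (p = 0.5) at a few sample sizes:
for n in [100, 1000, 10000]:
    moe = margin_of_error(0.5, n)
    print(f"n={n:>6}: +/- {moe:.1%}")
# n=   100: +/- 9.8%
# n=  1000: +/- 3.1%
# n= 10000: +/- 1.0%
```

Note the square-root scaling: each extra digit of precision costs roughly 100x the respondents, which is why "things about humanity in general" gets expensive fast.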
comment by Said Achmiz (SaidAchmiz) · 2019-05-29T08:21:46.350Z · LW(p) · GW(p)

Have you done this? If so, what were the questions, what were the answers, and are they published anywhere?

Replies from: sdr
comment by sdr · 2019-05-30T04:44:37.949Z · LW(p) · GW(p)

In the context of customer development for product research, yes. For good questions on that, see e.g. the book "The Mom Test" by Rob Fitzpatrick, and the lean customer development field in general. This was solving for the general question "will developing X be paid for?"; being wrong on that particular question is expensive.

comment by Said Achmiz (SaidAchmiz) · 2019-05-29T08:22:58.354Z · LW(p) · GW(p)

There are, like, hundreds of tools to do this, both for finding people and for nailing down the questions. Google Surveys currently samples best across the US (specifically, it predicted the 2016 election results successfully).

Could you list some good ones (other than Google Surveys)?

comment by Raemon · 2019-05-31T00:28:52.838Z · LW(p) · GW(p)

Thanks!

comment by habryka (habryka4) · 2019-05-29T19:23:11.623Z · LW(p) · GW(p)

(Edit note: Fixed formatting)

answer by jessicata · 2019-05-29T07:44:10.184Z · LW(p) · GW(p)

For studies in the US, see this flowchart.

If the human subjects research isn't supported by the US Department of Health and Human Services, and isn't supported by an institution that holds an FWA, then the research isn't covered by regulations on human subjects research.

At a cognitive science lab at Stanford I worked in, it was quite common to run studies using Mechanical Turk.
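
For a sense of what that looks like in practice, here is a minimal sketch of posting a survey task via boto3's Mechanical Turk client. The title, reward, and survey URL are hypothetical placeholders, and this is just the general shape of the API, not how that particular lab ran its studies; you'd normally test against the sandbox endpoint first.

```python
import boto3

# Use the sandbox endpoint while testing; drop endpoint_url for production.
mturk = boto3.client(
    "mturk",
    region_name="us-east-1",
    endpoint_url="https://mturk-requester-sandbox.us-east-1.amazonaws.com",
)

# An ExternalQuestion points workers at a survey hosted elsewhere
# (hypothetical URL below; could be any survey page you control).
question_xml = """
<ExternalQuestion xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2006-07-14/ExternalQuestion.xsd">
  <ExternalURL>https://example.com/my-survey</ExternalURL>
  <FrameHeight>600</FrameHeight>
</ExternalQuestion>
"""

hit = mturk.create_hit(
    Title="Short psychology survey (5 minutes)",
    Description="Answer a few questions about decision-making.",
    Reward="0.50",                     # USD, passed as a string
    MaxAssignments=100,                # number of distinct workers
    LifetimeInSeconds=7 * 24 * 3600,   # how long the HIT stays listed
    AssignmentDurationInSeconds=1800,  # time each worker gets
    Question=question_xml,
)
print("HIT ID:", hit["HIT"]["HITId"])
```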

answer by Dagon · 2019-05-29T16:54:11.044Z · LW(p) · GW(p)

Informal surveys are done literally all the time, by university undergrads, sensationalist news organizations, political organizations, businesses, etc.

If you want to publish somewhere, you'll need to follow their rules. If you're using or establishing some sort of business or medical relationship with those surveyed, there are restrictions on how you can do that. Targeting or collecting data on under-18 humans is restricted in many jurisdictions, and I don't know what compliance takes. If you're calling or texting people, there are rules there too. The rules seem to be ignored a lot of the time, especially for informal, one-time, small-scale uses.

The bigger problem I see is the validity of the study and the representativeness of the sample. The sample topics you give all seem to be about counting or quantifying something within a population. Most of your work will be in defining the population you're trying to measure and figuring out how to get a wide-ranging, evenly-distributed sample of responses within that population.

The other "most" of your work will be in figuring out how to get the data that actually tells you anything. There's a lot of individual variance in the topics given, and a lot of ambiguity in what results of any concrete test would show.

6 comments

Comments sorted by top scores.

comment by Dagon · 2019-05-29T18:39:34.336Z · LW(p) · GW(p)

I, for one, would get some value out of seeing how you'd use such data. Instead of running a survey or study, write up the results for all possible (or a few likely) outcomes, without actually knowing which is true. What are you going to infer differently if, for example, 12% of people can write FizzBuzz than if 40% or 80% can?

Writing these (or at least the outlines of each) is a great way to pre-register the studies, to avoid worries about p-hacking.
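
To make the 12%/40%/80% contrast concrete, a quick sketch of how much uncertainty surrounds each observed rate at a couple of arbitrary sample sizes. The Wilson score interval is my choice here (it behaves better than the naive formula at small n); none of the numbers come from an actual study.

```python
import math

def wilson_interval(p_hat, n, z=1.96):
    """Approximate 95% confidence interval for an observed proportion."""
    denom = 1 + z * z / n
    center = (p_hat + z * z / (2 * n)) / denom
    half = z * math.sqrt(p_hat * (1 - p_hat) / n + z * z / (4 * n * n)) / denom
    return center - half, center + half

for n in [50, 500]:
    for p_hat in [0.12, 0.40, 0.80]:
        lo, hi = wilson_interval(p_hat, n)
        print(f"n={n:>3}, observed {p_hat:.0%}: roughly {lo:.0%} to {hi:.0%}")
```

Even at n=50 the three outcomes land in non-overlapping intervals, which suggests a modest sample suffices when the question really is that coarse; the pre-written inferences tell you whether it is.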

comment by Matt Goldenberg (mr-hire) · 2019-05-29T16:09:01.392Z · LW(p) · GW(p)

You may want to check out Positly, a platform created by Spencer Greenberg to do precisely this type of research.

comment by Shmi (shminux) · 2019-05-29T06:59:00.906Z · LW(p) · GW(p)

A wild idea: don't call it a psychology study. Call it a poll or a survey.

Replies from: Raemon
comment by Raemon · 2019-05-29T07:07:11.916Z · LW(p) · GW(p)

Relevant questions include:

  • Are you allowed to just survey people about whatever without it being a big deal? (I'd have assumed/hoped so, but I wouldn't be that surprised if there are weird legal risks)
  • If you do a poll or a survey... and then call it a "study", can you get in trouble? (I'd also have assumed this would be fine, but it seems useful to know if it's not.)

The "learn programming" one also would need to be a lot more involved, although I'm sure you could still find some other non-study phrase.

Replies from: Kaj_Sotala, Douglas_Knight
comment by Kaj_Sotala · 2019-05-30T21:42:36.916Z · LW(p) · GW(p)

If you want to publish it formally, the journal may impose its own requirements. E.g. back when Facebook did a formal study on their users, they appealed to the users having consented to A/B testing when they accepted Facebook's TOS. Afterwards, several researchers argued that this broke the rules for informed consent, with one paragraph in the linked article suggesting that the paper might end up retracted by the publisher:

When asked whether the study had had an ethical review before being approved for publication, the US National Academy of Sciences, which published the controversial paper in its Proceedings of the National Academy of Sciences (PNAS), told the Guardian that it was investigating the issue.

(I don't recall hearing what the results of that investigation were, but I don't think it was ever retracted.)

comment by Douglas_Knight · 2019-05-29T16:13:39.116Z · LW(p) · GW(p)

As Jessicata said, the regulations only apply via federal funding. It's that simple.

There are some magic words that turn an oral history into a study, but a poll is already a study. And none of this changes your funding.