What recent academic areas of study have a sample size that is 'too small'?
post by casualphysicsenjoyer (hatta_afiq) · 2022-04-16T13:52:56.257Z · LW · GW
This is a question post.
In an article in Nature, Scott Marek et al. [1] argue that current studies linking behaviour to brain imaging rely on datasets too small to be reliable. By taking a larger dataset and attempting to reproduce previously established results on different subsets of the data, they find that few of these results replicate.
Marek and his colleagues show that even large brain-imaging studies, such as theirs, are still too small to reliably detect most links between brain function and behaviour.
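To see why small samples fail here, consider a minimal simulation of the subsampling idea (a sketch, not Marek et al.'s actual analysis pipeline; the true correlation of r = 0.1 and all parameters below are illustrative assumptions): draw subsamples of various sizes from a large synthetic dataset with a weak true brain-behaviour correlation, and count how often the effect comes out significant with the correct sign.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic "full" dataset with a weak true brain-behaviour correlation.
# r = 0.1 is an assumed, illustrative effect size.
N_FULL, TRUE_R = 40_000, 0.1
brain = rng.normal(size=N_FULL)
behaviour = TRUE_R * brain + np.sqrt(1 - TRUE_R**2) * rng.normal(size=N_FULL)

def replication_rate(n: int, trials: int = 1_000) -> float:
    """Fraction of size-n subsamples whose sample correlation is both
    'significant' (|r| above the rough p < .05 critical value) and of
    the correct sign."""
    crit = 1.96 / np.sqrt(n)  # approximate large-sample critical |r|
    hits = 0
    for _ in range(trials):
        idx = rng.choice(N_FULL, size=n, replace=False)
        r = np.corrcoef(brain[idx], behaviour[idx])[0, 1]
        hits += r > crit
    return hits / trials

for n in (25, 100, 1_000, 4_000):
    print(f"n = {n:>5}: replication rate ~ {replication_rate(n):.2f}")
```

With a true r of 0.1, subsamples of a few dozen participants detect the effect only a small fraction of the time, and it takes on the order of thousands of participants before most subsamples replicate it - the qualitative pattern the article describes.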
I was wondering if the community has a prior on which other areas of recent academic interest have fallen into a similar trap?
References
[1] https://www.nature.com/articles/d41586-022-00767-3
Answers
answer by tailcalled
Social psychology seems infamous for this sort of thing: social priming, stereotype threat, ego depletion, etc.
Evolutionary psychology can also be understood as having a sample size that is too small. IME it does less badly than social psychology in terms of the number of participants, but evolutionary psychology should be interested in cross-cultural universals, and should therefore probably study people across diverse societies (as well as, perhaps, studying animals of other species). However, evolutionary psychology studies often investigate only a single society.
If you are willing to generalize the question a bit, there's also the issue of reliability. For instance, in polls, people may answer a question differently depending on how they are asked (really, it seems to me that the different "ways of asking" are often distinct but highly related questions - but the same point still holds). This introduces noise, and one way to reduce it is to ask multiple times in "different ways" and look at the overall response. Reliability statistics such as Cronbach's alpha were invented to check whether you've done a good enough job of asking in multiple ways. But in some contexts this is often not done. (This is related to sample size in the sense that it is the sample size of the "transposed data" - where you turn variables into observations and observations into variables.)
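To make the "multiple ways of asking" point concrete, here is a minimal sketch (the survey setup and all numbers below are assumptions for illustration, not from the post): Cronbach's alpha measures how consistently a set of related items tracks the same underlying trait, and simulated data shows how it behaves when each item is a shared latent attitude plus independent noise.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each separate item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed score
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Simulate 200 respondents answering 5 noisy rephrasings of one question:
# each answer is a shared latent attitude plus independent item noise.
rng = np.random.default_rng(0)
trait = rng.normal(size=(200, 1))
items = trait + rng.normal(size=(200, 5))
print(cronbach_alpha(items))  # ~0.83 when signal and noise variance are equal
```

Halving the noise (or adding more rephrasings) pushes alpha up; a single noisy item, by contrast, gives you no way to estimate how much of the response is noise at all.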
answer by Derek M. Jones
Where to start? In my own field of software engineering we have studies in effort estimation; for those readers keen on advocating particular programming languages, the evidence that strong typing is effective; and the case of a small sample getting lucky. One approach to a small sample size is to sell the idea, not the result.
Running a software engineering experiment with a decent sample size would cost about the same as a Phase I clinical drug trial.
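For a rough sense of what a decent sample size means (a back-of-the-envelope sketch using textbook power-analysis conventions; the medium effect size is an assumption, and statsmodels is a stand-in for whatever tool you prefer):

```python
from statsmodels.stats.power import TTestIndPower

# Participants per group needed to detect a medium effect (Cohen's d = 0.5)
# in a two-group comparison at alpha = 0.05 with 80% power.
n_per_group = TTestIndPower().solve_power(effect_size=0.5, alpha=0.05, power=0.8)
print(round(n_per_group))  # ~64 per group, i.e. ~128 participants in total
```

Recruiting well over a hundred professional developers for a realistic task, rather than a handful of students, is where the clinical-trial-scale costs come from.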
↑ comment by casualphysicsenjoyer (hatta_afiq) · 2022-04-16T15:59:27.451Z · LW(p) · GW(p)
Thanks Derek. I'm writing a blog post on results from small samples - may I cite your answer?
↑ comment by Derek M. Jones (Derek-Jones) · 2022-04-16T16:19:25.539Z · LW(p) · GW(p)
I'm always happy to be cited :-)
Sample size is one major issue; the other is who or what gets to be in the sample.
Psychology has its issues with using WEIRD (Western, educated, industrialized, rich, and democratic) subjects.
Software engineering has issues with the use of student subjects, because most of them have relatively little experience.
It all revolves around convenience sampling.