Make evidence charts, not review papers? [Link]

post by XiXiDu · 2011-09-04T13:26:57.261Z · LW · GW · Legacy · 4 comments

How do you get on top of the literature on a controversial scientific topic? For many empirical questions, such as the role of sleep in memory consolidation, the effect of caffeine on cognitive function, or the best theory of a particular visual illusion, the science gives a conflicted picture. To form your own opinion, you'll need to become familiar with many studies in the area.

You might start by reading the latest review article on the topic. Review articles describe many relevant studies. They also usually provide a nice tidy story that seems to bring the literature together into a common thread: that the author's theory is correct! Because of this bias, a review article may not help you much in making an independent evaluation of the evidence. And the evidence usually isn't all there: review articles very rarely describe, or even cite, all the relevant studies. Unfortunately, if you're just getting started, you can't recognize which relevant studies the author didn't cite.

[...]

Hal Pashler and I, together with Chris Simon of the Scotney Group, who did the actual programming, have created a tool that addresses these problems. It allows one to create a systematic review of a topic without writing many thousands of words, and without weaving all the studies together into a narrative unified by a single theory. You do it all in a tabular form called an 'evidence chart'. Evidence charts are an old idea, closely related to the "analysis of competing hypotheses" technique. Our evidencechart.com website is fully functional and free to all, but it's in beta and we'd love any feedback.
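
To make the tabular format concrete, here is a minimal sketch (in Python) of how such a chart could be represented: studies as rows, competing hypotheses as columns, and a signed support score in each cell. The class, field names, example studies, and scores are hypothetical illustrations, not the actual evidencechart.com data model.

```python
# Minimal sketch of an evidence chart: rows are studies, columns are competing
# hypotheses, and each cell holds a signed support score
# (-3 = strongly opposes ... +3 = strongly supports).
# All names and values below are illustrative assumptions.
from dataclasses import dataclass, field


@dataclass
class ChartRow:
    study: str                 # citation or short label for the study
    finding: str               # one-line summary of the result
    scores: dict[str, int] = field(default_factory=dict)  # hypothesis -> score


def net_support(rows: list[ChartRow]) -> dict[str, int]:
    """Sum the support scores for each hypothesis across all studies."""
    totals: dict[str, int] = {}
    for row in rows:
        for hypothesis, score in row.scores.items():
            totals[hypothesis] = totals.get(hypothesis, 0) + score
    return totals


# Hypothetical entries in the spirit of the sleep example linked below.
rows = [
    ChartRow("Study A (2004)", "A nap improved recall of word pairs",
             {"Sleep consolidates memory": 2,
              "Sleep merely reduces interference": -1}),
    ChartRow("Study B (2009)", "Quiet waking rest showed a similar benefit",
             {"Sleep consolidates memory": -2,
              "Sleep merely reduces interference": 2}),
]

for hypothesis, total in net_support(rows).items():
    print(f"{hypothesis}: net support {total:+d}")
```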

More: alexholcombe.wordpress.com/2010/09/02/make-evidence-charts-not-review-papers/

Example: What is the role of sleep on hippocampus-dependent memory consolidation?

I thought this was an interesting idea. Do you think it would be possible and useful to create evidence charts for risks from AI, existential risks in general, and other topics discussed on Less Wrong?

4 comments


comment by MinibearRex · 2011-09-04T22:30:38.209Z · LW(p) · GW(p)

How are the numbers assigned? They seem rather arbitrarily specified.

Replies from: AlexHolcombe
comment by AlexHolcombe · 2011-09-10T06:12:35.596Z · LW(p) · GW(p)

That's right. We (the developers of the site) think the numbers should not be taken too seriously, but they help to order the evidence for and against in terms of strength. Any simple format for presenting information will have its own biases, and an evidence chart will have many limitations, but we think it's nice that it's so concise relative to prose. A full argument map would provide more logical structure, but it is much more difficult and time-consuming to create.

comment by gwern · 2011-09-04T14:13:50.813Z · LW(p) · GW(p)

Looking at that sleep one... I thought it looked nice initially, but then I tried to look at the information in more detail, and I became less impressed. The text was doing all the semantic heavy-lifting.

Replies from: anonym
comment by anonym · 2011-09-04T19:12:14.793Z · LW(p) · GW(p)

Much of the heavy lifting is also done by the assignment of numbers and colors to indicate the impact of each experiment on a hypothesis. That's much easier to grok as a whole than plain text. I can also easily make quick judgments from the chart that are much more difficult to make from a review paper, such as "later experiments generally oppose this hypothesis, and only early experiments strongly support it" (among those in the chart, of course).