Dialectical Bootstrapping

post by Johnicholas · 2009-03-13T17:10:20.436Z · LW · GW · Legacy · 8 comments

"Dialectical Bootstrapping" is a simple procedure that may improve your estimates. This is how it works:

  1. Estimate the quantity in question in whatever manner you usually would. Write that estimate down.
  2. Assume your first estimate is off the mark.
  3. Think about a few reasons why that could be. Which assumptions and considerations could have been wrong?
  4. What do these new considerations imply? Was the first estimate rather too high or too low?
  5. Based on this new perspective, make a second, alternative estimate.

Herzog and Hertwig find that the average of the two estimates (in a historical-date estimating task) is more accurate than the first estimate (Edit: or than the average of two estimates made without the "assume you're wrong" manipulation). To put the finding in an OB/LW-centric manner, this procedure (sometimes, partially) avoids Cached Thoughts.
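A minimal sketch of the procedure as an interactive script (the prompts, the placeholder question, and the plain unweighted average are illustrative choices, not details from the paper):

```python
# Sketch of dialectical bootstrapping as an elicitation script.
# The wording of the prompts is illustrative; the paper's task used
# historical-date questions.

def dialectical_bootstrap(question: str) -> float:
    print(question)
    first = float(input("1) Your first estimate: "))

    print("2) Assume that first estimate is off the mark.")
    print("3) List a few reasons why it could be wrong "
          "(which assumptions or considerations might not hold?).")
    print("4) Do those reasons suggest the estimate was too high or too low?")
    second = float(input("5) Your second, alternative estimate: "))

    # The reported finding: the average of the two estimates tends to be
    # more accurate than the first estimate alone.
    return (first + second) / 2


if __name__ == "__main__":
    answer = dialectical_bootstrap("In what year did event X happen?")
    print(f"Combined estimate: {answer:.1f}")
```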

8 comments

Comments sorted by top scores.

comment by RobinHanson · 2009-03-13T19:50:15.973Z · LW(p) · GW(p)

In this study, is the average also more accurate than the second guess?

Replies from: Unnamed, Johnicholas
comment by Unnamed · 2009-03-13T20:21:16.547Z · LW(p) · GW(p)

They don't test that directly. From what they report, it looks like the average is more accurate than the second guess, but not statistically significantly so. The average is 7.6 better than the first guess (with mean errors of 123.2 vs. 130.8, looking at all participants' first guesses, and the averages of only those in the dialectical bootstrapping condition). The second guess (of those in the dialectical bootstrapping condition) is only 4.5 better than their first guess, which is not reliably different from zero (95% CI = -1.0 to +10.4).

comment by Johnicholas · 2009-03-13T20:13:38.825Z · LW(p) · GW(p)

In the reliability condition, the first and second estimates for each question were nearly identically accurate, with a mean within-participants difference of 0.4 (SD = 6.7; Mdn = 0.0; confidence interval, or CI = 0.0 to +1.4; d = 0.06). In the dialectical-bootstrapping condition, the second estimates were somewhat, but not reliably, more accurate than their respective first estimates (within-participants difference: M = 4.5, SD = 19.6; Mdn = 3.0; CI = -1.0 to +10.4; d = 0.23).

comment by jimmy · 2009-03-13T18:03:24.548Z · LW(p) · GW(p)

http://www.overcomingbias.com/2008/06/average-your-gu.html

The same point is made there, with the addition that the second guess is usually worse than the first. What kind of weighting do we need to put on the second answer so that we can eliminate that bias? It has to be less than one (since the second answer is worse than the first), but more than one fourth (since the average is better than the first guess).
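One way to make those bounds precise (a sketch, assuming error is measured by squared error, so the expected loss of the weighted combination is a convex quadratic in the weight w placed on the second answer):

\[
\hat{x}(w) = (1-w)\,x_1 + w\,x_2, \qquad
L(w) = \mathbb{E}\!\left[\big(\hat{x}(w) - x\big)^2\right] = a w^2 + b w + c, \quad a > 0.
\]

\[
L(\tfrac{1}{2}) < L(0) \;\Rightarrow\; \tfrac{a}{4} + \tfrac{b}{2} < 0 \;\Rightarrow\; w^{*} = -\tfrac{b}{2a} > \tfrac{1}{4},
\qquad
L(1) > L(0) \;\Rightarrow\; a + b > 0 \;\Rightarrow\; w^{*} < \tfrac{1}{2} < 1.
\]

Under this squared-error reading, "average beats first guess" and "second guess worse than first" together place the optimal weight between 1/4 and 1/2, consistent with the looser "less than one" bound above.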

Replies from: Johnicholas
comment by Johnicholas · 2009-03-13T18:07:55.472Z · LW(p) · GW(p)

The improvement in this paper, over simply making two estimates one after another, is the focus on assuming your first estimate is wrong while constructing your second estimate.

The control group in the paper did make two estimates; I'll edit to emphasize that.

comment by CarlShulman · 2009-03-13T20:26:32.861Z · LW(p) · GW(p)

I have found this to be very useful.

comment by ArthurB · 2009-03-13T19:48:22.115Z · LW(p) · GW(p)

It would be interesting to try the experiment with Versed. You remove the dialectical aspect (steps 2,3,4) but you keep the wisdom of the crowd aspect.

Replies from: Johnicholas
comment by Johnicholas · 2009-03-13T20:17:30.850Z · LW(p) · GW(p)

By "Versed", are you referring to the drug Midazolam? Is there a particular reason that you picked that drug rather than, say, alcohol?