Wisdom of the Crowd: not always so wise

post by tgb · 2012-07-01T20:55:36.230Z · LW · GW · Legacy · 20 comments


I have a confession to make: I have not been "publishing" the results of an experiment because they were uninteresting. You may recall that some time ago I made a post asking people to take a survey so that I could run a small variation on the typical "Wisdom of the Crowds" experiment, in which people estimate some value and the average of the crowd's estimates turns out to be better than all, or almost all, of the individual estimates. Since LessWrong is full of people who like to do these kinds of things (thank you!), I got 177 responses - many more than I was hoping for!

I am now coming back to this since I happened upon an older post by Eliezer saying the following:

When you hear that a classroom gave an average estimate of 871 beans for a jar that contained 850 beans, and that only one individual student did better than the crowd, the astounding notion is not that the crowd can be more accurate than the individual. The astounding notion is that human beings are unbiased estimators of beans in a jar, having no significant directional error on the problem, yet with large variance. It implies that we tend to get the answer wrong but there's no systematic reason why. It requires that there be lots of errors that vary from individual to individual - and this is reliably true, enough so to keep most individuals from guessing the jar correctly. And yet there are no directional errors that everyone makes, or if there are, they cancel out very precisely in the average case, despite the large individual variations. Which is just plain odd. I find myself somewhat suspicious of the claim, and wonder whether other experiments that found less amazing accuracy were not as popularly reported.

(Emphasis added.) It turns out that I myself was sitting upon exactly such results.

The results are here. Sheet 1 shows the raw data and Sheet 3 shows some summary values computed from those numbers. A few responses that were clearly either jokes or mistakes (like not noticing the answer was supposed to be in millions) were removed. In summary: the population of Africa is about 1000 million (according to Wikipedia, as of 2009), whereas the LessWrong estimate was 781 million; and the first transatlantic telephone call happened in 1926, whereas the average from the poll was 1899.

There! I've come clean!

I had deferred making this public because I thought the effect I was trying to test wasn't really being tested by this experiment, regardless of the results. The idea (see my original post, linked above) was to see whether selecting between two choices would still let the crowd average out to the correct value (the two-option format was meant to reflect the structure of some democracies). But how to interpret the results? It seemed that my selection of the two options mattered too much: the average would change depending on which values I picked, even if everyone were to make their own estimate first and then choose the closer of the two options. So perhaps the only result of note here is that, for the questions given, LessWrong users were not particularly great at being a wise crowd.

20 comments

Comments sorted by top scores.

comment by sixes_and_sevens · 2012-07-01T21:24:18.484Z · LW(p) · GW(p)

The popular interpretation of the Wisdom of Crowds phenomenon is that each participant brings a certain amount of information to the result, and a certain amount of noise. Over a large enough sample size, the noise (divergent) cancels itself out, while the information converges on a value which, in the absence of systematic bias, should be proximate to the true value.

By restricting participants to two choices (one correct, one incorrect), all your noise is going to converge on the same answer.
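(A minimal simulation sketch of this effect; the true value, noise scale, and offered options below are all invented for illustration and are not from the survey.)

```python
import numpy as np

rng = np.random.default_rng(0)
truth = 850        # hypothetical true value (beans in a jar)
n = 10_000         # hypothetical crowd size

# Free-response crowd: unbiased private estimates = truth + symmetric noise.
guesses = truth + rng.normal(0, 300, n)
print("free-response mean:", round(guesses.mean()))   # ~850: the noise cancels out

# Two-choice crowd: each person picks whichever offered option is closer to
# their private estimate (the options 800 and 1500 are arbitrary).
options = np.array([800, 1500])
picks = options[np.argmin(np.abs(guesses[:, None] - options), axis=1)]
print("two-choice mean:", round(picks.mean()))        # pulled toward the options, ~910
```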

Replies from: TheOtherDave, tgb
comment by TheOtherDave · 2012-07-01T21:51:59.060Z · LW(p) · GW(p)

Is there a short explanation of why we should expect an absence of systematic bias?

Replies from: sixes_and_sevens
comment by sixes_and_sevens · 2012-07-01T22:34:12.418Z · LW(p) · GW(p)

That's kind of Eliezer's point when he talks about how astounding it is that human beings are unbiased estimators of beans in a jar. I'd agree that it's astounding, but there are plenty of other statistical phenomena that astound me equally, so I've learned to not treat my level of astonishment as a precision tool for judging incredibility.

To some extent, I suspect the mechanism of estimation plays a significant role. I doubt very much that human beings have built-in heuristics for appraising large numbers of objects. Arithmetic is a fairly novel concept, evolutionarily speaking, and some cultures don't even have the natural numbers.

So when we try and guess the number of beans in a jar, there's presumably no single go-to mechanism we're using to come up with that value. It will be some sort of aggregate of sources, such as our past experience of beans in jars, visualisations of what 200 or 400 or 600 beans all in one place might look like, or rough guesses of volume and packing density. It isn't even necessarily a transparent process. If you try and make a rough estimate of something, aren't you using some sort of basis for that? It's not like the number just pops into your head. You wrestle with it for a little while.

Individual components of that estimation may be subject to bias in a given direction, but over enough sources, over enough people with many different estimation criteria, I wouldn't trust there to necessarily be a demonstrable bias over repeated experiments without deliberate intervention on the part of the experimenter, such as using a container of an unusual shape that would result in a known overestimation of its volume.

Edit: I should also add an expectation of bias idiosyncratic to specific questions. For example, I think it was Yvain's most recent LW membership poll that asked for the date Newton published his Philosophiæ Naturalis Principia Mathematica. If there was a widely-believed false date for this event, that would be an obvious source of noise that wouldn't be cancelled out by corresponding noise on the other side of the true value.

Replies from: khafra, Kawoomba
comment by khafra · 2012-07-02T17:57:18.624Z · LW(p) · GW(p)

According to a study cited in the Model Thinking class from Coursera.org, this is correct. Crowds which can be collectively characterized as a hedgehog do not have wisdom; crowds which are collectively foxes do have wisdom. The diversity of models is key.

comment by Kawoomba · 2012-07-02T18:22:45.290Z · LW(p) · GW(p)

Individual components of that estimation may be subject to bias in a given direction, but over enough sources, over enough people with many different estimation criteria, I wouldn't trust there to necessarily be a demonstrable bias over repeated experiments without deliberate intervention on the part of the experimenter

This can be seen simply as a version of the central limit theorem: Any sum or average of samples from ANY distribution (with finite mean and standard deviation) will be approximately normally distributed (Gaussian) with the approximation better for larger samples. Neato!

Replies from: sixes_and_sevens
comment by sixes_and_sevens · 2012-07-02T18:40:12.360Z · LW(p) · GW(p)

I'd say it's related to the central limit theorem, but would be cautious about equating the two. We would probably expect a Gaussian distribution from a variable which is the sum or product of a lot of component parts (i.e. lots of different estimator methods), but we wouldn't necessarily expect the mean to coincide with the true value unless some of those estimator methods were reliable, and they didn't collectively skew the distribution in one direction.

(and nit-picking, it's "a well-defined population mean and population standard deviation", which is required for defining the distribution. If you can't trust your sample mean and sample SD to approximate your population mean and SD, it's no longer reliable, and you'd have to use something else, like a t-distribution)
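(A toy illustration of that caution; the lognormal guess distribution and the 20% downward bias are invented. The central limit theorem makes the crowd mean approximately Gaussian, but it clusters around the population mean of the guesses, which matches the true value only if the guessers are unbiased.)

```python
import numpy as np

rng = np.random.default_rng(0)
truth = 850
crowds, size = 2000, 100   # 2000 simulated crowds of 100 guessers each

# Skewed, non-normal individual guesses with an invented systematic low bias.
guesses = truth * 0.8 * rng.lognormal(0, 0.5, (crowds, size))
crowd_means = guesses.mean(axis=1)

# The crowd means come out roughly Gaussian (CLT)...
print("mean of crowd means:", round(crowd_means.mean()))   # ~770, not 850
print("std of crowd means: ", round(crowd_means.std()))
# ...so averaging buys you normality, not correctness: biased guessers
# make the crowd converge, very precisely, on the wrong number.
```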

comment by tgb · 2012-07-04T02:47:25.872Z · LW(p) · GW(p)

By restricting participants to two choices (one correct, one incorrect), all your noise is going to converge on the same answer.

I've been pondering this since I first saw your post, but I still have no idea what you mean. Could you clarify?

The only interpretation that I can come up with is that if, say, the two options are 10 and 1 and the real answer is 9, you would expect that the average would approach 10 over time. I don't see why this would be obvious or even true: if people's guesses were distributed around 9, we could certainly have 10% of the population closer to 1 than to 10, and so the average would converge to roughly 9 (0.9 × 10 + 0.1 × 1 = 9.1).

Replies from: sixes_and_sevens
comment by sixes_and_sevens · 2012-07-04T12:39:17.322Z · LW(p) · GW(p)

Let's say you're asking a thousand people to guess the date of the Battle of Bosworth Field. If I asked this right now in Less Wrong, I imagine it would receive some wildly different answers.

If you're me, and you remember it because its anniversary is on your birthday (or if you were paying attention in a specific history class), you'll know the exact year (1485). These people are probably not very numerous, but their answers will all coincide and converge. This subgroup would also have a variance of zero.

All the people who were paying only a little bit of attention in that history class, or watched the first series of Blackadder, will not know the exact date, but they'll probably guess to within a few decades. This subgroup has a wider variance, but it's still pretty tight, and they're answering a convergent question. There's a correct answer, and the answer these people give is informed by it, even if it's not correct. In the absence of systematic bias, we would expect roughly the same number of people to answer 1480 as 1490, and so the mean of this group should converge.

We now look at a wider variance subgroup, which includes all the people who only have a sketchy idea of when this battle was and what it was about. Some people will recall it's got something to do with the Tudor dynasty, and Henry VIII was early 16th century. Some will recall that there was a King Richard involved, and dig up a late 14th century connection. They are all contributing some information to proceedings, (14th-16th Century), but in the absence of systematic bias, we'd expect people to be as wrong on one side as they are on the other. Even greater variance subgroups, who aren't sure whether this battle was fought by Romans or Crusaders or Confederates, are still contributing some small quantity of information by giving answers in the range of human history. No-one's going to say 3991 AD, or 6,000,000 BC.

As the variance gets wider, the population of any given subgroup gets larger, but the coherence of their answers gets smaller. If you take a hundred people who have absolutely no knowledge of human history and ask them when the Battle of Bosworth Field occurred, you're basically asking them to pick a number. Their answers aren't going to converge on anything, so they won't systematically interfere with the overall distribution, while the answers that are more informed will converge on the correct answer.

But systematic bias does occur. American education on non-American history is notoriously sketchy. If our participants included a large number of Americans, they're more likely to guess a date in American history through the availability heuristic. All of a sudden, the uninformed answers will start converging at some point in the late 19th Century, which will skew the overall distribution and pull the mean forward in time. The least wise parts of the crowd suddenly found a way to be a whole lot louder.

That's what I meant by your noise converging on the same answer. In giving people an incorrect choice, you're giving all the people who have no knowledge an opportunity to pick the same incorrect answer. If they didn't have that answer to converge on, the mean of their answer wouldn't be able to exert as much influence on the overall distribution.

Does that make sense?

(This also does point to an obvious source of systematic bias when dealing with dates: we have better records [and hence more available knowledge] of events closer to the present. History is lumpy, and forward-weighted, so any uninformed guess on the date of an event in the past is going to be distorted around points of greater historical interest, many of which occurred over the last century).
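(To make the subgroup picture concrete, here is a toy simulation; the subgroup sizes and distributions are entirely invented. Uninformed guesses that scatter symmetrically leave the crowd mean near the true date, while uninformed guesses that converge on a common wrong date drag the mean with them.)

```python
import numpy as np

rng = np.random.default_rng(0)
truth = 1485   # Battle of Bosworth Field
# Invented subgroup sizes for a crowd of 1000:
exact    = np.full(50, truth)              # know the exact year
close    = rng.normal(truth, 30, 300)      # within a few decades
clueless = rng.uniform(1000, 1970, 650)    # "pick a number" (assumed symmetric
                                           #  around the truth, i.e. no bias)

print(round(np.concatenate([exact, close, clueless]).mean()))   # ~1485

# Give the clueless subgroup something to converge on, e.g. an
# availability-driven pull toward the late 19th century:
biased = rng.normal(1870, 40, 650)
print(round(np.concatenate([exact, close, biased]).mean()))     # dragged forward, ~1735
```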

Replies from: tgb
comment by tgb · 2012-07-04T15:21:32.150Z · LW(p) · GW(p)

This seems like a round-about way to describe a bell curve...

But suppose in your example that we're only asking those silly Americans, who, like myself, have only even heard of the Battle of Bosworth as a name and really know nothing about it except maybe that some English people were involved or something. And so let's assume that people are guessing as a bell curve around 1600 with a large spread - a standard deviation of, say, 200 years or so. If the two options are 1600 and 1200, let's say, then 15.8% of the people will be guessing 1200 (i.e. think it's earlier than 1400) and the rest are guessing 1600. This averages out to 1536 in the limit of large numbers.

So I guess I still don't understand your point - it's not converging to 1600 or anything like that. It is high, but there was a systematic bias towards being high, so what else would you expect? In this example (which was chosen arbitrarily) the two options gave a more correct response than the free guess. Of course, we can come up with options that would make the free response better - choosing between, say, 2600 and 1200 gives an average of 1293.
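(For what it's worth, the arithmetic checks out under those assumptions - private guesses distributed N(1600, 200²), each person picking whichever option is closer to their own guess. A quick sketch:)

```python
from scipy.stats import norm

mu, sigma = 1600, 200   # assumed distribution of private guesses

def two_option_mean(a, b):
    """Average pick if everyone chooses whichever of a, b is closer to
    their own normally distributed guess."""
    lo, hi = min(a, b), max(a, b)
    p_lo = norm.cdf((lo + hi) / 2, mu, sigma)   # fraction of guesses below the midpoint
    return p_lo * lo + (1 - p_lo) * hi

print(two_option_mean(1200, 1600))   # ~1536, as above
print(two_option_mean(1200, 2600))   # ~1293, as above
```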

Replies from: sixes_and_sevens
comment by sixes_and_sevens · 2012-07-04T16:12:14.134Z · LW(p) · GW(p)

It doesn't have to be a Gaussian distribution. We would expect it to look like one under reasonably assumed conditions, but systematic bias would skew it. A particularly large single source (say there was a Battle of Dosworth Field that happened 400 years later) could easily result in a bimodal distribution.

In order for Wisdom of Crowds to work (as it's expected to work), people aren't guessing along a Gaussian distribution. They're applying knowledge they have, and some of that knowledge is useful information, while some of that knowledge is noise. All the useful information pulls the mean towards the true value, while all the noise pulls it away. The difference is that the useful information converges on a single value, (because it's a convergent problem with a single correct answer), while all the noise pulls arbitrarily in all directions.

Provided there isn't some reason for the noise itself to converge on a single value (and I think this is where my previous comments have not necessarily been clear, I'm talking about the noise converging, not the overall mean), the noise should cancel itself out.

It should be obvious that if you give people a right answer and a wrong answer, the noise will be weighted in the direction of the wrong answer (because there's no corresponding error on the other side of the true value). Even if you have two wrong answers on either side of a true value, and ask people to pick the one closest to the true value, you will still have a skew problem, because unless the two values are equidistant from the true value (which defeats the point of the question), your noise is not going to be equally distributed around the true value.

comment by John_Maxwell (John_Maxwell_IV) · 2012-07-02T21:35:45.748Z · LW(p) · GW(p)

Did both survey options offer two choices, or just one? Was one choice exactly correct? You still haven't explained the experiment very well...

Replies from: tgb
comment by tgb · 2012-07-03T00:10:40.022Z · LW(p) · GW(p)

Sorry, I thought it was better explained in my original post that I linked to, but in fact that one just directed people to the poll, which was self-explanatory.

Each user was randomly given one of two sets of questions (assigned based on the parity of the minute at which they took the test, hence that rather odd question). Each set had two questions, one of each 'type'. One type was open-ended: users typed in whatever answer they liked. The other type gave the user two options and asked them to pick the one they thought was closer to the correct answer.

So the first set (on the left in the chart) had a two-choice telephone question and an open-answer Africa-population question, while the other had a two-choice Africa-population question and an open-answer telephone question.

Anyway, I didn't expect my experiment itself to generate much interest, as I believe it to be systematically flawed; what seemed noteworthy was rather the result that neither of the two open-ended questions produced very good estimates of the actual values. And further, just as Eliezer had suspected some years ago, I neglected to publish these results because they were not as interesting as a "Wow, look at how wise the crowd is" result.

Replies from: John_Maxwell_IV
comment by John_Maxwell (John_Maxwell_IV) · 2012-07-03T03:27:26.300Z · LW(p) · GW(p)

Could you summarize any differences in performance between the two question formats?

Replies from: tgb
comment by tgb · 2012-07-04T02:44:24.039Z · LW(p) · GW(p)

Inconclusive, and I believe the experiment to be flawed, as the average probably depends more on the two offered options than on what people believe. It might be more interesting as a way to test for anchoring - or for unanchoring, by showing two different options. But in short, neither format did well; the two-choice format did better when responders had consistently missed the actual value (the telephone question), since both of the offered choices were then more correct than what most people were giving.

comment by JonathanLivengood · 2012-07-01T22:46:28.050Z · LW(p) · GW(p)

Given a forced choice, I would have thought that you would want to look at the proportion of answers that were right -- in this case, right in the sense of being closer to the truth. When you look at the data that way, you see that people here were really good with respect to Africa (82% success rate) and really bad with respect to the history of the telephone (28% success rate).

What do you suppose would happen if you took a four-vector of two-choice options and presented all 16 vectors for a vote? How likely would we be to vote for the all-true vector out of the field of 16 choices?
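(One way to get a feel for this is to simulate it - here reusing the post's two observed success rates, 82% and 28%, twice over as hypothetical per-question accuracies, and treating answers as independent, which is a strong assumption. With any per-question accuracy below 50%, the all-true vector is not even the most probable ballot, so a plurality vote over the 16 vectors is unlikely to land on it.)

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(0)
# Hypothetical per-question accuracies: the post's two observed rates, used twice.
p = np.array([0.82, 0.28, 0.82, 0.28])
voters = 177

# Each voter answers four two-choice questions; a ballot is the 4-tuple of
# right/wrong answers, and the crowd votes by plurality over the 16 possible ballots.
answers = rng.random((voters, 4)) < p            # True = correct on that question
ballots = Counter(tuple(bool(x) for x in row) for row in answers)
winner, count = ballots.most_common(1)[0]

print("winning ballot:", winner, "with", count, "votes")
print("all-true ballot wins?", winner == (True, True, True, True))
```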

Replies from: AlexMennen
comment by AlexMennen · 2012-07-02T00:25:24.522Z · LW(p) · GW(p)

I would have thought that you would want to look at the proportion of answers that were right -- in this case, right in the sense of being closer to the truth.

Closer to the truth than what?

Replies from: JonathanLivengood, Jayson_Virissimo
comment by JonathanLivengood · 2012-07-02T03:48:35.115Z · LW(p) · GW(p)

Closer to the truth than the other answer. That is, if the truth is 1000 million and the available options are 1300 million and 200 million, then the first answer is 300 million away from the truth, while the second is 800 million away from the truth. One of those two answers is more approximately correct than the other.

Actually, looking back at the question asked, approximate truth isn't even an issue. The question asked which of the two numbers was closer to the (true) population of Africa. One is closer, the other is not. So, when one answers 1300 million in that forced choice, one is answering the question correctly.

Does that make sense?

Replies from: AlexMennen
comment by AlexMennen · 2012-07-02T05:14:33.418Z · LW(p) · GW(p)

Ah, I hadn't realized that people had been given only two options to choose from.

Replies from: None
comment by [deleted] · 2012-07-02T06:59:50.660Z · LW(p) · GW(p)

There were two versions of each question - one where you could answer anything, and one where there was a choice of two. I think the results here are from the free-response version.

comment by Jayson_Virissimo · 2012-07-02T03:14:50.480Z · LW(p) · GW(p)

Closer to the truth than what?

Closer to the truth than coin flipping?