[Link] "Fewer than X% of Americans know Y"

post by Nisan · 2012-10-10T16:59:38.114Z · LW · GW · Legacy · 23 comments

How many times have you heard a claim from a somewhat reputable source like "only 28 percent of Americans are able to name one of the constitutional freedoms, yet 52 percent are able to name at least two Simpsons family members"?

Mark Liberman over at Language Log wrote up a post showing that even when such claims are based on actual studies, the methodology is often biased toward exaggerating ignorance:

The way it works is that the survey designers craft a question like the following (asked at a time when William Rehnquist was the Chief Justice of the United States):

"Now we have a set of questions concerning various public figures. We want to see how much information about them gets out to the public from television, newspapers and the like….
What about William Rehnquist – What job or political office does he NOW hold?"

The answers to such open-ended questions are recorded — as audio recordings and/or as notes taken by the interviewer — and these records are coded, later on, by hired coders.

The survey designers give these coders very specific instructions about what counts as right and wrong in the answers. In the case of the question about William Rehnquist, the criteria for an answer to be judged correct were mentions of both "chief justice" and "Supreme Court". These terms had to be mentioned explicitly, so all of the following (actual answers) were counted as wrong:

Supreme Court justice. The main one.
He’s the senior judge on the Supreme Court.
He is the Supreme Court justice in charge.
He’s the head of the Supreme Court.
He’s top man in the Supreme Court.
Supreme Court justice, head.
Supreme Court justice. The head guy.
Head of Supreme Court.
Supreme Court justice head honcho.

Similarly, the technically correct answer ("Chief Justice of the United States") would also have been scored as wrong, since it doesn't explicitly mention the "Supreme Court" (I'm not certain whether it actually occurred in the survey responses).
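
To make the scoring rule concrete, here is a minimal sketch in Python of the kind of strict both-phrases-required criterion described above. The function and phrase list are illustrative assumptions, not taken from the actual survey's codebook:

```python
# A minimal sketch of the coding rule described above: an answer counts as
# correct only if it explicitly contains both required phrases.
# (Illustrative only -- the phrase list and names are assumptions, not the
# survey's actual codebook.)

REQUIRED_PHRASES = ("chief justice", "supreme court")

def score_answer(answer: str) -> bool:
    """Return True only if every required phrase appears verbatim."""
    text = answer.lower()
    return all(phrase in text for phrase in REQUIRED_PHRASES)

for answer in [
    "He's the head of the Supreme Court.",  # substantively right, scored wrong
    "Chief Justice of the United States",   # technically correct, scored wrong
    "Chief Justice of the Supreme Court",   # the only style of answer that passes
]:
    print(score_answer(answer), "-", answer)
# False - He's the head of the Supreme Court.
# False - Chief Justice of the United States
# True - Chief Justice of the Supreme Court
```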

If, every time you heard a claim of the form "Only X% of Americans know Y" you thought "there's something strange about that", then you get 1 rationality point. If you thought "I don't believe that", then you get 2 rationality points.


23 comments


comment by Vaniver · 2012-10-10T17:43:48.688Z · LW(p) · GW(p)

So, basically every survey has to deal with bothersome subjects, especially if it's a survey of teens or children (for different reasons).

I remember a survey given by some friends of mine for a school project; they had taught a lesson to a classroom of children and wanted to measure how much of it stuck. The survey answers were all on a 1-5 scale, where 1 was "I disagree strongly" and 5 was "I agree strongly."

One of the questions, put on there as a test to ensure the children understood the format, was "I eat breakfast with Martin Luther King Jr every morning." (The lesson mentioned him, among others.) They were expecting 1s, but the average answer was 2.

Replies from: Nornagest, Viliam_Bur
comment by Nornagest · 2012-10-10T17:50:58.371Z · LW(p) · GW(p)

One of the questions, put on there as a test to ensure the children understood the format, was "I eat breakfast with Martin Luther King Jr every morning." (The lesson mentioned him, among others.) They were expecting 1s, but the average answer was 2.

Perhaps the "strongly" in the "disagree strongly" gloss is being understood to require an emotional reaction? It's not a phrase I'd normally use to describe an understanding that something I don't particularly care about is factually wrong.

Replies from: Vaniver
comment by Vaniver · 2012-10-10T18:16:09.565Z · LW(p) · GW(p)

I may be misremembering it: 1 might have been "false," 2 "mostly false," 3 "neither true nor false," 4 "mostly true," and 5 "true." I do remember that at the time I thought it was a disastrous showing that mostly invalidated the results of their study (or should have had a far more prominent role in their data analysis).

comment by Viliam_Bur · 2012-10-11T09:17:56.835Z · LW(p) · GW(p)

One of the questions, put on there as a test to ensure the children understood the format, was "I eat breakfast with Martin Luther King Jr every morning."

Some of the children probably considered the possibility of an acausal breakfast with Martin Luther King Jr. You don't have to be in the same room or in the same moment to have an acausal breakfast with someone.

The sanity waterline is already rising, and some teachers are scared... :D

comment by Paul Crowley (ciphergoth) · 2012-10-10T20:00:15.497Z · LW(p) · GW(p)

At least some public-ignorance surveys use multiple-choice questions, which do not suffer from the problem in the quoted text.

comment by DaFranker · 2012-10-10T20:11:22.571Z · LW(p) · GW(p)

"only 28 percent of Americans are able to name one of the constitutional freedoms, yet 52 percent are able to name at least two Simpsons family members"

My first, almost-like-a-sort-of-trained-reflex reaction to reading something like the above:

"So okay, about 55% of whichever target sub-population this study targeted (default: probably students) watch or hear about The Simpsons often enough to remember at least two names. On the other hand, a lot of that sub-population probably named something as a constitutional freedom after being primed on some unrelated subject but that something wasn't a constitutional freedom, and only about 30% saw the trap and managed to remember an actual good answer."

comment by buybuydandavis · 2012-10-10T19:16:27.523Z · LW(p) · GW(p)

In general, I'm pretty dubious of conclusions based on polls and questionnaires, and assign lower probability to both the author's and my own interpretation until I see the specifics of the methodology. I have trust issues.

comment by beoShaffer · 2012-10-10T21:46:23.404Z · LW(p) · GW(p)

Also, what incentives are there for answering truthfully? The alternative explanations provided at Language Log seem better, but I used to take these sorts of results as primarily being evidence for a high natural frequency of trolls in the sample population.

Replies from: Protagoras
comment by Protagoras · 2012-10-11T18:55:59.675Z · LW(p) · GW(p)

I seem to recall that studies investigating whether research participants perform better when incentivized usually come up negative. My best guess as to why is that people are already surprisingly strongly motivated to do what the researcher wants them to do (remember the Milgram experiment!). I don't remember seeing any studies specifically of how incentives affect general-knowledge tests; of course they could differ from the other tasks researchers assign to people, but it would surprise me if that were the case.

comment by Luke_A_Somers · 2012-10-11T14:25:01.874Z · LW(p) · GW(p)

Nate Silver blew up a public ignorance survey a few years ago, on the 538 blog.

http://www.fivethirtyeight.com/search/label/strategic%20vision

I would not be surprised to find more of that around.

comment by Swimmy · 2012-10-11T05:07:08.395Z · LW(p) · GW(p)

Neat! I'll put less confidence in such surveys now. HOWEVER! Many of the questions in such surveys are plain-ol' 50/50, and I have no idea how they could be very biased.

As an example, here is a scan from Delli Carpini and Keeter's What Americans Know About Politics and Why It Matters. You'll notice that, in table 2.7, only 42% of Americans knew that the Soviets suffered more deaths than the Americans during World War 2. Seems like a coin flip to me, unless they asked "Who had the most deaths during World War 2?" and ignored all answers besides the US and USSR. I still think Americans are pretty durn ignorant of most political and historical matters. (Myself included, for many of the questions. I have no idea who my state's congressmen are, and I don't really care.)

But then, I've never been one to compare this to modern cultural knowledge. I see that as irrelevant. Asking about fresh memory vs. deep memory doesn't tell you about political knowledge per se. Responses should be compared against questions of similar difficulty.

comment by SilasBarta · 2012-10-10T23:50:33.414Z · LW(p) · GW(p)

If, every time you heard a claim of the form "Only X% of Americans know Y" you thought "there's something strange about that", then you get 1 rationality point. If you thought "I don't believe that", then you get 2 rationality points.

Well, if this was indeed a common methodological flaw. I'm not ready to break out the champagne yet.

comment by DanArmak · 2012-10-12T20:25:21.083Z · LW(p) · GW(p)

The Language Log post also emphasizes that mass media reports of such surveys sometimes quote numbers completely different from the actual survey results, presumably to increase the value of the news story. So:

In the passage quoted above, Robin Young states the survey result incorrectly — actually, 73% of respondents, not 28%, were able to name one of the constitutional freedoms – and she spins it in a doubtful direction to boot, because only 65% were able to name one of the Simpsons characters.

In the cited New York Times article, Diane Ravitch is referring to the 2010 NAEP 12th grade U.S. History test, in which 82%, not 2%, of 12th graders correctly identified Brown v. Board of Education.

In addition to discounting "public ignorance" surveys, we should discount surveys and other factual information reported through such media.

Replies from: Kindly
comment by Kindly · 2012-10-12T21:56:34.377Z · LW(p) · GW(p)

This Language Log post gives a much better idea of what's going on. 28% was the number for "more than one" of the constitutional freedoms, which was later commonly misquoted as "one or more". And, of course, there's the matter of picking out the most striking point of the distribution (see the toy computation below).

In other words: nobody is actually lying about the survey results. Instead, the falsehood is distributed along the chain: the press release states the results in a deliberately misleading way, and subsequent reports on it simply aren't careful to avoid being misled.
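
To see how much those two phrasings can diverge on the same data, here's a toy computation. The per-respondent breakdown is invented; only the 73% and 28% totals are taken from the figures quoted above:

```python
# Toy illustration of "one or more" vs. "more than one". The breakdown of
# how many freedoms each of 100 respondents named is invented; only the
# resulting 73% and 28% totals match the quoted survey figures.
named = [0] * 27 + [1] * 45 + [2] * 20 + [3] * 8

one_or_more = sum(c >= 1 for c in named) / len(named)
more_than_one = sum(c > 1 for c in named) / len(named)
print(f"one or more: {one_or_more:.0%}, more than one: {more_than_one:.0%}")
# one or more: 73%, more than one: 28%
```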

Replies from: DanArmak
comment by DanArmak · 2012-10-13T14:21:59.838Z · LW(p) · GW(p)

The post you linked to argues that the poll and its original press release were deliberately designed to spin results and encourage misunderstanding, and that the error in subsequent reports was a deliberate goal on the part of the pollsters.

Deliberate spinning of statistics isn't different from lying in method or result; the only difference is that they cover themselves by making sure their words are literally true.

Replies from: Kindly
comment by Kindly · 2012-10-13T14:51:48.687Z · LW(p) · GW(p)

Lying and deliberately misleading aren't quite the same thing, although they have the same effect; I would expect the press to do the latter but not the former. So when you implied that the mass media reports did lie, I was confused and decided to dig further.

One practical difference is that, if lying is considered bad but things-close-to-lying aren't, it takes a tertiary source to completely replace the truth with a lie.

Replies from: DanArmak
comment by DanArmak · 2012-10-13T16:05:50.248Z · LW(p) · GW(p)

They're the same thing consequentially, but different under deontological and virtue ethics, so there's a signalling convention that one is better than the other.

comment by Epiphany · 2012-10-11T01:58:52.079Z · LW(p) · GW(p)

I wasn't skeptical enough of these. -1 point to self. Thanks, Nisan. (:

comment by thomblake · 2012-10-10T17:59:29.719Z · LW(p) · GW(p)

I actually don't find anything strange about that. I am reasonably well-educated and know a lot of things, and I have no idea who the US supreme court chief justice is (though if I needed to know, it would take me about 2 seconds).

Replies from: TimS
comment by TimS · 2012-10-10T19:08:30.133Z · LW(p) · GW(p)

The problem is that there's no reasonable way to grade the quoted rejects as false. If you aren't a lawyer (Edit: but maybe if you are*), there's really nothing about labeling John Roberts as Chief Justice of the Supreme Court that is more useful than labeling him as "the justice in charge of the Supreme Court." The error is roughly on par with asking "what does 2+3 equal?" and accepting "V" but rejecting "IIIII".

In short, I have dramatically adjusted downward my belief in the reliability of public-ignorance surveys.


* On reflection, I think some of the answers could be considered wrong in a technical sense not relevant to the question being asked. For example, "in charge" implies a bit more power over Supreme Court decisions than Roberts actually possesses.

In the old version, I stated that the difference wouldn't matter even to a lawyer.

Replies from: thomblake
comment by thomblake · 2012-10-10T19:46:44.561Z · LW(p) · GW(p)

I haven't. I expected they were making mistakes like this one, and haven't seen anything indicating they generally make mistakes in this direction rather than the other.

It makes sense to adjust downward your belief that they are reliable, if you thought they were very reliable before. But this shouldn't be enough to indicate they're reliably getting it wrong in a particular direction.

Replies from: Kindly
comment by Kindly · 2012-10-11T00:19:08.959Z · LW(p) · GW(p)

If the ultimate goal is to compare knowledge of the Supreme Court to knowledge of the Simpsons, I would expect the surveys to reliably be wrong in the more sensational direction.

comment by Blackened · 2012-10-12T09:39:24.316Z · LW(p) · GW(p)

I remember how, a few years ago, a TV news segment reported that 40-70% (I forget the exact number) of the people interviewed said that Beethoven is a dog. I was frustrated at how shocked the other people in the room were.