Unbounded Scales, Huge Jury Awards, & Futurism

post by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2007-11-29T07:45:53.000Z · LW · GW · Legacy · 10 comments

“Psychophysics,” despite the name, is the respectable field that links physical effects to sensory effects. If you dump acoustic energy into air—make noise—then how loud does that sound to a person, as a function of acoustic energy? How much more acoustic energy do you have to pump into the air, before the noise sounds twice as loud to a human listener? It’s not twice as much; more like eight times as much.

Acoustic energy and photons are straightforward to measure. When you want to find out how loud an acoustic stimulus sounds, how bright a light source appears, you usually ask the listener or watcher. This can be done using a bounded scale from “very quiet” to “very loud,” or “very dim” to “very bright.” You can also use an unbounded scale, whose zero is “not audible at all” or “not visible at all,” but which increases from there without limit. When you use an unbounded scale, the observer is typically presented with a constant stimulus, the modulus, which is given a fixed rating. For example, a sound that is assigned a loudness of 10. Then the observer can indicate a sound twice as loud as the modulus by writing 20.

And this has proven to be a fairly reliable technique. But what happens if you give subjects an unbounded scale, but no modulus? Zero to infinity, with no reference point for a fixed value? Then they make up their own modulus, of course. The ratios between stimuli will continue to correlate reliably between subjects. Subject A says that sound X has a loudness of 10 and sound Y has a loudness of 15. If subject B says that sound X has a loudness of 100, then it’s a good guess that subject B will assign loudness in the vicinity of 150 to sound Y. But if you don’t know what subject C is using as their modulus—their scaling factor—then there’s no way to guess what subject C will say for sound X. It could be 1. It could be 1,000.

For a subject rating a single sound, on an unbounded scale, without a fixed standard of comparison, nearly all the variance is due to the arbitrary choice of modulus, rather than the sound itself.
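
To make the modulus-free setup concrete, here is a minimal sketch in Python (my own illustration, not from the post or the underlying psychophysics literature). The stimulus names, the shared internal scale, and the log-uniform choice of personal modulus are all assumptions; the point is only that ratios between stimuli survive across subjects while the raw numbers scatter.

```python
# A minimal sketch: magnitude estimation on an unbounded scale with no modulus.
# Each simulated subject picks an arbitrary personal scaling factor; the ratios
# between stimuli stay consistent across subjects, but the raw ratings do not.
import random

random.seed(0)

# Hypothetical "true" loudness of three sounds, on some shared internal scale.
true_loudness = {"X": 1.0, "Y": 1.5, "Z": 3.0}

subjects = []
for _ in range(5):
    modulus = 10 ** random.uniform(0, 3)   # arbitrary personal modulus, anywhere from 1 to 1000
    ratings = {k: v * modulus for k, v in true_loudness.items()}
    subjects.append(ratings)

for i, ratings in enumerate(subjects):
    ratio = ratings["Y"] / ratings["X"]
    print(f"Subject {i}: X={ratings['X']:.1f}, Y={ratings['Y']:.1f}, Y/X={ratio:.2f}")
# The Y/X ratio is 1.5 for every subject, but the raw rating for X alone
# ranges over three orders of magnitude -- the pattern described above.
```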

“Hm,” you think to yourself, “this sounds an awful lot like juries deliberating on punitive damages. No wonder there’s so much variance!” An interesting analogy, but how would you go about demonstrating it experimentally?

Kahneman et al. presented 867 jury-eligible subjects with descriptions of legal cases (e.g., a child whose clothes caught on fire) and asked them to either

  1. Rate the outrageousness of the defendant’s actions, on a bounded scale, 
  2. Rate the degree to which the defendant should be punished, on a bounded scale, or  
  3. Assign a dollar value to punitive damages.1

And, lo and behold, while subjects correlated very well with each other in their outrage ratings and their punishment ratings, their punitive damages were all over the map. Yet subjects’ rank-ordering of the punitive damages—their ordering from lowest award to highest award—correlated well across subjects.

If you asked how much of the variance in the “punishment” scale could be explained by the specific scenario—the particular legal case, as presented to multiple subjects—then the answer, even for the raw scores, was 0.49. For the rank orders of the dollar responses, the amount of variance predicted was 0.51. For the raw dollar amounts, the variance explained was 0.06!

Which is to say: if you knew the scenario presented—the aforementioned child whose clothes caught on fire—you could take a good guess at the punishment rating, and a good guess at the rank-ordering of the dollar award relative to other cases, but the dollar award itself would be completely unpredictable.
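
For concreteness, here is a rough simulation (my own sketch, not Kahneman et al.'s analysis or data) of an "attitude times arbitrary modulus" model of dollar awards. The scenario names, severities, modulus range, and noise level are invented for illustration; the qualitative pattern is that the scenario accounts for most of the variance in each subject's rank ordering, but very little of the variance in the raw dollar amounts.

```python
# A rough sketch: under an "attitude times arbitrary modulus" model, the scenario
# explains the rank orders well and the raw dollar awards hardly at all.
import random
from statistics import mean, pvariance

random.seed(1)

# Hypothetical scenario severities (assumed numbers, not from the study).
outrage = {"fire": 3.0, "fraud": 1.5, "spill": 2.0, "defect": 4.0}
n_subjects = 200

raw, ranks = [], []                                    # (scenario, value) pairs pooled across subjects
for _ in range(n_subjects):
    modulus = 10 ** random.uniform(3, 7)               # each subject's arbitrary dollar scale
    awards = {s: sev * modulus * random.lognormvariate(0, 0.2) for s, sev in outrage.items()}
    ordering = sorted(awards, key=awards.get)          # this subject's lowest-to-highest ordering
    for rank, s in enumerate(ordering):
        raw.append((s, awards[s]))
        ranks.append((s, float(rank)))

def variance_explained(pairs):
    """Share of total variance accounted for by the per-scenario means (eta squared)."""
    total = pvariance([v for _, v in pairs])
    scenario_mean = {s: mean(v for sc, v in pairs if sc == s) for s in outrage}
    between = pvariance([scenario_mean[s] for s, _ in pairs])
    return between / total

print(f"variance explained, raw dollar awards: {variance_explained(raw):.2f}")   # small
print(f"variance explained, rank orders:       {variance_explained(ranks):.2f}") # large
```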

Taking the median of twelve randomly selected responses didn’t help much either.

So a jury award for punitive damages isn’t so much an economic valuation as an attitude expression—a psychophysical measure of outrage, expressed on an unbounded scale with no standard modulus.

I observe that many futuristic predictions are, likewise, best considered as attitude expressions. Take the question, “How long will it be until we have human-level AI?” The responses I’ve seen to this are all over the map. On one memorable occasion, a mainstream AI guy said to me, “Five hundred years.” (!!)

Now the reason why time-to-AI is just not very predictable, is a long discussion in its own right. But it’s not as if the guy who said “Five hundred years” was looking into the future to find out. And he can’t have gotten the number using the standard bogus method with Moore’s Law. So what did the number 500 mean?

As far as I can guess, it’s as if I’d asked, “On a scale where zero is ‘not difficult at all,’ how difficult does the AI problem feel to you?” If this were a bounded scale, every sane respondent would mark “extremely hard” at the right-hand end. Everything feels extremely hard when you don’t know how to do it. But instead there’s an unbounded scale with no standard modulus. So people just make up a number to represent “extremely difficult,” which may come out as 50, 100, or even 500. Then they tack “years” on the end, and that’s their futuristic prediction.

“How hard does the AI problem feel?” isn’t the only substitutable question. Others respond as if I’d asked “How positive do you feel about AI?”—except lower numbers mean more positive feelings—and then they also tack “years” on the end. But if these “time estimates” represent anything other than attitude expressions on an unbounded scale with no modulus, I have been unable to determine it.

1Daniel Kahneman, David A. Schkade, and Cass R. Sunstein, “Shared Outrage and Erratic Awards: The Psychology of Punitive Damages,” Journal of Risk and Uncertainty 16, no. 1 (1998): 49–86; Daniel Kahneman, Ilana Ritov, and David Schkade, “Economic Preferences or Attitude Expressions?: An Analysis of Dollar Responses to Public Issues,” Journal of Risk and Uncertainty 19, nos. 1–3 (1999): 203–235.

10 comments

Comments sorted by oldest first, as this post is from before comment nesting was available (around 2009-02-27).

comment by Robin_Hanson2 · 2007-11-29T11:35:11.000Z · LW(p) · GW(p)

If you are asked to estimate a number that is a product (or sum) of many numbers, and you have good estimates for all of those numbers but one, then the variance in that last number, the one you can't estimate well, will dominate the variance of your answer. It just takes one.
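
A small numerical illustration of this point (my own sketch; the particular distributions are assumptions, not anything from the comment): the spread of a product of several estimates is driven almost entirely by the one factor you cannot pin down.

```python
# Four tightly estimated factors plus one very uncertain one: the uncertain
# factor accounts for nearly all of the spread in the product.
import random
from statistics import pstdev

random.seed(2)

def product_estimate(vary_unknown: bool) -> float:
    factors = [random.gauss(10, 0.1) for _ in range(4)]               # four well-estimated factors
    unknown = random.lognormvariate(0, 1.5) if vary_unknown else 1.0  # the one factor you can barely estimate
    result = unknown
    for f in factors:
        result *= f
    return result

with_unknown = [product_estimate(True) for _ in range(10_000)]
held_fixed = [product_estimate(False) for _ in range(10_000)]
print(f"spread with the hard-to-estimate factor varying: {pstdev(with_unknown):,.0f}")
print(f"spread with that factor held fixed at 1.0:       {pstdev(held_fixed):,.1f}")
```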

comment by Chris · 2007-11-29T12:52:05.000Z · LW(p) · GW(p)

I strongly encourage any AI worker who hasn't already done so to read Ian McDonald's 'River of Gods'. He's pretty positive (in timescale terms...) on AI; his answer to the question "How long will it be until we have human-level AI?" is 2047 AD, and it's a totally gob-smacking, brilliant read.

Replies from: taryneast
comment by taryneast · 2011-02-13T12:11:45.892Z · LW(p) · GW(p)

Is this the one you mean: River of Gods?

If so: it's a novel... and it includes aliens... I admit I haven't read it, but I'm skeptical as to how much you might deduce about AI's likelihood from it...

comment by Ron_Hardin · 2007-11-29T13:03:35.000Z · LW(p) · GW(p)

Derrida must have done a thousand essays on how an author, trying to be very precise about how language could possibly work, winds up in an infinite loop, clarifying a final point that amounts, in effect, to starting over.

This contributes a lot to an indefinite future, whatever the modulus problem, if you take AI as just such a project.

comment by Stuart_Armstrong · 2007-11-29T13:25:04.000Z · LW(p) · GW(p)

I observe that many futuristic predictions are, likewise, best considered as attitude expressions. Take the question, "How long will it be until we have human-level AI?" The responses I've seen to this are all over the map. On one memorable occasion, a mainstream AI guy said to me, "Five hundred years." (!!)

Did you ask any of them how long they felt it would take to develop other "futuristic" technologies? (in other words, their rank ordering of technological changes).

comment by g · 2007-11-29T14:36:01.000Z · LW(p) · GW(p)

The damages experiment, as described here, seems not to nail things down enough to say that what's going on is that damages are expressions of outrage on a scale with arbitrary modulus. Here's one alternative explanation that seems consistent with everything you've said: subjects vary considerably in their assessment of how effective a given level of damages is in deterring malfeasance, and that assessment influences (in the obvious way) their assessment of damages.

(I should add that I find the arbitrary-modulus explanation more plausible.)

comment by MrCheeze · 2012-10-28T00:48:19.245Z · LW(p) · GW(p)

500 years still sounds optimistic to me.

Replies from: Icenogle
comment by Icenogle · 2018-07-16T18:27:09.346Z · LW(p) · GW(p)

You probably won't see this since it's six years old, but just in case: why do you think it will take such a long time? A significant portion of the people in the AI field give a much closer number, and while predicting the future isn't exact, 500 years is a pretty big difference from the numbers I've most often seen.

comment by Colombi · 2014-02-20T05:50:40.299Z · LW(p) · GW(p)

Interesting, but without the dollar values adjusted for inflation, I feel like the point of that part of the data is lost on me, although I do get the idea.

Edit: It only went up to $.84, so I guess it doesn't matter that much (used the Inflation Calculator)

comment by trickster · 2018-01-21T17:58:27.480Z · LW(p) · GW(p)

"Assign a dollar value to punitive damages" - does this correlate with the amount of money that the people who responded earn? It seems plausible that people who earn more would assign a higher monetary punishment for bodily harm.