The Scales of Justice, the Notebook of Rationality

post by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2007-03-13T16:00:00.000Z · LW · GW · Legacy · 22 comments


Lady Justice is widely depicted as carrying scales. A set of scales has the property that whatever pulls one side down pushes the other side up. This makes things very convenient and easy to track. It’s also usually a gross distortion.

In human discourse there is a natural tendency to treat discussion as a form of combat, an extension of war, a sport; and in sports you only need to keep track of how many points have been scored by each team. There are only two sides, and every point scored against one side is a point in favor of the other. Everyone in the audience keeps a mental running count of how many points each speaker scores against the other. At the end of the debate, the speaker who has scored more points is, obviously, the winner; so everything that speaker says must be true, and everything the loser says must be wrong.

“The Affect Heuristic in Judgments of Risks and Benefits” studied whether subjects mixed up their judgments of the possible benefits of a technology (e.g., nuclear power), and the possible risks of that technology, into a single overall good or bad feeling about the technology.1 Suppose that I first tell you that a particular kind of nuclear reactor generates less nuclear waste than competing reactor designs. But then I tell you that the reactor is more unstable than competing designs, with a greater danger of melting down if a sufficient number of things go wrong simultaneously.

If the reactor is more likely to melt down, this seems like a “point against” the reactor, or a “point against” someone who argues for building the reactor. And if the reactor produces less waste, this is a “point for” the reactor, or a “point for” building it. So are these two facts opposed to each other? No. In the real world, no. These two facts may be cited by different sides of the same debate, but they are logically distinct; the facts don’t know whose side they’re on.

If it’s a physical fact about a reactor design that it’s passively safe (won’t go supercritical even if the surrounding coolant systems and so on break down), this doesn’t imply that the reactor will necessarily generate less waste, or produce electricity at a lower cost. All these things would be good, but they are not the same good thing. The amount of waste produced by the reactor arises from the properties of that reactor. Other physical properties of the reactor make the nuclear reaction more unstable. Even if some of the same design properties are involved, you have to separately consider the probability of meltdown, and the expected annual waste generated. These are two different physical questions with two different factual answers.

But studies such as the above show that people tend to judge technologies—and many other problems—by an overall good or bad feeling. If you tell people a reactor design produces less waste, they rate its probability of meltdown as lower. This means getting the wrong answer to physical questions with definite factual answers, because you have mixed up logically distinct questions—treated facts like human soldiers on different sides of a war, thinking that any soldier on one side can be used to fight any soldier on the other side.

A set of scales is not wholly inappropriate for Lady Justice if she is investigating a strictly factual question of guilt or innocence. Either John Smith killed John Doe, or not. We are taught (by E. T. Jaynes) that all Bayesian evidence consists of probability flows between hypotheses; there is no such thing as evidence that “supports” or “contradicts” a single hypothesis, except insofar as other hypotheses do worse or better. So long as Lady Justice is investigating a single, strictly factual question with a binary answer space, a set of scales would be an appropriate tool. If Justitia must consider any more complex issue, she should relinquish her scales or relinquish her sword.
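To make the "scales" picture concrete for the binary case: each piece of evidence multiplies the odds by a likelihood ratio, so anything that raises the probability of guilt necessarily lowers the probability of innocence. Here is a minimal sketch in Python; the function names and numbers are purely illustrative, not from the post or from Jaynes.

```python
# A minimal sketch: for a binary hypothesis space, evidence acts like a set
# of scales -- each likelihood ratio multiplies the odds, so whatever pushes
# "guilty" up pushes "not guilty" down by the same token.

def update_odds(prior_odds, likelihood_ratios):
    """Multiply prior odds by each item's likelihood ratio
    P(evidence | guilty) / P(evidence | not guilty)."""
    odds = prior_odds
    for lr in likelihood_ratios:
        odds *= lr
    return odds

def odds_to_probability(odds):
    return odds / (1 + odds)

# Hypothetical numbers for illustration only.
prior_odds = 0.1            # 10:1 against guilt before any evidence
evidence = [5.0, 3.0, 0.5]  # two items favor guilt, one favors innocence
posterior_odds = update_odds(prior_odds, evidence)
print(odds_to_probability(posterior_odds))  # ~0.43
```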

Not all arguments reduce to mere up or down. Lady Rationality carries a notebook, wherein she writes down all the facts that aren’t on anyone’s side.

1Melissa L. Finucane et al., “The Affect Heuristic in Judgments of Risks and Benefits,” Journal of Behavioral Decision Making 13, no. 1 (2000): 1–17.

22 comments

Comments sorted by oldest first, as this post is from before comment nesting was available (around 2009-02-27).

comment by Anders_Sandberg · 2007-03-13T16:50:00.000Z · LW(p) · GW(p)

This two-side bias appears to fit in nicely with the neuroscience of decision-making, where anticipatory affect appears to be weighed together to decide whether an action or option is "good enough" to act on. For example, in http://sds.hss.cmu.edu/media/pdfs/Loewenstein/knutsonetal_NeuralPredictors.pdf there seems to be an integration of positive reward in the nucleus accumbens linked to the value of the product and negative affect related to the price in the insula, while the medial prefrontal cortex apparently tracks the difference between them.

There is definitely room for a more complex decision system based on this kind of anticipatory emotional integration, since there might be more emotions than just good/bad - maybe some aspects of a choice could trigger curiosity (resulting in further information gathering), aggression (perhaps when the potential loss becomes very high and personal), or qualitative tradeoffs between different emotions. And the prefrontal cortex could jump between considering different options and check whether any gains enough support to be acted upon, returning to the cycle if none seems to get quite the clear-cut support it ought to.

This makes a lot of sense from a neuroscience perspective, but as an approximation to rationality it is of course a total kludge.

comment by Stuart_Armstrong · 2007-03-13T17:28:57.000Z · LW(p) · GW(p)

I believe that there was something about a similar approach in the paper "Risk at a Turning Point?" by Andrew Stirling. He argued that analysis of risk should group all the risks as a vector-valued quantity, rather than a scalar. That should be just as valid in this more general context: risks, costs and opportunities of a particular scenario can then be represented in a big vector, and each interest group applies their own method to bring it down to a scalar value (or probability distribution) along the "support/oppose" continuum.

Andrew was focusing on the fact that generally the one doing the estimate was a government or a corporation that would apply its own method to get from the vector to the scalar, and only the scalar was announced. If the full vector were announced, however, it would be easier for groups with different values to come up with their own estimate of the scalar "support/oppose" distribution. As well, they could easily add extra elements to the vector (things like "the project is an eyesore") and see how that changed their estimate, rather than adding it as an extra and having those fruitless "the project is an eyesore" vs. "yes, but it'll bring in cash" debates.
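A rough sketch of how that might look (a toy example of my own; the impact categories, groups, and weights are hypothetical, not taken from Stirling's paper): publish the full vector of estimated impacts, and let each interest group collapse it to a "support/oppose" scalar with its own weights.

```python
# Hypothetical impact vector for one project; negative values are harms.
impacts = {
    "waste_per_year": -2.0,
    "meltdown_risk": -1.5,
    "electricity_output": 3.0,
    "eyesore": -0.5,  # an extra element one group might choose to add
}

# Hypothetical value systems: each group weights the same vector differently.
weights_by_group = {
    "environmental_group": {"waste_per_year": 3.0, "meltdown_risk": 3.0,
                            "electricity_output": 1.0, "eyesore": 1.0},
    "industry_group": {"waste_per_year": 1.0, "meltdown_risk": 1.0,
                       "electricity_output": 3.0, "eyesore": 0.1},
}

def collapse(impacts, weights):
    """One group's method for turning the shared vector into a scalar."""
    return sum(weights.get(k, 0.0) * v for k, v in impacts.items())

for group, weights in weights_by_group.items():
    print(group, collapse(impacts, weights))
# environmental_group -8.0   (oppose)
# industry_group       5.45  (support)
```

The same published vector yields opposite scalar verdicts under the two weightings, which is the point: announce the vector, and let each group do its own collapse.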

The vector could be what little ol' dame rationality writes down in her notebook.

comment by HalFinney · 2007-03-13T17:54:51.000Z · LW(p) · GW(p)

Keep in mind that in many situations we do in fact have to make a binary decision between two alternatives. Often it reduces to a go/no-go decision. In that case this heuristic of reducing multi-valued vectors to a single scalar weighting factor is a necessary step.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2007-03-13T18:27:17.000Z · LW(p) · GW(p)

Hal, even on binary decisions, the affect heuristic still leads to double-counting the evidence. If being told that the plant produces less waste causes us to feel, factually incorrectly, that the plant is less likely to melt down, then the same argument is being counted as two weighting factors instead of one.

Replies from: Keith_Coffman
comment by Keith_Coffman · 2014-09-03T02:41:30.071Z · LW(p) · GW(p)

I would call coming to conclusions like this a shortcoming of our rational thinking, rather than the weighing of benefits and costs to a decision. What HalFinney said is completely right, in that we very often have to pick alternatives as a package, and in doing so we are forced to weigh factors for and against a proposition.

Personally, I wouldn't have "factually incorrectly" jumped to the conclusion you stated here (especially if the converse is stated explicitly, as you did here), and I think this is a diversion from the point that you are necessarily (and rationally) weighing between two alternatives in this particular example that you chose.

That being said, I wholeheartedly agree with the idea of evaluating claims based on their merits rather than the people who propose them - that's the rational way to do things - and rational people would indeed keep a notebook even if, in the end, it was going to end up on a scale (or a decision matrix).

comment by Jonathan_Falk · 2007-03-13T21:43:46.000Z · LW(p) · GW(p)

But, by the same token, wouldn't being told that the reactor is more likely to melt down then lead people to think it produces more waste? If I multiply the true effects of everything by 10, why will that affect the binary choice?

Replies from: Biophile
comment by Biophile · 2012-10-05T20:23:15.822Z · LW(p) · GW(p)

Perhaps it wouldn't affect the choice. For instance, if you have two reactors, and the only thing you've been told about them is which is more likely to melt down, then (assuming you don't want waste or nuclear meltdowns), you'll prefer the one that produces less waste regardless of whether you draw any illogical conclusions from the data you have, because the conclusions will be based on the emotions you have already. However, unless I am mistaken, this blog is about rationality in general, not just in decision-making. Many of the people here (including myself) probably want their information to be accurate just for the sake of accuracy, not just because of its influence on decisions. For them, this is important whether or not it will affect their decisions.

comment by Nic_"RedWord"_Smith · 2007-03-13T22:10:10.000Z · LW(p) · GW(p)

In response to the statement, "If you tell people a reactor design produces less waste, they rate its probability of meltdown as lower", this may be the result of a useful heuristic if technologies generally improve overall. Consider computers: if I asked people to guess if the amount of memory in a desktop computer with a 300MHz processor is less than or greater than that in a system with a 2GHz processor, they might reason that the computer with the faster processor is newer, that both technologies have improved, and the 2GHz system most likely has more memory as well. Similarly in the example, people may think that both anti-meltdown and anti-waste technologies are likely to have improved concurrently. This isn't to say that both factors don't need to be looked at separately in the "real world" - only that I'm not sure how we could consider any other answer rational in the absence of further information.

Basically, I'm curious if benefits and costs are really positively correlated to one another in the real world, as shown in Exhibit 1 in the PDF.

Replies from: joseph-noonan
comment by Plasma Ballin' (joseph-noonan) · 2024-06-13T17:28:51.331Z · LW(p) · GW(p)

I was going to comment this as well. I think it probably is the case that waste-efficiency and safety of nuclear reactors is positively correlated in the real world for that exact reason. Of course, reasoning to this point by, "Reactor A produces less waste than Reactor B. Therefore, Reactor A is better than Reactor B. Therefore, Reactor A is less likely to melt down than Reactor B," is invalid, so the main point of EY's post still stands. The correct reasoning is more like, "Technology improves and reactor design is refined over time. This occurs fast enough that reactors built later are likely to be better than earlier ones on all fronts. If Reactor A is more waste-efficient than Reactor B, it was probably built later and is therefore also likely to be safer and more cost-effective." Unlike the naive, "A is better than B" model, this one no longer predicts that A will be safer than B if I get the additional piece of information that A and B were built in the same year. Then I predict the opposite based on trade-offs that probably had to occur.
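A toy simulation makes the confounding story concrete (the model and the numbers are made up for illustration, not real reactor data): if build year improves both attributes, waste and meltdown risk correlate positively across all reactors, while reactors of roughly the same vintage show the trade-off.

```python
# Toy model: newer designs are better at everything, but within a given year
# the design budget trades waste reduction off against meltdown risk.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
year = rng.uniform(0, 40, n)        # years since some baseline design
tradeoff = rng.normal(0, 1, n)      # design effort spent on one goal vs. the other

waste = 10 - 0.2 * year - 1.0 * tradeoff + rng.normal(0, 0.5, n)
risk = 5 - 0.1 * year + 1.0 * tradeoff + rng.normal(0, 0.5, n)

# Across all reactors: positive correlation (newer designs are better at both).
print(np.corrcoef(waste, risk)[0, 1])

# Conditioning on (roughly) the same build year exposes the trade-off.
same_year = np.abs(year - 20) < 1
print(np.corrcoef(waste[same_year], risk[same_year])[0, 1])  # negative
```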

comment by Stuart_Armstrong · 2007-03-14T11:38:42.000Z · LW(p) · GW(p)

this may be the result of a useful heuristic

Another heuristic may be our habit of expecting some sources - say, newspapers - to present the arguments pro and against the issue ("this will clean up the beach, but costs money"). If they say "this will produce more waste" and leave it at that, we may assume that's the only way the reactor is different.

comment by Hopefully_Anonymous · 2007-09-26T18:18:54.000Z · LW(p) · GW(p)

Great post. You're on a roll, Eliezer. Hal, I query how often the best decision-making process really is binary go/no-go. That humans often reduce a decision-making instant to go/no-go ("as an approximation to rationality it is of course a total kludge," to use Anders Sandberg's words) seems plausible to me.

Eliezer, I doubt the justice system's guilty/not guilty approach is grounded in rationality either. "If Justitia must consider any more complex issue, she should relinquish her scales or relinquish her sword" -and I think the underlying issues regarding "justice" are almost always more complex. But then again, I think the approach with justice should be economic, incentive-based, and empirically grounded, rather than punitive and grounded in social norms the way it is now (in the U.S.).

Anders, is there any research on the degree to which the human predisposition to "treat discussion as a form of combat, an extension of war," reduced to two binary and oppositional players, is grounded in a primate aesthetic? The parallel to primate researchers' discussions of alpha and challenger males seems strong to me.

comment by John_Maxwell (John_Maxwell_IV) · 2009-08-20T22:01:55.702Z · LW(p) · GW(p)

My guess is that nuclear waste production and chance of reactor meltdown are very weakly correlated. Both are decreased if the reactor is designed by a particularly conscientious group of researchers.

Replies from: Kingreaper
comment by Kingreaper · 2010-07-21T23:44:22.438Z · LW(p) · GW(p)

My expectation would be the opposite: a slight anticorrelation. (After further thought this changed; see below.)

I would expect most reactor designs to be pretty heavily studied and worked on, making the conscientiousness factor reasonably small.

In two designs that were approximately contemporary I would therefore expect to see a tradeoff between different design goals (e.g., waste production, chance of meltdown, fuel efficiency, total output, cost of production).

Actually, no, that wouldn't necessarily result in an anticorrelation; in fact, it would likely result in a correlation, because waste production and meltdown chance both fall under the same supergoal (environmental safety).

comment by MoreOn · 2010-12-10T21:25:38.977Z · LW(p) · GW(p)

To be clear: I’m not arguing against. I’m asking to clarify. I find myself thoroughly confused by this article.

How is a higher probability of meltdown NOT a “point against” the reactor—and how is less waste NOT a “point for”? I think I’m missing some underlying principle here.

If you tell people a reactor design produces less waste, they rate its probability of meltdown as lower.

Wait. WHAT? How does that even make sense?

I suppose if you gave me a long boring lecture about reactors, and then quizzed me on it before I remembered the facts (with my house-cat memory), I could get this wrong for the exact reasons you described, without being irrational.

Suppose there’s a multiple-choice question, “How much waste does reactor 1 produce?” If I know that reactor 1 is the best across most categories (has the most points in its favor), and that all reactors produce between 10 and 15 units of waste, then my answer would be (b) below:

(a) 8 units

(b) 10 units

(c) 12 units

(d) 14 units

And of course, there’s every possibility that “reactor 1” didn’t get the best score in waste production. Didn’t I just make the same mistake as Eliezer described, for completely logical reasons (maximum likelihood guess under uncertainty)? This isn’t a failure of my logic; it’s a failure of my memory.

In real life, if I expected a quiz like this, I would have STUDIED.

Why else would anyone expect an overall-best-ranking reactor to necessarily be the best at waste production?

Here’s another idea. Suppose that long boring hypothetical lecture were, on top of that, so confusing that the listener carries away the message that “a meltdown is when a reactor has produced more waste than its capacity.” Then it is a perfectly logical chain of reasoning that if a reactor produces less waste, its probability of meltdown is lower. But this is poor communication, not poor reasoning.

Replies from: bigjeff5
comment by bigjeff5 · 2011-02-04T05:23:11.485Z · LW(p) · GW(p)

I believe the way it worked out was that when they heard a particular design produced less toxic waste, they also assumed a reactor that produced less waste was less likely to melt down.

That's +1 for less waste and +1 for less chance of meltdown.

When they are then told that this same design has a higher chance of meltdown, they subtract one point for meltdown without subtracting for less waste, even though they did the inverse earlier.

So, the audience tallies like so: +1 (less waste) +1 (inferred for less meltdown) -1 (more meltdown) = +1

When they should have tallied like so: +1 (less waste) -1 (more meltdown) = 0

The net ends up being +1 for the reactor, instead of 0.

This results in a good feeling for the reactor, when in reality they shouldn't have felt positive or negative.

Replies from: MoreOn
comment by MoreOn · 2011-02-04T14:07:39.281Z · LW(p) · GW(p)

You're right, of course.

I'd written the above before I read this defense of researchers, before I knew to watch myself when I'm defending research subjects. Maybe I was far too in shock to actually believe that people would honestly think that.

Replies from: bigjeff5
comment by bigjeff5 · 2011-02-04T17:59:43.883Z · LW(p) · GW(p)

Yeah, it's a roundabout inference that I think happens a lot. I notice it myself sometimes when I hear X, assume X implies Y, and then later find out Y is not true. It's pretty difficult to avoid, since it's so natural, but I think the key is when you get surprised like that (and even if you don't), you should re-evaluate the whole thing instead of just adjusting your overall opinion slightly to account for the new evidence. Your accounting could be faulty if you don't go back and audit it.

Replies from: Keith_Coffman
comment by Keith_Coffman · 2014-09-03T02:53:43.341Z · LW(p) · GW(p)

I think we should also separate the subjects of the psychology behind when this might happen and whether or not we are using scales.

It may indeed be the case that people are bad accountants (although I rarely find myself assuming these implied things, and further if I find that my assumptions are wrong I adjust accordingly), but this doesn't change the fact that we are adding +/- points (much like you're keeping score/weighing the two alternatives).

Assuming a perfectly rational mind was approaching the proposition of reactor A vs reactor B (and we can even do reactor C...), then the way it would decide which proposition is best is by tallying the pros/cons to each proposition. Of course, in reality we are not perfectly rational and moreover different people assign different point-values to different categories. But it is still a scale.

comment by JJ10DMAN · 2011-04-26T18:08:30.219Z · LW(p) · GW(p)

The paper "The Affect Heuristic in Judgments of Risks and Benefits" doesn't mention explicitly separating benefit from risk in the critical second experiment (and probably not in the first either, which I didn't read). If I were brought in and given the question, 'In general, how beneficial do you consider the use of X in the U.S. as a whole?', then I would weigh all positive and negative aspects together to get a final judgment on whether or not it's worth using. "Benefit" CAN be a concept distinct from risk, but language is messy, and it can be interpreted (as I would interpret it) as "sufficiency to employ." As a result, depending on the reader's interpretation of "benefit," it's possible that any lowering of perceived risk will NECESSARILY increase perceived benefit, no logical error required.

Rather sloppy science, if you ask me.

comment by Colombi · 2014-02-20T05:21:47.176Z · LW(p) · GW(p)

Cool.

comment by Philisophist · 2014-03-27T18:50:29.202Z · LW(p) · GW(p)

Lady Rationality... love it. I think I want her as a tattoo.

comment by tmercer · 2022-07-06T19:16:54.001Z · LW(p) · GW(p)

Another problem with policies like this hypothetical nuclear reactor is that people don't have access to facts about the future or hypothetical futures, so we're left with estimates. People don't acknowledge that their "facts" are actually estimates, and don't share how they're estimating things. If we did this, politics would be better. Just give your methods and assumptions, as well as the estimates that follow from those methods and assumptions, of costs and benefits, and then people can pick the choice(s) with the best estimated benefits minus costs.

The other thing about political arguments is that people don't start with the foundation to the above, which is values. People will often talk about "lives saved", which is ridiculous, because you can't save lives, only postpone deaths. People who don't agree on values aren't ready to look at estimated costs and benefits. If I value 3 years of postponed-death at $100k, and someone else values it at $10, then we're almost certain to disagree about which policies are best. Values are the prices at which you trade things you like/want/value.