Comments

Comment by Unnamed2 on Another Call to End Aid to Africa · 2009-04-04T05:38:54.000Z · LW · GW

This topic was posted at Less Wrong (by Phil Goetz), but apparently Eliezer thought it would fit better here.

If the goal is to encourage aid to become more effective and evidence-based, I don't think that shouting "Stop the aid!" will help. Setting yourself up in opposition to aid will just make the pro-aid team rally together against you, and in a head-to-head matchup the anti-aid side is at a huge disadvantage in winning over public opinion and celebrity culture (pro-aid forces have better ties to establishment power, emotions, common sense, and money). At worst, the pro-aid side will increasingly come to see talk of logic, evidence, and counterproductive charity as the other side's buzzwords, or part of the other side's agenda. We have a better chance at getting more effective aid if the people arguing for more reasonable, rigorously-evaluated aid demonstrate that they're on the same side (the pro-helping side) by talking about (and emphasizing) what does work. Ideally, they'd even get involved to improve aid programs (like the MIT Poverty Action Lab), raise money for effective charity (like GiveWell), or run their own programs.

(Disclosure: I may be influenced by the fact that I think aid to Africa has been doing more good than harm, and that our best hope is to make incremental improvements and give more.)

Comment by Unnamed2 on Against Maturity · 2009-02-19T03:01:13.000Z · LW · GW

Eliezer, are you familiar with Carol Dweck's research on intelligence, or has that corner of psychology eluded you? It matches up very closely with what you say here about maturity. Dweck says: some people (like your parents on maturity) have an "entity theory" of intelligence - they think of it as something fixed that you either have or you don't - while others (like you on maturity) have an "incremental theory" - they think of it as continually developing. Incremental theorists tend to learn better and be more eager to face challenges, while entity theorists are more threatened by challenges and care more about signaling that they have intelligence. More here.

Entity views may be a common source of bias, both for intelligence and for other qualities that people value.

Comment by Unnamed2 on Efficient Cross-Domain Optimization · 2008-10-29T03:45:20.000Z · LW · GW

It's interesting that Eliezer ties intelligence so closely to action ("steering the future"). I generally think of intelligence as being inside the mind, with behaviors & outcomes serving as excellent cues to an individual's intelligence (or unintelligence), but not as part of the definition of intelligence. Would Deep Blue no longer be intelligent at chess if it didn't have a human there to move the pieces on the board, or if it didn't signal the next move in a way that was readily intelligible to humans? Is the AI-in-a-box not intelligent until it escapes the box?

Does an intelligent system have to have its own preferences? Or is it enough if it can find the means to the goals (with high optimization power, across domains), wherever the goals come from? Suppose that a machine was set up so that a "user" could spend a bit of time with it, and the machine would figure out enough about the user's goals, and about the rest of the world, to inform the user about a course of action that would be near-optimal according to the user's goals. I'd say it's an intelligent machine, but it's not steering the future toward any particular target in outcome space. You could call it intelligence as problem-solving.
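To make that distinction concrete, here is a minimal sketch (entirely my own illustration; the function names and the toy similarity-weighted utility are invented for the example). The machine supplies the optimization; the target comes from the user:

```python
# A toy model of "intelligence as problem-solving": the machine has no
# goals of its own. It infers the user's goals from a few ratings, then
# picks the option that the *user's* inferred utility ranks highest.

def infer_utility(ratings):
    """Build a utility function from (option, rating) examples.
    Toy version: score a candidate by similarity-weighted ratings."""
    def utility(option):
        return sum(r / (1 + abs(option - f)) for f, r in ratings)
    return utility

def recommend(ratings, candidates):
    """Return the candidate that the inferred utility ranks highest.
    The machine optimizes hard, but it steers toward whatever target
    the user's ratings imply, not toward a target of its own."""
    utility = infer_utility(ratings)
    return max(candidates, key=utility)

# A user whose ratings suggest they like options in the 6-8 range:
print(recommend([(6, 1.0), (8, 0.9), (2, 0.1)], candidates=range(10)))  # -> 6
```

Until a user shows up, `recommend` is not steering the future toward any point in outcome space, which is the sense in which it is problem-solving rather than preference-having.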

Comment by Unnamed2 on Fake Norms, or "Truth" vs. Truth · 2008-07-22T14:45:55.000Z · LW · GW

I think a simpler explanation is just that people are not absolutists about following social norms, so they'll regularly violate a norm if it comes into conflict with another norm or something else that matters to them. To take one example, there is a clear social norm against lying which children learn (they are told not to lie and chastised when they are caught lying). But people still lie all the time, and not just for personal benefit but also to spare other people's feelings and, perhaps most commonly, to make social interactions go more smoothly. And rather than seeing these cases as violations of the norm against lying that are justified because something else matters more here, liars often don't even feel that they are breaking a norm; the norm against lying simply doesn't get applied to the case.

How do people manage to pull off this flexibility in applying norms? The main trick may be something as simple as: once you've decided on something and have a norm that matches your decision, other norms are irrelevant - there's no need to even consider them. Although that leaves open the important question of how one norm wins in the first place. (Another possibility is that people are using something like modus tollens: lying is wrong, this is not wrong, therefore this doesn't really count as lying.)
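To spell out that modus tollens reading in symbols (my formalization; nothing in the original comment is this formal): let $L(a)$ mean "act $a$ is a lie" and $W(a)$ mean "act $a$ is wrong". Then the reasoning runs

\[ L(a) \rightarrow W(a), \qquad \neg W(a), \qquad \therefore\ \neg L(a). \]

The general rule survives untouched; the particular act is simply reclassified so that the rule never applies to it.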

Eliezer and many others here are absolutists about the truth norm, but most people see it as on par with other norms, like the norm in favor of being upbeat or optimistic and the norm about people being entitled to their beliefs. And when norm absolutists run into people who are mushy about their favored norm, they may doubt that those people even have the norm.

Comment by Unnamed2 on My Childhood Role Model · 2008-05-23T22:57:08.000Z · LW · GW

I'll echo Hofstadter and a few of the commenters. The mouse/chimp/village idiot/Einstein scale seems wrong to me; I think Einstein should be further off to the right. It all depends on what you mean by intelligence and how you define the scale, of course, but if intelligence is something like the generalized ability to learn, understand things, and solve problems, then the range of problems that Einstein is able to solve, and the set of things that Einstein is able to understand well, seem many times larger than the village idiot's.

The village idiot may be able to pull off some intellectual feats (like language) in specific contexts, but then again so can the mouse (like learning associations and figuring out the layout of its surroundings). When it comes to a general intellectual ability (rather than specialized abilities), Einstein can do much more than an idiot with a similar brain because he is much much better at thinking more abstractly, looking for and understanding the underlying logic of something, and thinking his way through more complex ideas and problems. The minor tweaks in brain design allowed enormous improvements in cognitive performance, and I think that the intelligence scale should reflect the performance differences rather than the anatomical ones. Even if it is a log scale, the village idiot should probably be closer to the chimp than to Einstein.

Comment by Unnamed2 on Anchoring and Adjustment · 2007-09-08T03:58:01.000Z · LW · GW

You're a few years behind on this research, Eliezer.

The point of the research program of Mussweiler and Strack is that anchoring effects can occur without any adjustment. "Selective Accessibility" is their alternative, adjustment-free process that can produce estimates that are too close to the anchor. The idea is that, when people are testing the anchor value, they bring to mind information that is consistent with the correct answer being close to the anchor value, since that information is especially relevant for answering the comparative question. When they are then asked for their own estimate, they rely on that biased set of information that is already accessible in their mind, which produces estimates that are biased towards the anchor.

In 2001, Epley and Gilovich published the first of several papers designed to show that, while the Selective Accessibility process occurs and creates adjustment-free anchoring effects, there are also cases where people do adjust from an anchor value, just as Kahneman & Tversky claimed. The examples that they've used in their research are trivia questions like "What is the boiling point of water on Mount Everest?" where subjects will quickly think of a relevant, but wrong, number on their own, and they'll adjust from there based on their knowledge of why the number is wrong. In this case, most subjects know that 212°F is the boiling point of water at sea level, but water boils at lower temperatures at altitude, so they adjust downward. This anchoring & adjustment process also creates estimates that are biased towards the anchor, since people tend to stop adjusting too soon, once they've reached a plausible-seeming value.
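To make the stop-too-soon mechanism concrete, here is a toy model (entirely my own sketch; the plausible range and step size are made up, though a true Everest boiling point of roughly 160°F is about right):

```python
# Toy model of anchoring-and-adjustment with insufficient adjustment:
# start from a self-generated anchor and adjust until the value first
# seems plausible, rather than until it seems most likely to be correct.

def adjust(anchor, plausible_low, plausible_high, step=1):
    """Adjust from the anchor, stopping at the first plausible value."""
    estimate = anchor
    while not (plausible_low <= estimate <= plausible_high):
        estimate += -step if estimate > plausible_high else step
    return estimate

# "What is the boiling point of water on Mount Everest?"
# Anchor: 212 (sea-level boiling point, in Fahrenheit). Suppose the
# subject finds anything from 150 to 180 plausible; adjustment halts
# at the plausible value nearest the anchor, not in mid-range.
print(adjust(anchor=212, plausible_low=150, plausible_high=180))  # -> 180
```

Because adjustment halts at the edge of the plausible range nearest the anchor (180 here, versus a true answer around 160), the final estimate stays biased toward the anchor. This sketch covers only the Epley and Gilovich adjustment process; selective accessibility would need a different model.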

Gilovich and Epley have shown that subjects give estimates farther from the anchor (meaning that they are adjusting more) on these types of questions when they are given incentives for accuracy, when they are warned about the biasing effect of anchors, when they are high in Need for Cognition (the dispositional tendency to think things through a lot), or when they are shaking their head (which makes them less willing to stop at a plausible-seeming value; head-nodding produces even less adjustment than baseline). None of these variables matters on the two-part questions with an experimenter-provided anchor, like the question about the percentage of African countries in the UN, where selective accessibility seems to be the process creating anchoring effects. The relevance of these variables is the main evidence for their claim that adjustment occurs with one type of anchoring procedure but not the other.

The one manipulation that has shown some promise at debiasing Selective Accessibility-based anchoring effects is a version of the "consider the opposite" advice that Eliezer gives. Mussweiler, Strack & Pfeiffer (2000) argued that this strategy helps make a more representative set of information accessible in subjects' minds, and they did find debiasing when they gave subjects targeted, question-specific instructions on what else to consider. But they did not try teaching subjects the general "consider the opposite" strategy and seeing whether they could successfully apply it to the particular case on their own.

Mussweiler and Gilovich both have all of their relevant papers available for free on their websites.


Epley, N., & Gilovich, T. (2001). Putting adjustment back in the anchoring and adjustment heuristic: Differential processing of self-generated and experimenter-provided anchors. Psychological Science, 12, 391–396.

Mussweiler, T., Strack, F., & Pfeiffer, T. (2000). Overcoming the inevitable anchoring effect: Considering the opposite compensates for selective accessibility. Personality and Social Psychology Bulletin, 26, 1142–1150.

Comment by Unnamed2 on Stranger Than History · 2007-09-02T03:10:18.000Z · LW · GW

Length contraction was proposed by George FitzGerald in 1889, in response to the Michelson-Morley experiment, and it gained wider circulation in the physics community after Hendrik Lorentz independently proposed it in 1892. I imagine that most top physicists would have been familiar with it by 1901. Lorentz's paper included the ideas that the relative motion of reference frames was important, and that funny things were going on with time (like non-simultaneity in different reference frames), and his 1899 follow-up included time dilation equations (as did a less-known 1897 paper by Joseph Larmor). I'm not sure if people familiar with this work saw c as the universal speed limit, but the length contraction equations (which imply imaginary length for v greater than c) suggest that this proposal wouldn't strike them as crazy (and they would have recognized the number as c, since estimates of the speed of light were accurate to within 0.1% by then).
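To spell out that parenthetical, the contraction formula (in its standard modern form, not quoted from any of the papers above) gives a rod of rest length $L_0$ moving at speed $v$ the contracted length

\[ L = L_0 \sqrt{1 - \frac{v^2}{c^2}}, \]

so for $v > c$ the quantity under the square root is negative and $L$ comes out imaginary. Anyone taking the equation seriously already had a formal hint that $v = c$ marks a boundary.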