Posts
Comments
Eliezer tries to derive his morality from stated human values.
In theory, Eliezer's morality (at least CEV) is insensitive to errors along these lines, but when Eliezer claims "it all adds up to normality," he's making a claim that is sensitive to such an error.
Does anyone have a reputable source for Feynman's 137? Google makes it look very concentrated in this group, probably the result of a single confabulation.
Sykes and Gleick's biographies both give 12x. Sykes quotes Feynman's sister remembering sneaking into the records as a child. This seems important to me: Feynman didn't just fabricate the 12x.
Eliezer's password is [correct answer deleted, congratulations Douglas --EY].
the dominant consensus in modern decision theory is that one should two-box...there's a common attitude that "Verbal arguments for one-boxing are easy to come by, what's hard is developing a good decision theory that one-boxes"
Those are contrary positions, right?
Robin Hanson:
Punishment is ordinary, but Newcomb's problem is simple! You can't have both.
The advantage of an ordinary situation like punishment is that game theorists can't deny the fact on the ground, namely that governments exist; but they can claim it's because we're all irrational, which doesn't leave many directions to go in.
Nick Tarleton,
Yes, it is probably correct that one should devote substantial resources to low-probability events, but what are the odds that the universe is not only a simulation, but one whose containing world is much bigger? And if so, does our universe just not count, because it's so small? A bounded utility function probably reaches the opposite conclusion, that only this universe counts, and maybe we should keep our ambitions limited out of fear of attracting attention.
Luis Enrique, See above about "We Change Our Minds Less Often Than We Think"; my interpretation is that people are trying to believe that they haven't made up their minds, but they are wrong. That is, they seem to be implementing the (first) piece of advice you mention. Maybe one can come up with more practical advice, but these are very difficult problems to fix, even if you understand the errors. On the other hand, the main part of the post is about a successful intervention.
Science was weaker in these days
Could you elaborate on this? What do you mean by Science? (reasoning? knowledge?)
The thing whose weakness seems relevant to me is a cultural tradition of doubting religion. Also, prerequisites which I have trouble articulating because they are so deeply buried: perhaps a changing notion of benevolence.
You probably won't go far wrong if you assume I agree with you on the points I don't respond to. I probably shouldn't have talked about them in the first place.
overcoming heuristics:
If we know a bias is caused by a heuristic, then we should use that heuristic less. But articulating a meta-heuristic about when to use it is very different from implementing that meta-heuristic. Human minds aren't Eurisko; we can't just dial up or down the strength of a heuristic. Even if we implement such a heuristic, as in Kahneman's planning anecdote, and get more accurate information, we may simply ignore it.
The basic problem is System 1 vs. System 2: when to analyze a problem and when to trust unconscious systems. Perhaps we have control over the conscious analysis (though it still has unconscious inputs). But even starting the process, turning on the conscious analysis, must be done from an unconscious start.
Tom Myers,
Systematic but unexplained: sure, most errors are probably due to heuristics, but I'm not sure that's a useful statement. A number of posts here have been so specific that they don't seem like useful starting points for searching for heuristics.
Cost:
Most people don't seem to have sufficiently clear goals to make sense of whether something benefits or costs them, let alone balancing the two.
People live normal lives by not thinking too much about things, so it shouldn't be so surprising that they don't think in psych experiments in which it is often clear that analysis will help. But if one is interested in producing answers to questions that don't come up in normal life (e.g., how much is medicine worth?), avoiding everyday heuristics is probably worth the cost. Heuristics may well be worth overcoming in everyday life as well, but I don't think any experiments I've heard about shed any light on this.
Torture:
Your proposals are too detailed. I don't imagine an opportunity to experiment enough to figure out how to structure torture, and if I do get an opportunity to experiment on government structure, torture is not going to be high on my list of variables. A government is an extremely expensive experimental apparatus. At least I can imagine how to experiment with corporal punishment, but I don't really have much of an idea of how one could go about comparing the efficacy of different interrogation methods or the general investigative qualities of, say, American and Japanese police.
I'm not inclined to find out what you mean by Saddam's people-shredders, but I imagine that one effect was a deterrent to crime, especially crime that would get Saddam's attention. Torture, especially creative torture with vivid imagery, may well exploit salience(?) biases to be a more effective deterrent (aside from the rationally greater desire to avoid torture+death than to avoid death). The role of whim on one's fate may also have (irrationally) increased the deterrent effect. The vague beliefs people hold about prison rape may play a very similar role in the US system. We do have arbitrary torture in our criminal justice system already.
Winston survives.
Tom Myers,
I think the convention on this blog, among the small set of people who have such a precise definition, is that not every heuristic is a bias, that only heuristics whose errors are worth overcoming should be called biases. I don't like this usage. For one thing, it's really hard to know the cost of overcoming particular biases. It's easy to look at an isolated experiment and say: here's a procedure that would have been better, but that doesn't cover the cost of actually changing your behavior to look out for this kind of error.
Also, there are other people on this blog who use "bias" to refer to systematic but unexplained errors, which are hard to call heuristics. Without a mechanism, it's probably hard to overcome these, although not always impossible.
While many people have mentioned similar disappointments, no one has echoed "I'll get that theorem eventually...even though my first try failed!" That was what seemed like a really bad sign when I read the essay before the comments. But I think we're really bad at communicating feelings, so I don't know how the feelings relate, how strong they were, and especially, how the commenters see the parallels with their reactions.
The ancient Greeks themselves played around with the rules. Archimedes used a "marked straightedge" to trisect an angle.
The first hit on Google for "trisect an angle" is about ways to do it, not discussions of impossibility.
I think people should be more careful about the word "science." Here are some meanings I see attached to it:
- knowledge of nature
- naturalism
- "the scientific method"
- institutions practicing the scientific method
- rules of a particular institution
- the output of institutions
I feel compelled to add that what I mean by "the scientific method" is that observation should drive belief and that we can put effort into obtaining useful observations (experiments, stamp collecting). Also, it may be useful to distinguish between institutional rules intended to protect the institution from cheaters and rules intended to protect people from their own biases.
Eliezer Yudkowsky, The word "normative" has stood in the way of my understanding what you mean, at least the first few times I saw you use it, before I pegged you as getting it from the heuristics and biases people. It greatly confused me many times when I first encountered them. It's jargon, so it shouldn't be surprising that different fields use it to mean rather different things.
The heuristics and biases people use it to mean "correct," because social scientists aren't allowed to use that word. I think there's a valuable lesson about academics, institutions, or taboos in there, but I'm not sure what it is. As far as I can tell, they are the only people that use it this way.
My dictionary defines normative as "of, relating to, or prescribing a norm or standard." It's confusing enough that it carries those two or three meanings, but to make it mean "correct" as well is asking for trouble or in-groups.
(parody, I think) story
The story was for real. The site, I dunno, but it does accept money through PayPal.
Your conclusion matches your data, but the data is suspiciously focused on charity. Is scope neglect easier to elicit in such contexts? Other explanations include it being hard to make large numbers relevant, and lack of imagination by researchers.
interpretations which postulate an infinitely sliced spatial manifold which is fundamentally real
Strictly speaking, I suppose that is part of the interpretation, but it's a pretty mild part of the interpretation of QM, or at least QFT. Many people expect this to stop being true in a unification with GR, but that's about physical law, not interpretation.
randomly subject to anthropic constraints, for instance
That might lead us to simulations, quite close to the operating system example.
TGGP, You used an example of moral progress produced by a philosopher: the word consequentialist.
TGGP, What do you mean when you say you are a consequentialist, if you are so sure ethics is meaningless?
Eliezer, did you mean to evoke stock markets with "You could feed it to a display on people's cellphones"?
Surely financial markets are well-calibrated for events that happen once a month. Then the price of an option on such an event happening tomorrow should be about right. Some claim that there is a systematic bias in options against rare events, such that on a long shot you do better than even.
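To spell out the arithmetic behind "about right" (a rough sketch that ignores discounting and risk premia; the numbers are illustrative):

\[
P(\text{event tomorrow}) \approx \frac{1}{30} \approx 3.3\%,
\qquad \text{so a binary option paying \$1 on the event should trade near \$0.03.}
\]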
the social dilemma is that neither writing grant proposals, nor showing up at your office desk, is inherently an evil deed.
One answer is that grant-writing is an evil deed. I'm not inclined toward that belief, or toward the more plausible one that offering grants is an evil deed, but I think they're worth mentioning.
Promotion based on hours at the office, or working at a company that promotes that way, does seem to me like an evil deed, but human bias means that practically all companies have this effect to some extent.
the argument about how noise traders survive
Surely the argument you give--that false beliefs can lead to extra risk, increasing expected returns while decreasing expected utility--is older than the noise trader literature?
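To sketch that older argument with made-up numbers (a Kelly-criterion toy, not anything from the noise trader papers themselves): a trader who, out of overconfidence, bets too large a fraction of wealth on a favorable even-money gamble earns a higher expected return but a lower expected log utility.

```python
import math

p = 0.6  # win probability of a favorable even-money bet (hypothetical number)

def expected_return(f):
    """Expected one-period wealth multiple when betting fraction f."""
    return p * (1 + f) + (1 - p) * (1 - f)

def expected_log_utility(f):
    """Expected log wealth (long-run growth rate) when betting fraction f."""
    return p * math.log(1 + f) + (1 - p) * math.log(1 - f)

for label, f in [("rational (Kelly) trader", 0.2), ("overconfident noise trader", 0.8)]:
    print(f"{label}: E[return] = {expected_return(f):.3f}, "
          f"E[log wealth] = {expected_log_utility(f):+.3f}")

# The noise trader's larger bet raises expected return (1.16 vs 1.04)
# while driving expected log utility negative (-0.29 vs +0.02).
```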
Perhaps this is because of the larger power distance between the security people and the protected.
How do you measure this distance? The FDA has a monopoly, too. Here's another theory: drug companies are a third player. Moreover, they are concentrated interests, so they affect the public choice. (Airlines play a similar role in the security theater, but their interests are more diffuse. Also, getting rid of airline security is a public good, while getting a drug approved helps one drug company relative to the others.)
That's not to say I disagree with Anders's psychology, but I discount it because I find it harder to judge than public choice arguments.
GPS? You can do better than that! I believe special relativity because it's implied by Maxwell's equations, which I have experienced. Normal human speeds are enough to detect contraction, if you do it by comparing electric and magnetic effects.
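To spell that out (a standard textbook argument in the style of Purcell, my gloss rather than anything in the original comment): the magnetic force of a current-carrying wire can be read as the electrostatic effect of length-contracted charge densities at the conduction electrons' drift velocity, which is far slower than normal human speeds. The contraction factor is minuscule,

\[
\gamma - 1 \approx \frac{v^2}{2c^2} \approx \frac{(10^{-4}\,\mathrm{m/s})^2}{2\,(3\times 10^{8}\,\mathrm{m/s})^2} \approx 6\times 10^{-26},
\]

but it acts on something like \(10^{29}\) conduction electrons per cubic meter of copper, so the tiny imbalance between positive and negative charge densities shows up as the everyday force between wires.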
Apparently it is very hard to teach and test regarding the underlying reasons.
Does "apparently" (in general) mean you aren't using additional sources of information? In this case, are you concluding that it's difficult simply from the fact that it isn't done? That only seems to me like evidence that it's not worth it. Unfortunately, the value driving the system is getting published, not advancing science.
It is certainly true that one should not superficially try to replicate Aumann's theorem, but should try to replicate the process of the Bayesians, namely, to model the other agent and see how that agent could disagree. Surely this is how we disagree with creationists and customer service agents. Even if they are far from Bayesian, we can extract information from their behavior, until we can model them.
But modeling is also what RH was advocating for the philosophers. Van Inwagen accepts Lewis as a peer, perhaps a superior. Moreover, he accepts Lewis as rationally integrating his arguments. This is exactly where Aumann's argument applies. In fact, van Inwagen does model himself and Lewis and claims (I've only read the quoted excerpt) that their disagreement must be due to incommunicable insights. Although Aumann's framework of modeling the world seems incompatible with the idea of incommunicable insight, I think it is right to worry about symmetry. Possibly this leads us into difficult anthropic territory. But EY is right that we should not respond by simply changing our opinions; we should try to describe this incommunicable insight and see how it has infected our beliefs.
Anthropic arguments are difficult, but I do not think they are relevant in any of the examples, except maybe the initial superintelligence. In that situation, I would argue in a way that may be your argument about dreaming: if something has a false belief about having a detailed model of the world, there's not much it can do. You might as well say it is dreaming. (I'm not talking about accuracy, but precision and moreover persistence.)
And you seem to say that if it is dreaming it doesn't count. When you claim that my Bayesian score goes up if I insist that I'm awake whenever I feel I'm awake, you seem to be asserting that my assertions in my dreams don't count. This seems to be a claim about persistence of identity. Of course, my actions in dreams seem to have less import than my actions when awake, so I should care less about dream error. But I should not discount it entirely.
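To make the accounting question concrete with a toy calculation (the counts here are invented for illustration): if dream assertions are scored, the best confidence to attach to "I am awake" whenever it feels that way is well below 1; if they are thrown out, full confidence maximizes the score.

```python
import numpy as np

# Hypothetical accounting: each day you judge "I am awake" once while actually
# awake and, say, twice while dreaming (made-up numbers for illustration).
n_awake, n_dream = 1, 2

def log_score(q, count_dreams=True):
    """Total log score for asserting P(awake) = q every time you feel awake."""
    score = n_awake * np.log(q)            # correct when actually awake
    if count_dreams:
        score += n_dream * np.log(1 - q)   # wrong each time you are dreaming
    return score

qs = np.linspace(0.01, 0.99, 99)
for count in (True, False):
    scores = [log_score(q, count) for q in qs]
    best = qs[int(np.argmax(scores))]
    print(f"dream assertions counted={count}: best q ~= {best:.2f}")

# If dream assertions count, the best confidence is about 1/3;
# if they are discarded, the score is maximized as q approaches 1.
```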