Reversed Stupidity Is Not Intelligence
post by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2007-12-12T22:14:07.000Z · LW · GW · Legacy
“. . . then our people on that time-line went to work with corrective action. Here.”
He wiped the screen and then began punching combinations. Page after page appeared, bearing accounts of people who had claimed to have seen the mysterious disks, and each report was more fantastic than the last.
“The standard smother-out technique,” Verkan Vall grinned. “I only heard a little talk about the ‘flying saucers,’ and all of that was in joke. In that order of culture, you can always discredit one true story by setting up ten others, palpably false, parallel to it.”
—H. Beam Piper, Police Operation
Piper had a point. Pers’nally, I don’t believe there are any poorly hidden aliens infesting these parts. But my disbelief has nothing to do with the awful embarrassing irrationality of flying saucer cults—at least, I hope not.
You and I believe that flying saucer cults arose in the total absence of any flying saucers. Cults can arise around almost any idea, thanks to human silliness. This silliness operates orthogonally to alien intervention: We would expect to see flying saucer cults whether or not there were flying saucers. Even if there were poorly hidden aliens, it would not be any less likely for flying saucer cults to arise. The conditional probability P(cults|aliens) isn’t less than P(cults|¬aliens), unless you suppose that poorly hidden aliens would deliberately suppress flying saucer cults.1 By the Bayesian definition of evidence, the observation “flying saucer cults exist” is not evidence against the existence of flying saucers. It’s not much evidence one way or the other.
This is an application of the general principle that, as Robert Pirsig puts it, “The world’s greatest fool may say the Sun is shining, but that doesn’t make it dark out.”2
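The Bayesian point about P(cults|aliens) can be made concrete with a toy calculation. This is an illustrative sketch with made-up numbers (the 0.01 prior and the 0.9 likelihoods are assumptions for the example, not estimates):

```python
def posterior(prior, p_obs_given_h, p_obs_given_not_h):
    """Bayes' rule: P(H | observation) from a prior and two likelihoods."""
    joint_h = prior * p_obs_given_h
    joint_not_h = (1 - prior) * p_obs_given_not_h
    return joint_h / (joint_h + joint_not_h)

prior_aliens = 0.01  # assumed prior that poorly hidden aliens exist

# Cults arise from human silliness whether or not aliens exist,
# so P(cults | aliens) = P(cults | ~aliens), and the posterior equals the prior:
p = posterior(prior_aliens, p_obs_given_h=0.9, p_obs_given_not_h=0.9)
# p == prior_aliens: observing "cults exist" shifts belief not at all
```

Only a likelihood *ratio* different from 1 moves the posterior; an observation equally probable under both hypotheses is not evidence either way.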
If you knew someone who was wrong 99.99% of the time on yes-or-no questions, you could obtain 99.99% accuracy just by reversing their answers. They would need to do all the work of obtaining good evidence entangled with reality, and processing that evidence coherently, just to anticorrelate that reliably. They would have to be superintelligent to be that stupid.
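A quick simulation makes the point. The two agents below are hypothetical: a fool who answers at random, and an "anti-oracle" that is wrong 99.99% of the time. Reversing the fool buys you nothing, because his answers carry no information about the truth; reversing the anti-oracle works only because the anti-oracle already tracks the truth almost perfectly:

```python
import random

random.seed(0)  # seeded so the sketch is reproducible
N = 100_000

truth = [random.random() < 0.5 for _ in range(N)]

# The fool answers yes/no independently of the truth:
fool = [random.random() < 0.5 for _ in range(N)]

# The anti-oracle must "know" each answer in order to miss it 99.99% of the time:
anti_oracle = [t if random.random() < 0.0001 else not t for t in truth]

def accuracy(answers):
    return sum(a == t for a, t in zip(answers, truth)) / N

reversed_fool = accuracy([not a for a in fool])         # ~0.5: nothing to mine
reversed_anti = accuracy([not a for a in anti_oracle])  # ~0.9999
```

Reversing answers can only recover information that was in them to begin with; a merely stupid agent is uncorrelated with the truth, not anticorrelated.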
A car with a broken engine cannot drive backward at 200 mph, even if the engine is really really broken.
If stupidity does not reliably anticorrelate with truth, how much less should human evil anticorrelate with truth? The converse of the halo effect is the horns effect: All perceived negative qualities correlate. If Stalin is evil, then everything he says should be false. You wouldn’t want to agree with Stalin, would you?
Stalin also believed that 2 + 2 = 4. Yet if you defend any statement made by Stalin, even “2 + 2 = 4,” people will see only that you are “agreeing with Stalin”; you must be on his side.
Corollaries of this principle:
- To argue against an idea honestly, you should argue against the best arguments of the strongest advocates. Arguing against weaker advocates proves nothing, because even the strongest idea will attract weak advocates. If you want to argue against transhumanism or the intelligence explosion, you have to directly challenge the arguments of Nick Bostrom or Eliezer Yudkowsky post-2003. The least convenient path is the only valid one.3
- Exhibiting sad, pathetic lunatics, driven to madness by their apprehension of an Idea, is no evidence against that Idea. Many New Agers have been made crazier by their personal apprehension of quantum mechanics.
- Someone once said, “Not all conservatives are stupid, but most stupid people are conservatives.” If you cannot place yourself in a state of mind where this statement, true or false, seems completely irrelevant as a critique of conservatism, you are not ready to think rationally about politics.
- Ad hominem argument is not valid.
- You need to be able to argue against genocide without saying “Hitler wanted to exterminate the Jews.” If Hitler hadn’t advocated genocide, would it thereby become okay?
- In Hansonian terms: Your instinctive willingness to believe something will change along with your willingness to affiliate with people who are known for believing it—quite apart from whether the belief is actually true. Some people may be reluctant to believe that God does not exist, not because there is evidence that God does exist, but rather because they are reluctant to affiliate with Richard Dawkins or those darned “strident” atheists who go around publicly saying “God does not exist.”
- If your current computer stops working, you can’t conclude that everything about the current system is wrong and that you need a new system without an AMD processor, an ATI video card, a Maxtor hard drive, or case fans—even though your current system has all these things and it doesn’t work. Maybe you just need a new power cord.
- If a hundred inventors fail to build flying machines using metal and wood and canvas, it doesn’t imply that what you really need is a flying machine of bone and flesh. If a thousand projects fail to build Artificial Intelligence using electricity-based computing, this doesn’t mean that electricity is the source of the problem. Until you understand the problem, hopeful reversals are exceedingly unlikely to hit the solution.4
1Read “P(cults|aliens)” as “the probability of UFO cults given that aliens have visited Earth,” and read “P(cults|¬aliens)” as “the probability of UFO cults given that aliens have not visited Earth.”
2Robert M. Pirsig, Zen and the Art of Motorcycle Maintenance: An Inquiry Into Values, 1st ed. (New York: Morrow, 1974).
3See Scott Alexander, “The Least Convenient Possible World,” Less Wrong (blog), December 2, 2018, http://lesswrong.com/lw/2k/the_least_convenient_possible_world/.
4See also “Selling Nonapples.” http://lesswrong.com/lw/vs/selling_nonapples.
120 comments
Comments sorted by oldest first, as this post is from before comment nesting was available (around 2009-02-27).
comment by poke · 2007-12-12T22:41:48.000Z · LW(p) · GW(p)
It's amazing how many supposedly rationalist movements fall into the trap of crippling "reverse stupidity." Many in the atheist movement would not have you make positive pronouncements, not have you form organizations, not have you advocate, not have you adopt symbols or give the movement a name, not have you educate children on atheism, and so on, all because "religion does it." I think in the case of atheism the source is unique: every (modern) atheist knows his or her atheism is a product of scientific understanding, but few atheists are willing to admit it (having also taken up the false belief that some things are "outside science"), so they go looking for other reasons, and "reverse stupidity" offers such reasons in abundance.
↑ comment by mat33 · 2011-10-05T10:33:17.086Z · LW(p) · GW(p)
"I think in the case of atheism the source is unique: every (modern) atheist knows his or her atheism is a product of scientific understanding..."
We are already "stronger" by far than most of the "pagan" gods. This century, we may well create our own worlds ("virtual", yea - but theology doesn't hold our own world as the "real" one for its creator...s). It all comes down to terminology.
↑ comment by joemarzen · 2011-12-07T06:34:51.728Z · LW(p) · GW(p)
I think atheists would do well to encourage agnosticism, seems like an easier sell to me, training wheels? Much of the atheist movement reeks of fundamentalism. By definition atheism is closed minded. So much of science is unknown. I don't discount the idea that the possibility of collective consciousness or any number of other things viewed as supernatural, and therefore dismissed, exist. Read some theoretical physics, we don't understand a lot of stuff. That stuff could be the basis of completely different ways of thinking about reality. It may very well be that what we perceive as reality is a small part, or an expression of something that no one has begun to understand. It's cliche but what if we are programs running on some ultra advanced computer. Would the operator of that computer not be a "god." Dismissing that idea is silly, creating computers of that complexity is science fiction but it certainly isn't out of the realm of possibility. Who's to say we'd be the first ones to do it?
↑ comment by wedrifid · 2011-12-07T06:58:54.838Z · LW(p) · GW(p)
I think atheists would do well to encourage agnosticism, seems like an easier sell to me, training wheels?
For the most part, they would also do well to eschew evangelism.
↑ comment by Bugmaster · 2011-12-07T07:40:27.520Z · LW(p) · GW(p)
For the most part, they would also do well to eschew evangelism.
Meh... In a way, this entire site is dedicated to evangelism of skepticism (and, therefore, atheism). I'm ok with that.
↑ comment by wedrifid · 2011-12-07T08:24:16.529Z · LW(p) · GW(p)
Meh... In a way, this entire site is dedicated to evangelism of skepticism (and, therefore, atheism).
No it isn't. It's a rationality site... which actually puts it at odds with skepticism when it comes to approach and some major conclusions. It is that same rationality which mandates that even if people on the site go ahead and evangelize atheism on the side, they do so while acknowledging that most people would be better off getting off the pulpit and living their lives.
↑ comment by Bugmaster · 2011-12-07T11:11:38.366Z · LW(p) · GW(p)
It's a rationality site... which actually puts it at odds with skepticism when it comes to approach and some major conclusions.
How so? The site promotes (one might say, "evangelizes") rational thinking, especially Bayes' Rule, and evidence-based reasoning in general. These are the core values of skepticism; disbelief in fairies/UFOs/gods/etc. is merely a consequence.
↑ comment by Manfred · 2011-12-07T07:02:15.611Z · LW(p) · GW(p)
I would suggest you read the following two posts:
http://lesswrong.com/lw/ih/absence_of_evidence_is_evidence_of_absence/
http://lesswrong.com/lw/mm/the_fallacy_of_gray/
↑ comment by Bugmaster · 2011-12-07T07:36:42.553Z · LW(p) · GW(p)
I think there are several problems with your statements; I'll try to address a few. In the interests of full disclosure, I'm an atheist myself, but I obviously can't speak for anyone other than myself.
Much of the atheist movement reeks of fundamentalism.
I don't know about "much", though some atheists are undeniably fundamentalist -- and some theists are, as well. However, this doesn't tell us anything about whether atheism (or theism) is actually true or not.
By definition atheism is closed minded.
I think this depends on which definition you're using; but something tells me it's different from mine.
So much of science is unknown. I don't discount the idea that the possibility of collective consciousness or any number of other things viewed as supernatural, and therefore dismissed, exist.
Neither do I, and neither do most atheists. In fact, most atheists don't discount the possibility of lots of other things existing, as well: Zeus, unicorns, a teapot in orbit of Saturn, leprechauns, FTL neutrinos, etc. But a possibility is not the same thing as a probability; and we humans simply don't have the luxury of believing everything we can think of. We'd never get anywhere if we did that. So, atheists make the conscious choice to live their lives and think their thoughts as though that orbiting teapot did not, in fact, exist. Of course, once someone presents some evidence of its existence, we'd change our minds and re-evaluate all of our beliefs to include the teapot (or gods, or leprechauns, or what have you).
Read some theoretical physics, we don't understand a lot of stuff.
I suspect we understand more than you think -- there are whole books written on the subject, after all. But more importantly, a lack of understanding doesn't automatically make any alternative hypothesis any more likely. For example, I don't know with certainty how that suspicious puddle under my car got there, but "aliens!" or "demons!" are not the kinds of answers that instantly spring to mind.
That stuff could be the basis completely different ways of thinking about reality.
Sure, it could be. But is it? If it is, then I'd like to see some evidence. Note that the scientific method has a whole mountain of evidence behind it; your computer, for example, is merely a tiny piece of it.
It's cliche but what if we are programs running on some ultra advanced computer. Would the operator of that computer not be a "god."
I don't know, which god did you have in mind? And do you have any evidence that we're all programs running on a giant computer, or dreams in the mind of a butterfly, or astral manifestations of Krishna's vibrations, or whatever else one can come up with?
↑ comment by Spectral_Dragon · 2012-04-01T16:50:44.107Z · LW(p) · GW(p)
By definition atheism is closed minded.
Is it, really? I find more open mindedness in "there is no evidence for this, so I have no reason to believe it" than any theism. Someone telling you to be open minded usually means they want you to agree with them: Accepting a solution instead of considering others as well. It's happened to me, when people talked about ghosts, which have been disproven regardless. But then, it's just accepting one seemingly possible solution.
If all unlikely explanations seem possible, how is it open minded to select just one?
↑ comment by Mutasir · 2012-04-15T23:15:55.311Z · LW(p) · GW(p)
Much of the atheist movement reeks of fundamentalism. By definition atheism is closed minded.
Arguing "By Definition"
↑ comment by Mutasir · 2012-04-15T23:28:52.038Z · LW(p) · GW(p)
Much of the atheist movement reeks of fundamentalism. By definition atheism is closed minded.
A few months have passed since that comment, but maybe you should consider reading: http://lesswrong.com/lw/nz/arguing_by_definition/ and http://lesswrong.com/lw/ny/sneaking_in_connotations/
comment by Tom_McCabe2 · 2007-12-12T23:23:55.000Z · LW(p) · GW(p)
"... you have to directly challenge the arguments of Nick Bostrom or Eliezer Yudkowsky post-2003."
Just what the heck happened in 2003? In any experimental field, particularly this one, having new insights and using them to correct old mistakes is just part of the normal flow of events. Was there a super-super-insight which corrected a super-super-old mistake?
↑ comment by orthonormal · 2012-02-13T05:39:08.782Z · LW(p) · GW(p)
He's referring to his coming of age as a rationalist (which he hadn't written yet then); his transhumanist ideas before 2003 were pretty heavily infected with biases (like the Mind Projection Fallacy) that he harps on about now.
comment by michael_vassar3 · 2007-12-12T23:50:17.000Z · LW(p) · GW(p)
If the same majority of smart people as stupid people are conservative, then the statement "Not all conservatives are stupid, but most stupid people are conservatives" is actually completely irrelevant, but I don't think that anyone believes otherwise. If there is a positive correlation between intelligence and the truth of one's beliefs (a claim most people probably assume to be true for any definition of intelligence they care about), then the average intelligence of people who hold a given belief is entangled with the truth of that belief and can be used as Bayesian evidence. Evidence is not proof, of course, and this heuristic will not be perfectly reliable.
↑ comment by pnrjulius · 2012-07-05T04:17:50.781Z · LW(p) · GW(p)
The statistical evidence is that liberalism, especially social liberalism, is positively correlated with intelligence. This does not prove that liberalism is correct; but it does provide some mild evidence in that direction.
↑ comment by Stephenjk · 2012-12-28T03:43:29.574Z · LW(p) · GW(p)
How are values true or false? You seem to be arguing for objectivist morality.
Consider: suppose all the greatest minds in philosophy, specifically ethics, believed in consequentialism. This would provide no weight towards or against that particular ethical system. No one has value expertise. People can value one thing (security) or another (liberty). Insert whatever values as necessary.
The same is true with progressives and conservatives generally.
That fact provides no weight towards what we should value.
↑ comment by BlueAjah · 2013-01-12T18:02:57.538Z · LW(p) · GW(p)
No, he's saying that liberalism and conservatism also come with sets of beliefs about the nature of reality and sets of predictions about the consequences of their actions. Some of which are wrong (for both groups). And he's saying we should be able to guess which group has a better understanding of the world by comparing their IQs. Which I think is a valid point, except that the example he chose is one where IQ clearly creates a bias of its own, and one where black people probably miscategorise themselves.
↑ comment by BlueAjah · 2013-01-12T16:47:36.497Z · LW(p) · GW(p)
Declaration of bias: I am a liberal, I am intelligent, but I'm not a Democrat or Republican.
It's hard to measure liberalism. For example, half the black people say they are conservative and half say they are liberal. But most outsiders would say most black people are liberal (and it's common for 100% of black people in an area to vote for Obama). People judge their liberalism against people like themselves, so it's hard to compare groups.
If you count most black people as liberals, then that intelligence difference between liberals and conservatives might disappear (if it exists, I haven't checked). For example, it's a proven fact that Republicans are smarter than Democrats (because of black people with an average IQ of 85 voting Democrat), although just between white people there is no real difference.
You also need to consider that intelligence comes with biases, even though it also improves your thinking. Intelligent people are biased towards things that benefit intelligent people, eg. complexity, even if they hurt other people.
Intelligent people are biased towards letting people do whatever they want, because intelligent people like themselves will do sensible things when given the choice. They aren't used to stupid people, who do stupid things when allowed to do whatever they want. Intelligent people need freedom, while stupid people need strong inviolable guidelines about acceptable behaviour.
↑ comment by Desrtopa · 2013-01-12T16:59:22.626Z · LW(p) · GW(p)
If you count most black people as liberals, then that intelligence difference between liberals and conservatives might disappear (if it exists, I haven't checked). For example, it's a proven fact that Republicans are smarter than Democrats (because of black people with an average IQ of 85 voting Democrat)
Could you give a citation for this? I've heard other studies claiming the opposite, and I'm not inclined to accept either at face value without knowing what actually went into the studies.
↑ comment by BlueAjah · 2013-01-12T17:46:28.495Z · LW(p) · GW(p)
This article has a lot of bell-curve verbal IQ graphs from GSS (General Social Survey) data for the years 2000-2012, using the wordsum score as a measure of intelligence:
http://blogs.discovermagazine.com/gnxp/2012/04/verbal-intelligence-by-demographic/
It shows Republicans as smarter than Democrats, but Liberals smarter than Conservatives, and White people smarter than Black people, and some other comparisons.
↑ comment by A1987dM (army1987) · 2013-01-12T19:06:16.283Z · LW(p) · GW(p)
I'd expect the correlation between IQ and WORDSUM to be much weaker when controlling for educational attainment, so some of those graphs will have to be taken with a grain of salt.
Replies from: Vaniver, BlueAjah↑ comment by Vaniver · 2013-01-12T19:14:48.326Z · LW(p) · GW(p)
What would this statement predict about the WORDSUM distributions by educational level? Is that what that graph shows? (If the graph doesn't have enough data to answer that question, how else could you answer it?)
↑ comment by A1987dM (army1987) · 2013-01-12T20:35:20.312Z · LW(p) · GW(p)
So... I think the correlation between IQ and WORDSUM is mostly mediated by education (i.e., in terms of Stuff That Makes Stuff Happen, there's an arrow from IQ to education and one from education to WORDSUM -- there's also one directly from IQ to WORDSUM, but it's thinner). So I'd expect the third graph in that article to show an effect more extreme than if you used IQ instead.
↑ comment by BlueAjah · 2013-01-12T20:12:18.796Z · LW(p) · GW(p)
But educational attainment is directly caused by IQ, so that wouldn't make any sense.
↑ comment by A1987dM (army1987) · 2013-01-12T20:24:09.584Z · LW(p) · GW(p)
Not exclusively IQ -- parents' socio-economic status also matters.
↑ comment by BlueAjah · 2013-01-12T20:30:49.717Z · LW(p) · GW(p)
Parents' socio-economic status is directly caused by parents' IQ, which is passed on genetically (and a tiny bit environmentally) to their children.
↑ comment by Kawoomba · 2013-01-12T20:33:14.890Z · LW(p) · GW(p)
a tiny bit environmentally
Explain that claim, please.
↑ comment by BlueAjah · 2013-01-12T21:23:42.066Z · LW(p) · GW(p)
Environmentally in this context just means anything that's not directly genetic or inherited epigenetic. It doesn't mean plants and animals or anything like that.
IQ is mostly genetic (in rich egalitarian countries like the USA), but everyone seems to agree that there are still some environmental things that smart parents can do to make their children a tiny bit smarter. I don't know exactly what those factors are, though. Probably any kind of practice with thinking and studying would help a tiny bit, but perhaps also other things to do with better care, such as nutrition. But I know there's not a lot that parents can do that helps with IQ long-term, especially when society as a whole is already trying to do everything it can to boost IQ environmentally.
↑ comment by Desrtopa · 2013-03-29T14:25:46.899Z · LW(p) · GW(p)
IQ is significantly genetic, but there's considerably more than a little bit of variance in intelligence between people given the same DNA, and that's without bringing in the effect of raising people in widely divergent cultures.
↑ comment by A1987dM (army1987) · 2013-01-12T20:43:36.346Z · LW(p) · GW(p)
What I mean is, someone with IQ 115 from an upper-class family will be more likely to go to college than someone with IQ 115 from a lower-class family.
↑ comment by BlueAjah · 2013-01-12T21:13:04.068Z · LW(p) · GW(p)
I can't find anything right now on what effect parents' class (what does that mean? SES?) has on educational attainment for people of the same IQs. Someone else may want to look it up if they're better at googling than me.
But it doesn't matter. We already know that wordsum, IQ, and educational attainment are measuring similar things. Wordsum seems like a good proxy for IQ. It gives sensible answers in all the graphs, and it is said to correlate .71 with adult IQ.
Do you have a point, or some sort of theory about what I was saying? Do you disagree with the idea that Republicans are smarter (except at the top end) than Democrats, or that "liberals" are smarter than "conservatives"?
↑ comment by A1987dM (army1987) · 2013-01-12T23:19:19.563Z · LW(p) · GW(p)
Do you disagree with the idea that Republicans are smarter (except at the top end) than Democrats, or that "liberals" are smarter than "conservatives"?
I don't.
My point was that using a test that heavily relies on ‘learned’ knowledge such as Wordsum may have exaggerated the effect (compared to what one would see if one used a more culture-neutral test such as Raven's progressive matrices) when some of the groups have historically been educated more than others for additional reasons besides IQ (even if said reasons correlate with IQ, so long as the correlation isn't close to 1).
↑ comment by Vaniver · 2013-01-12T19:19:19.126Z · LW(p) · GW(p)
It shows Republicans as smarter than Democrats
Kind of; the great thing about those distributions is that you can talk about more of the distribution than one summary statistic. There's a clump of high IQ democrats, a clump of low IQ democrats, and then a clump of medium IQ democrats, whereas the Republicans look like one clump of medium IQ republicans. There are more Democrats from 0 to 5, more Republicans from about 6 to 8, and a tiny few more Democrats from 9 to 10.
This matches with the prediction that there is a significant group of low-vocabulary people who vote predominantly Democratic, the middles voting somewhat more Republican, and the highs about evenly split.
↑ comment by waveman · 2016-08-05T01:00:08.920Z · LW(p) · GW(p)
it does provide some mild evidence in that direction.
It would provide significantly useful evidence, if we had no other information to determine the truth of the tenets of conservatism. Given that we do, and that the 'evidence' provided by who believes liberalism vs conservatism is not strong, I suggest it is better to ignore it.
Why? Because these sorts of arguments are very dangerous: they so readily degenerate into overvaluing social proof.
↑ comment by tlhonmey · 2021-01-07T20:15:27.635Z · LW(p) · GW(p)
As an interesting phenomenon, I've noticed that when I question people in-depth about their beliefs on specific issues what they actually want is often seriously at odds with the political group to which they claim to adhere.
It's almost like political affiliations are tribal memberships and people engage in double-think to not risk those memberships even when having that membership doesn't form a coherent whole with the rest of their ideology.
To the extent which IQ actually matters, I've noticed two patterns:
Firstly, to a certain extent, those with higher IQ tend to spend more years of their life in school, and most schools have a very definite liberal or conservative culture and actively punish "wrongthink" to a certain degree. So IQ correlation with political faction may be more indicative of the ratio between schools than anything else.
Secondly, once a person's IQ gets into the 130+ range you seem to start finding a higher fraction of people who really despise the stupidity and waste of primate social politics and so prefer consistency of internal logic over maintaining good tribal standing. These people are actually interesting to talk to about politics because they're actually interested in what the facts are and in whether or not policy actually meets its goals. Even when you disagree with their conclusions, you don't have to spend all your time pointing out the same contradictions again and again.
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-06-05T19:42:26.370Z · LW(p) · GW(p)
Why would the number of stupid people who believe something anticorrelate with the number of smart people who believe it? Most stupid people and most smart people believe the sky is blue. A shift in the fraction of stupid people who do X can take place without any corresponding shift in the fraction of smart people who do X one way or another. Some smart people actively prefer not to affiliate themselves with stupid people and will try to believe something different, but they are committing the error of the OP and should not be listened to anyway.
comment by TGGP4 · 2007-12-13T00:17:50.000Z · LW(p) · GW(p)
I believe it was John Stuart Mill who said that.
Nice move using Stalin instead of Hitler, since I get tired of hearing the latter brought up. I myself have endorsed some of Stalin's ideas like "[ideology] in one country" since even if his policies were bad he was at least fairly successful in getting them implemented and lasting for a good while.
comment by burger_flipper2 · 2007-12-13T00:30:46.000Z · LW(p) · GW(p)
I'm with McCabe-- what was the epiphany?
comment by Caledonian2 · 2007-12-13T00:31:18.000Z · LW(p) · GW(p)
every (modern) atheist knows his or her atheism is a product of scientific understanding
This is wrong.
Even presuming that you're speaking very informally, and your statement shouldn't be interpreted literally, it's STILL wrong.
comment by Michael_G. · 2007-12-13T00:44:33.000Z · LW(p) · GW(p)
"The least convenient path is the only valid one."
When arguing honestly against an idea with its strongest advocates, is it always true that what is right is not what is easy? Does making the choice not to argue make someone wrong outright, or does not entering into the argument in the first place make the point of view non-existent in some way?
↑ comment by bigjeff5 · 2011-02-04T21:27:32.550Z · LW(p) · GW(p)
Does making the choice not to argue make someone wrong outright
It makes your argument wrong by default.
This is in the context of arguing against someone else's opinion. If you are entering such an argument, the only correct choice is the least convenient - that is, arguing against the strongest proponent of the idea you are arguing against.
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2007-12-13T01:13:53.000Z · LW(p) · GW(p)
Matthew: Just me after 2003, not Bostrom.
I call the experience my "Bayesian enlightenment" but that doesn't really say anything, does it? Guess you'll have to keep reading Overcoming Bias until I get there.
comment by steven · 2007-12-13T01:42:09.000Z · LW(p) · GW(p)
michael vassar: You're right when you say a correlation of intelligence with liberalism is evidence for liberalism, but that's not because the stupid people are conservative, it's because the smart people are liberal. At least I think that's what Eliezer meant.
comment by Tiiba2 · 2007-12-13T01:49:15.000Z · LW(p) · GW(p)
"""A car with a broken engine cannot drive backward at 200 mph, even if the engine is really really broken."""
"When the player's truck is put into reverse, the truck will accelerate infinitely; however, the truck will halt instantly when the reverse key is released."
comment by Tom_McCabe2 · 2007-12-13T02:17:18.000Z · LW(p) · GW(p)
"I call the experience my "Bayesian enlightenment" but that doesn't really say anything, does it?"
Note to readers: Eli discovered Bayesian probability theory (in general) much earlier than 2003, see http://www.singinst.org/upload/CFAI//design/clean.html#programmer_bayesbinding.
comment by Tom_McCabe2 · 2007-12-13T02:21:15.000Z · LW(p) · GW(p)
"You're right when you say a correlation of intelligence with liberalism is evidence for liberalism, but that's not because the stupid people are conservative, it's because the smart people are liberal."
If you assume the population is partitioned into liberals and conservatives, a high percentage of stupid conservatives implies a high percentage of smart liberals, and vice-versa. If smart liberals are Bayesian evidence for B, then smart conservatives must be Bayesian evidence against B (note that 'smart' here is relative to the average, not some absolute level of smartness).
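This symmetry is an instance of conservation of expected evidence: the prior must equal the probability-weighted average of the posteriors, so if observing E raises P(B), observing ¬E must lower it. A sketch with arbitrary illustrative numbers (the 0.5/0.8/0.4 values are made up for the example):

```python
def posteriors(prior, p_e_given_b, p_e_given_not_b):
    """Return P(B|E), P(B|not E), and their probability-weighted average."""
    p_e = prior * p_e_given_b + (1 - prior) * p_e_given_not_b
    post_e = prior * p_e_given_b / p_e
    post_not_e = prior * (1 - p_e_given_b) / (1 - p_e)
    avg = post_e * p_e + post_not_e * (1 - p_e)
    return post_e, post_not_e, avg

# E = "a randomly sampled smart person holds belief B"; numbers are assumed.
post_e, post_not_e, avg = posteriors(0.5, 0.8, 0.4)
# post_e > 0.5 > post_not_e, and avg recovers the prior: evidence balances out.
```

You cannot have both observations count in B's favor; whatever E would confirm, ¬E must disconfirm by a compensating amount.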
comment by steven · 2007-12-13T02:44:00.000Z · LW(p) · GW(p)
Can we agree on the following: if you pick a random stupid person and ask for an opinion on B, and the stupid person says B is false, this cannot be evidence against B unless you have background knowledge on the fraction of people who think B, in which case all the work is really being done by the indirect inference about the opinions of smarter people, so calling the stupid person's opinion negative evidence is misleading even if strictly speaking correct?
Replies from: franziska-fischer↑ comment by Franziska Fischer (franziska-fischer) · 2022-12-18T07:32:02.526Z · LW(p) · GW(p)
I'm not sure I'd agree with that, especially when it comes to political topics: stupid people with heavy exposure to mass media tend to perform significantly worse than random. So taking the opposite of what such a person supports seems to give at least a mildly better-than-chance shot at the truth on a true/false question.
comment by Ian_C. · 2007-12-13T03:56:41.000Z · LW(p) · GW(p)
Isn't the truth of a thing (such as a sentence or artwork) determined by how closely it matches reality? And the match-level is a function of the identity of reality and of the thing. So there is no mention of smart or dumb people anywhere in that.
comment by Burton_MacKenZie · 2007-12-13T03:56:48.000Z · LW(p) · GW(p)
Good post, and good job putting this into a common language framework. If you convince only one or two more people to think clearly, it was worth it! B
comment by michael_vassar3 · 2007-12-13T04:24:22.000Z · LW(p) · GW(p)
Steven: Yes we can, with the caveat you mentioned earlier about the human baseline. Of course, that point is plausibly precisely what Mill or whoever was pointing to with his comment.
comment by Caledonian2 · 2007-12-13T04:43:32.000Z · LW(p) · GW(p)
this cannot be evidence against B unless you have background knowledge on the fraction of people who think B,
No. The "unless" clause is still incorrect. We can know a great deal about the fraction of people who think B, and it still cannot serve even as meta-evidence for or against B.
There is an ongoing confusion here about the difference between evidence and meta-evidence. It is as obvious and important as the difference between experimental analysis and meta-analysis, and it is NOT being acknowledged.
comment by Paul_Crowley2 · 2007-12-13T08:11:46.000Z · LW(p) · GW(p)
"No. The "unless" clause is still incorrect. We can know a great deal about the fraction of people who think B, and it still cannot serve even as meta-evidence for or against B."
This can't be right. I have a hundred measuring devices. Ninety are broken and give a random answer with an unknown distribution, while ten give an answer that strongly correlates with the truth. Ninety say A and ten say B. If I examine a random meter that says B and find that it is broken, then surely that has to count as strong evidence against B.
This is probably an unnecessarily subtle point, of course; the overall thrust of the argument is of course correct.
comment by Nick_Tarleton · 2007-12-13T13:26:04.000Z · LW(p) · GW(p)
We can know a great deal about the fraction of people who think B, and it still cannot serve even as meta-evidence for or against B. There is an ongoing confusion here about the difference between evidence and meta-evidence.
No. From a Bayesian perspective, there is no difference other than strength. This is, of course, different from saying that the truth is what the authorities say it is, but I think that's what you're hearing it as.
comment by steven · 2007-12-13T14:09:15.000Z · LW(p) · GW(p)
Actually, if I'm not wrong (and it still confuses me), arguments from authority have a different conditional probability structure than "normal" arguments.
comment by kdwmson · 2007-12-13T16:23:14.000Z · LW(p) · GW(p)
"You're right when you say a correlation of intelligence with liberalism is evidence for liberalism, but that's not because the stupid people are conservative, it's because the smart people are liberal."
That seems to me exactly wrong. A proposition's truth or falseness is not entangled in the intelligence of the people who profess the proposition. Alien cultists do not change the probability of poorly hidden aliens. Dumb people who argue for evolution over creationism do not raise the probability that Genesis is natural history, no matter how dumb they are. Conservative Proposition X will be true or not true regardless of whether it is supported by a very intelligent conservative or by a very dumb conservative.
comment by Caledonian2 · 2007-12-13T18:18:39.000Z · LW(p) · GW(p)
From a Bayesian perspective, there is no difference other than strength.
That's precisely why Bayes' Theorem isn't all you need to know in order to reason. It's an immensely powerful tool, but a grossly inadequate methodology.
Again: there is a great deal of confusion about the difference between evidence and meta-evidence here.
comment by GreedyAlgorithm · 2007-12-13T19:30:47.000Z · LW(p) · GW(p)
Caledonian: please define meta-evidence, then, since I think Eliezer has adequately defined evidence. Clear up our confusion!
comment by Caledonian2 · 2007-12-13T19:47:27.000Z · LW(p) · GW(p)
Eliezer has NOT adequately defined evidence. There is no data that isn't tied to every event through the operations of causality.
comment by Nick_Tarleton · 2007-12-13T19:54:36.000Z · LW(p) · GW(p)
To say it abstractly: For an event to be evidence about a target of inquiry, it has to happen differently in a way that's entangled with the different possible states of the target. (To say it technically: There has to be Shannon mutual information between the evidential event and the target of inquiry, relative to your current state of uncertainty about both of them.) Entanglement can be contagious when processed correctly, which is why you need eyes and a brain. If photons reflect off your shoelaces and hit a rock, the rock won't change much. The rock won't reflect the shoelaces in any helpful way; it won't be detectably different depending on whether your shoelaces were tied or untied. This is why rocks are not useful witnesses in court. A photographic film will contract shoelace-entanglement from the incoming photons, so that the photo can itself act as evidence. If your eyes and brain work correctly, you will become tangled up with your own shoelaces.
And you haven't tried to define meta-evidence at all.
comment by michael_vassar3 · 2007-12-13T20:23:00.000Z · LW(p) · GW(p)
Yes Doug. Furthermore, if you can find a pair of people the difference of whose opinions seems to correlate with reality, you can use that as evidence, which is the pattern pointed to by the original quote.
comment by Caledonian2 · 2007-12-13T20:56:26.000Z · LW(p) · GW(p)
The definition Eliezer offered, and the way in which he used the term later, are not connected in any meaningful way. His definition is wrong.
And you haven't tried to define meta-evidence at all.
Do you know what a meta-analysis study is?
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2007-12-13T21:30:10.000Z · LW(p) · GW(p)
Beware of feeding trolls. If the one can offer naught but flat assertions, you may be better off saying, "Let the audience decide." If you engage and offer defense to each repeated flat assertion, you encourage them to do even less work in the future, since it offers the same attention-reward.
comment by Matthew2 · 2007-12-13T21:42:22.000Z · LW(p) · GW(p)
@yudkowsky I would be happy if I could judge the merits of Bayes versus the frequentist approach for myself. I doubt the UTD faculty have seen the light, but who knows, they might. I wonder even more whether a thorough understanding of Bayes gives any insight into epistemology. If you can answer that Bayes does offer insight into epistemology, I know for sure I will be around for many more months. If I remember correctly, we both have the same IQ (140), yet I am much worse at mathematics. Of course, my dad is an a/c technician, not a physicist.
I enjoy your hard work and insights, Eliezer. Also Caledonian's comments, mainly for their mystery.
comment by Caledonian2 · 2007-12-13T21:43:11.000Z · LW(p) · GW(p)
Likewise, if you attempt to engage people who make foolish proclamations and ambiguous definitions, it can reward them with attention and conversation. The benefits to puncturing shoddy arguments are often greater than the prices that need to be paid to do so.
Eliezer has repeatedly offered a definition for a term, gone on to mention that this definition is incomplete, and then failed to explicitly refine the definition or provide a process for the reader to update it. Despite recognizing the fallacious nature of conclusions or arguments supported with such behavior (what he has called the "hidden advisor fallacy"), he doesn't seem to have a problem with it when he's the one using the fallacy.
Precision is an absolute requirement for deriving valid conclusions, and when using natural languages extreme care has to be taken to compensate for their ambiguity. You're not taking that care - you are in fact being extraordinarily careless.
comment by Peter_de_Blanc · 2007-12-13T23:12:47.000Z · LW(p) · GW(p)
How about this for a precise definition: A is evidence about B if p(A | B) != p(A | ~B).
Of course, by this definition, almost everything is evidence about almost everything else. So we'd like to talk about the strength of evidence. A good candidate is log p(A | B) - log p(A | ~B). This is the number that gets added to your log odds for B when you observe A.
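The quantification above can be sketched in a few lines; a minimal illustration (the probabilities here are made up for the example, not taken from the thread):

```python
import math

def evidence_strength(p_a_given_b, p_a_given_not_b):
    # log P(A|B) - log P(A|~B): the increment added to the log odds of B
    # when A is observed.
    return math.log(p_a_given_b) - math.log(p_a_given_not_b)

def update(prior_b, p_a_given_b, p_a_given_not_b):
    # Bayesian update expressed as a shift in log odds.
    prior_log_odds = math.log(prior_b / (1 - prior_b))
    posterior_log_odds = prior_log_odds + evidence_strength(
        p_a_given_b, p_a_given_not_b
    )
    return 1 / (1 + math.exp(-posterior_log_odds))

# With P(A|B) = P(A|~B) the strength is zero and the prior is unchanged:
# A is not evidence about B. Any difference, however tiny, moves the odds.
```

Note that `evidence_strength` is exactly zero only in the equality case; "almost everything is evidence about almost everything else" corresponds to the strength almost never being exactly zero, while usually being negligibly small.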
comment by Caledonian2 · 2007-12-13T23:27:16.000Z · LW(p) · GW(p)
Of course, by this definition, almost everything is evidence about almost everything else.
Ding ding ding!
It may even be the case that, by that definition, everything is evidence about everything else. And clearly that doesn't match our everyday understanding and use of the term - it doesn't even match our formal understanding and use of the term.
What's missing from the definition that we need, in order to make the definition match our understanding?
comment by Caledonian2 · 2007-12-14T00:01:10.000Z · LW(p) · GW(p)
But everything is evidence about everything else. I don't see the problem at all.
Given the circumference of Jupiter around its equator, the height of the Statue of Liberty, and the price of tea in China, can you tell me what's sitting atop my computer monitor right now?
If so, what is it?
If not, why not? I gave you plenty of evidence.
Replies from: pnrjulius↑ comment by pnrjulius · 2012-07-05T04:26:27.147Z · LW(p) · GW(p)
I know with 99% probability that the item on top of your computer monitor is not Jupiter or the Statue of Liberty. And a major piece of information that leads me to that conclusion is... you guessed it, the circumference of Jupiter and the height of the Statue of Liberty. So there you go, this "irrelevant" information actually does narrow my probability estimates just a little bit.
Not a lot. But we didn't say it was good evidence, just that it was, in fact, evidence.
(Pedantic: You could have a model of Jupiter or Liberty on top of your computer, but that's not the same thing as having the actual thing.)
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2007-12-14T00:08:40.000Z · LW(p) · GW(p)
Steven, I reduxified your argument as Argument Screens Off Authority.
comment by Peter_de_Blanc · 2007-12-14T00:22:34.000Z · LW(p) · GW(p)
If not, why not? I gave you plenty of evidence.
Caledonian, you gave evidence, but you certainly didn't give plenty of it. I see you ignored the part of my post where I talked about how to quantify evidence. The important question isn't whether or not we have evidence; it's how much evidence we have.
Let me make an analogy. I can define sugar as sucrose; a specific carbohydrate whose molecular structure you can view on wikipedia. I might say that a substance is "sugary" if it contains some sugar. But by this definition, almost everything is sugary, so I hasten to point out that the important question is how sugary it is, and we might define this as the fraction of its mass which consists of sugar.
If, after I have pointed this out, you offer me some sugar cookies containing 1 molecule of sucrose, and then defend yourself by saying that according to my definition, they are indeed sugary, you are being obnoxious. I already told you how to quantify sugariness, and you ignored it for rhetorical reasons.
comment by Nominull2 · 2007-12-14T01:42:53.000Z · LW(p) · GW(p)
Evidence is like gravity. Everything is pulling on everything else, but in most cases the pull is weak enough that we can pretty much ignore it. What you have done, Caledonian, is akin to telling me the position of three one-gram weights, and then asking me to calculate the motion of Charon based on that.
comment by Caledonian2 · 2007-12-14T01:56:20.000Z · LW(p) · GW(p)
If, after I have pointed this out, you offer me some sugar cookies containing 1 molecule of sucrose, and then defend yourself by saying that according to my definition, they are indeed sugary, you are being obnoxious. I already told you how to quantify sugariness, and you ignored it for rhetorical reasons.
No, I'm not being obnoxious. I'm pointing out that your definition is bad by showing that it leads directly to common and absurd conclusions.
By Eliezer's definition, even the thing he offers as an example of a thing that isn't evidence IS STILL EVIDENCE. And instead of you recognizing that this means something is deeply wrong with the definition, you try to exploit the ambiguity of language to defend the utterly absurd result.
comment by Caledonian2 · 2007-12-14T01:58:28.000Z · LW(p) · GW(p)
Everything is pulling on everything else, but in most cases the pull is weak enough that we can pretty much ignore it. What you have done, Caledonian, is akin to telling me the position of three one-gram weights, and then asking me to calculate the motion of Charon based on that.
So close... and yet, so far.
I agree with you that, even if I gave you absolute, complete, and utterly precise data on the three weights, there is no way you could derive the motion of Charon from that.
So: are the three weights evidence of Charon's movement?
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2007-12-14T02:09:03.000Z · LW(p) · GW(p)
For any that may be genuinely confused: If you read What is Evidence?, An Intuitive Explanation of Bayesian Reasoning, and A Technical Explanation of Technical Explanation, you will understand how to define evidence both qualitatively and quantitatively.
For the rest of you: Stop feeding the troll.
comment by Tom3 · 2007-12-14T05:17:13.000Z · LW(p) · GW(p)
Caledonian is just trying to point out that the keys to rationalism are family values and a literal interpretation of the Bible. I don't know why you all can't see something so obvious.
Observe:
"It may even be the case that, by that definition, everything is evidence about everything else. And clearly that doesn't match our everyday understanding and use of the term - it doesn't even match our formal understanding and use of the term.
What's missing from the definition that we need, in order to make the definition match our understanding?"
Jesus.
"Given the circumference of Jupiter around its equator, the height of the Statue of Liberty, and the price of tea in China, can you tell me what's sitting atop my computer monitor right now?"
Jesus.
"Do you know what a meta-analysis study is?"
Jesus.
The Bible has the answers, people. This is just further proof that until the 'rationalist' community incorporates insights from the Intelligent Design movement and other members of the irrational community, no further progress can be made in understanding the movement of Charon or whatever. Keep the faith Caledonian. You're a warrior of God.
comment by Paul_Crowley · 2007-12-14T12:35:49.000Z · LW(p) · GW(p)
If this is the same Caledonian who used to post to the Pharyngula blog, he's barred from there now with good reason.
Is there a cognitive bias at work that makes it hard for people not to feed trolls?
comment by Ben_Jones · 2007-12-14T14:51:26.000Z · LW(p) · GW(p)
Is there a mathematical expression in probability for the notion that unless someone is making a special effort (concerted or otherwise) they can't be any 'wronger' than 50% accuracy? Subsequently betting the other way would be generating evidence from nothing - creating information. Why no mention of thermodynamics in this post & thread?
Not to feed the troll or anything, but yes, the masses and positions of the three weights are evidence about Charon's movement. Why? Because if you calculated Charon's orbit without knowing their masses, positions etc, you'd be less accurate than if you did. Fact! (Note evidence ABOUT. Evidence OF Charon's movement is taken care of with a decent telescope!)
Eliezer, in your opinion, do the historical prevalence of organised religion, and the human tendency to faith in the unknowable/unprovable, have any bearing at all on the likelihood of the existence of a supreme being of some description?
Replies from: Tedd↑ comment by Tedd · 2013-10-09T03:47:27.454Z · LW(p) · GW(p)
Is there a mathematical expression in probability for the notion that unless someone is making a special effort (concerted or otherwise) they can't be any 'wronger' than 50% accuracy?
That's exactly what I was wondering. A perfect score presumably means either an amazing coincidence or perfect intelligence within the context of the decisions made. (Or is it just perfect information?) And a perfectly incorrect score would then mean the same thing. And a score that exactly matches randomness would seem to involve no intelligence or information at all, although it, too, could presumably also result from perfect information, if that was the objective.
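The intuition in the two comments above can be checked with a quick simulation (parameters are illustrative, not from the thread): an uninformed guesser sits near 50%, and reversing it creates no information, whereas an agent that is wrong 99.99% of the time must carry near-perfect information about the truth, which reversal recovers.

```python
import random

random.seed(0)
N = 100_000
truths = [random.random() < 0.5 for _ in range(N)]

# Uninformed guesser: answers are independent of the truth.
guesses = [random.random() < 0.5 for _ in range(N)]
guess_acc = sum(g == t for g, t in zip(guesses, truths)) / N

# "Anti-oracle": wrong 99.99% of the time. To be wrong that reliably,
# its answers must be almost perfectly anticorrelated with the truth.
anti = [t if random.random() < 0.0001 else (not t) for t in truths]
anti_acc = sum(a == t for a, t in zip(anti, truths)) / N
reversed_acc = 1 - anti_acc

# guess_acc and 1 - guess_acc both hover near 0.5: reversing an
# uninformed guesser gains nothing. reversed_acc is near 0.9999.
```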
comment by Caledonian2 · 2007-12-14T16:15:05.000Z · LW(p) · GW(p)
Not to feed the troll or anything, but yes, the masses and positions of the three weights are evidence about Charon's movement. Why? Because if you calculated Charon's orbit without knowing their masses, positions etc, you'd be less accurate than if you did.
Calculating Charon's orbit without knowing what direction Charon moves in, or even whether it moves at all, is an impossible task. You are substituting "Charon's orbit" for "Charon's movement" in your argument, then acting as though you have made a statement about Charon's movement.
If you all do not grasp why precision in the use of words is absolutely necessary to eliminate bias in natural language, I will drop the point; nevertheless, it remains.
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2007-12-14T19:17:19.000Z · LW(p) · GW(p)
Ben Jones, I don't see the human existence of religion as having any evidential bearing on the existence of a Super Happy Agent sufficiently like a person and unlike evolution that theists would actually notice its existence. Pretty much the same probability as an object one foot across and composed of chocolate cake existing in the asteroid belt. For interventionist Super Happy Agents, same probability as elves stealing your socks.
Incidentally, with sufficiently precise measurements it's perfectly possible to get a gravitational map of the entire Solar System off a random household object.
comment by Nick_Tarleton · 2007-12-14T19:30:00.000Z · LW(p) · GW(p)
Ben Jones, I don't see the human existence of religion as having any evidential bearing on the existence of a Super Happy Agent sufficiently like a person and unlike evolution that theists would actually notice its existence.
Any evidential bearing? Surely P(religion X exists|religion X is true) is higher than P(religion X exists|religion X is false).
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2007-12-14T19:45:00.000Z · LW(p) · GW(p)
Nick, I don't see how that follows for the supermajority of religions that are logically self-contradictory, except in the sense that if 1=2 then the probability of the Sun rising tomorrow is nearly 200%. Furthermore, Ben Jones asked about religion in general rather than any specific religion, and religion in general most certainly cannot be true.
comment by Unknown3 · 2007-12-14T19:58:00.000Z · LW(p) · GW(p)
In general, any claim maintained by even a single human being to be true will be more probable, simply based on the authority of that human being, than some random claim such as the chocolate cake claim, which is not believed by anyone.
There are possibly some exceptions to this (and possibly not), but in general there is no particular reason to include religions as exceptions.
comment by Caledonian2 · 2007-12-14T19:58:00.000Z · LW(p) · GW(p)
Incidentally, with sufficiently precise measurements it's perfectly possible to get a gravitational map of the entire Solar System off a random household object.
Also incorrect. More than one configuration of masses can have exactly the same effect on the object. No matter how precisely you measure the properties of the object, you can never distinguish between those configurations.
comment by Ben_Jones · 2007-12-14T21:36:00.000Z · LW(p) · GW(p)
"If God did not exist, it would be necessary to invent him."
Nick: Why should an atheistic rationalist have any more faith in a religion that exists than in a religion that doesn't? I don't believe in God; the testimony of a man who claims he spoke to God in a burning bush doesn't sway me to update my probability. I Defy The Data!
My 'lack of faith' stems from a probability-based judgment that there is no Super Agent. With this as my starting point, I have as much reason to worship Yoda as I do God.
comment by Peter_de_Blanc · 2007-12-14T21:45:00.000Z · LW(p) · GW(p)
Ben Jones, I don't see the human existence of religion as having any evidential bearing on the existence of a Super Happy Agent sufficiently like a person and unlike evolution that theists would actually notice its existence. Pretty much the same probability as an object one foot across and composed of chocolate cake existing in the asteroid belt. For interventionist Super Happy Agents, same probability as elves stealing your socks.
Eli, you're just saying that you don't believe in the existence of a SHASLAPAUETTWANIE. But since you labeled it with: "...that theists would actually notice its existence," then clearly the existence of religion has some evidential bearing on the existence of a SHASLAPAUETTWANIE.
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2007-12-14T22:25:00.000Z · LW(p) · GW(p)
Eli, you're just saying that you don't believe in the existence of a SHASLAPAUETTWANIE. But since you labeled it with: "...that theists would actually notice its existence," then clearly the existence of religion has some evidential bearing on the existence of a SHASLAPAUETTWANIE.
(Blink.)
Um, I concede to your crushing logic, I guess... what exactly am I conceding again?
comment by Steve_Sailer · 2007-12-18T02:30:00.000Z · LW(p) · GW(p)
Flying saucer cultism was helped along by secret Cold War technological advances that were accidentally witnessed by civilians.
For example, the famous 1947 Roswell incident was the crashing of an American strategic reconnaissance super-balloon that was supposed to float over the Soviet Union and snap pictures, which would then be recovered many thousands of miles away. That's why it was made out of the latest high-tech materials that were unfamiliar to people in small town New Mexico in 1947.
The KGB used to generate flying saucer stories in Latin America to discredit actual sightings of the re-entry of a Soviet "partial-orbit" missile that was being tested in order to allow a surprise attack on the U.S. from the South (the NORAD radar assumed a Soviet attack would come over the Arctic). KGB agents in Latin America would phone in flying saucer reports to newspapers to make honest witnesses of the Soviet missile test look like lunatics.
comment by Unknown3 · 2007-12-18T08:13:00.000Z · LW(p) · GW(p)
Steve, maybe this was your point anyway, but the incidents you mention indicate that the existence of flying saucer cults is evidence for the existence of aliens (namely by showing that the cults were based on seeing something in the real world.) No doubt they aren't much evidence, especially given the prior improbability, but they are certainly evidence.
comment by Brant_Boucher · 2008-01-09T22:20:00.000Z · LW(p) · GW(p)
"Not all Conservatives are stupid, but most stupid people are Conservatives." (The British Conservative Party was the butt of this quip by J.S. Mill.) It helps to Venn-diagram this. I find that many stupid conservatives assume that conservatives are the majority, which leaves few stupid people to be liberals or anything else (although a majority of liberals are assumed by stupid conservatives to be stupid people). But if conservatives are not a majority, there are many stupid people who MIGHT or MIGHT NOT be liberals. I assume there are plenty of stupid people to go around among the conservatives, liberals, and other groups. If conservatives ARE the majority and most stupid people are conservatives, but liberals are a very sizeable minority, you would expect a lot of smart people who are liberals. To quote Karl Rove: "As people do better, they start voting like Republicans - unless they have too much education and vote Democratic, which proves there can be too much of a good thing." Presumably the Republicans have a lock on the rich and stupid vote, or at least the rich and uneducated.
comment by Fal · 2011-03-23T00:42:22.819Z · LW(p) · GW(p)
*pokes head in and looks around* Okay, I'm new here, and maybe I shouldn't open by poking a sleeping dragon, but I can't help but try and take a small crack at this. *firm nod*
As I understand it, the crux of the article is concern about irrational arguments which imply that valid points and rational arguments should be discarded if they are somehow associated with irrational, unsuccessful, or commonly disliked people. Many of the comments on the conservatives quote seem to ignore that context.
Also, debating the semantics of the article rather than its core meaning (Whether or not the word "evidence" should be used to describe such, for example) is counterproductive, distracting and not really assisting people in learning rational thought.
That said, I feel that even out of context the point is still valid. One major and somewhat flawed assumption in many of the comments here as well as consistently elsewhere is that smart people are more likely to arrive at the "correct" answer than stupid people.
Stupid people often arrive at the correct conclusion, either by luck, or because a question isn't actually very difficult, or because they may listen to the advice of others.
Smart people often arrive at the incorrect conclusion, as they do not (and can not) always gather enough evidence to make a sufficiently informed decision, and sometimes take false advice or evidence under consideration.
Both smart and stupid people can and will often lie about their findings for many reasons, or they may disagree over what defines the correct conclusion.
Such associative statements are therefore so weakly weighted as evidence that they are only really worth mentioning if no other evidence can be found, and an argument which relies solely on such weak data as this is not to be trusted as truth.
At best, it may serve as a cautious guess until more reliable evidence can be found, which would quickly render the opinions of unreliable people (be they stupid or smart) obsolete.
In the Charon's orbit example, this would be less like using small weights in a room (Which is very fine, very precise evidence), and more like trying to track Charon's movement using blurry pictures taken at random across the entire night sky. If some of them had inaccurate timestamps.
Replies from: Scaevite↑ comment by Scaevite · 2011-08-05T09:19:53.645Z · LW(p) · GW(p)
Even though stupid people sometimes get things right, and smart people sometimes get things wrong, that doesn't say anything about how often they do so (comparatively). You can't use those rare cases to negate the 'assumption' that intelligence aids correct judgements. It just means that intelligence is not a 100% guarantee of correctness - but we knew that anyway. As it stands, the usefulness of different aspects of intelligence - reasoning, analytical ability and so on - in assessing probabilities and making judgements is fairly obvious.
Also, even if the personal beliefs of one individual don't serve as very strong evidence, a large-scale trend towards more intelligent people favouring one side of the argument should be taken into account. It's not so much evidence in itself as meta-evidence that a) other people who may know things you don't, tend to favour one option; and b) other people with the same knowledge as you, but better processing capabilities, tend to favour that option. With more complex issues which you may not have much personal experience of, this could be a rather substantial factor in your probability assessment.
I should also point out that it's intelligence, not stupidity, that is important. Intelligent supporters of a view can be taken as reasonably strong evidence, as seen above. Stupid people have less intelligence, therefore their view should be weaker evidence - but even a stupid person supporting something INCREASES the probability that that view is correct, albeit by such a small amount that it can almost be ignored in favour of assessing what smart people think.
Of course, then there's the worldview difference to consider, and the fact that even if they can make a better decision than you, their "better" option may not lead to a more desirable world from your perspective.
comment by buybuydandavis · 2011-09-26T09:37:51.448Z · LW(p) · GW(p)
In Hansonian terms: Your instinctive willingness to believe something will change along with your willingness to affiliate with people who are known for believing it - quite apart from whether the belief is actually true.
And that's why politics is more about identity than predictive truth.
comment by lessdazed · 2012-05-21T06:30:05.167Z · LW(p) · GW(p)
I still believe in Global Warming. Do you?
-Ted Kaczynski, The Unabomber
-Heartland Institute billboard
From the press release:
1. Who appears on the billboards?
The billboard series features Ted Kaczynski, the infamous Unabomber; Charles Manson, a mass murderer; and Fidel Castro, a tyrant. Other global warming alarmists who may appear on future billboards include Osama bin Laden and James J. Lee (who took hostages inside the headquarters of the Discovery Channel in 2010).
These rogues and villains were chosen because they made public statements about how man-made global warming is a crisis and how mankind must take immediate and drastic actions to stop it.
2. Why did Heartland choose to feature these people on its billboards?
Because what these murderers and madmen have said differs very little from what spokespersons for the United Nations, journalists for the “mainstream” media, and liberal politicians say about global warming. They are so similar, in fact, that a Web site has a quiz that asks if you can tell the difference between what Ted Kaczynski, the Unabomber, wrote in his “Manifesto” and what Al Gore wrote in his book, Earth in the Balance.
The point is that believing in global warming is not “mainstream,” smart, or sophisticated. In fact, it is just the opposite of those things. Still believing in man-made global warming – after all the scientific discoveries and revelations that point against this theory – is more than a little nutty. In fact, some really crazy people use it to justify immoral and frightening behavior.
Interestingly, science is the first thing mentioned in the next section:
3. Why shouldn't I still believe in global warming?
Because the best available science says about two-thirds of the warming in the 1990s was due to natural causes, not human activities; the warming trend of the second half of the twentieth century already has stopped and forecasts of future warming are unreliable; and the benefits of a moderate warming are likely to outweigh the costs. Global warming, in other words, is not a crisis.
Replies from: stripey7
comment by BlueAjah · 2013-01-12T16:23:29.763Z · LW(p) · GW(p)
Again, I disagree. Cults can't form around just anything. They can only form around issues that would make their members social or intellectual outcasts. And in a world in which there were poorly hidden aliens, too many intelligent people would be of the opinion that there are poorly hidden aliens, and no such cult could arise.
But the more important point is... IF I start to think that there are poorly hidden aliens, that could be due to one of two reasons: either because I have reasonable evidence for their existence, or because I'm being influenced by some sort of bias.
The existence of cults around the issue shows that those biases exist and are reasonably common, and thus are a more likely reason for my belief than the alternative of actual aliens.
comment by A1987dM (army1987) · 2013-08-24T06:58:51.171Z · LW(p) · GW(p)
It's not much evidence one way or the other.
I'd guess P(cults|aliens) would be noticeably bigger than P(cults|~aliens).
It's just that the prior P(aliens) is so tiny that even the posterior P(aliens|cults) is negligible.
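army1987's point can be checked with toy numbers: even if cults are somewhat likelier given aliens, a sufficiently tiny prior keeps the posterior negligible. A minimal sketch of Bayes' rule, with every probability invented purely for illustration:

```python
# Bayes' rule: P(aliens | cults) = P(cults | aliens) * P(aliens) / P(cults)
p_aliens = 1e-9                # invented tiny prior on poorly hidden aliens
p_cults_given_aliens = 0.99    # cults very likely either way...
p_cults_given_no_aliens = 0.9  # ...only slightly less likely without aliens

# Total probability of observing cults, marginalizing over aliens
p_cults = (p_cults_given_aliens * p_aliens
           + p_cults_given_no_aliens * (1 - p_aliens))

posterior = p_cults_given_aliens * p_aliens / p_cults
print(posterior)  # still on the order of 1e-9
```

The likelihood ratio nudges the posterior above the prior, but nowhere near enough to matter: the cults are weak evidence drowned out by the prior.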
comment by Colombi · 2014-02-20T05:23:59.032Z · LW(p) · GW(p)
That's one way to put it, I guess.
comment by hyporational · 2014-02-20T12:35:05.043Z · LW(p) · GW(p)
Please try to keep your comments informative. Note that your comments show up in recent comments.
comment by trickster · 2017-09-27T09:33:53.219Z · LW(p) · GW(p)
I think this has to do with bounded rationality. Perfect knowledge would require an endless amount of time, and every human has only a limited lifetime, so the amount of time for each decision is limited even more. Therefore we cannot explore every argument. And I think it is a good strategy to throw away some arguments right at the beginning and not waste time on them; instead you can pay more attention to the more plausible ones. This gives you the opportunity to build a relatively accurate model of the world in a relatively short time. If you disagree, consider this argument: how can you argue against communism if you haven't read all the works of Marx/Engels/Lenin/Mao/Trotsky/Rosa Luxemburg/Bukharin/Zinoviev/Stalin/Kautsky/Saint-Simon... and so on; the list can be endless. I think such arguments are actually used as a demand that you slavishly agree with the person making them. Clearly this is unacceptable, and we have the right to disagree even if we haven't read all those volumes, using only the limited amount of data we already possess. So if we throw away some idea after a first glance, the stupidity of its followers is not the worst criterion for doing so.
comment by LESS · 2020-05-12T01:04:31.532Z · LW(p) · GW(p)
"The conditional probability P(cults|aliens) isn’t less than P(cults|aliens) "
Shouldn't this be "The conditional probability P(cults|~aliens) isn’t less than P(cults|aliens)"? It seems trivial that a probability is not less than itself, and the preceding text seems to propose the modified version included in this comment.
comment by Jake_NB · 2021-05-21T16:53:40.893Z · LW(p) · GW(p)
I have reservations about the corollary that only winning against the strongest advocate of an idea carries ANY weight toward disproving it.
For one, there could always be a better arguer. If there is a better advocate of the intelligence explosion than Eliezer, unlikely as that may seem, who just won't go public and keeps to private circles, would winning against Eliezer then mean nothing? Taken a step further, if it is likely there ever will be such a proponent, does that invalidate all present and past efforts?
For another, the quality of an arguer can only be judged after the fact. So to have any standing on any idea, one would have to win against every single advocate of the opposing view. Has anyone here tried that on, say, theism?
I think it's more accurate to say that winning an argument against sub-optimal advocates of an idea doesn't give enough basis to discredit the idea reliably. Indeed, since in complicated issues there is often no advocate who can exhibit all arguments favoring a position, one cannot completely discredit the idea even after defeating the champion of advocates. This frame seems more Bayesian Rationalistic, too, as it does not deal with probabilities of 0 or 1.
comment by Franziska Fischer (franziska-fischer) · 2022-12-18T07:25:39.944Z · LW(p) · GW(p)
One point Hans Rosling tried to convey constantly in Factfulness is that most people perform significantly worse than random (worse than "the chimps," as he portrayed it). He argued that this results from biases and the like. If this generalises beyond people's perception of how the world is doing, and increases in strength for less intelligent people or for people with more exposure to mass media, then silliness and alien intervention might not be orthogonal but significantly correlated. Similarly, "most stupid people vote Republican" would then be at least mild evidence that the party actually is more "wrong" (although I think this is weaker in a two-party system like the US, and stronger in European multiparty systems): if a party's actions correlate with the preferences of stupid people, and those people perform worse than random, that is some evidence that the party is wrong, and thus at least mildly relevant to a political debate.
comment by Plasma Ballin' (joseph-noonan) · 2023-01-28T17:51:36.170Z · LW(p) · GW(p)
I actually think it is possible for someone's beliefs to anti-correlate with reality without their being smart enough to know what really is true just to reverse it. I can think of at least three ways this could happen, beyond extremely unlikely coincidences. The first is that a person could be systematically deceived by someone else until they have more false beliefs than true ones; the second is that systematic cognitive biases could reliably distort their beliefs. The third is the most interesting one, though: if someone has a belief that many of their other beliefs depend on, and that belief is wrong, it could lead to all of those other beliefs being wrong as well. There are plenty of people who base a large portion of their beliefs on a single belief or cluster of beliefs, the most obvious example being the devoutly religious, especially if they belong to a cult or fundamentalist group. Basically, since beliefs are not independent, people can have large sets of connected beliefs that stand or fall together. Of course, this still wouldn't affect the probability that any of their beliefs not connected to those clusters are true, so it doesn't really change the conclusion of this essay by much, but I think it is interesting nonetheless. At the very least, it is a warning against having too many beliefs that all depend on a single idea.
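The third mechanism, beliefs that stand or fall with a single wrong root belief, can be illustrated with a toy simulation. The setup below is entirely invented: a believer whose derived beliefs simply inherit the truth value of one root belief that is usually wrong, compared against unrelated beliefs that are coin flips:

```python
import random

random.seed(0)

TRIALS = 100_000
q_root_wrong = 0.8  # invented: the root belief is wrong 80% of the time

dependent_correct = 0
independent_correct = 0
for _ in range(TRIALS):
    root_true = random.random() > q_root_wrong
    # A belief derived from the root is right exactly when the root is,
    # so a bad root drags it below chance with no intelligence required.
    dependent_correct += root_true
    # An unrelated belief is a coin flip, untouched by the bad root.
    independent_correct += random.random() < 0.5

print(dependent_correct / TRIALS)    # ~0.2: reliably worse than chance
print(independent_correct / TRIALS)  # ~0.5: unaffected
```

No superintelligence is needed to anticorrelate with reality here; the anticorrelation is confined to the cluster hanging off the bad root, which matches the comment's caveat that unconnected beliefs are unaffected.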
comment by Jan Galkowski (jan-galkowski) · 2023-05-07T20:59:44.432Z · LW(p) · GW(p)
At least for me, and therefore possibly for others, it would be useful to put a hyperlink on an Important Name leading to a review of their primary argument, for instance "Hanson". Wonderful writeup.
Thanks.
comment by VeryPeeved · 2024-05-29T03:12:51.055Z · LW(p) · GW(p)
"If a hundred inventors fail to build flying machines using metal and wood and canvas, it doesn’t imply that what you really need is a flying machine of bone and flesh."
Although we do, because that would be awesome.