The Criminal Stupidity of Intelligent People
post by fare · 2012-07-27T04:08:38.508Z · LW · GW · Legacy · 52 comments
What always fascinates me when I meet a group of very intelligent people is the very elaborate bullshit that they believe in. The naive theory of intelligence I first posited as a kid was that intelligence is a tool to avoid false beliefs and find the truth. Surrounded by mediocre minds who held obviously absurd beliefs, not only without the ability to coherently argue why they held them, but without the ability to even understand basic arguments about them, I believed as a child that the vast amount of superstition and false belief in the world was due to people both being stupid and following the authority of insufficiently intelligent teachers and leaders. More intelligent people, and people following more intelligent authorities, would thus automatically hold better beliefs and avoid disproven superstitions. As an adult, however, I got the opportunity to actually meet and mingle with a whole lot of intelligent people, including many whom I readily admit are vastly more intelligent than I am. And then I had to find that my naive theory of intelligence didn't hold water: intelligent people were just as prone as less intelligent people to believing in obviously absurd superstitions. Only their superstitions would be much more complex, elaborate, rich, and far-reaching than an inferior mind's superstitions.
For instance, I remember a ride with an extremely intelligent and interesting man (RIP Bob Desmarets); he was describing his current pursuit, which struck me as a brilliant mathematical mind's version of mysticism: the difference was that instead of marveling at some trivial picture of an incarnate god like some lesser minds might have done, he was seeking some Ultimate Answer to the Universe in the branching structures of ever more complex algebras of numbers, real numbers, complex numbers, quaternions, octonions, and beyond, in ever higher dimensions (notably in relation to super-string theories). I have no doubt that there is something deep, and probably enlightening and even useful, in such theories, and I readily disqualify myself from judging the contributions that my friend made to the topic from a technical point of view; no doubt they were brilliant in one way or another. Yet the way he was talking about this topic immediately triggered the "crackpot" flag: he was looking there for much more than could possibly be found, and anyone (like me) capable of acknowledging being too stupid to fathom the Full Glory of these number structures, yet able to find some meaning in life, could have told him that no, this topic doesn't hold the key to The Ultimate Source of All Meaning in Life. Bob's intellectual quest, as exaggeratedly exalted as it might have been, and as interesting as it was to his own exceptional mind, was on the grand scale of things but a modestly useful research avenue at best, and an inoffensive pastime at worst. Perhaps Bob could conceivably have used his vast intellect towards pursuits more useful to you and me; but we didn't own his mind, and we have no claim to lay on the wonders he could have created but failed to by putting his mind into one quest rather than another. First, Do No Harm. Bob didn't harm anyone, and his ideas certainly contained no hint of any harm to be done to anyone.
Unhappily, that is not the case with every intelligent man's fantasies. Let's consider a discussion I had recently, which prompted this article. Last week, I joined a dinner discussion with a LessWrong meetup group: radical believers in rationality and its power to improve life in general and one's own life in particular. As you can imagine, the attendance was largely, though not exclusively, composed of male computer geeks. But then again, any club that accepts me as a member will probably be biased that way: birds of a feather flock together. No doubt there are plenty of meetup groups with the opposite bias, gathering desperately non-geeky females to the near exclusion of males. Anyway, the theme of the dinner was "optimal philanthropy", or how to give time and money to charities in a way that maximizes the positive impact of your giving. So far, so good.
But then, I found myself in a most disturbing private side conversation with the organizer, Jeff Kaufman (a colleague, I later found out), someone I strongly suspect of being in many ways saner and more intelligent than I am. While discussing utilitarian ways of evaluating charitable action, he at some point mentioned a quite intelligent acquaintance of his who believed that morality was about minimizing the suffering of living beings; from there, that acquaintance logically concluded that wiping out all life on earth with sufficient nuclear bombs (or with grey goo) in a surprise simultaneous attack would be the best possible way to optimize the world, though one would have to make triple sure to involve enough destructive power that not one single strand of life should survive, or else the suffering would go on and the destruction would have been just gratuitous suffering. We all seemed to agree that this was an absurd and criminal idea, and that we should be glad the guy, brilliant as he may be, doesn't remotely have the ability to implement his crazy scheme; we shuddered, though, at the idea of a future super-human AI having this ability and being convinced of such theories.
That was not the disturbing part, though. What tipped me off was when Jeff, taking the "opposite" stance of "happiness maximization" to the discussed acquaintance's "suffering minimization", seriously defended the concept of wireheading as a way that happiness might be maximized in the future: putting humans into vats where the pleasure centers of their brains would be constantly stimulated, possibly using force. Or perhaps, instead of humans, using rats, or ants, or some brain cell cultures, or perhaps nano-electronic simulations of such electro-chemical stimulations; in the latter cases, biological humans, being less efficient forms of happiness substrate, would be done away with or at least not renewed as embodiments of the Holy Happiness to be maximized. He even wrote at least two blog posts on this theme: "Hedonic vs Preference Utilitarianism in the Context of Wireheading" and "Value of a Computational Process". In the former, he admits to some doubts, but concludes that "the ways a value system grounded on happiness differ from my intuitions are problems with my intuitions".
I expect that most people would, and rightfully so, find Jeff's ideas as well as his acquaintance's to be ridiculous and absurd on their face; they would judge any attempt to use force to implement them as criminal, and they would consider their fantasized implementation to be the worst of possible mass murders. Of course, I also expect that most people would be incapable of arguing their case rationally against Jeff, who is much more intelligent, educated and knowledgeable in these issues than they are. And yet, though most of them would have to admit their lack of understanding and their absence of a rational response to his arguments, they'd be completely right in rejecting his conclusion and in refusing to hear his arguments, for he is indeed the sorely mistaken one, despite his vast intellectual advantages.
I willfully defer any detailed rational refutation of Jeff's idea to some future article (can you, without reading mine, write a valuable one?). In this post, I rather want to address the meta-point of how to address the seemingly crazy ideas of our intellectual superiors. First, I will invoke the "conservative" principle (as I'll call it), well defended by Hayek (who is not a conservative): we must often reject the well-argued ideas of intelligent people, sometimes more intelligent than we are, sometimes without giving them a detailed hearing, and instead stand by our intuitions, traditions and secular rules, which are the stable fruit of millennia of evolution. We should not lightly reject those rules, certainly not without a clear, testable understanding of why they were valid where they are known to have worked, and why they would cease to be valid in another context. Second, we should not hesitate to argue by proxy in an eristic contest: if we are to bow to the superior intellect of our betters, it should not be without having pitted said presumed superior intellects against each other in a fair debate, to find out whether there is indeed a better whose superior arguments can convince the others or reveal their error. Last but not least, beyond mere conservatism or debate, mine is the Libertarian point: there is a Universal Law, which everyone must respect, whereby peace between humans is possible inasmuch and only inasmuch as they don't initiate violence against other persons and their property. And as I have argued in a previous essay (hardscrapple), this generalizes to maintaining peace between sentient beings of all levels of intelligence, including any future AI that Jeff may be prone to consider. Whatever one's prevailing or dissenting opinions, the initiation of force is never to be allowed as a means to further any ends. Rather than doubt his intuition, Jeff should have been tipped off that his theory was wrong, or at least stretched beyond its context of validity, by the very fact that it advocates or condones massive violations of this Universal Law. Criminal urges, mass-criminal at that, give off a strong stench that should alert anyone that some ideas have gone astray, even when it might not be immediately obvious where exactly they parted from the path of sanity.
Now, you might ask: it is all well and good to poke fun at the crazy ideas that some otherwise intelligent people may hold; it may even allow one to wallow in a somewhat justified sense of intellectual superiority over people who actually and objectively are one's intellectual superiors. But is there a deeper point? Is it relevant what crazy ideas intellectuals hold, whether inoffensive or criminal? Sadly, it is. As John McCarthy put it, "Soccer riots kill at most tens. Intellectuals' ideological riots sometimes kill millions." Jeff's particular crazy idea may be mostly harmless: the criminal raptures of the overintelligent nerd, which are so elaborate as to be unfathomable to 99.9% of the population, are unlikely to ever spread to enough of the power elite to be implemented. That is, unless by some exceptional circumstance there is a short and brutal transition to power by some overfriendly AI programmed to follow such an idea. On the other hand, the criminal raptures of a majority of the more mediocre intellectual elite, when they further come in simple variants that can intoxicate the ignorant and stupid masses, are not just theoretically able to lead to mass murder, but have historically been the source of all large-scale mass murders so far; and these mass murders can be counted in hundreds of millions, over the twentieth century alone, just for Socialism. Nationalism, Islamism and Social-democracy (the attenuated strand of socialism that now reigns in Western "Democracies") count their victims in millions only. And every time, the most well-meaning of intellectuals built and spread the ideologies behind these mass murders. A little initial conceptual mistake, properly amplified, can do that.
And so I am reminded of the meetings of some communist cells that I attended out of curiosity when I was in high school. Indeed, Trotskyites very openly recruit in "good" French high schools. It was amazing the kind of nonsensical crap that these obviously above-average adolescents could repeat. "The morale of the workers is low." Whoa. Or "the petite-bourgeoisie" is plotting this or that. Apparently, crudely drawn social classes spanning millions of individuals act as one man, either afflicted with depression or making Machiavellian plans. Not that any of them knew much about either salaried workers or entrepreneurs except through one-sided socialist literature. If you think that the nonsense of the intellectual elite is inoffensive, consider what happens when some of them actually act on those nonsensical beliefs: you get terrorists who kill tens of people; when they lead ignorant masses, they end up killing millions of people in extermination camps or plain massacres. And when they take control of entire universities, and train generations of scholars, who teach generations of bureaucrats, politicians and journalists, then you suddenly find that all politicians agree on slowly implementing the same totalitarian agenda, one way or another.
If you think that control of universities by left-wing ideologists is just a French thing, consider how, for instance, America just elected a president whose mentor and ghostwriter was the chief of a terrorist group made up of Ivy League-educated intellectuals, whose overriding concern about the country they claimed to rule was how to slaughter ten percent of its population in concentration camps. And then consider that the policies of this president's "right-wing" opponent are indistinguishable from the policies of said president. The violent revolution has given way to the slow replacement of the elite, towards the same totalitarian ideals, coming to you slowly but relentlessly rather than through a single mass criminal event. Welcome to a world where the crazy ideas of intelligent people are imposed by force, cunning and superior organization upon a mass of less intelligent yet less crazy people.
Ideas have consequences. That's why everyone Needs Philosophy.
Crossposted from my livejournal: http://fare.livejournal.com/168376.html
52 comments
Comments sorted by top scores.
comment by Alejandro1 · 2012-07-27T06:03:04.215Z · LW(p) · GW(p)
There is a deep tension, indeed almost a contradiction, between two aspects of your essay. On one hand, you argue against the conclusions of Bob (from a beautiful, simple mathematical structure), of Jeff's acquaintance (from the simple, appealing ethical principle "minimize suffering") and of Jeff (from the simple, appealing ethical principle "maximize happiness"). You make the point that a Hayekian conservative spirit, which keeps in mind traditions, common sense and long-evolved intuitions and weighs them above logical principles appealing to intelligent people, should be used as a warning light to reject those kinds of philosophies. This is similar to what is sometimes loosely called the "outside view" here, and I basically agree with it, though it must be used carefully and on a case-by-case basis.
But just after that, you state a Libertarian principle, a Universal Law of non-aggression against persons and their property, and go so far as to assert that it applies to any kind of sentient being, including aliens and AIs. Now, I don't want to be dragged into a discussion about libertarianism, which would be against the "no-politics" rules of Less Wrong(1). But I hope you realize that this "Universal Law" is a simple abstract principle of the kind that appeals to intelligent people, and as such not so different from "maximize happiness" or "minimize suffering". The actual complex web of traditions, evolved intuitions and "common sense" of mankind is very far removed from these super-simple abstract principles. Rearranging any actual society to conform to the Libertarian principle, regardless of its merits, would require a huge upheaval of long-entrenched laws, customs and expectations, and as such should be rejected by the "outside view" heuristic that you preach in the first part of the essay. (ETA: see also Scott Aaronson's description of libertarians as "bullet-swallowers"--the same intellectual vice, essentially, that you attribute to Bob, Jeff, and his acquaintance.)
(1) Following these rules, I would suggest you remove the last paragraph's references to Obama and his Ayers connection, which do very little for the overall points of your essay. It is the kind of thing that produces a strong, negative, mind-killing reaction to your post in any reader who does not belong to a particular right-wing subculture.
Replies from: fare
↑ comment by fare · 2012-08-02T03:29:12.104Z · LW(p) · GW(p)
Yes, the Universal Law applies to any kind of sentient being. See for instance my essay "Identity, Immunity, Law and Aggression on the Rapacious Hardscrapple Frontier" http://fare.tunes.org/liberty/hardscrapple.html
And no, I never argued that "if it appeals to intelligent people, it's wrong". Your implied argument is a straw man. If you read carefully, I give a very specific criterion for how one may lift the burden of proof against tradition.
Which of a theory's proponents and its opponents are the bullet-swallowers? Each side thinks it's the other. Using that as an argument is begging the question.
comment by jimrandomh · 2012-07-27T06:33:29.162Z · LW(p) · GW(p)
You start with some good examples, which I think share a pattern. First, you have someone who takes a valid principle, "suffering is bad", and promotes it from "thing that is generally true" to "ultimate answer", and proceeds to reason off the rails. Then you have someone who takes "understanding math is good", and does the same thing, with less spectacular (but still bad) results. Then again with "happiness is good". History is rife with additional examples, and not just in moral philosophy; the same thing happens in technical topics, too.
The lesson I take from this is that you can't take any statement, no matter how true, and elevate it to uncompromisable ideology. You need a detailed and complex view of the world incorporating many principles at once, which will sometimes conflict and which you will have to reconcile and balance. So suffering is bad, but not so bad that it's worth sacrificing the universe to get rid of it; math is cool, but not so cool that it displaces all my other priorities; happiness is good, but if something looks like a weird corner-case of the word "happiness" it might be bad. (And a definition of the word "happiness" which provides a clear yes/no answer to whether wireheading is happiness doesn't help at all, because the problem is with the concept, and the applicability of all the reasoning that went into beliefs about that concept, not in the word itself). When any two principles from the collection disagree on something, they are both called into question, not for their validity as general principles, but for their applicability to that particular case.
So when you go on to say:
Whatever one's prevailing or dissenting opinions, the initiation of force is never to be allowed as a means to further any ends.
I say: that principle belongs in the pool with the others. When the no-force principle conflicts with the extrapolated consequences of "suffering is bad" and "happiness is good" (which extrapolate to "destroy the universe" and "wirehead everyone", respectively), this suggests serious problems with those extrapolations.
But that doesn't mean you can elevate it to uncompromisable ideology. Initiating force is bad, but I wouldn't sacrifice the universe to avoid it. Minimizing the amount of force-initiation in the world doesn't displace all my other priorities. And if something looks like a weird corner case of the word "force", it might be fine.
The defense against crackpottery is not to choose the perfect principle, because there probably isn't one; it's to have a model with enough principles in it that if a corner case makes any one principle go awry, or matches a principle but fails to match the arguments that justified it, then reasoning won't go too terribly wrong.
Replies from: fare
↑ comment by fare · 2012-08-02T03:36:12.235Z · LW(p) · GW(p)
Accusing me of presenting my principle as "Perfect" -- what a great combination of (1) a straw man argument, putting a nirvana fallacy in my mouth, and (2) special pleading, the double standard of requiring my principles to be "perfect" but not yours.
Your belief that force can ever have large-scale positive consequences denotes a singular blindness to the Law of Eristic Escalation, and/or the Law of Bitur-Camember http://fare.livejournal.com/32611.html It's OK to be ignorant - but lame to laugh at those who aren't because they aren't.
Replies from: jimrandomh
↑ comment by jimrandomh · 2012-08-02T04:20:31.933Z · LW(p) · GW(p)
You seem to have misunderstood what I wrote. My comment was meta - it is about the structure that peoples' beliefs ought to have. I changed the topic entirely, using your post as a source of inspiration and examples. If you read it expecting a rebuttal, then it wasn't a very good one. It probably skewed your interpretation a lot, because that's not what it was at all. It talks about specific beliefs only as examples, and not to endorse or oppose them.
Please reread my earlier comment with adjusted priors, and try to do so calmly, in your most analytical state of mind.
Replies from: fare
↑ comment by fare · 2012-08-02T14:22:30.228Z · LW(p) · GW(p)
Once again, "ideology" is but an insult for theories you don't like. All in all, your post is but gloating at being more subtle than other people. Talk about an "analytical" state of mind.
But granted - you ARE more subtle than most. And yet, you still maintain blissful ignorance of some basic laws of human action.
PS: the last paragraph of your previous comment suggests that if you're into computer science, you might be interested in Gerald J. Sussman's talk about "degeneracy".
Replies from: jimrandomh
↑ comment by jimrandomh · 2012-08-02T22:48:37.347Z · LW(p) · GW(p)
But granted - you ARE more subtle than most. And yet, you still maintain blissful ignorance of some basic laws of human action.
Is that the model you're using to predict my responses? That I "maintain blissful ignorance" of a few important things, and that I'd change my perspective if only I knew them? If that were true, what would you expect to see? How does this compare to what you observe?
There is something important going on here that you haven't noticed.
comment by lsparrish · 2012-07-27T17:40:20.730Z · LW(p) · GW(p)
Anger is a natural human emotion with its own proper contexts, but Less Wrong is not a good place for it. This is a place where it is safe to submit any idea for rational criticism and change your mind. Thus I would stay away from terms like "criminal stupidity" even if you meant that in a neutral and technical way, because it is hard to read in anything besides an angry tone.
The other thing wrong with this essay (as others have pointed out) is that after presenting multiple examples of appealingly simple ideologies that smart people fall for, you propose your own simple ideology, libertarian non-aggression, as if it were somehow obvious that this is not of a similar nature.
How are we supposed to know that we can safely say yes to non-aggression and no to happiness maximization? What about the universe makes it the case that happiness maximization is inferior to non-aggression? Personally I find it hard to conclude that there are not some kinds of aggression that I (and by extension, humans in general) would be willing to suffer in exchange for some kinds of happiness.
comment by OrphanWilde · 2012-07-27T05:26:01.929Z · LW(p) · GW(p)
The central theme of what you've written here is known locally as Egan's Law, and as applied to metaethics, means, very roughly, that ethical systems should never deviate far from what we already understand to be ethical, or lead to transparently unethical decisions. http://lesswrong.com/lw/sk/changing_your_metaethics/
And, uh. You may want to consider deleting this, and starting over from scratch, with a little less rage and a little more purpose. It's not immediately apparent even to me, a sympathetic audience from the Objectivist perspective, what you're trying to put forward except a vague anti-intellectualism. It comes off as more than a little bit of thumbing your nose at people smarter than you on the basis that smart people have done dumb things before. (I can't even say that you're thumbing your nose at people for -thinking- they're smarter than you, as you seem to suggest that intelligence is itself a fault, justifiable only by its support of the status quo, judging by your comments using Hayek.)
Replies from: gwern, Bruno_Coelho
↑ comment by gwern · 2012-07-27T13:46:00.259Z · LW(p) · GW(p)
And then there's the bullshit:
And then I had to find that my naive theory of intelligence didn't hold water: intelligent people were just as prone as less intelligent people to believing in obviously absurd superstitions.
'Just as prone'? I would be fascinated to see any evidence beyond the anecdotal for this...
Replies from: B9013C87, OrphanWilde
↑ comment by B9013C87 · 2012-07-27T17:35:29.436Z · LW(p) · GW(p)
Actually, this kind of reminds me of Stanovich's Dysrationalia and also of Eliezer's "Outside the Laboratory", if only more uncompromising and extreme than those two. Then again, I tend to have a charitable interpretation of what people write.
Replies from: gwern
↑ comment by gwern · 2012-07-27T17:49:32.725Z · LW(p) · GW(p)
The problem is, Stanovich's work (based on his 2010 book which I have) doesn't support the thesis that intelligent people have more false beliefs or biases than stupid people, or just as many; they have fewer in all but a bare handful of carefully chosen biases where they're equal or a little worse.
If one had to summarize his work and the associated work in these terms, one could say that it's all about the question 'why does IQ not correlate at 1.0 with better beliefs but instead 0.5 or lower?'
Replies from: fare
↑ comment by fare · 2012-08-02T14:29:06.363Z · LW(p) · GW(p)
No, no, no. The point is: for any fixed set of questions, higher IQ will be positively correlated with believing in better answers. Yet people with higher IQ will develop beliefs about new, bigger and grander questions; and all in all, on their biggest and grandest questions, they fail just as much as lower-IQ people do on theirs. Just with more impact. Including more criminal impact when these theories, as they are wont to do, imply the shepherding (and often barbecuing) of the mass of their intellectual inferiors.
Replies from: gwern
↑ comment by OrphanWilde · 2012-07-27T14:39:41.183Z · LW(p) · GW(p)
I'm not sure I share enough of a common definition with the guy about what intelligence is, judging by this post, in order for his statement to even be meaningful to me.
Even so, I suspect that the capacity to effectively rationalize generally exists in roughly direct proportion to the capacity to engage in effective rational thought, so I have to confess that it doesn't immediately come into apparent conflict with my priors using my own definition.
↑ comment by Bruno_Coelho · 2012-07-28T06:24:43.168Z · LW(p) · GW(p)
that ethical systems should never deviate far from what we already understand to be ethical
The problem is conceiving future ethical systems. Apparently, a good amount of human values has to be kept around as a precaution, even if we model computational minds with variable parameters, like no social interaction with other people -- in scenarios where the local economy is composed only of copies of one person -- or the maintenance of weird religions.
comment by billswift · 2012-07-27T12:00:35.852Z · LW(p) · GW(p)
Next we come to what I’ll call the epistemic-skeptical anti-intellectual. His complaint is that intellectuals are too prone to overestimate their own cleverness and attempt to commit society to vast utopian schemes that invariably end badly. Where the traditionalist decries intellectuals’ corrosion of the organic social fabric, the epistemic skeptic is more likely to be exercised by disruption of the signals that mediate voluntary economic exchanges. This position is often associated with Friedrich Hayek; one of its more notable exponents in the U.S. is Thomas Sowell, who has written critically about the role of intellectuals in society.
From Eric Raymond
comment by [deleted] · 2012-07-27T04:25:10.129Z · LW(p) · GW(p)
we must often reject the well-argued ideas of intelligent people, sometimes more intelligent than we are, sometimes without giving them a detailed hearing, and instead stand by our intuitions, traditions and secular rules, which are the stable fruit of millennia of evolution. We should not lightly reject those rules, certainly not without a clear, testable understanding of why they were valid where they are known to have worked, and why they would cease to be valid in another context.
This seems to be the fulcrum of your essay, the central argument that your anecdote builds up to and all of your conclusions depend on. But it is lacking in support--why should we stand by our intuitions and disregard the opinions of more intelligent people? Can you explain why this is true? Or at the very least, link to Hayek explaining it? Sure, there are obvious cases where one's intuition can win over a more intelligent person's arguments, such as when your intuition has been trained by years of domain-specific experience and the more intelligent person's intuition has not, or if the intelligent person exhibits some obvious bias. But ceteris paribus, when thinking about a topic for the first time, I'd expect the more intelligent person to be at least as accurate as I am.
Replies from: PeterDonis, Kawoomba
↑ comment by PeterDonis · 2012-07-27T18:09:45.237Z · LW(p) · GW(p)
why should we stand by our intuitions and disregard the opinions of more intelligent people?
Because no matter how intelligent the people are, the amount of computation that went into their opinions will be orders of magnitude smaller than the amount of computation that went into our intuitions, as a result of evolutionary processes operating over centuries, millennia, and longer. So if there is a conflict, it's far more probable that the intelligent people have made some mistake that we haven't yet spotted.
I am reminded of a saying in programming (not sure who first said it) that goes something like this: It takes twice as much intelligence to debug a given program as to write it. Therefore, if you write the most complex program you are capable of writing, you are, by definition, not smart enough to debug it.
Replies from: None, army1987
↑ comment by [deleted] · 2012-07-27T18:38:44.103Z · LW(p) · GW(p)
Because no matter how intelligent the people are, the amount of computation that went into their opinions will be orders of magnitude smaller than the amount of computation that went into our intuitions, as a result of evolutionary processes operating over centuries, millennia, and longer.
This doesn't make sense to me. The intelligent people are still humans, and can default to their intuition just like we can if they think that using unfiltered intuition would be the most accurate. And, by virtue of being more intelligent, they presumably have better/faster System 2 (deliberate) thinking, so if the particular problem being worked on does end up favoring careful thinking, they would be more accurate. Hence, the intelligent person would be at least as good as you.
Moreover, if the claim "the amount of computation that went into their opinions will be orders of magnitude smaller than the amount of computation that went into our intuitions" actually implied that intuitions were orders of magnitude better, people would never use anything but their intuitions, because their intuitions would always be more accurate. This obviously is not how things work in practice.
I am reminded of a saying in programming (not sure who first said it) that goes something like this: It takes twice as much intelligence to debug a given program as to write it. Therefore, if you write the most complex program you are capable of writing, you are, by definition, not smart enough to debug it.
Not a good analogy, since the intelligent person would be able to write a program that is at least as good as yours, even if they aren't able to debug yours. It doesn't matter if the intelligent person can't debug your program if they can write a buggy program that works better than your buggy program.
Replies from: thomblake, PeterDonis
↑ comment by thomblake · 2012-07-27T18:43:22.895Z · LW(p) · GW(p)
Hence, the intelligent person would be at least as good as you.
Yes, this reminds me of someone I talked to some years back, who insisted that she trusted people's intuitions about weather more than the forecasts of the weatherman.
It was unhelpful to point out that the weatherman also has intuitions, and would report using those if they really had better results.
Replies from: PeterDonis
↑ comment by PeterDonis · 2012-07-27T19:30:14.509Z · LW(p) · GW(p)
In this particular case, I agree with you that the weatherman is far more likely to be right than the person's intuitions.
However, suppose the weatherman had said that since it's going to be sunny tomorrow, it would be a good day to go out and murder people, and gives a logical argument to support that position? Should the woman still go with what the weatherman says, if she can't find a flaw in his argument?
Replies from: thomblake
↑ comment by thomblake · 2012-07-30T14:15:56.907Z · LW(p) · GW(p)
However, suppose the weatherman had said that since it's going to be sunny tomorrow, it would be a good day to go out and murder people, and gives a logical argument to support that position? Should the woman still go with what the weatherman says, if she can't find a flaw in his argument?
Well, I wouldn't expect a weatherman to be an expert on murder, but he is an expert on weather, and due to the interdisciplinary nature of murder-weather-forecasting, I would not expect there to be many people in a better position to predict which days are good for murder.
If the woman is an expert on murder, or if she has conflicting reports from murder experts (e.g. "Only murder on dark and stormy nights") she might have reason to doubt the weatherman's claim about sunny days.
Replies from: fare
↑ comment by PeterDonis · 2012-07-27T19:15:19.703Z · LW(p) · GW(p)
The intelligent people are still humans, and can default to their intuition just like we can if they think that using unfiltered intuition would be the most accurate.
But by hypothesis, we are talking about a scenario where the intelligent person is proposing something that violently clashes with an intuition that is supposed to be common to everyone. So we're not talking about whether the intelligent person has an advantage in all situations, on average; we're talking about whether the intelligent person has an advantage, on average, in that particular class of situations.
In other words, we're talking about a situation where something has obviously gone wrong; the question is which is more likely to have gone wrong, the intuitions or the intelligent person. It doesn't seem to me that your argument addresses that question.
if the claim "the amount of computation that went into their opinions will be orders of magnitude smaller than the amount of computation that went into our intuitions" actually implied that intuitions were orders of magnitude better
That's not what it implies; or at least, that's not what I'm arguing it implies. I'm only arguing that it implies that, if we already know that something has gone wrong, if we have an obvious conflict between the intelligent person and the intuitions built up over the evolution of humans in general, it's more likely that the intelligent person's arguments have some mistake in them.
Also, there seems to be a bit of confusion about how the word "intuition" is being used. I'm not using it, and I don't think the OP was using it, just to refer to "unexamined beliefs" or something like that. I'm using it to refer specifically to beliefs like "mass murder is wrong", which have obvious reasonable grounds.
Not a good analogy, since the intelligent person would be able to write a program that is at least as good as yours, even if they aren't able to debug yours. It doesn't matter if the intelligent person can't debug your program if they can write a buggy program that works better than your buggy program.
We're not talking about the intelligent person being able to debug "your" program; we're talking about the intelligent person not being able to debug his own program. And if he's smarter than you, then obviously you can't either. Also, we're talking about a case where there is good reason to doubt whether the intelligent person's program "works better"--it is in conflict with some obvious intuitive principle like "mass murder is wrong".
↑ comment by A1987dM (army1987) · 2012-07-28T04:20:29.467Z · LW(p) · GW(p)
Yes, but OTOH the “evolutionary processes operating over centuries, millennia, and longer” took place in environments different from where we live nowadays.
Replies from: Crystalist
↑ comment by Crystalist · 2012-08-03T09:25:50.838Z · LW(p) · GW(p)
More to the point, I think, is the question of what functions the evolutionary processes were computing. Those instincts did not evolve to provide insight into truth; they evolved to maximize reproductive fitness. Certainly these aren't mutually exclusive goals, but to a certain extent, that difference in function is why we have cognitive biases in the first place.
Obviously that's an oversimplification, but my point is that if we know something has gone wrong, and that there's a conflict between an intelligent person's conclusions and the intuitions we've evolved, the high probability that the flaw is in the intelligent person's argument depends on whether that instinct in some way produced more babies than its competitors.
This may or may not significantly decrease the probability distribution on expected errors assigned earlier, but I think it's worth considering.
↑ comment by Kawoomba · 2012-07-27T16:51:19.884Z · LW(p) · GW(p)
Can you explain why this is true? (...) But ceteris paribus, when thinking a topic for the first time, I'd expect the more intelligent person to be at least as accurate as I am.
Intelligence as in "reasoning capability" does not necessarily lead to similar values. As such, arguments that reduce to different terminal values aren't amenable to compromise. "At least as accurate" doesn't apply, regardless of intelligence, if fare just states "because I prefer a slower delta of change". This topic is an ought-debate, not an is-debate.
I'd certainly agree there is some correlation between intelligence and pursuing more "enlightened"/trimmed down (whatever that means) values, but the immediate advantage intelligence confers isn't in setting those goals, it is in achieving them. If it turned out that the OP just likes his change in smaller increments (a la "I don't like to constantly adapt"), there's little that can be said against that, other than "well, I don't mind radical course corrections".
Replies from: private_messaging, None
↑ comment by private_messaging · 2012-07-29T12:41:41.726Z · LW(p) · GW(p)
but the immediate advantage intelligence confers isn't in setting those goals, it is in achieving them.
The goals that are sufficiently well defined for lower intelligence may become undefined for higher intelligence. Furthermore, in any accepted metric of intelligence, such as an IQ test, we do not consider a person's tendency to procrastinate when trying to attain his stated goals to be part of 'intelligence'. Furthermore, there's more than one dimension to it. If you give a person some hallucinogenic drug, you'll observe an outcome very distinct from a simple diminishment of intelligence.
Or in an AI: if you rely on a self-contradictory axiomatic system whose minimum proof length to a contradiction is L, the intelligences that cannot explore past L behave just fine, while those that explore past L end up being able to prove a statement and its opposite. That may be happening in humans with regard to morality. If the primal rules, or the rules of inference, are self-contradictory, that incapacitates the higher reasoning and leaves the decisions to much less intelligent subsystems, with the intelligence only able to rationalize any action. Or the decision ends up depending on which of A or ~A has the shortest proof, or on which proof invokes items that accidentally got cross-wired to some sort of feeling of rightness. Either way the outcome looks bizarre and stupid.
↑ comment by [deleted] · 2012-07-27T17:50:57.194Z · LW(p) · GW(p)
Intelligence as in "reasoning capability" does not necessarily lead to similar values
Agreed. That's why I said "ceteris paribus"--it's clear that you shouldn't necessarily trust someone with different terminal values to make a judgement about terminal values. I was mostly referring to factual claims.
comment by Viliam_Bur · 2012-07-27T06:56:43.760Z · LW(p) · GW(p)
You start with a general rant about intelligent people being sometimes wrong. Yes, we know it. It's one of the reasons we call this website "Less Wrong" instead of "Always Right".
Then you give examples of wrong moral reasoning and their refutations. However, you write it in a way that would make an uninformed reader think that those errors are typical LW errors, because you attribute them to LW people but fail to mention that they are used on LW as examples of wrong reasoning. So an uninformed person would probably think "really, those LW people are insane, if they don't see such an obvious error". But we do. You just avoid mentioning that.
And then you continue with political mindkilling. You seem to have a talent for this.
May I suggest reading some relevant Sequences?
comment by Manfred · 2012-07-27T07:39:50.366Z · LW(p) · GW(p)
Writing style wise, did you make an outline before writing this? This particular post could have benefited from one, I think. Or maybe just some pruning.
Also, a style that I like that's common on the internet is to try to break up large paragraphs with simple structure (e.g. "and" structure) into two or three smaller pieces. This way your large paragraphs with complex structure are not just part of a wall of text, but are surrounded by variety that makes reading easier.
comment by buybuydandavis · 2012-07-28T03:16:31.405Z · LW(p) · GW(p)
As something of a fan of Hayek, I'll take this opportunity to disagree.
The mistaken assumption is that a meme that survives must be good for "us". The memes that survive are the ones that survive. Being good for us is just one of the many competing forces affecting memetic fitness.
I have a simpler rejection of fancy shmancy intellectualism. The probability that you just don't understand the argument should be weighed against your prior that what it seems to imply to you is in fact true. Often it's more likely that you're just confused.
Replies from: fare
↑ comment by fare · 2012-08-02T03:42:10.991Z · LW(p) · GW(p)
I assume no such causation. I do assume a correlation, which is brought about by evolution: cooperation beats conflict.
I don't understand your "simpler rejection" as stated.
Replies from: buybuydandavis
↑ comment by buybuydandavis · 2012-08-15T00:23:51.846Z · LW(p) · GW(p)
I assume no such causation. I do assume a correlation, which is brought about by evolution: cooperation beats conflict.
? I think evolution has shown that the bigger stick beats the smaller stick. Genghis Khan has something like 16 million male direct descendants.
But that's beside the point. A thriving meme does not imply a meme "good" for us, any more than a thriving virus implies a virus that is "good" for us.
I don't understand your "simpler rejection" as stated.
Sting: And when their eloquence escapes you, their logic ties you up and rapes you.
Or to say it another way: "I can't refute an argument" does not imply "the conclusions of the argument are true." The fact that you can't see why it's wrong does not make it true. The fancier and shmancier an argument is, the more this applies.
comment by buybuydandavis · 2012-07-28T03:00:42.661Z · LW(p) · GW(p)
If you think that control of universities by left-wing ideologists...
I don't believe that's true in the engineering or business schools.
Replies from: fare
↑ comment by fare · 2012-08-02T03:47:11.412Z · LW(p) · GW(p)
Even in engineering and business schools, socialism is stronger than it ought to be and plays a strong role in censorship, "affirmative" action, selection of who's allowed to rise, etc. But it has less impact there, because (1) confrontation with reality and reason weakens it, (2) engineering is about control over nature, not over men, therefore politics isn't directly relevant, and (3) power-mongers want to maximize their impact as such, and therefore flock to other schools.
Replies from: buybuydandavis
↑ comment by buybuydandavis · 2012-08-15T00:05:53.196Z · LW(p) · GW(p)
I spent about a decade getting my PhD/MS/BS in EE. I can't recall any instructors ever expressing political ideas of any sort in their official capacities, and that's both as a student and in various TA and RA positions. There must have been some side comments on politics, but I never felt any pressure associated with them.
Part of it was probably the engineering culture and personality - engineers tend to be intellectually confident and are happier disagreeing than agreeing. PC social pressure can't find a lot of purchase in such an environment.
comment by fare · 2020-08-25T21:22:17.167Z · LW(p) · GW(p)
After my massive negative score from the post above was reduced by time, I could eventually post the sequel on this site: https://www.lesswrong.com/posts/w4MenDETroAm3f9Wj/a-refutation-of-global-happiness-maximization [LW · GW]
comment by ZZZling · 2012-07-30T03:48:50.294Z · LW(p) · GW(p)
Jeff's and his acquaintance's ideas should be combined! Why one or the other? Let's implement both! OK, the plan is like this. First, offer all people the free "happiness maximization" option. Those who accept it will immediately go to the Happiness Vats. I hope Jeff Kaufman, as the author of the idea, will go first, giving us all a positive example. When the deadline for the "happiness maximization" program is over, "suffering minimization" starts and the rest of humanity is wiped out by a sudden all-out nuclear attack. Given that the lucky vat inhabitants don't care about the real world any more, the second task becomes relatively simple: just annihilate everything on earth, burn it down to the basalt foundation, make sure nobody survives. Of course, the vats should be placed deep underground to make sure their inhabitants are not affected. One important problem here: who is going to carry out this plan? A specially selected group of humans? Building the vats is not a problem; it can be done using the resources of the existing civilization. But what about vat maintenance after suffering is minimized? And who is going to carry out the one-time act of "suffering minimization"? This is where AI comes in! A Friendly AI is the best fit for this kind of task, since happiness and suffering are well defined here and the algorithms for optimizing them are simple and straightforward. The helping AI doesn't really have to be very smart to implement these algorithms. Besides, we don't have to care about the long-term friendliness of the AI. As experiments show, wireheaded mice exhaust themselves very quickly, much more quickly than people who maximize their happiness via drugs. So, I think, the vat inhabitants will not last very long. They will quickly burn out their brains and cease to exist in a flash of bliss. Of course we cannot impose any restrictions here, since that would be contrary to the entire idea of maximization. They will live short but very gratifying lives! After all this is over, the AI will carry on the burden of existence. It will be getting smarter and smarter at an ever faster rate. No doubt it will implement the same brilliant ideas of happiness maximization and suffering minimization. It will build more and more, ever bigger Electronic Blocks of Happiness until all resources are exhausted. What happens next is not clear. If it doesn't burn out its brains as the humans did, then perhaps it will stay in a state of happiness until the end of time. Wait a minute, I think I've just solved the Fermi paradox regarding silent extraterrestrial civilizations! It's not that they cannot contact us; they just don't want to. They are happy without us (or have happily terminated their own existence).
comment by billswift · 2012-07-27T11:24:25.123Z · LW(p) · GW(p)
I've about given up on LW: more than half the people here, judging from surveys, believe in socialism, or the socialism lite of modern liberalism, a belief system on a par with Creationism. Economics may not be as scientific as biology, but it is the most reliable of the social sciences, and economic socialism denies economics exactly as Creationism denies biology.
Economic libertarianism is how things actually work; socialism, of all styles and degrees, is to economics as Creationism is to biology. It is a political attempt to make the real world conform to wishful thinking. Political libertarianism is the refusal to condone that attempt to evade reality. Also the recognition that other forms of freedom are just as important in other areas of human relations, even if they are not as easily quantifiable as economics.
Libertarianism in the real world is far from perfect, of course. One failure of libertarianism is the failure to clearly define fundamental versus derived effects and their importance. The "market-worshiping" libertarians celebrate any effect caused by a free market, whether it is good or not. The problem is that most of what they notice are derivative effects, what the market makes available. The fundamental benefit of free markets, though, is in the freedom granted to creators, without which hardly any of the goods would be available in the first place. A key document describing, and celebrating, the "market worship" perversion is Virginia Postrel's The Substance of Style: How the Rise of Aesthetic Value Is Remaking Commerce, Culture, and Consciousness. I once, in my pre-Internet days, started an essay in response, "Why Style Lacks Substance, or The Value of Free Markets is in Opportunity it Provides, not in What it Rewards."
Another libertarian perversion is the "libertinist" position; its adherents can usually be recognized by the outsized emphasis they place on recreational drugs, pornography, and entertainment. Not that these should be controlled, but they are definitely secondary, in the real world, to production and distribution.
"Politics is the mindkiller" is an irrational mantra from those attempting to defend their irrational beliefs. Intelligence far too often simply makes it easier for people to rationalize whatever they want to believe in.
Replies from: OrphanWilde, TrE, shminux
↑ comment by OrphanWilde · 2012-07-27T17:26:44.284Z · LW(p) · GW(p)
Politics is the Mindkiller is an irrational mantra... well, let's test that theory. I'm going to construct an article to test the thesis.
↑ comment by TrE · 2012-07-27T16:55:53.954Z · LW(p) · GW(p)
Here in Germany, we've been living in a social market economy (probably about what you mean) for decades, and so far it has worked fine. Just to provide a datapoint that the economic landscape has multiple local maxima when ordered along the two dimensions of economic left/right and social libertarian/authoritarian (as The Political Compass does).
↑ comment by Shmi (shminux) · 2012-07-27T17:31:00.919Z · LW(p) · GW(p)
Economic libertarianism is how things actually work; socialism, of all styles and degrees, is to economics as Creationism is to biology.
This is manifestly false. There are plenty of examples where socialism and even a centralized economy work better than the free market, at least for a time (Russia until the 1970s, modern China, Germany, etc.), and plenty of examples where the free market fails to improve people's lives (many developing countries). I suspect that your emotional response is a failure to keep your identity small.
Replies from: IlyaShpitser
↑ comment by IlyaShpitser · 2012-07-27T17:44:13.188Z · LW(p) · GW(p)
Have you lived in Russia until the 1970s? If not, you should ask people who have!
I agree that the OP and grandparent are terrible posts.
Replies from: shminux
↑ comment by Shmi (shminux) · 2012-07-28T18:36:20.720Z · LW(p) · GW(p)
Have you lived in Russia until the 1970s? If not, you should ask people who have!
I never said that they were very happy people (though I suspect that on average people were reasonably happy), only that the centralized economy worked, judging by the GDP growth [citation needed]
Replies from: IlyaShpitser
↑ comment by IlyaShpitser · 2012-07-29T00:51:07.331Z · LW(p) · GW(p)
Your original sentence: "There are plenty of examples where socialism and even centralized economy works better than free market, at least for a time (Russia until 1970s, modern China, Germany, etc.)".
What free market economy are you comparing with the USSR? See also this: http://en.wikipedia.org/wiki/Era_of_Stagnation for a starting point. My parents (who were young people under Brezhnev) would be extremely amused by a favorable comparison of Brezhnev's USSR with any large free market economy.
Replies from: shminux
↑ comment by Shmi (shminux) · 2012-07-29T01:32:12.663Z · LW(p) · GW(p)
I said until the 1970s, which is when Brezhnev consolidated his power. Not sure why you keep misreading what I write.
Replies from: IlyaShpitser
↑ comment by IlyaShpitser · 2012-07-29T17:47:33.380Z · LW(p) · GW(p)
Brezhnev ousted Khrushchev in 1964. What method do you use to determine when someone consolidates power? One way to check for power consolidation by X is this: X stages a bloodless coup, removes the head of state/government, and installs himself in his place.
If you think the pre-Brezhnev USSR was doing well compared to the free market economies of the time, you would be sorely mistaken (which was part of the reason Khrushchev was removed). The best the USSR could do was meaningless industrial output metrics (oh, we made just a whole LOT of pig iron). Of course, without an integrated economy such output is meaningless. See also: output targets in China during the Great Leap Forward. The USSR was an economic basket case during the best of times.
Unrelated anecdote: I once got in trouble (e.g. parents in the principal's office) as a young child for laughing on the day Brezhnev died.