Smart People are Probably Dangerous
post by Program Den (program-den) · 2023-03-21T06:00:52.904Z · LW · GW
Why are smart people probably dangerous? Because they can fool us. They are better at lying than average people, and at hiding their true intentions. Intentions that most likely relate to dominating the rest of us. (How could they not? I mean, wouldn't you use that brain-power to subjugate others if you could? Grab the reins of the world?)
"This sounds a lot like anti-intellectualism", say a few of you. And to that I answer "Of course it does! Smart people — intellectuals, if you will — are dangerous[1]!". This is a well-known fact.
The Khmer Rouge, for example, would target people who wore glasses. Because if you wear glasses it's likely you can read. And if you can read, you can basically make yourself smarter. Becoming exponentially more dangerous. A vicious cycle.
One might state: "This 'reading' of which you speak, sounds much like browsing the internet— a process many of us use to learn things in the 21st century!" Indeed, it does! Ergo, the internet is also leading the world down an ever more dangerous path! Perhaps even more so than reading— since it also combines visual and auditory stimuli!
"So how do we limit the danger?" I hear a few people asking one another, rather worriedly, and to that I say, the answer is obvious: anti-intellectualism! Strict control of information and educational resources. It's the only safe path. Because intelligent people are dangerous people. And the more intelligent they are, the dangerouser they are.
[1] Here we drop the "probably" because if something can be dangerous, ipso facto, it is dangerous.
2 comments
Comments sorted by top scores.
comment by David James (david-james) · 2024-05-26T12:52:08.356Z · LW(p) · GW(p)
I got minimal value from the article as written, but I'm hoping that a steel-man version might be useful. In that spirit, I can grant a narrower claim: Smart people have more capability to fool us, all other things equal. Why? Because increased intelligence brings increased capability for deception.
- This is as close to a tautology as I've seen in a long time. What predictive benefit comes from tautologies? I can't think of any.
- But why focus on capability? Probability of harm is a better metric.
- Now, with that in mind, one should not assume a straight line between capability and probability of harm. One should look at all potential causal factors.
- More broadly, the "all other things equal" part is problematic here. I will try to write more on this topic when I have time. My thoughts are not fleshed out yet, but I think my unease has to do with how ceteris paribus imposes constraints on a system. The claim I want to examine would go something like this: those constraints "bind" the system in ways that prevent proper observation and analysis.
↑ comment by Program Den (program-den) · 2024-09-07T17:34:57.239Z · LW(p) · GW(p)
Regarding "all things being equal" / ceteris paribus, I think you are correct (assuming I'm interpreting this last bullet-point as intended) in that it "binds" a system in ways that "divorce it from reality" to some extent.
I feel like this is a given, but also that since the concept exists on a "spectrum of isolation", the ones that are closer to the edge of "impossible to separate" necessarily skew/divorce reality further.
I'm not sure if I've ever explicitly thought about that feature of this cognitive device— and it's worth explicitly thinking about! (You might be meaning something else, but this is what I got out of it.)
As for this overall article, it is [what I find to be] humorous satire, so it's more anti-value than value, if you will.
It pokes fun at the idea that we should fear[1] intelligence, which seems to be an overarching theme of many of the "AI safety" posts on LessWrong. I find this highly ironic and humorous, as so many people here seem to feel (and not a few literally express) that they are more intelligent than the average person (some say it is "society" expressing it, versus themselves, per se, but still).
Thus, to some extent, this "intelligence is dangerous" sentiment is a bit of ego puffery as well…
But to address the rest of your comment, it's cool that you keyed into the "probably dangerous" element of the title. Yes, risk is not just how bad a thing could be, but also how likely it is to happen; we weigh both to decide whether a risk is "worth" taking.
Does increased intelligence bring increased capability for deception?
It is so hard to separate things! (To hark back a little, lol)
I can't help but think there is a strange relationship here— take Mutually Assured Destruction for instance— at some point, the capability is so high it appears to limit not only the probability— but the capability itself!
I think I will end here, as the M.A.D. angle has me pondering semantics and whatnot… but thanks for the impetus to post!
[1] whatever terminology you prefer that conveys "intelligence" as a pejorative.