When Anthropomorphism Became Stupid
post by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-08-16T23:43:01.000Z · 12 comments
It turns out that most things in the universe don't have minds.
This statement would have provoked incredulity among many earlier cultures. "Animism" is the usual term. They thought that trees, rocks, streams, and hills all had spirits because, hey, why not?
I mean, those lumps of flesh known as "humans" contain thoughts, so why shouldn't the lumps of wood known as "trees"?
My muscles move at my will, and water flows through a river. Who's to say that the river doesn't have a will to move the water? The river overflows its banks, and floods my tribe's gathering-place - why not think that the river was angry, since it moved its parts to hurt us? It's what we would think when someone's fist hit our nose.
There is no obvious reason - no reason obvious to a hunter-gatherer - why this cannot be so. It only seems like a stupid mistake if you confuse weirdness with stupidity. Naturally the belief that rivers have animating spirits seems "weird" to us, since it is not a belief of our tribe. But there is nothing obviously stupid about thinking that great lumps of moving water have spirits, just like our own lumps of moving flesh.
If the idea were obviously stupid, no one would have believed it. Just like, for the longest time, nobody believed in the obviously stupid idea that the Earth moves while seeming motionless.
Is it obvious that trees can't think? Trees, let us not forget, are in fact our distant cousins. Go far enough back, and you have a common ancestor with your fern. If lumps of flesh can think, why not lumps of wood?
For it to be obvious that wood doesn't think, you have to belong to a culture with microscopes. Not just any microscopes, but really good microscopes.
Aristotle thought the brain was an organ for cooling the blood. (It's a good thing that what we believe about our brains has very little effect on their actual operation.)
Egyptians threw the brain away during the process of mummification.
Alcmaeon of Croton, a Pythagorean of the 5th century BCE, put his finger on the brain as the seat of intelligence, because he'd traced the optic nerve from the eye to the brain. Still, with the amount of evidence he had, it was only a guess.
When did the central role of the brain stop being a guess? I do not know enough history to answer this question, and probably there wasn't any sharp dividing line. Maybe we could put it at the point where someone traced the anatomy of the nerves, and discovered that severing a nervous connection to the brain blocked movement and sensation?
Even so, that is only a mysterious spirit moving through the nerves. Who's to say that wood and water, even if they lack the little threads found in human anatomy, might not carry the same mysterious spirit by different means?
I've spent some time online trying to track down the exact moment when someone noticed the vastly tangled internal structure of the brain's neurons, and said, "Hey, I bet all this giant tangle is doing complex information-processing!" I haven't had much luck. (It's not Camillo Golgi - the tangledness of the circuitry was known before Golgi.) Maybe there was never a watershed moment there, either.
But the discovery of that tangledness, together with Charles Darwin's theory of natural selection and the notion of cognition as computation, is where I would place the gradual beginning of anthropomorphism's descent into being obviously wrong.
It's the point where you can look at a tree, and say: "I don't see anything in the tree's biology that's doing complex information-processing. Nor do I see it in the behavior, and if it's hidden in a way that doesn't affect the tree's behavior, how would a selection pressure for such complex information-processing arise?"
It's the point where you can look at a river, and say, "Water doesn't contain patterns replicating with distant heredity and substantial variation subject to iterative selection, so how would a river come to have any pattern so complex and functionally optimized as a brain?"
It's the point where you can look at an atom, and say: "Anger may look simple, but it's not, and there's no room for it to fit in something as simple as an atom - not unless there are whole universes of subparticles inside quarks; and even then, since we've never seen any sign of atomic anger, it wouldn't have any effect on the high-level phenomena we know."
It's the point where you can look at a puppy, and say: "The puppy's parents may push it to the ground when it does something wrong, but that doesn't mean the puppy is doing moral reasoning. Our current theories of evolutionary psychology hold that moral reasoning arose as a response to more complex social challenges than that - in their full-fledged human form, our moral adaptations are the result of selection pressures over linguistic arguments about tribal politics."
It's the point where you can look at a rock, and say, "This lacks even the simple search trees embodied in a chess-playing program - where would it get the intentions to want to roll downhill, as Aristotle once thought?"
It is written:
Zhuangzi and Huizi were strolling along the dam of the Hao Waterfall when Zhuangzi said, "See how the minnows come out and dart around where they please! That's what fish really enjoy!"
Huizi said, "You're not a fish — how do you know what fish enjoy?"
Zhuangzi said, "You're not I, so how do you know I don't know what fish enjoy?"
Now we know.
12 comments
Comments sorted by oldest first, as this post is from before comment nesting was available (around 2009-02-27).
comment by Kevin_Dick · 2008-08-16T23:56:43.000Z · LW(p) · GW(p)
Doesn't this boil down to being able to "put yourself in another's shoes"? Are mirror neurons what's necessary to carry out moral reasoning?
This kind of solves the pie division problem. If you are capable of putting yourself in the other guy's shoes and still sincerely believing you should get the whole pie, perhaps there is some information about your internal state that you can communicate to the others to convince them?
IS the essence of morality that you should believe in the same division no matter which position you occupy?
comment by J_Thomas2 · 2008-08-17T00:39:29.000Z · LW(p) · GW(p)
We have quick muscles, so we do computation to decide how to organise those muscles.
Trees do not have quick muscles, so they don't need that kind of computation.
Trees need to decide which directions to grow, and which directions to send their roots. Pee on the ground near a tree and it will grow rootlets in your direction, to collect the minerals you give it.
Trees need to decide which poisons to produce and where to pump them. When they get chewed on by bugs that tend to stay on the same leaf the trees tend to send their poisons to that leaf. When it's bugs that tend to stay nearby the tree sends the poisons nearby. Trees can somewhat sense the chemicals that distressed trees near them make, and respond early to the particular sorts of threats those chemicals indicate.
Is all that built into the trees' genes? Do they actually learn much? I dunno. I haven't noticed anything like a brain in a tree. But I wouldn't know what to look for. Our brains use a lot of energy, we have to eat a lot to maintain them. They work fast. Trees don't need that speed.
I don't know how smart trees are, or how fast they learn. The experiments have not been done.
I don't know how moral animals are that we share no common language with. Those experiments haven't been done either. We can't even design the experiments until we get an operational definition of morality.
What experiment would you perform to decide whether an animal was moral? What experiment would show whether an intelligent alien was moral? What experiment could show whether a human imprisoned for a vicious crime was moral?
If you can describe the experiment that shows the difference, then you have defined the term in a way that other people can reproduce.
comment by poke · 2008-08-17T01:50:23.000Z · LW(p) · GW(p)
If you look through a microscope, you'll notice the only major difference between the nervous system and other tissues is that the nervous system exhibits network connectivity. Cells in tissues are usually arranged in such a way that they only connect to their nearest neighbor. Many tissues exhibit electrical activity, communication between cells, coordinated activity, etc., in the same way as neurons. If networks of neurons can be said to be performing computations then so can other tissues. I'm not familiar with the biology of trees but I don't see why they couldn't be said to be 'thinking' if we're going to equate thinking with computation.
comment by Ian_C. · 2008-08-17T01:52:31.000Z · LW(p) · GW(p)
People don't believe that inanimate objects contain spirits any more, but they do still believe that God can control such objects, which is almost the same thing. True rationality is realizing that they are not controlled by any mind - their own or God's - but do what they do because of what they are.
comment by J_Thomas2 · 2008-08-17T03:49:43.000Z · LW(p) · GW(p)
When you try to predict what will happen it works pretty well to assume that it's all deterministic and get what results you can. When you want to negotiate with somebody it works best to suppose they have free will and they might do whatever-the-hell they want.
When you can predict what inanimate objects will do with fair precision, that's a sign they don't have free will. And if you don't know how to negotiate with them, you haven't got much incentive to assume they have free will - particularly when they're actually predictable.
The more predictable people get the less reason there is to suppose they have spirits etc motivating them. Unless it's information about the spirits you manipulate to predict what they'll do.
comment by Hopefully_Anonymous · 2008-08-17T03:52:49.000Z · LW(p) · GW(p)
To the degree "thinking" or "deciding" actually exists, it's not clear to me that we as individuals are the actual agents, rather than observer subcomponents with an inflated sense of agency, perhaps a lot like the neurons but with a deluded/hallucinated sense of agency.
comment by GNZ · 2008-08-17T07:07:40.000Z · LW(p) · GW(p)
I think we have a definitional issue with "morality" and "should". I can't see why we seem to think it is so far beyond the ability of any brain that can process millions of bits per second.*
The good news however is that if we could get a decent definition there is a lot of literature on studying animals for signs of complex human style behaviors.
"It's the point where you can look at a puppy, and say: "The puppy's parents may push it to the ground when it does something wrong, but that doesn't mean the puppy is doing moral reasoning."
err... obviously the puppy IS engaging in complex information processing, using neurons no less, and we can prove that with microscopes. So somehow you have provided evidence that you are wrong on this point, and then come to the conclusion that you are right.
On the other hand there is some validity in the ev psych argument - but only some. This is exactly the sort of storytelling, and leaping to the assumption that the story proves facts, that makes so many biologists hate evolutionary psychology.
*In fact in a certain sense I go with what HA seems to be saying about it being unclear if morality (thinking deciding etc) exists in the mystical sense we seem to be aiming for.
comment by Tim_Tyler · 2008-08-17T08:11:14.000Z · LW(p) · GW(p)
The puppy's parents may push it to the ground when it does something wrong, but that doesn't mean the puppy is doing moral reasoning. Our current theories of evolutionary psychology hold that moral reasoning arose as a response to more complex social challenges than that - in their full-fledged human form, our moral adaptations are the result of selection pressures over linguistic arguments about tribal politics.
Right, but dogs know right from wrong, even if they don't have something akin to language. Much as they can catch a ball - even though they don't know how to solve the differential equations that describe the ball's arc.
Dogs have pretty complex social lives - as a result of their pack-hunting ancestry. This no doubt came in useful when it came to their more recent symbiosis with humans.
reply by Yosarian2 · 2013-01-25T10:23:11.477Z · LW(p) · GW(p)
Yeah, agreed. Human morality is a very complicated thing, but it does seem like at least some parts of the circuitry we use for moral thinking does exist in other animals, like dogs. For example, dogs are so trainable because they're very good at learning that certain types of behavior are "bad" and they will be punished for them. They can even extrapolate (the dog knows he was yelled at for behavior X, so he knows he probably shouldn't do similar behavior Y either). It's not a fully developed form of moral reasoning, but there certainly is a similar mechanism in place in human children as their parents teach them "right from wrong."
comment by Recovering_irrationalist · 2008-08-17T11:48:31.000Z · LW(p) · GW(p)
I've spent some time online trying to track down the exact moment when someone noticed the vastly tangled internal structure of the brain's neurons, and said, "Hey, I bet all this giant tangle is doing complex information-processing!"
My guess is Ibn al-Haytham, early 11thC while under house arrest after realizing he couldn't, as claimed, regulate the Nile's overflows.
Wikipedia: "In the Book of Optics, Ibn al-Haytham was the first scientist to argue that vision occurs in the brain, rather than the eyes. He pointed out that personal experience has an effect on what people see and how they see, and that vision and perception are subjective. He explained possible errors in vision in detail, and as an example described how a small child with less experience may have more difficulty interpreting what he or she sees. He also gave an example of how an adult can make mistakes in vision due to experience that suggests that one is seeing one thing, when one is really seeing something else."
comment by Richard_Kennaway · 2008-08-17T18:49:07.000Z · LW(p) · GW(p)
Eliezer: Our current theories of evolutionary psychology hold that moral reasoning arose as a response to more complex social challenges than that - in their full-fledged human form, our moral adaptations are the result of selection pressures over linguistic arguments about tribal politics.
You mentioned that explanation earlier, but neither there nor here did you give any reference to evidence for it. Is there any? I don't mean evidence for the general idea of evolutionary psychology, but evidence for this particular claim.
It is not clear just what you are denying of dogs. "In their full-fledged human form" implies that you think there are more rudimentary non-human forms, and you denied "moral reasoning" of the puppy rather than morality -- but it is not clear if you were making a deliberate distinction there. Is "moral reasoning" anything more than morality engaging language in its service?
What do fish enjoy?
comment by Fronken · 2013-01-25T10:46:40.465Z · LW(p) · GW(p)
- in their full-fledged human form, our moral adaptations are the result of selection pressures over linguistic arguments about tribal politics.
Is that known to be true? It seems like a possible story, but other such ideas feel likely without working - game theory, group selection, kin selection, all these sound good but do not truly explain why we feel moral. I do not want to convince myself of such a mistake unless it works in math as well as convinces.