Hot Air Doesn't Disagree
post by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-08-16T00:42:02.000Z · 45 comments
Followup to: The Bedrock of Morality, Abstracted Idealized Dynamics
Tim Tyler comments:
Do the fox and the rabbit disagree? It seems reasonable to say that they do if they meet: the rabbit thinks it should be eating grass, and the fox thinks the rabbit should be in the fox's stomach. They may argue passionately about the rabbit's fate - and even stoop to violence.
Boy, you know, when you think about it, Nature turns out to be just full of disagreement.
Rocks, for example, fall down - so they agree with us, who also fall when pushed off a cliff - whereas hot air rises into the air, unlike humans.
I wonder why hot air disagrees with us so dramatically. I wonder what sort of moral justifications it might have for behaving as it does; and how long it will take to argue this out. So far, hot air has not been forthcoming in terms of moral justifications.
Physical systems that behave differently from you usually do not have factual or moral disagreements with you. Only a highly specialized subset of systems, when they do something different from you, should lead you to infer their explicit internal representation of moral arguments that could potentially lead you to change your mind about what you should do.
Attributing moral disagreements to rabbits or foxes is sheer anthropomorphism, in the full technical sense of the term - like supposing that lightning bolts are thrown by thunder gods, or that trees have spirits that can be insulted by human sexual practices and lead them to withhold their fruit.
The rabbit does not think it should be eating grass. If questioned the rabbit will not say, "I enjoy eating grass, and it is good in general for agents to do what they enjoy, therefore I should eat grass." Now you might invent an argument like that; but the rabbit's actual behavior has absolutely no causal connection to any cognitive system that processes such arguments. The fact that the rabbit eats grass, should not lead you to infer the explicit cognitive representation of, nor even infer the probable theoretical existence of, the sort of arguments that humans have over what they should do. The rabbit is just eating grass, like a rock rolls downhill and like hot air rises.
To think that the rabbit contains a little circuit that ponders morality and then finally does what it thinks it should do, and that the rabbit has arrived at the belief that it should eat grass, and that this is the explanation of why the rabbit is eating grass - from which we might infer that, if the rabbit is correct, perhaps humans should do the same thing - this is all as ridiculous as thinking that the rock wants to be at the bottom of the hill, concludes that it can reach the bottom of the hill by rolling, and therefore decides to exert a mysterious motive force on itself. Aristotle thought that, but there is a reason why Aristotelians don't teach modern physics courses.
The fox does not argue that it is smarter than the rabbit and so deserves to live at the rabbit's expense. To think that the fox is moralizing about why it should eat the rabbit, and this is why the fox eats the rabbit - from which we might infer that we as humans, hearing the fox out, would see its arguments as being in direct conflict with those of the rabbit, and we would have to judge between them - this is as ridiculous as thinking (as a modern human being) that lightning bolts are thrown by thunder gods in a state of inferrable anger.
Yes, foxes and rabbits are more complex creatures than rocks and hot air, but they do not process moral arguments. They are not that complex in that particular way.
Foxes try to eat rabbits and rabbits try to escape foxes, and from this there is nothing more to be inferred than from rocks falling and hot air rising, or water quenching fire and fire evaporating water. They are not arguing.
This anthropomorphism of presuming that every system does what it does because of a belief about what it should do, is directly responsible for the belief that Pebblesorters create prime-numbered heaps of pebbles because they think that is what everyone should do. They don't. Systems whose behavior indicates something about what agents should do, are rare, and the Pebblesorters are not such systems. They don't care about sentient life at all. They just sort pebbles into prime-numbered heaps.
45 comments
Comments sorted by oldest first, as this post is from before comment nesting was available (around 2009-02-27).
comment by J_Thomas2 · 2008-08-16T01:46:46.000Z · LW(p) · GW(p)
Chuang-Tzu had a story: Two philosophers were walking home from the bar after a long evening drinking. They stopped to piss off a bridge. One of them said, "Look at the fish playing in the moonlight! How happy they are!"
The other said, "You're not a fish so you can't know whether the fish are happy."
The first said, "You're not me so you can't know whether I know whether the fish are happy."
It seems implausible to me that rabbits or foxes think about morality at all. But I don't know that with any certainty, I'm not sure how they think.
Eliezer says with certainty that they do not think about morality at all. It seems implausible to me that Eliezer would know that any more than I do, but I don't know with any certainty that he doesn't know.
comment by Hopefully_Anonymous · 2008-08-16T02:08:17.000Z · LW(p) · GW(p)
J Thomas, whether or not foxes or rabbits think about morality seems to me to be the less interesting aspect of Tim Tyler's comments.
As far as I can tell this is more about algorithms and persistence. I aspire to value the persistence of my own algorithm as a subjective conscious entity. I can conceive of someone else who values maximizing the persistence odds of any subjective conscious entity that has ever existed above all. A third that values maximizing the persistence odds of any human who has ever lived above all. Eliezer seems to value maximizing the persistence of a certain algorithm of morality above all (even if it deoptimizes the persistence odds of all humans who have ever lived). Optimizing the persistence odds of these various algorithms seems to me to be in conflict with each other, much like the algorithm of the fox having the rabbit in its belly is in conflict with the algorithm of the rabbit eating grass, outside of the fox's belly. It's an interesting problem, although I do of course have my own preferred solution to it.
comment by GNZ · 2008-08-16T02:27:52.000Z · LW(p) · GW(p)
Hmm, it doesn't seem implausible to me at all. What I would want to see is the thing lacking in a fox that prevents it from having that sort of pattern in its mind - just as I would want to know that if someone suggested a person was unable to do it.
comment by TGGP4 · 2008-08-16T02:55:41.000Z · LW(p) · GW(p)
I know rabbits are social creatures, so they might have something like what we call morality. I know less about foxes (I think they are less social than wolves) but they could as well. They don't engage in any sort of argument, so I guess they don't have any moral disagreement. Similarly, Hitler did not have any moral disagreement with Jews, gypsies and so on.
comment by Ian_C. · 2008-08-16T03:31:17.000Z · LW(p) · GW(p)
The whole question of "should" only arises if you have a choice and a mind powerful enough to reason about it. If you watch dogs it does sometimes look like they are choosing. For example if two people call a dog simultaneously it will look here, look there, pause and think (it looks like) and then go for one of the people. But I doubt it has reasoned out the choice, it has probably just gone with whatever emotion strikes it at the key moment.
comment by Caledonian2 · 2008-08-16T05:14:25.000Z · LW(p) · GW(p)
Ian C., how is the behavior you describe any different from the vast, vast majority of human decisions?
People don't often resort to reason when determining what they're going to do, and even when they do, they tend to follow the dictates of reason only if their emotions motivate them to do so. If there's a pre-existing emotional position involved, people tend to follow it instead of reason.
comment by jsalvatier · 2008-08-16T05:52:53.000Z · LW(p) · GW(p)
Moreover, even if they did have moralities, they would probably be very, very different moralities, which means that the act of doing opposing things does not mean they are disagreeing; they are just maximizing for different criteria. The only reason it's useful to talk about humans disagreeing is that it is very likely that we are optimizing for the same criteria if you look deep enough.
comment by Simon3 · 2008-08-16T05:57:20.000Z · LW(p) · GW(p)
In his argument about whether the rabbit and fox are disagreeing as to the rabbit's proper place, Eliezer says "The rabbit is just eating grass, like a rock rolls downhill and like hot air rises." While I don't want to support the idea that the rabbit and fox maintain separate "should-like" intellectual constructs on Bugs' fate, the rabbit's actions are quite distinct from the reactions of the rock or the hot air. The rock's and the gas's actions are entirely determined by the circumstances in which they (do not) find themselves.
The nervous system that partly constitutes the rabbit gives it the possibility of choosing, admittedly within fairly strict constraints, whether and when it eats a particular patch of grass, and whether its environment encourages or discourages exposing itself to even attempt eating grass (remember the fox!).
Those who believe that this choice is in principle as predictable as the movement of rock and hot air are referred to Edelman's quick survey of selectional biological systems (immune system, CNS) in "Second Nature". Rabbits and foxes are usefully thought of as capable of "intending" (Dennett, "The Intentional Stance"); humans seem capable of, if not addicted to, glorifying "intention" as "prescription": "will" as "should".
comment by Tim_Tyler · 2008-08-16T07:09:40.000Z · LW(p) · GW(p)
The point of the fox and rabbit comment was to illustrate how agents with different utility functions might be usefully said to disagree - i.e. they can exhibit disagreement behaviour, such as arguing.
If you don't think foxes and rabbits are moral agents - and so are more like rocks than people - then I think you may be underestimating their social lives. But more importantly, you need to substitute agents you do regard as moral agents into the example in order to make sense of it - e.g. choose two separate alien races, or gorillas and chimps.
Agents with different utility functions need not disagree about facts. But they may well disagree over issues such as how resources ought to be allocated - and resource allocation can be a moral issue, e.g. when it results in deaths.
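To make the point concrete, here is a minimal sketch (my illustration, not Tim Tyler's; the quantities and utility functions are invented for the example) of two agents who agree on every fact yet favour opposite allocations of a fixed resource:

```python
# Two agents share all factual beliefs (the total resource available)
# but have different utility functions over how it is divided.

TOTAL = 10.0  # a fact both agents agree on


def fox_utility(fox_share: float) -> float:
    return fox_share                 # the fox prefers a larger share for itself


def rabbit_utility(fox_share: float) -> float:
    return TOTAL - fox_share         # the rabbit prefers the fox to get less


candidates = [i * 0.5 for i in range(int(TOTAL * 2) + 1)]
fox_pick = max(candidates, key=fox_utility)        # 10.0
rabbit_pick = max(candidates, key=rabbit_utility)  # 0.0
print(fox_pick, rabbit_pick)  # same facts, opposite preferred outcomes
```

The factual disagreement here is zero; the conflict is entirely over which allocation "should" obtain.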
I suppose it could be objected that such agents would not actually argue. They could recognise that they had a fundamental difference in goals, and therefore arguing would be pointless, due to a lack of common premises. However, my expectation is that there would be actual arguments and disagreements as a result - similar to those that occur today over borders and oil.
I note that I have a different perspective on the pebble sorters as well: Eliezer argues that the pebble sorters are not moral agents. Maybe because their endpoints have nothing to do with morality.
However, the pebble sorters are an optimisation process - at least to the extent that they prefer larger piles. I see no reason why they should not establish a complex society, and eventually develop space travel and interstellar flight - in their quest to get hold of more rocks. I.e. though the pebble sorters have no moral terminal values, they may well develop moral instrumental values - in the process of developing the cooperative society needed to support their mining operations.
comment by Ian_C. · 2008-08-16T07:19:47.000Z · LW(p) · GW(p)
Caledonian - in matters of the heart perhaps people go with emotion, merely rationalizing after the fact, but in other areas - career, finances, etc. - I think most people try to reason it out. You need to have more faith in your fellow man :)
comment by Mario2 · 2008-08-16T07:53:01.000Z · LW(p) · GW(p)
I'm not sure I see what is so hard to understand about the Rabbit/Fox scenario. If they both were intelligent creatures, it seems pretty clear that there would be no moral justification in eating the rabbit, and the fox would be obligated to seek out another source of food. If you were to stipulate that the rabbit is the only source of nourishment available to the fox, this still in no way justifies murder. The fox would have a moral obligation to starve to death. The only remaining problem would be whether the fox has an obligation to his species to survive and procreate, but that is a claim that Eliezer already explicitly rejected.
Of course this reasoning only works with species largely similar to our own. I'm still not sure if it would be applicable to species which exhibit no sense of individualism.
comment by Tim_Tyler · 2008-08-16T09:09:48.000Z · LW(p) · GW(p)
If they both were intelligent creatures, it seems pretty clear that there would be no moral justification in eating the rabbit, and the fox would be obligated to seek out another source of food.
Do the fox and rabbit get to call on human arbitration? Your argument might convince a rabbit, but I doubt a fox would buy it:
Foxes are far smarter and more beautiful than rabbits - and they depend on rabbits for their very existence. By contrast, rabbits simply mess up the countryside by covering it with burrows and droppings - they are a pest, and are obviously in need of population control - though they seem too stupid to self-impose it.
comment by Vladimir_Nesov · 2008-08-16T09:16:31.000Z · LW(p) · GW(p)
I disagree. Rabbits have the "should" in their algorithm: they search for plans that "could" be executed and converge on the plans satisfying their sense of "good". It is a process similar to the one operating in humans or fruit flies, and very unlike the one operating in rocks. The main difference is that it seems difficult to persuade a rabbit of anything, but it's equally difficult to persuade a drunk Vasya from the 6th floor that flooding the neighbors is really bad. Animals (and even fruit flies) can adapt, can change their behavior in response to the same context as a result of being exposed to a training context, can start selecting different plans in the same situations. They don't follow the same ritual as humans do, they don't exchange moral arguments at the same level of explicitness as humans, but as cognitive algorithms go, they have all the details of "should". Not all humans are able to be persuaded by valid moral arguments, and some need really deep reconfiguration before such arguments would start working, in ways not yet accessible to modern medicine, in ways equally unlike the normal rituals of moral persuasion. What would a more intelligent, more informed rabbit want? Would rabbits uploaded into a rabbit-Friendly environment experience moral progress?
Reconstructing the part of the original argument that I think is valid, I agree that rabbits don't possess the facilities for moral argument in the same sense as humans do, but that is a different phenomenon from the fundamental should of cognitive algorithms. Discussing this difference requires understanding the process of human-level moral argument, and not just the process of goal-directed action, or the process of moral progress. There are different ways in which behavior changes, and I'm not sure that there is a meaningful threshold at which adaptation becomes moral progress; there might be.
Replies from: Kenny
↑ comment by Kenny · 2013-05-14T00:06:14.155Z · LW(p) · GW(p)
This is cogent and forceful, but still wrong, I think. There's something to morality beyond the presence of a planning algorithm. I can't currently imagine what that might be, though, so maybe you're right that the difference is one of degree and not kind.
I think part of the confusion is that Eliezer is distinguishing morality as a particular aspect of human decision-making. A lot of the comments seem to want to include any decision-making criteria as a kind of generalized morality.
Morality may just be a deceptively simple word that covers extremely complex aspects of how humans choose, and justify, their actions.
comment by Mario2 · 2008-08-16T09:18:26.000Z · LW(p) · GW(p)
"Your argument might convince a rabbit, but I doubt a fox would buy it"
Change the fox's name to Beverly and the rabbit's name to Herman. I don't care how much smarter or better looking Beverly is, she still doesn't have the right to kill and eat Herman.
comment by Jadagul · 2008-08-16T09:49:04.000Z · LW(p) · GW(p)
But Mario, why not? In J-morality it's wrong to hurt people, both because I have empathy towards people and so I like them, and because people tend to create net positive externalities. But that's a value judgment. I can't come up with any argument that would convince a sociopath that he "oughtn't" kill people when he can get away with it. Even in theory.
There was nothing wrong with Raskolnikov's moral theory. He just didn't realize that he wasn't a Napoleon.
comment by Tim_Tyler · 2008-08-16T11:46:54.000Z · LW(p) · GW(p)
Substitute humans for the foxes, then. Imagine humans were totally dependent on rabbits for their nutritional needs. Would you argue that it's right for the human race to commit suicide through starvation, to avoid killing any more rabbits? What about the extermination of our entire species? Would the human race have no moral value if we were obligate rabbit-carnivores?
IMO, few would lose much sleep over the rabbit slaughter: humans value other humans far more than we value rabbits.
comment by Mario2 · 2008-08-16T12:06:40.000Z · LW(p) · GW(p)
Why not? Because it's wrong. I can sense that it's wrong, even if I have no other evidence, and the fact that nearly everyone else agrees is pretty good confirmation. That's not proof, I suppose, but I'm willing to behave as if I had proof because there is not enough disconfirmatory information.
I believe in the moon because I can see the moon, I trust my eyes, and everyone else seems to agree. I would continue to believe in the absence of any other evidence. If I couldn't see the moon I might not believe, and that would also be rational. I can see, however, and I trust my moral sense as much as I trust any other sensory organ. Just as with a sociopath, I think you would have a hard time proving the existence of a specific lunar crater to a blind person, but the fact that she lacks the capability to see it isn't evidence that it isn't there. People that can see can see it, and that will have to be enough.
comment by Nuncio_Salvage · 2008-08-16T13:07:38.000Z · LW(p) · GW(p)
The rabbit and fox are processing and resolving their moral arguments the same way humans often do with each other. They don't verbally argue, because they don't speak. To dismiss their behaviors as meaningless because they don't hold a human conversation is anthropomorphism.
As fish are an example of prototypical vertebrates, we can reasonably infer that all animals that shared a common ancestor with fish retained similar biological hardware and share similar social adaptations. This hardware does not exist in rocks or hot air "creatures".
When humans observe fish behavior (or rabbits or foxes), it is incorrect to claim, “see how they behave like us,” because our biological machinery is additional complexity overlaid on their base systems.
Don't assume human social sophistication is all that special just because we have it. Since humans are "merely animals" biologically, it is a survival adaptation like any other, and over the long term it may prove to be an experimental blip in the historical timeline. There is certainly a case to be made that our lofty intelligence and fictitious free will are superfluous to survival, as evidenced by the older and more abundant animal species around us.
In chronological and evolutionary terms it is more logical to claim that core human social behavior is that of fish (or rabbits or foxes).
A couple of articles to consider:
"Now, fish are regarded as steeped in social intelligence, pursuing Machiavellian strategies of manipulation, punishment and reconciliation, exhibiting stable cultural traditions, and cooperating to inspect predators and catch food."
Recent research had shown that fish not only recognised individual "shoal mates" but monitored the social prestige of others, and tracked relationships.
They had also been observed using tools, building complex nests and bowers, and exhibiting impressive long-term memories.
Our inner fish extends beyond physicality. New research reveals that many fish display a wide range of surprisingly sophisticated social behaviors, pursuing interpersonal, interfishal relationships that seem almost embarrassingly familiar.
“Fish have some of the most complex social systems known,” Michael Taborsky, a behavioral ecologist at the University of Bern in Switzerland, said. “You see fish helping each other. You see cooperation and forms of reciprocity.”
Regarding the broader theme (if I'm not too far off the mark): unlike the rock and air, and arguably the Pebblesorters (since I get the impression that they aren't biological as we know it), the rabbit and fox are biological machines, and as such even our rudimentary understanding of their organic systems contributes to understanding their moral positions. They are determined to survive. They will seek food and shelter. They will seek fit mates to reproduce. They will nurse their young. They will do some of that complex fish behavior and add a few tricks of their own.
Belief in Artificial Intelligence in a box suffers from the human superstition of disembodied consciousness that we tend to attribute to our Gods and even ourselves in the form of souls (since we classically consider ourselves Gods on Earth). However, the meat in our heads and bodies, and the selective recorded experience of this meat in the environment, is us -- along with our fishy programming and all sorts of other "leftover" genetic data.
An AI (or moral agent) needs a body to work in the way that people typically conceive of an artificial person attempting to experience emotions and creativity and all the other sentient and sapient behaviors. The AI needs senses and a means to respond to the environment, and all the other trappings and limitations of a corporeal form, probably best done with biological materials or approximate synthetics. Even so, without genetic heritage there is small chance that it would behave human- or even animal-like unless great pains were taken to program in our survival tendencies, starting with our "surprisingly sophisticated social" fish brains.
In other words, it’s not that easy to separate morality from biology. It’s software inherent to the hardware and the historical pressures that shaped them both.
comment by shaun · 2008-08-16T13:39:48.000Z · LW(p) · GW(p)
I think it's erroneous to apply something so human as morality to something non-human. I think it is correct to infer that animals have some semblance of what we call morality, but it would look black where we see white. Animals do not seem to fight with the moral implications of their actions; rather, they do what they see needs to be done. For the most part animals are primarily instinctual (from our perspective), but then they also feel emotions, like when they get hurt, or are protecting their young, etc. So I would venture to say that yes, they are free-thinking, because they are in control of their own lives. If they were robotic, emotionless creatures, then dogs would not protect their owners, lions would not have territorial disputes, and monkeys would not groom each other. But they do these things and so much more. I think animals should be judged by the laws of their own world and not the laws of ours. It'd be like trying to preach the gospel to grass.
comment by J_Thomas2 · 2008-08-16T14:17:59.000Z · LW(p) · GW(p)
Konrad Lorenz claimed that dogs and wolves have morality. When a puppy does something wrong then a parent pushes on the back of its neck with their mouth and pins it to the ground, and lets it up when it whines appropriately.
Lorenz gave an example of an animal that mated at the wrong time. The pack leader found the male still helplessly coupled with the female, and pinned his head to the ground just like a puppy.
It doesn't have to take language. It starts out with moral beliefs that some individuals break. I can't think of any moral taboos that haven't been broken, except for the extermination of the human species which hasn't happened yet. So, moral taboos that get broken and the perps get punished for it. That's morality.
It happens among dogs and cats and horses and probably lots of animals. It isn't that all these behaviors are in the genes, selected genetically by natural selection over the last million generations or so. They get taught, which is much faster to develop but which also has a higher cost.
comment by J_Thomas2 · 2008-08-16T14:27:20.000Z · LW(p) · GW(p)
If you were to stipulate that the rabbit is the only source of nourishment available to the fox, this still in no way justifies murder. The fox would have a moral obligation to starve to death.
How different is it when soldiers are at war? They must kill or be killed. If the fact that enemy soldiers will kill them if they don't kill the enemy first isn't enough justification, what is?
Should the soldiers on each side sit down and argue out the moral justification for the war first, and the side that is unjustified should surrender?
But somehow it seems like they hardly ever do that....
comment by poke · 2008-08-16T14:57:05.000Z · LW(p) · GW(p)
This demonstrates quite nicely the problem with the magical notion of an "internal representation." (Actually, there are two magical notions, since both "internal" and "representation" are separately magical.) You could easily replace "internal representation" with "soul" in this essay and you'd get back the orthodox thinking about humans and animals of the last two thousand years. Given that there is neither evidence nor any chance of evidence for either "internal representations" or "souls", and neither is well-defined (or defined at all), you might as well go ahead and make the substitution. This entire essay is pure mysticism.
comment by Caledonian2 · 2008-08-16T15:13:31.000Z · LW(p) · GW(p)
Why should other creatures' ideas about what we should do convince us of anything?
That isn't how logical arguments work. The mere fact that someone supports a position is not evidence in favor of that position. Where's the appeal to commonly-accepted principles? Where's the demonstration that the position arises from those principles?
comment by DonGeddis · 2008-08-16T15:32:04.000Z · LW(p) · GW(p)
You know, Eliezer, I've seen you come up with lots of interesting analogies (like pebblesorters) to explain your concept of morality. Another one occurred to me that you might find useful: music. It seems to have the same "conflict" between reductionist "acoustic vibrations" vs. Beethoven as morality does. Not to mention the question of what aliens or AIs might consider to be music. Or, for that matter, the fact that there are somewhat different kinds of music in different human cultures, all sharing some elements but not necessarily others.
And, in the end, there is no simple rule you can define that distinguishes "music" vibrations from "non-music" vibrations.
Well, probably you don't need more suggestions. But I thought the "music ~= morality" connection was at least interesting enough to consider.
comment by Thom_Blake · 2008-08-16T15:44:27.000Z · LW(p) · GW(p)
Aristotelians may not be teaching physics courses (though I know of no survey showing that) but they do increasingly teach ethics courses. It makes sense to think of what qualities are good for a fox or good for a rabbit, and so one can speak about them with respect to ethics.
However, there is no reason to think that they disagree about ethics, since disagreement is a social activity that is seldom shared between species, and ethics requires actually thinking about what one has most reason to do or want. While it makes sense to attribute the intentional stance to such animals to predict their behavior (a la Dennett), that still leaves us with little reason to regard them as things that reason about morality.
That said, there is good reason to think that dogs, for instance, have disagreements about ethics. Dogs have a good sense of what it takes to be a good dog, and will correct each other's behavior if it falls out of line (as mentioned w.r.t. wolves above).
comment by Michael_Drake · 2008-08-16T16:01:39.000Z · LW(p) · GW(p)
It makes sense to say that rabbits and foxes have interests in a way that rocks and air don't. To be sure, they don't have the competence to represent these interests within a moral framework. Perhaps, though, they have something proto-normative about them. (In a slogan: Interests precede oughts.)
comment by Will_Pearson · 2008-08-16T16:07:08.000Z · LW(p) · GW(p)
Okay, let's say morality is like primality. What happens if I get morality wrong? If my ability to test primes is faulty, my encryptions are more likely to be broken, and I will give 5 presents to my 3 nephews and expect them to be able to divide them evenly, causing familial tension. The latter example would still be true even if society's primality detection is broken. What are the consequences if I and society get morality wrong? Let's say society thinks it is not immoral to kill babies under 6 months (not that it is a good thing to do, but neutral). What effects would the evil actions have, such that we could recognise them and potentially correct our morality detection?
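As a minimal sketch of the analogy (my illustration, not part of Will Pearson's comment; both functions are invented for the example), here is a correct primality test next to a hypothetical "faulty" detector, with the faulty one's errors surfacing as observable disagreements:

```python
def is_prime(n: int) -> bool:
    """Correct trial-division primality test."""
    return n >= 2 and all(n % d for d in range(2, int(n ** 0.5) + 1))


def faulty_is_prime(n: int) -> bool:
    """Hypothetical broken detector: only checks a few small divisors."""
    return n >= 2 and all(n % d for d in (2, 3, 5))


# Where the detectors disagree is where the "broken encryption" or the unfair
# present-splitting shows up: 2, 3, 5 are missed, while 49, 77, 91 are wrongly
# accepted as prime.
disagreements = [n for n in range(2, 100) if is_prime(n) != faulty_is_prime(n)]
print(disagreements)  # [2, 3, 5, 49, 77, 91]
```

The comment's question is whether getting morality wrong produces comparably recognisable downstream effects.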
comment by J_Thomas2 · 2008-08-16T16:19:52.000Z · LW(p) · GW(p)
This series of Potemkin essays makes me increasingly suspicious that someone's trying to pull a fast one on the Empress.
Agreed. I've suspected for some time that -- after laying out descriptions of how bias works -- Eliezer is now presenting us with a series of arguments that are all bias, all the time, and noticing how we buy into it.
It's not only the most charitable explanation, it's also the most consistent explanation.
comment by Tim_Tyler · 2008-08-16T16:20:57.000Z · LW(p) · GW(p)
Heh. When I said the fox and the rabbit "argue passionately about the rabbit's fate" readers were not supposed to take that completely literally.
The idea was that different terminal values can lead to disagreements and conflicts over how to use resources.
Much as different value systems have led to conflict over the Al-Aqsa Mosque in Jerusalem.
comment by Mario2 · 2008-08-16T17:05:51.000Z · LW(p) · GW(p)
"How different is it when soldiers are at war? They must kill or be killed."
I think there is an important distinction between "kill or die" and "kill or be killed." The fox's life may be at stake, but the rabbit clearly isn't attacking the fox. If I need a heart transplant, I would still not be justified in killing someone to obtain the organ.
comment by JamesAndrix · 2008-08-16T17:18:56.000Z · LW(p) · GW(p)
Our optimization target is also very different from any moral explanations we might give for our behavior (good or bad behavior).
comment by J_Thomas2 · 2008-08-16T17:29:15.000Z · LW(p) · GW(p)
I think there is an important distinction between "kill or die" and "kill or be killed." The fox's life may be at stake, but the rabbit clearly isn't attacking the fox. If I need a heart transplant, I would still not be justified in killing someone to obtain the organ.
Mario, you are making a subtler distinction than I was. There is no end to the number of subtle distinctions that can be made.
In warfare we can distinguish between infantrymen who are shooting directly at each other, versus infantry and artillery or airstrikes that dump explosives on them at little risk to themselves.
We can distinguish between soldiers who are fighting for their homes versus soldiers who are fighting beyond their own borders. Clearly it's immoral to invade other countries, and not immoral to defend your own.
I'm sure we could come up with hundreds of ways to split up the situations that show they are not all the same. But how much difference do these differences really make? "Kill or die" is pretty basic. If somebody's going to die anyway, and your actions can decide who it will be, do you have any right to choose?
comment by retired_urologist · 2008-08-19T15:54:24.000Z · LW(p) · GW(p)
@Tim Tyler: IMO, few would lose much sleep over the rabbit slaughter: humans value other humans far more than we value rabbits.
PETA? Vegans?
@Mario: I believe in the moon because I can see the moon, I trust my eyes, and everyone else seems to agree.
This is so far from true that jokes are made about it: One evening, two blondes in Oklahoma, planning to spend their vacations in Florida, are having cocktails and gazing at the moon. The first asks, "Which do you think is closer, the moon or Florida?" The second responds, "Helloooo. Can you see Florida?"
@J Thomas: Eliezer is now presenting us with a series of arguments that are all bias, all the time, and noticing how we buy into it.
A heartfelt thanks. As a late-comer, I was unaware of the technique, and too lame to notice it on my own. My perspective has changed. Is there any way to delete old comments, or is that similar to the desire to upload more intelligence?
comment by Tim_Tyler · 2008-08-20T08:34:40.000Z · LW(p) · GW(p)
Re: PETA
Maybe, but if we were obligate rabbit-carnivores - as in the scenario - I doubt PETA would have as many members.
Humans today kill and eat far smarter animals than rabbits for food - and we are not obligate carnivores.
Different interests have historically contributed to many disagreements. About global warming. About whether cigarettes cause cancer. About energy policy. About the safety of foods and drugs. About gender roles in society. And about whether to go to war.
The different interests arise from genetic differences, from differences in development (phenotypic plasticity) and from differences in circumstances (e.g. accidents of geography).
Something like "whether cigarettes cause cancer" can be reasonably regarded as a fact. But whether you work for a cigarette company or a health insurance company has historically made quite a difference to which side of the argument about the issue you wind up on.
I think that illustrates how differences in optimisation target can "bleed over" into differences about facts.
comment by retired_urologist · 2008-08-20T12:09:18.000Z · LW(p) · GW(p)
Thanks, Tim Tyler, for the insight. I am trying to learn how to think differently (more effectively), since my education and profession actually did not include any courses, or even any experience, in clear thinking, sad to say. As you can see from some of my previous comments, I don't always see the rationale of your thoughts, to the point of discarding some of them out of hand, e.g., your series of observations on this topic, in which you dismiss possibly the best-referenced work on the diet subject without reading it, because you felt that some of the author's previous work was inadequate, yet your own references were lame.
I know there is a strong bias on this board about the arrogance of doctors, especially given their rather well-documented failure to make a positive impact on overall health care in the USA. I abhor the "doctor arrogance" as well. Any arrogance seen in my posts is unintentional, and comes not from being an "arrogant doctor", but from the failing of being an "arrogant person", a quality that seems widespread in many of the OB commenters. The more I learn about how such "ninja-brained" people think, the less I have to be arrogant about. I'm here to learn.
comment by TheOtherDave · 2010-11-10T03:13:23.431Z · LW(p) · GW(p)
My own feeling is that any system that can move from one environment (E1) into another (E2) and compare its representations of those environments in order to decide whether to stay in E2 or return to E1 has something that shares a family relationship with what I call "morality" in a human; the distinction at that point is one of how sophisticated the representation-comparing system is, and whether its terminal values are commensurable.
Rocks and hot air can't do this at all; rabbits and foxes and humans and pebblesorters all can.
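A minimal sketch of the kind of system described above (my illustration, not TheOtherDave's; the environments, features, and scoring function are all invented):

```python
from dataclasses import dataclass


@dataclass
class Environment:
    name: str
    food: float    # invented features standing in for the agent's representation
    danger: float


def evaluate(env: Environment) -> float:
    """A toy evaluative function: prefer food, weigh danger against it."""
    return env.food - 2.0 * env.danger


def choose(current: Environment, previous: Environment) -> Environment:
    """Stay in the current environment or return to the previous one,
    whichever the agent's own representation scores higher."""
    return current if evaluate(current) >= evaluate(previous) else previous


meadow = Environment("meadow", food=0.9, danger=0.6)   # a fox is nearby
burrow = Environment("burrow", food=0.1, danger=0.05)
print(choose(current=meadow, previous=burrow).name)    # -> "burrow"
```

Everything of interest then lives in how rich the representation and the evaluate function are, which is the sophistication gap the comment goes on to describe.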
A human's system for doing this is significantly more sophisticated than a rabbit's, though. And I would agree that there is a point where increased sophistication crosses a threshold and deserves a new label. So, sure, if we want to say that what a rabbit has doesn't deserve the label "morality" but is rather a mere evaluative function, that's fine. I probably agree.
Ditto for chimpanzees, bonobos, and six-day-old infants, if you like. In each of those cases I think an argument could be made for drawing the line a little bit lower, but it's an acceptable place to draw the line. OTOH, denying the label "morality" to the evaluative function of a healthy adult human who happens to disagree with you is going too far. (Not that anyone here is doing that.)
It's a lot like talking about when a human becomes an "adult." It's a perfectly meaningful distinction, and some answers are clearly wrong -- a six-day-old infant simply isn't an adult and that's all there is to it -- but there's a large grey zone within which where you draw the line is just a matter of convention.
comment by Sniffnoy · 2010-11-10T20:46:00.944Z · LW(p) · GW(p)
I think there's a problem with the article's rabbit:fox::rocks:hot air analogy. The rabbit and the fox "disagree" in that they are working towards conflicting goals; however, the hot air does not in any way prevent the rock from falling, nor the rock the hot air from rising. It's only analogous if you perform a bit of moral deixis; to actually make the analogy work, we would want, say, a heavy rock being carried in a balloon.
Replies from: timtyler
↑ comment by timtyler · 2013-09-01T12:48:48.970Z · LW(p) · GW(p)
I thought it was a bad analogy too. The foxes and rabbits have conflicting goals. However, the falling human and the rising hot air have mutually compatible goals which can be simultaneously satisfied. It seems like a very different situation to me. I think there was a lack of sympathetic reading here.
Generally, one should strive to criticise the strongest argument one can imagine, not a feeble caricature.