Homogeneity vs. heterogeneity (or, What kind of sex is most moral?)

post by PhilGoetz · 2009-05-22T23:25:04.300Z · LW · GW · Legacy · 79 comments

You've all heard discussions of collective ethics vs. individualistic ethics.  These discussions always assume that the organism in question remains constant.  Your task is to choose the proper weight to give collective versus individual goals.

But the designer of transhumans has a different starting point.  They have to decide how much random variation the population will have, and how much individuals will resemble those that they interact with.

Organisms with less genetic diversity place more emphasis on collective ethics.  The amount of selflessness a person exhibits towards another person can be estimated from their genetic similarity.  To a first approximation, if person A shares half of their genes with the people in group B, person A will regard saving their own life versus saving two people from group B as an even tradeoff.  This generalizes across organisms: wherever you find extremely altruistic insects such as ants or bees, you will find that they share most of their genes with the group they behave altruistically towards.  Bacterial colonies and other clonal colonies can be expected to be even more altruistic (although they don't have as wide a behavioral repertoire with which to demonstrate their virtue).  Google kin selection.
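The tradeoff arithmetic above is essentially Hamilton's rule: an altruistic act is favored by selection when r·b > c, where r is the genetic relatedness between actor and recipient, b the benefit to the recipient, and c the cost to the actor. A minimal sketch, with illustrative numbers:

```python
def altruism_favored(relatedness: float, benefit: float, cost: float) -> bool:
    """Hamilton's rule: an altruistic act is selected for when r*b > c."""
    return relatedness * benefit > cost

# Dying (cost = 1 life) to save two full siblings (r = 0.5) is exactly
# break-even, so it is not yet favored; three siblings tips the balance.
print(altruism_favored(0.5, 2, 1))   # False (0.5 * 2 = 1, not > 1)
print(altruism_favored(0.5, 3, 1))   # True
print(altruism_favored(0.75, 2, 1))  # True (e.g. honeybee sisters, r = 0.75)
```

The last line shows why highly related populations can sustain far more self-sacrifice: raising r lowers the benefit threshold at which altruism pays.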

Ants, honeybees, and slime molds, which share more of their genes with their nestmates than humans do with their family, achieve levels of cooperation that humans would consider horrific if it were required of them.  Consider these aphids that explode themselves to provide glue to fill in holes in their community's protective gall.

The human, trying to balance collective ethics vs. individual ethics, is really just trying to discover a balance point that is already determined by their sexual diploidy.  The designer of posthumans (for instance, an AI designing its subroutines for a task), OTOH, actually has a decision to make -- where should that balance be set?  How much variation should there be in the population (whether of genes, memes, or whatever is most important WRT cooperation)?

A strictly goal-oriented AI would supervise its components and resources so as to optimize the trade-off between "exploration" and "exploitation".  (Exploration means trying new approaches; exploitation means re-using approaches that have worked well in the past.)  This means that it would set the level of random variation in the population according to certain equations that maximize the expected speed of optimization.
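The exploration/exploitation tradeoff can be made concrete with a toy epsilon-greedy bandit (a standard formulation; the payoff numbers below are hypothetical). Here `epsilon` plays the role of the variation knob: higher epsilon means more trials spent on untested approaches.

```python
import random

def epsilon_greedy(payoffs, epsilon, trials=10000, seed=0):
    """Minimal epsilon-greedy bandit: with probability epsilon pull a random
    arm (exploration); otherwise pull the best arm found so far (exploitation)."""
    rng = random.Random(seed)
    counts = [0] * len(payoffs)
    totals = [0.0] * len(payoffs)
    reward = 0.0
    for _ in range(trials):
        if rng.random() < epsilon or 0 in counts:
            arm = rng.randrange(len(payoffs))  # explore: try anything
        else:
            # exploit: pick the arm with the best observed average payoff
            arm = max(range(len(payoffs)), key=lambda i: totals[i] / counts[i])
        r = payoffs[arm] + rng.gauss(0, 0.1)   # noisy payoff
        counts[arm] += 1
        totals[arm] += r
        reward += r
    return reward / trials

arms = [0.2, 0.5, 0.9]  # hidden mean payoffs of three strategies
# Too little exploration can lock in a mediocre strategy; too much wastes
# trials on known-bad ones. Somewhere in between is an optimal setting.
for eps in (0.0, 0.1, 0.5):
    print(eps, round(epsilon_greedy(arms, eps), 3))
```

The "certain equations" the post alludes to are whatever schedule of epsilon (or its population-genetics analogue, mutation/variation rate) maximizes expected cumulative payoff for the environment at hand.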

But choosing the level of variation in a population has dramatic ethical consequences.  Creating a more homogeneous population will increase altruism, at the expense of decreasing individualism.  Choosing the amount of variation in the population strictly by maximizing the speed of optimization would mean rolling the dice as to how much altruism vs. individualism your society will have.

Given that you have a goal to achieve, and a parameter setting that will optimize achieving that goal, and you also have a fuzzy ethical issue that has something to say about how to set that same parameter, anyone who is not a moral realist must say: Damn the torpedoes, set the parameter so as to optimize goal-solving.  In other words, simply define the correct moral weight to place on collective versus individual goals as that which results when you set your population's genetic/memetic diversity so as to optimize your population's exploration/exploitation balance for its goals.

Are you comfortable with that?

79 comments


comment by MichaelBishop · 2009-05-23T15:02:54.875Z · LW(p) · GW(p)

I think this post could use additional explanation. Am I alone?

Replies from: PhilGoetz
comment by PhilGoetz · 2009-05-24T00:02:07.677Z · LW(p) · GW(p)

I rewrote it to spell it out in tedious detail.

Replies from: conchis, JGWeissman
comment by conchis · 2009-05-24T10:31:10.735Z · LW(p) · GW(p)

Could you spell out in even more tedious detail what you mean by the following?

The human, trying to balance collective ethics vs. individual ethics, is really just trying to discover a balance point that is already determined by their sexual diploidy.

The obvious interpretation of this sentence seems to commit the naturalistic fallacy; is there another meaning that I'm missing?

Replies from: JGWeissman, PhilGoetz, PhilGoetz
comment by JGWeissman · 2009-05-24T18:24:26.853Z · LW(p) · GW(p)

As near as I can tell from your link, this Naturalistic Fallacy means disagreeing with G. E. Moore's position that "good" cannot be defined in natural terms. It seems to be a powerful debating trick to convince people that disagreeing with you is a fallacy.

Further, Phil's statement does not even define "good", it describes how people define "good". It is not a fallacy to describe a behavior that commits a fallacy.

I wonder, would you have realized these issues yourself, if you had tried to explain how the fallacy applies to the statement? Or would it have helped me to realize that you meant something else, and there is indeed a problem here?

Replies from: conchis
comment by conchis · 2009-05-24T20:14:27.143Z · LW(p) · GW(p)

Apologies, I should have made it clearer that I was referring to the naturalistic fallacy in its casual sense, which denies the validity of drawing moral conclusions directly from natural facts. (I assumed that this usage was common enough that I didn't need to spell it out; that assumption was clearly false.)

Pace your second paragraph, it seemed to me that Phil was trying to do this, and others seem to have interpreted the post in this way too. But I admit that part of the vagueness in my phrasing was due to the fact that I was (and still am) having trouble figuring out exactly what Phil is trying to say.

Replies from: JGWeissman
comment by JGWeissman · 2009-05-24T21:29:11.307Z · LW(p) · GW(p)

Ah, it might have helped to instead call it the more specific Appeal to Nature, one of several usages discussed in the article you referenced. Even so, I don't think Phil was drawing moral conclusions directly from natural facts. He was saying, given these natural facts, this behavior works, and it is the fact that it works that causes people to call it moral. The position seems to be Consequentialism as an explanation for other people's behavior.

As I interpreted the article, Phil is saying that, however much we frame our moral thinking and discussion in terms of abstract ethics, ultimately our conclusions are determined by natural facts about us and our society; that is, we somehow decide the moral thing is what works, even if that is not our explicit reasoning. This sets up the question: if we are in a situation to determine the natural facts that force our ethics, is there in fact a higher ethical principle such that we should fix our nature to fit that principle? And does this conflict with other goals?

Replies from: MichaelBishop
comment by MichaelBishop · 2009-05-24T23:07:34.365Z · LW(p) · GW(p)

You are right, it would have been better to cite Appeal to Nature. But I insist that Phil did commit this fallacy. Quoted from his longer comment in this thread:

I'm dubious of any moral claim, such as valuing vegetarianism or celibacy, that requires people to act contrary to their natures in a way that decreases their fitness.

Replies from: JGWeissman
comment by JGWeissman · 2009-05-25T00:13:42.636Z · LW(p) · GW(p)

What if he had said:

Moral claims, such as valuing vegetarianism or celibacy, that require people to change their usual behavior in a way that decreases their fitness should not be accepted without a compelling reason that addresses the loss to the person so constrained.

Would that have committed the fallacy? Would it still support his point? Should we, because Phil committed this fallacy in answering a question about his article, discard the whole article?

Also note, Phil does not seem to have any problem with a person denying their nature to increase their fitness.

Replies from: MichaelBishop
comment by MichaelBishop · 2009-05-25T05:41:27.168Z · LW(p) · GW(p)

I'm not sure I understand your altered Phil quote either: "What if he had said..." If I do understand it, we still disagree. It'd be helpful if you answered your own questions. Here are a few for you.

Do you believe it is morally good to take actions which increase our fitness?
Do you think it is sometimes/always bad to take actions which decrease our fitness?

Do you take fitness to have terminal or intrinsic moral value?

My answers: "Not necessarily," "Sometimes," and "No."

I'll try to answer your questions tomorrow, but I may have to ask for clarification first.

Replies from: JGWeissman
comment by JGWeissman · 2009-05-25T18:31:43.432Z · LW(p) · GW(p)

I intrinsically value my experience of life, and to the extent that it causes others to have life experiences that they similarly value, I find that my fitness has instrumental value. (Though I tend to value memetic fitness over genetic fitness.)

People instinctively have values that promote genetic fitness (though most don't value genetic fitness itself). One should consider if a loss of genetic fitness reflects a loss to one of these values.

The modified quote does not Appeal to Nature (or if it does, Appealing to Nature is not always wrong). That a behavioral restriction reduces fitness is a reasonable red flag that it may be reducing the person's actual utility, and I don't think it is controversial that you should not do so arbitrarily. The compelling reason may be that the loss of fitness has nothing to do with anything the person values, but it does promote something else that really is valued. But it is not wrong to desire an explicit reason for changing one's behavior. Every improvement is a change, but not every change is an improvement.

I think that both the modified and original quote are really a side point to the issue Phil was discussing. What might be more pertinent is that a moral system, whether it is good or not, that causes its followers to decrease their fitness, will be "punished" in that it will become less common than moral systems that promote fitness. This nicely supports the idea that we could promote a good moral system, if we identified one, if we could fix certain parameters so that morality does increase fitness.

And no, we should not dismiss an article because its author made a mistake in answering a question about it. If no one is able to address an objection to a critical part of the article, then we should consider dismissing it.

comment by PhilGoetz · 2009-05-26T18:57:11.456Z · LW(p) · GW(p)

I find the response of the LW community schizophrenic. If someone writes a post advocating moral realism, they get jumped on for being religious. Yet when I wrote this post asking whether there is some evolution-free moral code that should influence the choices of organism-designers, I got jumped on for even presenting as one possibility that moral realism might be false.

Claiming that the "naturalistic fallacy" is a fallacy is, I think, identical to defending moral realism. You can't even define the naturalistic fallacy without presuming moral realism.

Replies from: jimrandomh, thomblake
comment by jimrandomh · 2009-05-26T19:32:00.588Z · LW(p) · GW(p)

Your post was poorly received because it tackled a confusing topic, and failed to bring clarity. All of the supposed counter-arguments are just confabulations. I expect Less Wrongers to jump on any attempt to analyze morality in abstract terms, regardless of its conclusion, because there's an extensive body of philosophical literature showing that such attempts produce only concentrated confusion.

Also, you shouldn't expect different individuals within a community to all advocate consistent positions. If they did, that would mean that either the question was an easy one and not worthy of further discussion, or the community was broken and suffering from groupthink.

comment by thomblake · 2009-05-26T19:36:01.602Z · LW(p) · GW(p)

I agree with jimrandomh. One should expect different people to have different opinions. Furthermore, we're most likely to respond to things that we find obviously wrong - thus, where the community is not in consensus, expect to be 'jumped on' no matter which position you advocate.

-a moral realist of sorts

comment by PhilGoetz · 2009-05-24T15:22:53.126Z · LW(p) · GW(p)

Humans are not really free moral agents when deciding on a balance between collectivism & individuality. Given their particular mechanism for reproducing, and given a particular society with a particular distribution of their genetic relatedness to the people they typically encounter, you can compute how much an average person in that society can be expected to value their own life, vs. the lives of others. An individual human can reason about it, and choose a different valuation; but they're then trying to act in a way inconsistent with their psychology, and a way that will lower their own genetic fitness.

I'm dubious of any moral claim, such as valuing vegetarianism or celibacy, that requires people to act contrary to their natures in a way that decreases their fitness. (Please do not respond with pointers to literature on how vegetarianism is healthy for you. For the sake of argument, presume you are a hunter-gatherer in a cold climate.) You can sit down and reason out that eating other creatures who have feelings and thoughts is bad, but when you then conclude that lions are evil for eating zebras, you've gone wrong somehow. I don't buy the argument that the rules are different for humans because they can reflect on the rules.

(Also, please don't respond by saying that genetic fitness shouldn't matter to thoughtful modern people. Genetic fitness matters, because a moral system must be evolutionarily stable.)

A designer of a posthuman society is free, in a way that a human is not, to choose that balance. The problem presented by this post is that the parameter they would adjust to choose that balance, is also one of the key parameters to adjust to choose an exploration/exploitation tradeoff. The former is an ethical issue; the latter is an optimization issue. How much weight do you give to ethics vs. goal attainment? Does it make any sense for, let's say, a singleton AI, to hold an ethical viewpoint that causes it to act less optimally?

Replies from: Vladimir_Golovin, conchis, Alicorn, MichaelBishop, loqi
comment by Vladimir_Golovin · 2009-05-24T17:25:17.270Z · LW(p) · GW(p)

I'm dubious of any moral claim, such as valuing vegetarianism or celibacy, that requires people to act contrary to their natures in a way that decreases their fitness.

Genetic fitness matters, because a moral system must be evolutionarily stable.

This implies that tragedies of the commons are acceptable under such a moral system.

(Under the "tragedy of the commons" I mean a situation where "selfish" members of a population take advantage of a communally-maintained resource while not contributing to it, thus gaining free energy to outreproduce the "good citizens", thereby increasing the frequency of the genes for cheating and reducing the frequency of the genes for maintenance of the common resource. Any population consisting only of "good citizens" is evolutionarily unstable because it is vulnerable to an invasion of "selfish" mutations.)

Replies from: PhilGoetz
comment by PhilGoetz · 2009-05-26T18:45:44.020Z · LW(p) · GW(p)

This implies that tragedies of the commons are acceptable under such a moral system.

You have it exactly backwards. I made the statement because a moral system that encourages tragedies of the commons is not evolutionarily stable and hence not acceptable.

If you still think it does, please provide an explanation this time.

comment by conchis · 2009-05-24T16:44:49.787Z · LW(p) · GW(p)

Thanks for the clarification. I think I have a somewhat clearer idea what you're getting at now, but the distinction you are attempting to draw between ethics and goal attainment still seems wrongheaded to me. As the designer of a posthuman society, your ethics determine your goals; I don't see how the two are supposed to come into conflict in the way you suggest. (Maybe the demands of evolution will place constraints on the psychologies of feasible post-humans, but that's a rather different point.)

Nitpick:

An individual human can reason about it, and choose a different valuation; but they're then trying to act in a way inconsistent with their psychology

As it's currently stated, I don't think this claim makes any sense. If they can do it, then it's consistent with their psychology.

comment by Alicorn · 2009-05-24T15:51:15.295Z · LW(p) · GW(p)

Why must a moral system be evolutionarily stable?

Replies from: MichaelBishop, PhilGoetz, JGWeissman
comment by MichaelBishop · 2009-05-24T23:15:35.205Z · LW(p) · GW(p)

Yes, and furthermore, what does Phil mean by "evolutionarily stable"? I'm not asking for the definition of an evolutionarily stable state but rather an explanation for what Phil means by it in this context.

comment by PhilGoetz · 2009-06-10T16:14:25.657Z · LW(p) · GW(p)

Because it won't last if it isn't. If you propose a moral system, knowing that the inevitable consequences of people adopting this system is that they will be exploited by defectors and the system will collapse, leaving an immoral and low-utility society of defectors, that's not moral.

Replies from: Alicorn
comment by Alicorn · 2009-06-10T16:16:25.884Z · LW(p) · GW(p)

If you propose a moral system, knowing that the inevitable consequences of people adopting this system is that they will be exploited by defectors and the system will collapse, leaving an immoral and low-utility society of defectors, that's not moral.

This only follows if you're a consequentialist.

comment by JGWeissman · 2009-05-24T18:45:35.445Z · LW(p) · GW(p)

I think Phil may be saying that persistent moral systems must be evolutionary stable, though that raises the question why the moral system needs to be persistent. One might argue that a species that can't support its existence in a moral way should accept its own extinction (that is, the individual members of the species should accept the extinction of the whole species), along with the moral system that led to that conclusion.

comment by MichaelBishop · 2009-05-24T16:10:44.106Z · LW(p) · GW(p)

This does help clarify things. Unfortunately, Conchis is right, you're committing the naturalistic fallacy.

Replies from: timtyler
comment by timtyler · 2009-05-24T22:44:33.402Z · LW(p) · GW(p)

I think we can safely put the naturalistic fallacy in the "out-of-date philosophical claptrap" dustbin.

Replies from: MichaelBishop
comment by MichaelBishop · 2009-05-24T23:03:09.194Z · LW(p) · GW(p)

Tim, what do you mean by this?

Replies from: timtyler
comment by timtyler · 2009-05-25T13:40:05.034Z · LW(p) · GW(p)

That G.E. Moore's account of goodness is over 100 years old, and is too confused to be worth bothering with.

Replies from: MichaelBishop
comment by MichaelBishop · 2009-05-25T16:25:29.479Z · LW(p) · GW(p)

I wasn't attempting a broad defense of G.E. Moore's account of goodness. I was just trying to point out what I considered mistaken moral reasoning without wasting too many words (I'm a slow writer.)

I interpret your comments to have the following connotations, "It isn't worth referencing the naturalistic fallacy like you did." Well, I'm sorry, but I believe that referencing the naturalistic fallacy (though it would have been more precise to reference Appeals to Nature) communicated something important to a lot of people. I don't know a more efficient way to do that.

For the record, I'm not voting your comments down. But my guess is that they wouldn't be voted down if you explained (or even linked to an explanation) of your criticism.

Replies from: timtyler
comment by timtyler · 2009-05-25T17:15:40.787Z · LW(p) · GW(p)

Those connotations roughly capture my intention. Claiming that someone is invoking a fallacy is a kind of put-down. However, if the claimed fallacy is just someone's opinion (about what they think the word "good" ought to refer to), it doesn't work too well.

I am unimpressed by Moore's claims. Labelling your intellectual opponents' thinking as fallacious when it is not is an underhand debating tactic that gets no respect from me. Moore wasn't even on the better side of the argument. He opposed naturalism and reductionism. It should be no mystery why I think his views sucked - it's because they did.

Replies from: MichaelBishop
comment by MichaelBishop · 2009-05-25T17:49:14.370Z · LW(p) · GW(p)

Tim, I wish our exchange could be a bit more amiable, but you caused me to read up on some stuff that may be changing the way I think. For this I thank you.

I've already acknowledged that "Appeal to Nature" is a more precise concept and that is what I might be inclined to reference in similar situations in the future. I'm even willing to question that practice. If you have time to provide some preferred concepts/vocabulary, that would be great.

Do you agree that improving one's genetic fitness should be a terminal value for people? Do you agree that Phil seemed to imply that?

Replies from: timtyler, timtyler
comment by timtyler · 2009-05-25T18:23:08.746Z · LW(p) · GW(p)

Phil claimed that genetic fitness mattered for ethics - which it probably does. For example, the Shakers believed that everyone should be celibate - and now there aren't any of them around any more.

Replies from: Alicorn
comment by Alicorn · 2009-05-25T18:28:52.398Z · LW(p) · GW(p)

There would still be Shakers around if they had been able to keep up the practice of adopting children indefinitely. According to Wikipedia, that only stopped working when adoption became the province of the state. Wikipedia also says that there are still four Shakers today and people may join them if they like.

comment by timtyler · 2009-05-25T18:15:28.980Z · LW(p) · GW(p)

People can choose their own values. Inclusive genetic fitness seems like a reasonable-enough maximand to me - because it is mine - see:

http://alife.co.uk/essays/nietzscheanism/

comment by loqi · 2009-05-27T01:22:05.823Z · LW(p) · GW(p)

I'm dubious of any moral claim, such as valuing vegetarianism or celibacy, that requires people to act contrary to their natures in a way that decreases their fitness. [...] but when you then conclude that lions are evil for eating zebras, you've gone wrong somehow. I don't buy the argument that the rules are different for humans because they can reflect on the rules.

(Emphases mine). Please settle on a set of applicable subjects. I'm with you that moral claims apply to "people". I don't think many here will claim that humans exhaust that space in theory, or that lions inhabit it currently.

comment by JGWeissman · 2009-05-24T01:11:31.459Z · LW(p) · GW(p)

The "tedious" detail makes your point clearer.

Though your linked resources for Exploration and Exploitation don't help much. One keeps giving me page load errors, and the other appears to be a symposium description and schedule. We have had some discussion of the concept here on LW. And maybe it would help to add an inline explanation that it is the issue of trading off between strategies that are known to work well, and trying other strategies to find out how well they work.

comment by AndySimpson · 2009-05-24T18:05:21.452Z · LW(p) · GW(p)

As other commenters have suggested, what is moral is not reducible to what is natural. This assumption, which underlies the entire post, is left totally un-addressed. I understand that genetic fitness is relevant to morality because people must endure, but this doesn't seem to demand that the extent of morals be fitness. I would love a post that explains morality as inherently and solely about fitness.

This post flies from one topic to another very quickly, and I can't understand all the connections between topics. Why is the human designer of transhumanity suddenly free to choose a new moral chassis for his creation, and why should he care about the moral success of the transhumans? Shouldn't he create a transhumanity that maximizes his own fitness?

More broadly, are we talking about real transhumans or a human-designed strong AI?

Replies from: PhilGoetz
comment by PhilGoetz · 2009-05-26T18:42:23.765Z · LW(p) · GW(p)

As other commenters have suggested, what is moral is not reducible to what is natural. This assumption, which underlies the entire post, is left totally un-addressed.

You are about as wrong as it is possible to be. The point of the post is that there is a parameter which goal-optimization provides a setting for, but which also has moral implications.

If I believed that what was natural was moral, there would be no issue. You would simply set that parameter in a way that is best for goal-seeking, and be done with it.

Why is the human designer of transhumanity suddenly free to choose a new moral chassis for his creation, and why should he care about the moral success of the transhumans? Shouldn't he create a transhumanity that maximizes his own fitness?

Now you're the one saying that what is natural is moral. See, as I said, that's what the post is about. If what is natural is moral, then your comment would be the obvious conclusion.

comment by jimrandomh · 2009-05-23T14:47:53.727Z · LW(p) · GW(p)

Insects and molds are more like cells than organisms. My T-cells routinely sacrifice themselves for my benefit, and there's nothing horrific about that.

comment by Vladimir_Nesov · 2009-05-23T11:29:56.901Z · LW(p) · GW(p)

This post is riddled with unstated, unsubstantiated assumptions, and its presentation is borderline insane (even if the discussed issue could be rescued by a full rewrite). Let's not go there. Voted down.

Replies from: conchis, PhilGoetz
comment by conchis · 2009-05-23T11:53:49.107Z · LW(p) · GW(p)

A slightly more charitable version of Vladimir's comment: I'm not sure you've made enough effort to overcome the inferential distance between yourself and much of your readership here.

comment by PhilGoetz · 2009-05-23T23:15:19.532Z · LW(p) · GW(p)

That's a harsh accusation to make without supporting it in any way.

After all the posts and comments I've made on LW, you should realize that the odds are much greater that you failed to understand my post, than that I am insane. I'm disappointed in you.

I doubt that you're confused by assumptions, since this post contains far fewer assumptions than anything else you're likely to read today. What is confusing is that it removes many of the assumptions you rely on in everyday conversation - such as that society is made of humans, who are sexually diploid, and face certain ethical problems and have a certain range of possible actions available to them - and doesn't explicitly say where it stops removing assumptions.

Replies from: Vladimir_Nesov, steven0461
comment by Vladimir_Nesov · 2009-05-23T23:44:44.452Z · LW(p) · GW(p)

I referenced the word "insane" with the Raising the Sanity Waterline article, thus qualifying it, taking, for example, belief in God as a kind of insanity in the intended sense.

Judging by the rating of your post, my impression about there being something wrong with it is shared by other readers. My comment was an attempt to express what in particular I found to be wrong: presentation is extremely confused.

By "unstated unsubstantiated assumptions" I mean the things like:

  • "your task is to choose the proper weight to give collective versus individual goals" (what weight? what kind of framework are you working from?),
  • starting to talk about "the transhuman" (what's that exactly? how did it get in the article?),
  • "organisms with less genetic diversity" (genetic diversity? what does it have to do with transhumans?),
  • ethics being determined by "sexual diploidy" (where's that come from in the article? explanation please),
  • "when people are software", "a more insightful AI" (you are assuming a specific futuristic model now)
  • "exploration" and "exploitation" (you are selecting a specific algorithmic problem; why?)
comment by steven0461 · 2009-05-23T23:24:20.666Z · LW(p) · GW(p)

He said "its presentation is borderline insane", not "its author is insane". Argumentative hygiene, please.

(Is there a case for valuing some kinds of insanity because the best contributors to a rationalist group are not always the best individual rationalists for division-of-labor reasons? Should we ever think in terms of "psychiatric diversity"?)

comment by smoofra · 2009-05-23T03:53:07.709Z · LW(p) · GW(p)

It sounds an awful lot like you're looking to evolution for moral guidance. That's never a good idea.

Replies from: timtyler, PhilGoetz
comment by timtyler · 2009-05-23T10:34:12.846Z · LW(p) · GW(p)

Have you heard of cultural evolution? That is the source of many people's moral systems.

Replies from: smoofra
comment by smoofra · 2009-05-23T16:46:00.280Z · LW(p) · GW(p)

I was talking about biological evolution. Cultural evolution is an entirely different thing.

Replies from: PhilGoetz, timtyler
comment by PhilGoetz · 2009-05-23T23:14:11.128Z · LW(p) · GW(p)

Cultural evolution is not identical to biological evolution. That is trivial. In response to your comment, whether it is good to look to evolution for moral guidance depends on the properties of systems that evolution tends to produce. Cultural and biological evolution are equally appropriate domains to look at for an answer to the question.

comment by timtyler · 2009-05-23T20:07:29.631Z · LW(p) · GW(p)

Well, technically both are evolution, since both involve inherited variation and differential replicative success. Culture is part of biological evolution. For one thing, culture is an aspect of biology - not part of geology - and also, culture and DNA co-evolve.

So, whether you like to admit it or not, your ethics are a product of biological evolution.

Replies from: smoofra
comment by smoofra · 2009-05-24T13:56:45.181Z · LW(p) · GW(p)

Well, technically both are evolution.....

I'm not going to argue about the definition of the word "evolution". When I suggested Phil was looking to evolution for moral guidance I was specifically thinking of biological evolution. I should have been more explicit.

My point was that biological evolution is actually horrifyingly immoral, and should not be looked to as a guide or inspiration for morality in human affairs.

So, whether you like to admit it or not, your ethics are a product of biological evolution.

I have never denied it. But my ethics aren't good because of the process that coughed them up. They are good in spite of that process.

Replies from: timtyler
comment by timtyler · 2009-05-24T15:18:02.533Z · LW(p) · GW(p)

Evolution has produced moral agents more than once - for example, see:

http://www.assortedscribbles.com/posts/PopScience/Dolphins_saving_people:_dolphins_and_altruism

I don't see much of an alternative to concluding that evolution repeatedly produces moral systems among big-brained animals. That the whole reproduction-variation-selection cycle is responsible for creating human morality - in just the same way that it is responsible for solving all other problems of adaptive fit in the universe.

If you want to argue that some other process is responsible, then I think you need to look carefully at whatever that proposed process is, and ask yourself whether there is, in fact, a population of entities, with variation and selection.

Also, if you are thinking of cultural evolution as somehow not "biological", then I reckon you need to reconsider that as well. Humans are not the only animals that exhibit cultural transmission. It is a widespread phenomenon in biology. We just do it a bit better than most. Cultural evolution is biological in nature, by the definition of the term "biology".

Replies from: smoofra
comment by smoofra · 2009-05-24T20:46:39.017Z · LW(p) · GW(p)

Evolution has produced moral agents more than once

This is a fair point, though the warm fuzzy factor on that sort of thing is so high I'd advise taking it all with a grain of salt unless you've gone over the details yourself. You're certainly right that other animals besides humans exhibit altruism and other behaviors that we would consider moral. I don't know of any animal other than humans that thinks about morality, though. Are there other animals that keep track of immoral behavior in others? (Not a rhetorical question.)

Now, having said all that, I need to point out that you've still entirely missed my point. Let's say evolution really did invent full-scale human morality twice. So what? It invented eyes twice too. Just because it can invent eyes doesn't mean it can see, and just because it invented morality doesn't make it moral. When somebody tries to justify a choice by saying "evolution does X, therefore X is right", that's an error. I'm not sure that's what Phil was saying, but it sounded close enough to set off alarm bells in my head.

If you want to argue that some other process is responsible

I don't. Whatever dolphins do or don't do, evolution invented it.

Also, if you are thinking of cultural evolution as somehow not "biological", then I reckon you need to reconsider that as well.

You're mincing words. Yea, it's biological, and it's also physical, because when you get down to it we're all made of quarks. Cultural evolution is not, however, the same process as biological evolution. I trust it is obvious enough to you and everyone here what concepts the words "cultural evolution" and "biological evolution" refer to, and that they are not in fact the same thing.

Replies from: timtyler
comment by timtyler · 2009-05-24T21:15:32.981Z · LW(p) · GW(p)

Dawkins argues that nature is red in tooth-and-claw - and thus represents a poor source of moral guidance. However, my view is that this conception of nature has been rather exploded by subsequent authors - such as Robert Wright with Non-Zero. Nature is actually rather cooperation-friendly. It looks like a quite reasonable source of moral guidance to me - since it came up with humans.
Of course, it is not itself moral. It is not even an agent, let alone a moral agent.

The proposed idea is not that "cultural evolution" and "biological evolution" are the same thing.

The idea is that these concepts are nested as follows:

(physical(biological(cultural))).

They are not disjoint, like this:

(physical) (biological) (cultural).

I don't think non-cultural evolution is a particularly useful natural category - but if you really want to have a name for it, perhaps consider "nuclear evolution" - since most non-cultural inheritance is on the cellular level. However please don't refer to it as "biological evolution": that is just wrong.

Replies from: smoofra
comment by smoofra · 2009-05-25T03:57:50.200Z · LW(p) · GW(p)

It looks like a quite reasonable source of moral guidance to me - since it came up with humans.

A broken clock is right twice a day.

I don't think non-cultural evolution is a particularly useful natural category

It's the thing that designed the DNA of all life on earth. It's useful to be able to talk about That Thing That Designed Life without people randomly bursting in saying that whatever you said about evolution is wrong because cultural evolution is a counterexample. Nobody was talking about cultural evolution, and it just isn't relevant to this discussion.

However please don't refer to it as "biological evolution": that is just wrong.

I'm getting a bit tired of debating semantics with you. You can call it fig pudding for all I care.

Replies from: timtyler
comment by timtyler · 2009-05-25T16:09:08.187Z · LW(p) · GW(p)

The idea that non-cultural evolution is responsible for all the earth's DNA seems like a fundamental misconception to me. In particular, our DNA is the product of gene-meme co-evolution - and something similar is probably true of many other animals. Evolution is responsible for the planet's DNA, not some cut-down version of evolution that excludes cultural inheritance. However, I observe that you don't seem very interested in this discussion - which is fine.

comment by PhilGoetz · 2009-05-23T05:55:31.122Z · LW(p) · GW(p)

More like I'm asking whether to override evolution. But now I'm curious. Never a good idea? Do you have a better idea? I'd say it's closer to the truth that refusing to look to evolution for moral guidance is what's not a good idea. If you think that your ethics are above your biology, we're not going to have much of a conversation.

Replies from: smoofra
comment by smoofra · 2009-05-23T17:10:30.645Z · LW(p) · GW(p)

More like I'm asking whether to override evolution. But now I'm curious. Never a good idea?

Really, it's never a good idea. Morality may have been produced by evolution, but that does not not not mean we can go looking at the behaviors evolution comes up with as examples of moral behavior. Morality is only one aspect of the behavior of one species. Evolution may have invented morality, but it also invented AIDS, and cannibalism, and war, and sharks, and parasitic wasps.

An exploding aphid is not acting morally. There aren't different moral rules for aphids because they live in genetically homogeneous groups. In fact, there aren't moral rules for aphids at all. The aphid is just being an aphid. It has nothing whatsoever to do with morality, any more than a rock or a robot or a quasar does.

If you think that your ethics are above your biology, we're not going to have much of a conversation.

I think my ethics are above the process that invented my biology.

comment by timtyler · 2009-05-24T20:30:01.836Z · LW(p) · GW(p)

Another way of looking at the issue:

Imagine an intelligent queen, with a range of different types of sterile workers. Since the queen is smart, she isn't limited to using the same genome in each type of worker - she can just put in the genes that are useful. Does this diversity reduce the level of cooperation between the workers? Not really - the genes of each worker have but one way to immortality: help the queen to reproduce.

In other words, the hypothesis that the level of cooperation depends on the proportion of shared genes is only a convenient rule of thumb, and should not be taken as being a golden rule.
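The rule of thumb in question has a standard formal version, Hamilton's rule: kin selection favors an altruistic act when r·b > c, where r is relatedness to the beneficiary, b the fitness benefit to the beneficiary, and c the fitness cost to the actor. A minimal Python sketch (the function name is my own, not from the thread):

```python
# Hamilton's rule: kin selection favors an altruistic act when
# r * b > c, where r = genetic relatedness to the beneficiary,
# b = fitness benefit to the beneficiary, c = fitness cost to the actor.

def altruism_favored(relatedness: float, benefit: float, cost: float) -> bool:
    """Return True if kin selection strictly favors the altruistic act."""
    return relatedness * benefit > cost

# Diploid full siblings share r = 0.5, so dying (c = 1) to save two
# siblings (b = 2) is exactly the break-even trade described in the
# original post: 0.5 * 2 = 1, which is not strictly greater than 1.
print(altruism_favored(0.5, 2, 1))   # False: break-even, not favored
print(altruism_favored(0.75, 2, 1))  # True: e.g. hymenopteran full sisters
```

The queen thought-experiment is exactly a case where r stops being the right summary statistic: the workers' genes get propagated only through the queen, so cooperation stays high regardless of how much genome the workers share.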

Replies from: PhilGoetz
comment by PhilGoetz · 2010-07-27T02:42:05.198Z · LW(p) · GW(p)

True - the assumptions of evolutionary theory break down when you can control individual genes in individuals.

comment by steven0461 · 2009-05-24T10:05:15.407Z · LW(p) · GW(p)

I'm still not getting the point even after the rewrite. Aren't collective goals best served by pure collectivists with no regard for their self-interest? (Note that real humans in a communist system are not pure collectivists.) A collectivist can always just copy the individualist strategy when that is expected to best serve the collective goal.

It also looks to me like the post isn't careful enough in distinguishing fitness-maximizing vs. adaptation-executing.

Replies from: PhilGoetz, PhilGoetz
comment by PhilGoetz · 2009-05-26T21:05:34.393Z · LW(p) · GW(p)

It also looks to me like the post isn't careful enough in distinguishing fitness-maximizing vs. adaptation-executing.

Now that I know what you mean by adaptation-executing, I can tell you that it isn't relevant. Here are the concepts I'm using:

  • Exploitation vs. exploration

  • Kin selection

These are concepts used for goal optimization and fitness maximization. Adaptation execution becomes relevant only after evolution has operated, in some particular historical context.

comment by PhilGoetz · 2009-05-24T15:33:37.579Z · LW(p) · GW(p)

It also looks to me like the post isn't careful enough in distinguishing fitness-maximizing vs. adaptation-executing.

I invite you to distinguish between them.

Replies from: MichaelBishop
comment by MichaelBishop · 2009-05-24T15:48:55.203Z · LW(p) · GW(p)

When the environment changes more rapidly, or adaptations are adopted more slowly, adaptation-execution drifts further from fitness-maximization.

Replies from: AndySimpson
comment by AndySimpson · 2009-05-24T17:16:05.390Z · LW(p) · GW(p)

Also, organisms are always adaptation-executors rather than direct fitness-maximizers.

Replies from: timtyler
comment by timtyler · 2009-05-24T19:51:30.048Z · LW(p) · GW(p)

What are you guys talking about, exactly? Phil describes evolution as an optimisation process - which seems fair enough to me. Are you three "adaptation-execution" folk trying to deny that evolution acts as an optimisation process? If not, what does all this have to do with Phil's original post?

Replies from: MichaelBishop
comment by MichaelBishop · 2009-05-24T22:56:49.844Z · LW(p) · GW(p)

I'm not sure exactly what point Steven was making, I was merely responding to Phil's challenge to distinguish between fitness-maximization and adaption-execution.

Replies from: PhilGoetz
comment by PhilGoetz · 2009-05-26T18:43:13.804Z · LW(p) · GW(p)

What do you mean by adaptation-execution?

comment by JamesCole · 2009-05-23T03:16:51.973Z · LW(p) · GW(p)

So evolutionarily, the less diversity within the members of a species, the more their behavior is oriented towards their group. I'll take it here that that's the case (I don't know enough about the subject matter to confidently judge). But I don't think this embodies an ethics. It's just the way evolution builds things, and just because evolution builds things one way doesn't mean that it is "ethical".

Is it satisfactory to simply define the correct moral weight to place on collective versus individual goals, as that which results when you set your population's genetic/memetic diversity so as to optimize your population's exploration/exploitation balance for its goals?

I don't think it's clear what relation you are suggesting between collective/individual and exploration/exploitation.

Replies from: PhilGoetz
comment by PhilGoetz · 2009-05-23T05:54:19.425Z · LW(p) · GW(p)

High diversity = exploration.
Low diversity (many copies of previously-successful organisms, memes, or thought processes) = exploitation.

Low diversity => high cooperation.
High diversity => feelings and morals that emphasize individuality.

Given a goal and an environment, there is some optimal balance between exploration and exploitation. But that balance also strongly influences the resulting balance between collectivist and individualist ethics.
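The exploration/exploitation tradeoff here is the standard one from optimization. A minimal epsilon-greedy sketch, with epsilon standing in for population diversity (my own illustration, not from the thread):

```python
import random

def epsilon_greedy_choice(estimated_values, epsilon, rng=random):
    """Pick an option: explore at random with probability epsilon,
    otherwise exploit the best estimate seen so far.

    Epsilon plays the role of population diversity: epsilon = 0 is a
    homogeneous population of copies of the best-known design (pure
    exploitation); epsilon = 1 is pure random variation (exploration)."""
    if rng.random() < epsilon:
        return rng.randrange(len(estimated_values))  # explore
    return max(range(len(estimated_values)),
               key=lambda i: estimated_values[i])    # exploit
```

Tuning epsilon for a given goal and environment is the analogue of the designer's choice of how much variation to put into the population.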

Replies from: conchis, timtyler
comment by conchis · 2009-05-23T13:10:40.855Z · LW(p) · GW(p)

How does (intra-human) cultural variation in individualism/collectivism feed into this?

Replies from: PhilGoetz
comment by PhilGoetz · 2009-05-23T23:11:48.163Z · LW(p) · GW(p)

Those are finer-grained variations, and aren't explained by sexual diploidy, since all humans are sexually diploid.

Replies from: conchis
comment by conchis · 2009-05-24T17:53:16.101Z · LW(p) · GW(p)

To the extent that such finer-grained variation is possible, it suggests that the constraint you're positing isn't actually that much of a constraint.

Maybe I'm still missing the point of the post.

comment by timtyler · 2009-05-23T10:39:33.460Z · LW(p) · GW(p)

Are you assuming that evolutionary experiments have to be embodied in individual agents? If so, it seems like an incorrect assumption to me. An ecosystem can explore a search space and generate innovative solutions - even if each generation of individuals consists entirely of identical clones.

Killing sentient organisms in order to explore a search space seems wasteful, unnecessary and barbaric.

Replies from: PhilGoetz
comment by PhilGoetz · 2009-05-23T23:07:01.055Z · LW(p) · GW(p)

I am assuming that the agents of the ethical system are self-interested, and that requires them to have identities. This assumption isn't necessarily true, but it covers a large enough space of possible worlds to be worth considering. It's also a part of the space that is easier for us to understand than its complement.

Not sure where the "killing sentient organisms" comes in. That also introduces a set of assumptions. When, at work, I send 500,000,000,000 BLAST jobs out to the computing grid, to run on 700 computers comprising 2800 CPUs, with 5,000 different outer-loop starting points, how many organisms am I killing when I end the run?

Replies from: timtyler
comment by timtyler · 2009-05-24T09:48:41.069Z · LW(p) · GW(p)

I am not sure you got my point. I'll try again. To efficiently search a space, you need some variation in the trials that are performed. However, that variation does not necessarily need to be embodied in the genomes of intelligent agents. It could be in the form of variations in lab experiments performed. Progress today does not depend on genetic variation between humans. It depends on memetic variation - and the memes are usually not embodied as agents that are conscious or do much cooperating. As far as I can tell, if you understand this, your original questions seem to fall apart.

comment by timtyler · 2009-05-23T10:32:44.905Z · LW(p) · GW(p)

Re: The human, trying to balance collective ethics vs. individual ethics, is really just trying to discover a balance point that is already determined by their sexual diploidy. The transhuman, OTOH, actually has a decision to make -- where should that balance be set?

It seems like a rather vague question - since it doesn't specify a scale of measurement - or how far in the future we are looking. Look far enough forwards, and there may be only one organism - in which case the issue doesn't arise.

Replies from: randallsquared
comment by randallsquared · 2009-05-27T13:05:37.651Z · LW(p) · GW(p)

Look far enough forwards, and there may be only one organism

The speed of light strongly implies that this won't happen.

Replies from: timtyler
comment by timtyler · 2009-05-27T14:27:38.640Z · LW(p) · GW(p)

The speed of light is not really much of an issue here. What would eventually cause problems is if universal expansion drove the living regions of the universe into causally disconnected regions - as can supposedly happen in some cosmologies - but that possibility seems a long way off.

Replies from: randallsquared
comment by randallsquared · 2009-05-27T18:51:19.506Z · LW(p) · GW(p)

I think we may mean different things by "one organism", then. I think I'd say that processes would have to be fairly tightly coupled (even if through other processes) to be parts of the same organism, but that couldn't easily be achieved with even minute-sized delays.

Replies from: timtyler
comment by timtyler · 2009-05-27T19:10:57.993Z · LW(p) · GW(p)

The idea of all living things forming one big organism involves the end of evolution via natural selection, due to the lack of independent actors for there to be competition between.

I have some essays explaining what I think is meant by the idea:

http://alife.co.uk/essays/one_big_organism/
http://alife.co.uk/essays/self_directed_evolution/
http://alife.co.uk/essays/the_second_superintelligence/

Like you say, you seem to be talking about something quite different.