Conjuring An Evolution To Serve You

post by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2007-11-19T05:55:56.000Z · LW · GW · Legacy · 26 comments

GreyThumb.blog offers an interesting analogy between research on animal breeding and the fall of Enron.  Before 1995, the way animal breeding worked was that you would take the top individual performers in each generation and breed from them, or their parents.  A cockerel doesn't lay eggs, so you have to observe daughter hens to determine which cockerels to breed.  Sounds logical, right?  If you take the hens who lay the most eggs in each generation, and breed from them, you should get hens who lay more and more eggs.

Behold the awesome power of making evolution work for you!  The power that made butterflies - now constrained to your own purposes!  And it worked, too.  Per-cow milk output in the US doubled between 1905 and 1965, and has doubled again since then.

Yet conjuring Azathoth oft has unintended consequences, as some researchers realized in the 1990s.  In the real world, sometimes you have more than one animal per farm.  You see the problem, right?  If you don't, you should probably think twice before trying to conjure an evolution to serve you - magic is not for the unparanoid.

Selecting the hen who lays the most eggs doesn't necessarily get you the most efficient egg-laying metabolism.  It may get you the most dominant hen, that pecked its way to the top of the pecking order at the expense of other hens.  Individual selection doesn't necessarily work to the benefit of the group, but a farm's productivity is determined by group outputs.

Indeed, for some strange reason, the individual breeding programs which had been so successful at increasing egg production now required hens to have their beaks clipped, or be housed in individual cages, or they would peck each other to death.

While the conditions for group selection are only rarely right in Nature, one can readily impose genuine group selection in the laboratory.  After only 6 generations of artificially imposed group selection - breeding from the hens in the best groups, rather than the best individual hens - average days of survival increased from 160 to 348, and egg mass per bird increased from 5.3 to 13.3 kg.  At 58 weeks of age, the selected line had 20% mortality compared to the control group at 54%.  A commercial line of hens, allowed to grow up with unclipped beaks, had 89% mortality at 58 weeks.
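
To see concretely why the two breeding formats diverge, here is a minimal toy simulation, with made-up numbers rather than the actual parameters of the poultry experiments: each hen carries a productivity trait and an aggression trait, aggression raises her own output slightly while taxing her cagemates more, and we breed either from the best individual hens or from every hen in the best cages.

```python
import random

# Toy model with illustrative numbers (not the real experiment's data):
# aggression adds a little to a hen's own eggs but costs each cagemate more.
CAGE, CAGES, GENS = 9, 40, 20

def new_hen():
    return {"prod": random.gauss(100, 10), "aggr": random.gauss(0, 1)}

def eggs(hen, cage):
    harm = sum(max(0.0, other["aggr"]) for other in cage if other is not hen)
    return hen["prod"] + 3 * max(0.0, hen["aggr"]) - 2 * harm

def offspring(parent):
    return {"prod": parent["prod"] + random.gauss(0, 2),
            "aggr": parent["aggr"] + random.gauss(0, 0.2)}

def run(group_selection):
    cages = [[new_hen() for _ in range(CAGE)] for _ in range(CAGES)]
    for _ in range(GENS):
        if group_selection:
            # Breed from every hen in the best-producing cages.
            ranked = sorted(cages, key=lambda c: sum(eggs(h, c) for h in c),
                            reverse=True)
            parents = [h for c in ranked[:CAGES // 4] for h in c]
        else:
            # Breed from the individually best hens, pooled across cages.
            scored = sorted(((eggs(h, c), h) for c in cages for h in c),
                            key=lambda t: t[0], reverse=True)
            parents = [h for _, h in scored[:CAGES * CAGE // 4]]
        cages = [[offspring(random.choice(parents)) for _ in range(CAGE)]
                 for _ in range(CAGES)]
    return sum(eggs(h, c) for c in cages for h in c) / (CAGES * CAGE)

random.seed(0)
print("individual selection, mean eggs:", round(run(False), 1))
print("group selection, mean eggs:     ", round(run(True), 1))
```

The fitness measure is identical in both runs; only the vehicle of selection differs.  Individual selection drives aggression up, and the farm's mean output suffers for it.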

And the fall of Enron?  Jeff Skilling fancied himself an evolution-conjurer, it seems.  (Not that he, like, knew any evolutionary math or anything.)  Every year, every Enron employee's performance would be evaluated, and the bottom 10% would get fired, and the top performers would get huge raises and bonuses.  Unfortunately, as GreyThumb points out:

"Everyone knows that there are many things you can do in any corporate environment to give the appearance and impression of being productive. Enron's corporate environment was particularly conducive to this: its principal business was energy trading, and it had large densely populated trading floors peopled by high-powered traders that would sit and play the markets all day. There were, I'm sure, many things that a trader could do to up his performance numbers, either by cheating or by gaming the system. This gaming of the system probably included gaming his fellow traders, many of whom were close enough to rub elbows with.

"So Enron was applying selection at the individual level according to metrics like individual trading performance to a group system whose performance was, like the henhouses, an emergent property of group dynamics as well as a result of individual fitness. The result was more or less the same. Instead of increasing overall productivity, they got mean chickens and actual productivity declined. They were selecting for traits like aggressiveness, sociopathic tendencies, and dishonesty."

And the moral of the story is:  Be careful when you set forth to conjure the blind idiot god.  People look at a pretty butterfly (note selectivity) and think:  "Evolution designed them - how pretty - I should get evolution to do things for me, too!"  But this is qualitative reasoning, as if evolution were either present or absent.  Applying 10% selection for 10 generations is not going to get you the same amount of cumulative selection pressure as 3.85 billion years of natural selection.
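
A rough way to quantify "cumulative selection pressure" is the standard breeder's equation, R = h² · i · σ: response per generation equals heritability times selection intensity times the trait's standard deviation.  A quick sketch, where the heritability of 0.3 is an assumed illustrative value, not a measured one:

```python
from statistics import NormalDist

# Breeder's equation: response per generation R = h^2 * i * sigma.
# For truncation selection keeping the top fraction p of the population,
# the selection intensity is i = pdf(x) / p at the cutoff point x.
p = 0.10                            # keep the top 10% each generation
x = NormalDist().inv_cdf(1 - p)     # truncation point, ~1.28 sigma
i = NormalDist().pdf(x) / p         # selection intensity, ~1.75

h2, sigma, generations = 0.3, 1.0, 10   # assumed heritability, trait SD
R_total = h2 * i * sigma * generations

print(f"selection intensity: {i:.2f} sigma per generation")
print(f"cumulative response after {generations} generations: {R_total:.1f} sigma")
```

About five standard deviations of directional change: real and useful, but nothing like what 3.85 billion years of compounding can buy.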

I have previously emphasized that the evolution-of-foxes works at cross-purposes to the evolution-of-rabbits; there is no unitary Evolution God to praise for every beauty of Nature.  Azathoth has ten million hands.  When you conjure, you don't get the evolution, the Maker of Butterflies.  You get an evolution, with characteristics and strength that depend on your exact conjuration.  If you just take everything you see in Nature and attribute it to "evolution", you'll start thinking that some cute little conjuration which runs for 20 generations will get you artifacts on the order of butterflies.  Try 3.85 billion years.

Same caveat with the wonders of simulated evolution on computers, producing a radio antenna better than a human design, etcetera.  These are sometimes human-competitive (more often not) when it comes to optimizing a continuous design over 57 performance criteria, or breeding a design with 57 elements.  Anything beyond that, and modern evolutionary algorithms are defeated by the same exponential explosion that consumes the rest of AI.  Yes, evolutionary algorithms have a legitimate place in AI.  Consult a machine-learning expert, who knows when to use them and when not to.  Even biologically inspired genetic algorithms with sexual mixing rarely perform better than beam searches and other non-biologically-inspired techniques on the same problem.
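
For readers who have never seen one, here is a minimal genetic algorithm of the kind under discussion, next to a plain hill climber, on a toy bit-counting problem.  The point is only that the "biologically inspired" machinery (selection, crossover, mutation) is not magic; on problems this simple, the unglamorous climber does just as well.

```python
import random

N = 60                                   # bits in the toy genome

def fitness(bits):                       # toy problem: count the 1-bits
    return sum(bits)

def genetic_algorithm(pop_size=50, gens=200):
    pop = [[random.randint(0, 1) for _ in range(N)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]            # truncation selection
        children = []
        while len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, N)         # sexual mixing: crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.5:            # occasional point mutation
                child[random.randrange(N)] ^= 1
            children.append(child)
        pop = children
    return max(fitness(x) for x in pop)

def hill_climb(steps=10_000):
    x = [random.randint(0, 1) for _ in range(N)]
    for _ in range(steps):
        y = x[:]
        y[random.randrange(N)] ^= 1              # flip one random bit
        if fitness(y) >= fitness(x):
            x = y
    return fitness(x)

random.seed(1)
print("GA best fitness:        ", genetic_algorithm())
print("hill-climb best fitness:", hill_climb())
```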

And for this weakness, let us all be thankful.  If the blind idiot god did not take a million years in which to do anything complicated, It would be bloody scary.  3.85 billion years of natural selection produced molecular nanotechnology (cells) and Artificial General Intelligence (brains), which even we humans aren't going to get for a few more decades.  If there were an alien demideity, morality-and-aesthetics-free, often blindly suicidal, capable of wielding nanotech and AGI in real time, I'd put aside all other concerns and figure out how to kill it.  Assuming that I hadn't already been enslaved beyond all desire of escape.  Look at the trouble we're having with bacteria, which go through generations fast enough that their evolutions are learning to evade our antibiotics after only a few decades' respite.

You really don't want to conjure Azathoth at full power.  You really, really don't.  You'll get more than pretty butterflies.

26 comments

Comments sorted by oldest first, as this post is from before comment nesting was available (around 2009-02-27).

comment by J_Thomas · 2007-11-19T13:02:03.000Z · LW(p) · GW(p)

There are lots of examples of unexpected selective outcomes.

A story -- a long time ago a Swedish researcher tried to increase wheat yields by picking the biggest wheat kernels to plant. In only 5 generations he had a strain of wheat that produced 6 giant wheat kernels per stalk.

When scale insects were damaging citrus fruits, farmers tried to poison them with cyanide. They'd put a giant tent over the whole tree and pump in the cyanide and kill the scale insects. Plants can be immune to cyanide, but no animal that depends on respiration can be. And yet in only 5 years or so they got resistant scale insects. The resistant insects would -- when anything startling happened -- sit very still and hold their breath for half an hour or so.

If you want to do directed evolution, you do better to do it in controlled conditions. Take your results and test them carefully and make sure they're what you want before you release them. Microbiologists who want mutants for research commonly take 20 or 100 mutants who survive the conditions they're selected to survive, and test until they get a few that appear to be just what they want. Eliminate the rest.

So, for example, to find a mutant that has a high mutation rate -- start with a strain of bacteria that has at least 4 selectable traits. Say, they don't survive without threonine, don't survive without isoleucine/valine, don't survive penicillin, and don't survive rifampicin. So you grow up a hundred billion or so of them and then you centrifuge them down and resuspend them in medium that doesn't have threonine. Most of them die. Wait for the survivors to grow, and then centrifuge them down and resuspend them in medium that doesn't have isoleucine/valine. Most of them die. Wait for the survivors to grow, and centrifuge them down and resuspend them in medium that has penicillin. Do it a fourth time with rifampicin. Plate them out on media that has lactose (when the originals couldn't use lactose). Some of the colonies will be large and some small, pick a colony that has lots of little warts of bigger growth, because it gets lactose-using mutants even while the colony is growing. A strain that has a hundred times the mutation rate can be easily selected this way. It started out at frequency around 10^-8. After the first selection cycle it was frequency around 10^-6. By the fourth round it was common. Sometimes you can get a mutation rate around 1000 times the normal rate. Much above that and it doesn't survive well.

Take one colony per try because you don't want to test multiple colonies and then find out they're the same mutation over again.
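
The enrichment arithmetic in that protocol is easy to check with a sketch (the rates below are illustrative round numbers, not measurements): if a mutator strain reverts any given selectable marker about 100 times more often than wild type, each selection round multiplies its relative frequency by roughly that factor.

```python
# Illustrative round numbers: reversion of each selected marker occurs
# at ~1e-8 per cell in wild type and ~1e-6 in a 100x mutator strain.
wt_freq, mut_freq = 1.0, 1e-8        # starting population fractions
wt_rev, mut_rev = 1e-8, 1e-6         # per-marker reversion rates

for n in range(1, 5):
    # Only revertants survive each selection; the survivors then regrow.
    wt_freq, mut_freq = wt_freq * wt_rev, mut_freq * mut_rev
    total = wt_freq + mut_freq
    wt_freq, mut_freq = wt_freq / total, mut_freq / total
    print(f"after selection round {n}: mutator fraction = {mut_freq:.3g}")
```

This reproduces the trajectory described above: roughly 10^-8 at the start, 10^-6 after the first cycle, and common by the fourth round.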

comment by Caledonian2 · 2007-11-19T13:48:48.000Z · LW(p) · GW(p)

The examples seem to demonstrate the weaknesses of selective breeding rather than evolution. Human intent and imperfect knowledge appear to be poor substitutes for the blind, mindless processes of nature.

Hmmmm...

Replies from: gwern
comment by gwern · 2018-11-11T16:04:10.414Z · LW(p) · GW(p)

It's worth remembering that the chicken experiment was specifically designed to elicit that effect, and chickens are unusual in being confined to extremely small cages with other chickens. That doesn't happen with cows or apples or wheat or... As far as I know, animal/plant breeders typically totally ignore such indirect genetic effects/group-level effects (or even model them away, absorbing them into fixed/random effects), along with ignoring apparently vital stuff like epistasis/dominance, and yet the dumb simple selection methods based on additivity work fine and still realize all the improvements they are supposed to. Yields go up reliably every year.

comment by Silas · 2007-11-19T14:35:03.000Z · LW(p) · GW(p)

Eliezer_Yudkowsky: I'm not sure I see the relevance of evolutionary theory to Enron. According to the characterization you quoted, the problem was that the stakes were so high that people cheated. Why do evolution's insights help me see that? That mishap can be explained through poor incentive alignment: what was optimal behavior for a trader was not regarded by Enron as optimal behavior. The disutility to Enron of "false profits" was not reflected in an individual trader's utility curve.

So Skilling picked a bad incentive structure. Does everyone who picks a bad incentive structure fancy himself an evolution conjurer?

Replies from: emhs
comment by emhs · 2013-11-30T02:04:35.472Z · LW(p) · GW(p)

So Skilling picked a bad incentive structure. Does everyone who picks a bad incentive structure fancy himself an evolution conjurer?

If one thinks of evolution as the process of deriving "better" results through a selection criterion and a change process, then yes, Skilling was conjuring an evolution, though he did not realize it. He established a selection criterion (individual performance numbers), and the employees themselves provided the change process. As he repeatedly selected against the weakest performers (according to his insufficiently rational criterion), the employees changed toward whatever they found to be the easiest way to achieve "better performance". The company evolved as the employees changed their behavior.

comment by J_Thomas · 2007-11-19T15:11:25.000Z · LW(p) · GW(p)

Skilling was selecting badly. The 10% he discarded each year might have included some he should have kept, and vice versa.

Similarly, God at one point said he was going to get rid of evil people and keep good people and so people would get better. I don't see much evidence that's worked well.

Evolution happens, but if you want to harness it for your own goals you have to be very careful. Try to arrange it so you can throw away your mistakes.

comment by Roland3 · 2007-11-19T17:44:14.000Z · LW(p) · GW(p)

3.85 billion years of natural selection produced molecular nanotechnology (cells) and Artificial General Intelligence (brains), which even we humans aren't going to get for a few more decades.

Does that mean that the singularity is at least a few decades away?

Replies from: wizzwizz4
comment by wizzwizz4 · 2020-02-28T08:23:21.180Z · LW(p) · GW(p)

I sincerely hope so. Look at the progress we've made since you wrote this comment. We need to make that much progress several times over before we're ready to actually start trying to build the things, unless we fancy dying (or worse).

comment by Leroy_Wakeley · 2007-11-19T18:46:32.000Z · LW(p) · GW(p)

Has anyone tried breeding the smartest nonhuman primates (chimps, bonobos?) for intelligence? If not, what could one expect to achieve by doing this for 10 generations? To what extent are the genes for intelligence additive? That is, if there are multiple distinct genes that increase intelligence via distinct mechanisms, does having all these genes give you the sum of the intelligence boosts of having these genes individually?
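
As an editorial illustration of what "additive" means here (with hypothetical effect sizes, nothing to do with real intelligence genes): under a purely additive model the individual boosts simply sum, while under epistasis the combined effect can fall short of the sum.

```python
# Hypothetical effect sizes, purely to illustrate additivity.
solo_boosts = {"A": 3.0, "B": 2.0, "C": 4.0}   # each variant's solo effect

def additive(genotype):
    # Purely additive model: the total boost is the sum of solo effects.
    return sum(solo_boosts[g] for g in genotype)

def epistatic(genotype):
    # One toy form of epistasis: diminishing returns, where each extra
    # variant contributes only 60% of what the previous one did.
    total, scale = 0.0, 1.0
    for g in sorted(genotype, key=solo_boosts.get, reverse=True):
        total += scale * solo_boosts[g]
        scale *= 0.6
    return total

print("A+B+C, additive: ", additive("ABC"))    # 9.0 = 3 + 2 + 4
print("A+B+C, epistatic:", epistatic("ABC"))   # 6.52, less than the sum
```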

comment by Tom_McCabe2 · 2007-11-19T20:57:44.000Z · LW(p) · GW(p)

"You really don't want to conjure Azathoth at full power. You really, really don't. You'll get more than pretty butterflies."

How confident are you that there aren't any mad scientists reading OB, looking for the perfect tool to do something randomly destructive?

comment by Silas · 2007-11-19T21:56:04.000Z · LW(p) · GW(p)

Tom_McCabe: If you're worried about mad-scientist OB readers obtaining the tools for random destruction from this site, you're too late. Robin_Hanson already gave them the perfect idea.

comment by Carl_Shulman · 2007-11-19T21:58:38.000Z · LW(p) · GW(p)

On the Enron point:

An article in today's NY Times claims that a major danger to investment banks is the empowerment and growth of whichever division happens to be benefiting from transitory financial cycles. Since members of a particular department have specialized skills that are less valuable in other areas, they tend to be biased in favor of excessive investment of resources in their areas. If the mortgage market booms for too long you will wind up with a high frequency of mortgage people in the executive corps and reduced ability to cut loose if risks appear dangerously high for the firm.

Supposedly, part of Goldman Sachs' chart-topping success during the recent credit crunch (although it has been very successful for more or less its entire history) comes from the creation of a powerful independent institution with veto powers and less bias towards particular investment classes.

http://www.nytimes.com/2007/11/19/business/19goldman.html?pagewanted=2&ei=5087&em&en=a3db4f1df6a297ef&ex=1195621200 "At Goldman, the controller's office--the group responsible for valuing the firm's huge positions--has 1,100 people including 20 PhDs. If there is a dispute, the controller is always deemed right unless the trading desk can make a convincing case for an alternate valuation. The bank says risk managers swap jobs with traders and bankers over a career and can be paid the same multimillion-dollar salaries as investment bankers."

comment by Richard_Hollerith2 · 2007-11-19T22:12:51.000Z · LW(p) · GW(p)

Let me answer a slightly different question: how confident are you that the benefits of publicizing the destructive potential of genetic algorithms outweigh the risks?

I am pretty confident that people setting out intentionally to do destruction on the scale addressed here are rare compared to people who do large-scale destruction as an unintentional side effect of trying to do good or at least ethically neutral things. Most evil is done by people who believe themselves to be good and who believe their net-evil deeds are net-good or net-neutral.

People of course differ in their definitions of the good, but almost everyone capable of affecting such outcomes agrees that certain ones (e.g. toasting the planet) are evil.

comment by RBH2 · 2007-11-19T22:43:09.000Z · LW(p) · GW(p)

Put more simply, in artificial evolution you get exactly what the fitness function you've written asks for, even when you don't know what it's actually asking for.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2007-11-19T22:56:23.000Z · LW(p) · GW(p)

Put more simply, in artificial evolution you get exactly what the fitness function you've written asks for

You don't even necessarily get that. The animal breeders thought they were asking for more eggs. They did get some eggs, but with side effects, and not nearly as many eggs as they could have gotten, if they'd used a different breeding format with the same fitness function: fitness=eggs, vehicle=group instead of fitness=eggs, vehicle=individual.

They could well have gotten fewer eggs by breeding for eggs, as Enron did, if the chickens had discovered enough negative-sum tricks, as Enron did.

comment by douglas3 · 2007-11-20T00:08:35.000Z · LW(p) · GW(p)

Fantastic post!! This certainly applies to the HGP and GMOs. An excellent page about this: http://www.psrast.org/strohmnewgen.htm Eliezer, there are many things I'm sure we disagree about. But we must not allow ourselves to become enslaved beyond all desire of escape. Is it OK if I love you for that sentiment?

comment by Tom_McCabe2 · 2007-11-20T00:33:15.000Z · LW(p) · GW(p)

"Let me answer a slightly different question: how confident are you that the benefits of publicizing the destructive potential of genetic algoritms outweighs the risks?"

I am quite confident of that. I wanted to know how seriously everyone else had considered the risk.

comment by Caledonian2 · 2007-11-20T00:54:15.000Z · LW(p) · GW(p)

Genetic algorithms have potential, period. It's human beings that will cause that potential to be used unwisely. Publicizing the power of the algorithms might -- might -- cause enough people to be wary of what they can do for balancing principles to come into play. Trying to limit the knowledge will eventually be more harmful than not.

comment by Nick_Tarleton · 2007-11-20T01:15:22.000Z · LW(p) · GW(p)

Tom, I don't take the risk seriously. Richard Hollerith put the reason well. I don't think people who just want to do something randomly destructive are good at the kind of long-range planning and collaboration needed to be seriously threatening, and if they were, they wouldn't need us to give them extremely general ideas.

I was reminded of something Michael Vassar said on SL4 (emphasis mine):

For all of our arrogance, most Transhumanists grossly overestimate the abilities of ordinary humans. This is substantially a consequence of how folk psychology works, and fails to work, for outliers, but also a consequence of typically limited and isolated life experience. Unfortunately, it has serious consequences when predicting the future. Our estimates of the likely behavior of large-scale groups, the effort that will be devoted to a particular research objective, or the time until some task is accomplished are all grossly distorted. For many transhumanists this means that boogie men such as "terrorists" are imagined as something that never was, disutility maximizers, and the resultant threats of bioterror and nanoterror are overestimated by many orders of magnitude. For almost all transhumanists this means an underestimation of inertia, leading to Chris Phoenix's fears of pre-emptory arms races and Nick Bostrom's utopian dreams of world government and benign regulation of dangerous tech.

comment by Caledonian2 · 2007-11-20T01:55:16.000Z · LW(p) · GW(p)

Most of the truly frightening possibilities are simply too unlikely. To produce them from a genetic algorithm, you'd have to expend massive amounts of resources creating a specific environment that would select for the traits you desired - no mean feat. Imagine what it would take to develop a hypervirulent and extremely lethal plague through an algorithm on purpose. The requirements are crushing.

Making people more aware of such algorithms, and their potential, might force a shift in the way food animals are dealt with so that they don't act as an optimization procedure for plagues.

comment by Ford · 2011-03-17T20:56:14.170Z · LW(p) · GW(p)

As an evolutionary biologist with an interest in practical applications to agriculture and to human longevity, I think your emphasis on the slow pace of evolution is misplaced. It took most of life's 3.85-billion-year history to evolve multicellularity, but that slowness seems mainly to reflect a lack of selection for multicellularity over most of that period. With strong selection, primitive multicellularity can evolve quickly under lab conditions (Boraas, M.E. 1998, "Phagotrophy by a flagellate selects for colonial prey: A possible origin of multicellularity", and current work in my lab).

Your point about individual vs. group selection is correct and important, though. Individual selection, like free-market competition, is an effective way of making certain kinds of improvements. But some form of group selection (the chicken example, or small-plot trials in plant breeding) is often key to improvements missed by individual-based natural selection. See my 2003 review article and forthcoming book on Darwinian Agriculture.

comment by taryneast · 2011-04-11T12:32:20.420Z · LW(p) · GW(p)

"You really don't want to conjure Azathoth at full power. You really, really don't. You'll get more than pretty butterflies."

One more example proving the Mythos rule that one should learn "Bind Azathoth" before casting "Summon Azathoth"

:)

comment by [deleted] · 2016-02-23T08:31:13.428Z · LW(p) · GW(p)

Invocation of hypothetical (often ideologically linked) expectations of the future as if they were deterministic processes is rampant.

comment by Zack_M_Davis · 2020-06-01T06:32:28.628Z · LW(p) · GW(p)

(This post could be read as a predecessor to the Immoral Mazes sequence [? · GW].)

comment by Emiya (andrea-mulazzani) · 2020-10-21T11:02:03.665Z · LW(p) · GW(p)

It seems Enron did what corporations normally do, just faster. If I remember correctly, the percentage of psychopaths among corporate managers is around six times the rate in the general population.

comment by Caridorc Tergilti (caridorc-tergilti) · 2022-05-26T22:07:27.469Z · LW(p) · GW(p)

Extremely cool evolution experiment in which E. coli bacteria evolve to eat citrate, along with many other interesting happenings.