Thou Art Godshatter

post by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2007-11-13T19:38:56.000Z

Before the 20th century, not a single human being had an explicit concept of "inclusive genetic fitness", the sole and absolute obsession of the blind idiot god. We have no instinctive revulsion of condoms or oral sex. Our brains, those supreme reproductive organs, don't perform a check for reproductive efficacy before granting us sexual pleasure.

Why not? Why aren't we consciously obsessed with inclusive genetic fitness? Why did the Evolution-of-Humans Fairy create brains that would invent condoms? "It would have been so easy," thinks the human, who can design new complex systems in an afternoon.

The Evolution Fairy, as we all know, is obsessed with inclusive genetic fitness. When she decides which genes to promote to universality, she doesn't seem to take into account anything except the number of copies a gene produces. (How strange!)

But since the maker of intelligence is thus obsessed, why not create intelligent agents - you can't call them humans - who would likewise care purely about inclusive genetic fitness? Such agents would have sex only as a means of reproduction, and wouldn't bother with sex that involved birth control. They could eat food out of an explicitly reasoned belief that food was necessary to reproduce, not because they liked the taste, and so they wouldn't eat candy if it became detrimental to survival or reproduction. Post-menopausal women would babysit grandchildren until they became sick enough to be a net drain on resources, and would then commit suicide.

It seems like such an obvious design improvement - from the Evolution Fairy's perspective.

Now it's clear, as was discussed yesterday, that it's hard to build a powerful enough consequentialist. Natural selection sort-of reasons consequentially, but only by depending on the actual consequences. Human evolutionary theorists have to do really high-falutin' abstract reasoning in order to imagine the links between adaptations and reproductive success.

But human brains clearly can imagine these links in protein. So when the Evolution Fairy made humans, why did It bother with any motivation except inclusive genetic fitness?

It's been less than two centuries since a protein brain first represented the concept of natural selection. The modern notion of "inclusive genetic fitness" is even more subtle, a highly abstract concept. What matters is not the number of shared genes. Chimpanzees share 95% of your genes. What matters is shared genetic variance, within a reproducing population - your sister is one-half related to you, because any variations in your genome, within the human species, are 50% likely to be shared by your sister.
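
The difference between raw gene-sharing and shared genetic variance is easy to make concrete. Here is a minimal Monte Carlo sketch (the locus count and variant rate are made-up illustration numbers, not real genetics): counting raw sequence identity makes two siblings look nearly identical, as it would a human and a chimp, while counting only the segregating variants recovers the one-half relatedness described above.

```python
import random

def relatedness_demo(num_loci=200_000, variant_rate=0.001, seed=0):
    """Toy model: most loci are fixed across the species, so everyone
    matches there.  At a variant locus, each sibling independently
    inherits the parent's variant with probability 1/2."""
    rng = random.Random(seed)
    raw_match = 0          # positions where the two siblings' sequences agree
    sib1_variants = 0      # variant loci sibling 1 carries
    shared_variants = 0    # ...that sibling 2 carries as well
    for _ in range(num_loci):
        if rng.random() < variant_rate:    # a locus that actually varies in the population
            sib1 = rng.random() < 0.5
            sib2 = rng.random() < 0.5
            raw_match += (sib1 == sib2)
            sib1_variants += sib1
            shared_variants += (sib1 and sib2)
        else:                              # fixed locus: everyone matches
            raw_match += 1
    print(f"raw sequence identity:   {raw_match / num_loci:.3f}")                       # ~0.999
    print(f"shared genetic variance: {shared_variants / max(1, sib1_variants):.2f}")    # ~0.5

relatedness_demo()
```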

Only in the last century - arguably only in the last fifty years - have evolutionary biologists really begun to understand the full range of causes of reproductive success, things like reciprocal altruism and costly signaling. Without all this highly detailed knowledge, an intelligent agent that set out to "maximize inclusive fitness" would fall flat on its face.

So why not preprogram protein brains with the knowledge? Why wasn't a concept of "inclusive genetic fitness" programmed into us, along with a library of explicit strategies? Then you could dispense with all the reinforcers. The organism would be born knowing that, with high probability, fatty foods would lead to fitness. If the organism later learned that this was no longer the case, it would stop eating fatty foods. You could refactor the whole system. And it wouldn't invent condoms or cookies.

This looks like it should be quite possible in principle. I occasionally run into people who don't quite understand consequentialism, who say, "But if the organism doesn't have a separate drive to eat, it will starve, and so fail to reproduce." So long as the organism knows this very fact, and has a utility function that values reproduction, it will automatically eat. In fact, this is exactly the consequentialist reasoning that natural selection itself used to build automatic eaters.
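
Here is a minimal sketch of that argument in code; the two-action world, the two-step starvation rule, and the four-step horizon are all invented for illustration. The agent has no eating drive at all, only a utility function over reproduction and an explicit world model in which starving prevents reproduction, and its planner chooses to eat anyway.

```python
from itertools import product

def simulate(plan):
    """Invented toy world model: each step the agent may 'eat' or 'court'.
    It starves if it goes two consecutive steps without eating, and it
    reproduces if it courts at least once while still alive."""
    hunger, alive, reproduced = 0, True, False
    for action in plan:
        if not alive:
            break
        if action == "eat":
            hunger = 0
        else:
            hunger += 1
            if hunger >= 2:
                alive = False      # starved
            else:
                reproduced = True  # a successful courtship
    return alive and reproduced

def utility(plan):
    # The agent's only terminal value: did it reproduce?  Eating is not valued.
    return 1.0 if simulate(plan) else 0.0

best = max(product(["eat", "court"], repeat=4), key=utility)
print(best)   # the chosen plan includes eating, purely as an instrumental step
```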

What about curiosity? Wouldn't a consequentialist only be curious when it saw some specific reason to be curious? And wouldn't this cause it to miss out on lots of important knowledge that came with no specific reason for investigation attached? Again, a consequentialist will investigate given only the knowledge of this very same fact. If you consider the curiosity drive of a human - which is not undiscriminating, but responds to particular features of problems - then this complex adaptation is purely the result of consequentialist reasoning by DNA, an implicit representation of knowledge: Ancestors who engaged in this kind of inquiry left more descendants.

So in principle, the pure reproductive consequentialist is possible. In principle, all the ancestral history implicitly represented in cognitive adaptations can be converted to explicitly represented knowledge, running on a core consequentialist.

But the blind idiot god isn't that smart. Evolution is not a human programmer who can simultaneously refactor whole code architectures. Evolution is not a human programmer who can sit down and type out instructions at sixty words per minute.

For millions of years before hominid consequentialism, there was reinforcement learning. The reward signals were events that correlated reliably to reproduction. You can't ask a nonhominid brain to foresee that a child eating fatty foods now will live through the winter. So the DNA builds a protein brain that generates a reward signal for eating fatty food. Then it's up to the organism to learn which prey animals are tastiest.

DNA constructs protein brains with reward signals that have a long-distance correlation to reproductive fitness, but a short-distance correlation to organism behavior. You don't have to figure out that eating sugary food in the fall will lead to digesting calories that can be stored as fat to help you survive the winter so that you mate in spring to produce offspring in summer. An apple simply tastes good, and your brain just has to plot out how to get more apples off the tree.
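
A minimal reinforcement-learning sketch of this proxy-reward setup, with made-up foods and taste probabilities and a plain epsilon-greedy bandit standing in for the organism: the only reward signal is immediate taste, the learner never represents calories, winter, or reproduction, and it still ends up seeking out the apple.

```python
import random

FOODS = {"apple": 0.9, "bark": 0.1}   # made-up chance each food triggers the taste reward

def learn_preferences(steps=2000, epsilon=0.1, seed=0):
    """Epsilon-greedy bandit: the only signal is immediate taste reward."""
    rng = random.Random(seed)
    value = {food: 0.0 for food in FOODS}     # learned estimate of each food's tastiness
    counts = {food: 0 for food in FOODS}
    for _ in range(steps):
        if rng.random() < epsilon:            # occasionally explore
            food = rng.choice(list(FOODS))
        else:                                 # otherwise exploit the tastiest-so-far
            food = max(value, key=value.get)
        reward = 1.0 if rng.random() < FOODS[food] else 0.0   # taste, not fitness
        counts[food] += 1
        value[food] += (reward - value[food]) / counts[food]  # incremental average
    return value

print(learn_preferences())   # e.g. apple ~0.9, bark ~0.1 - the learner plots how to get more apples
```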

And so organisms evolve rewards for eating, and building nests, and scaring off competitors, and helping siblings, and discovering important truths, and forming strong alliances, and arguing persuasively, and of course having sex...

When hominid brains capable of cross-domain consequential reasoning began to show up, they reasoned consequentially about how to get the existing reinforcers. It was a relatively simple hack, vastly simpler than rebuilding an "inclusive fitness maximizer" from scratch. The protein brains plotted how to acquire calories and sex, without any explicit cognitive representation of "inclusive fitness".

A human engineer would have said, "Whoa, I've just invented a consequentialist! Now I can take all my previous hard-won knowledge about which behaviors improve fitness, and declare it explicitly! I can convert all this complicated reinforcement learning machinery into a simple declarative knowledge statement that 'fatty foods and sex usually improve your inclusive fitness'. Consequential reasoning will automatically take care of the rest. Plus, it won't have the obvious failure mode where it invents condoms!"

But then a human engineer wouldn't have built the retina backward, either.

The blind idiot god is not a unitary purpose, but a many-splintered attention. Foxes evolve to catch rabbits, rabbits evolve to evade foxes; there are as many evolutions as species. But within each species, the blind idiot god is purely obsessed with inclusive genetic fitness. No quality is valued, not even survival, except insofar as it increases reproductive fitness. There's no point in an organism with steel skin if it ends up having 1% less reproductive capacity.

Yet when the blind idiot god created protein computers, its monomaniacal focus on inclusive genetic fitness was not faithfully transmitted. Its optimization criterion did not successfully quine. We, the handiwork of evolution, are as alien to evolution as our Maker is alien to us. One pure utility function splintered into a thousand shards of desire.

Why? Above all, because evolution is stupid in an absolute sense. But also because the first protein computers weren't anywhere near as general as the blind idiot god, and could only utilize short-term desires.

In the final analysis, asking why evolution didn't build humans to maximize inclusive genetic fitness is like asking why evolution didn't hand humans a ribosome and tell them to design their own biochemistry. Because evolution can't refactor code that fast, that's why. But maybe in a billion years of continued natural selection that's exactly what would happen, if intelligence were foolish enough to allow the idiot god continued reign.

The Mote in God's Eye by Niven and Pournelle depicts an intelligent species that stayed biological a little too long, slowly becoming truly enslaved by evolution, gradually turning into true fitness maximizers obsessed with outreproducing each other. But thankfully that's not what happened. Not here on Earth. At least not yet.

So humans love the taste of sugar and fat, and we love our sons and daughters. We seek social status, and sex. We sing and dance and play. We learn for the love of learning.

A thousand delicious tastes, matched to ancient reinforcers that once correlated with reproductive fitness - now sought whether or not they enhance reproduction. Sex with birth control, chocolate, the music of long-dead Bach on a CD.

And when we finally learn about evolution, we think to ourselves: "Obsess all day about inclusive genetic fitness? Where's the fun in that?"

The blind idiot god's single monomaniacal goal splintered into a thousand shards of desire. And this is well, I think, though I'm a human who says so. Or else what would we do with the future? What would we do with the billion galaxies in the night sky? Fill them with maximally efficient replicators? Should our descendants deliberately obsess about maximizing their inclusive genetic fitness, regarding all else only as a means to that end?

Being a thousand shards of desire isn't always fun, but at least it's not boring. Somewhere along the line, we evolved tastes for novelty, complexity, elegance, and challenge - tastes that judge the blind idiot god's monomaniacal focus, and find it aesthetically unsatisfying.

And yes, we got those very same tastes from the blind idiot's godshatter. So what?

83 comments

Comments sorted by oldest first, as this post is from before comment nesting was available (around 2009-02-27).

comment by manuelg · 2007-11-13T19:57:25.000Z · LW(p) · GW(p)

Godshatter? What I may or may not have shat out of my divine anus is of no concern of yours.

Signed, God (big bearded guy in the sky)

Replies from: MatthewBaker
comment by MatthewBaker · 2011-10-26T19:23:44.496Z · LW(p) · GW(p)

Get big enough in the Beyond to come down to where I live, God; I won't send you back to the Slow Zone or anything ;) -Pham

comment by James_D._Miller · 2007-11-13T20:21:27.000Z · LW(p) · GW(p)

Eliezer, you wrote:

"Or else what would we do with the future? What would we do with the billion galaxies in the night sky? Fill them with maximally efficient replicators? Should our descendants deliberately obsess about maximizing their inclusive genetic fitness, regarding all else only as a means to that end?"

Won't our descendants who do have genes or code that causes them to maximize their genetic fitness come to dominate the billions of galaxies? How can there be any other stable long-term equilibrium in a universe in which many lifeforms have the ability to choose their own utility functions?

Replies from: PhilGoetz
comment by PhilGoetz · 2010-05-08T04:19:06.304Z · LW(p) · GW(p)

Genetic fitness refers to reproduction of individuals. The future will not have a firm concept of individuals. What is relevant is control of resources; this is independent of reproduction.

Furthermore, what we think of today as individuality, will correspond to information in the future. Reproduction will correspond to high mutual information. And high mutual information in your algorithms leads to inefficient use of resources. Therefore, evolution, and competition, will at least in this way go against the future correlate of "genetic fitness".

Replies from: diegocaleiro, Timo
comment by diegocaleiro · 2010-11-24T08:14:28.460Z · LW(p) · GW(p)

Wow, too big an inferential distance, Phil. No idea what you are talking about here: "what we think of today as individuality, will correspond to information in the future."

Would you mind giving a few more details? Curiosity striking...

Replies from: Lambda, None
comment by Lambda · 2012-03-24T03:16:50.250Z · LW(p) · GW(p)

Would you mind giving a few more details? Curiosity striking...

I've been lurking for a while, and this is my first post, but:

Would you mind giving far fewer details? Consciously imposed conjunction-aversion striking...

FTFY. Instead of asking for a single detailed story, we should ask for many simple alternative stories, no?

Obviously, this doesn't countermand your complaint about inferential distance, which I totally agree with.

comment by [deleted] · 2016-04-21T21:31:44.139Z · LW(p) · GW(p)

Still waiting for OP to deliver...

It's probably just something stupid like he thinks humans will upload on computers and he thinks he knows how future society-analogues will function.

comment by Timo · 2016-01-15T23:32:39.811Z · LW(p) · GW(p)

This /seems/ to contain great insight that I can't comprehend yet. Yes, please, how do I learn to see what you see?

Replies from: aaq
comment by aaq · 2016-05-14T15:57:05.055Z · LW(p) · GW(p)

I'm very wary of this post for being so vague and not linking to an argument, but I'll throw my two cents in. :)

The future will not have a firm concept of individuals.

I see two ways to interpret this:

  1. You could see it as individuals being uploaded to some giant distributed AI - individual human minds coalescing into one big super-intelligence, or being replaced by one; or
  2. Having so many individuals that the entire idea of worrying about 1 person, when you have 100 billion people per planet per quadrant or whatever, becomes laughable.

The common thread is that "individuality" is slowly being supplanted by "information" - specifically that you, as an individual, only become so because of your unique inflows of information slowly carving out pathways in your mind, like how water randomly carves canyons over millions of years. In a giant AI, all the varying bits that make up one human from another would get crosslinked, in some immense database that would make Jorge Luis Borges blush; meanwhile, in a civilization of huge, huge populations, the value of those varying bits simply goes down, because it becomes increasingly unlikely that you'll actually be unique enough to matter on an individual level. So, the next bottleneck in the spread of civilization becomes resources.

This is probably my first comment on this site - feel free to browbeat me if I didn't get my point across well enough.

comment by Tom_McCabe2 · 2007-11-13T20:33:47.000Z · LW(p) · GW(p)

For everyone who hasn't read A Fire Upon The Deep (Vinge): Godshatter is the term he uses for a superintelligence ramming data and thought patterns into a human brain.

comment by Scott_Scheule · 2007-11-13T20:48:50.000Z · LW(p) · GW(p)

And "Thou art God" comes from Stranger in a Strange Land.

comment by Doug_S. · 2007-11-13T20:51:37.000Z · LW(p) · GW(p)

The human brain also has the complicating factor of memes and what might be called "inclusive memetic fitness." If you hypothesize that human behavior is influenced by two different sets of selfish replicators, we could certainly have an equilibrium in which natural selection doesn't produce behavior that maximizes the fitness of only one of them. (Incidentally, does it seem to anyone else that humans are "designed" to have unwanted pregnancies?)

(Also, "The Mote In God's Eye" isn't necessarily the best example. The Moties aren't specifically motivated by the desire to maximize inclusive genetic fitness, but their biology requires them to reproduce as much as possible whether their brains think they should or not. "Protector" might be a better choice, as it describes highly intelligent individuals that have maximizing the reproductive success of their offspring as their primary conscious motivation.)

Replies from: diegocaleiro
comment by diegocaleiro · 2010-11-24T08:21:23.620Z · LW(p) · GW(p)

Unwanted pregnancies and 'unwanted pregnancies': if one cannot tell the difference, maybe it is because the difference is starting to disappear. I mean, theoretically we should tend more and more towards "oops, I forgot to take my pill today" and "Oh, don't worry, just this one time without a condom".

About the equilibrium between two sets of replicators: awesome as it looks, it doesn't seem feasible from a game-theoretic point of view. We are not the product of two replicators, we are the product of two KINDS of replicators. Each replicator, gene or meme, is fighting its own fight, and will not necessarily coalesce only with its kind. They are not tribes fighting one another; I suggest this is an atypical occurrence of the Mind Projection Fallacy.

Replies from: tlhonmey
comment by tlhonmey · 2022-06-07T00:15:44.940Z · LW(p) · GW(p)

If there weren't people who had a strong desire, not just for sex, but to actually have a child, and a willingness to go to extreme measures to do so, then sperm banks wouldn't be a thing.

Given the number of people who specifically, and openly desire to make babies, postulating a subconscious desire that might push them to "forget" their contraception isn't unreasonable.  Especially given that cycle timing and coitus interruptus have been staples of human sexual behaviour since...  Well...  At least as far back as we have any records about such things.  Dawn of civilization.

The two sets of replicators reminds me of an article I read about a species of birds that seems to be splitting into effectively four sexes.  Male and female, but then also coloring patterns that have formed a stable loop that alternates back and forth.  If the loop were unstable they'd split into two species, but it alternates generations regularly, so they keep mixing, but in a pattern of four.

comment by Pyramid_Head · 2007-11-13T21:07:51.000Z · LW(p) · GW(p)

Why is Eliezer so obsessed with the "high-falutin'" expression?

comment by TGGP4 · 2007-11-13T21:32:23.000Z · LW(p) · GW(p)

"Obsess all day about inclusive genetic fitness? Where's the fun in that?" Might not our descendants evolve to consider it fun?

I agree with James Miller on the unstable equilibrium. I figure I'll be dead by then though.

comment by Robin_Hanson2 · 2007-11-13T21:59:51.000Z · LW(p) · GW(p)

I don't find it particularly comforting that we are made of many small shards of desire, rather than a single unified desire. We like the fact that the universe is at least locally dominated by creatures who like what we like. This will always be true, that the majority will see a world of creatures mostly like themselves.

comment by Nick_Tarleton · 2007-11-13T22:29:43.000Z · LW(p) · GW(p)

I don't find it particularly comforting that we are made of many small shards of desire, rather than a single unified desire.

It seems like this is a large part of what makes us eudaemonic agents, in Bostrom's terminology. However, the most admired people are frequently those who display more of a deep commitment than usual to one or a few passions.

comment by Dave3 · 2007-11-13T23:47:25.000Z · LW(p) · GW(p)

"Thou Art Godshatter"! Finally, a name for my Christian/Prog/Electronica combo!

comment by J_Thomas · 2007-11-14T00:42:51.000Z · LW(p) · GW(p)

We have no instinctive revulsion of condoms or oral sex. Our brains, those supreme reproductive organs, don't perform a check for reproductive efficacy before granting us sexual pleasure.

If we could barely arrange to have enough sex to cause 2 pregnancies per lifetime, then we would have a revulsion of condoms, oral sex, etc.

If for example we spent almost all of each year alone, and once a year men and women would meet on a sandy beach (or just offshore) and have sex, and that would be the only chance for women to get pregnant until the next year, men would compete intensely for the women who seem to be fertile, and the losers and the apparently-infertile women could console each other. When you only get one chance you don't waste it.

But humans use sex for social signalling. A man who publicly explained that he intended never to have sex except when he was trying to cause a pregnancy, might find himself at a disadvantage among some women.

But human brains clearly can imagine these links in protein. So when the Evolution Fairy made humans, why did It bother with any motivation except inclusive genetic fitness?

How are we supposed to tell? Like, we need food for energy and for building materials. When you don't get enough energy you feel that. When you don't get enough of some building material you feel that too, and you might learn to recognise that particular feeling. I've read about Africans who specifically get hungry for meat, who identify the specific feeling of protein deficiency. How many others are there? Each individual amino acid? Each individual vitamin? I think instead you get rewarded by the feeling you get when there's enough of everything and no glut of something that causes problems. And it's up to your behavioral reinforcement system to notice foods that give you that feeling.

So humans love the taste of sugar and fat, and we love our sons and daughters. We seek social status, and sex. We sing and dance and play. We learn for the love of learning.

There maybe wasn't much chance to overdose on sugar or fat in the old days. Or salt, if you were inland. Give it a few more generations and we might do that sort of thing less. Are Pima Amerindians more susceptible to the amount of sugar we eat, or do they eat more because they can? I expect that's been tested but I don't know the answer myself. Maybe they haven't been exposed to sugar for as long, so they don't have the defenses against it we do.

You can learn strategies to play chess. Wouldn't it be nice if evolution had provided us with a chess-fitness optimiser? Instead of thinking about strategies, you just make the right move. But that requires the problem of winning at chess to be already solved.

It might very well turn out that in our future people who successfully reproduce will tend to be people who want to, and who figure out how to. There's some of that now, and maybe those people wind up with fitter children than the ones who become parents through carelessness. That might be hard to measure just now. Maybe the big majority of the children come from parents who give little thought to how they'll take care of their children, and the ones who're cautious wind up under-reproducing as a result. In the long run the evolutionary process will note what's worked whether we manage to measure it or not.

comment by Nick_Tarleton · 2007-11-14T00:57:52.000Z · LW(p) · GW(p)

If we could barely arrange to have enough sex to cause 2 pregnancies per lifetime, then we would have a revulsion of condoms, oral sex, etc.

If that were true in the ancestral environment, and we had access to contraception in the AE, yes. I doubt a person who now found themselves in this situation would develop this revulsion.

I've read about Africans who specifically get hungry for meat, who identify the specific feeling of protein deficiency.

[anecdote] Is this surprising? I've always been able to tell whether or not I need proteins/carbohydrates/fat (usually acting accordingly), and kind of assumed other people had the same sense. Of course, this is a fairly weak sense; calorie-rich food still tastes good when I've had too much, and it still takes willpower to stop eating it. [/anecdote]

Replies from: Will_Newsome
comment by Will_Newsome · 2010-07-22T21:57:05.961Z · LW(p) · GW(p)

TAWME, but I'm not sure if it is a consciously learned introspective behavior or something that I just picked up or developed without effort. FWIW I've only really noticed and acted on it for the last year or two.

Replies from: MaxNanasy
comment by MaxNanasy · 2012-05-20T08:34:37.906Z · LW(p) · GW(p)

What does "TAWME" mean?

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2012-05-20T09:46:58.146Z · LW(p) · GW(p)

"This Agrees With My Experience".

comment by Nick Hay (nickjhay) · 2007-11-14T01:59:02.000Z · LW(p) · GW(p)

Eliezer: poetic and informative. I like it.

comment by Stefan_Pernar · 2007-11-14T02:34:32.000Z · LW(p) · GW(p)

@Eliezer, you are slowly changing your point of view and are on a path to rethink old thoughts. Save yourself some time and go read the Principia Cybernetica Web. Only after that will you be able to tread on new ground.

@Nick Tarleton, yes - avoiding a dystopia of non-eudaemonic agents is a challenge.

comment by Eric_Falkenstein · 2007-11-14T02:46:54.000Z · LW(p) · GW(p)

As a chicken is a way for an egg to create another egg, I would like to 'tell my genes to jump in a lake', as Steven Pinker puts it, but considering so many of my preferences are in sync with my genes, I have the feeling they are very good at getting me to rationalize their preferences. I don't think there's intrinsic meaning in anything, but when I see connections, or patterns, in music or jokes or anything, that I haven't noticed before, I find that meaningful, pleasurable in a way my genes can't understand. But as for my love for my kids, and the meaning it gives me: clearly the gene gremlins are at work.

comment by Tiiba2 · 2007-11-14T02:51:22.000Z · LW(p) · GW(p)

You know, you're getting repetitive. What does this post add to all the other related posts? "Evolutions" are stupid and slow. Okay. But I would guess that many people here want to know your thoughts about AI. I do.

comment by Stefan_Pernar · 2007-11-14T03:12:53.000Z · LW(p) · GW(p)

@Tiiba, my paper on friendly AI theory should provide an answer to your question.

comment by J_Thomas · 2007-11-14T03:56:44.000Z · LW(p) · GW(p)

"I doubt a person who now found themselves in this situation would develop this revulsion."

By about the second generation a lot would. They would mostly be descended from people who hadn't used them. There is a minority that has a revulsion for condoms now. The idea of giving up practically your only chance to have children, deliberately, would start seeming strange when everybody in the world had parents who hadn't done it. Cultures change faster when that happens.

comment by Tiiba2 · 2007-11-14T04:02:55.000Z · LW(p) · GW(p)

"@Tiiba, my paper on friendly AI theory should provide an answer to your question."

I don't see any connection between my question and your answer. At least one of us is confused.

comment by Stefan_Pernar · 2007-11-14T05:07:47.000Z · LW(p) · GW(p)

I base friendliness (universally) on the mechanism of natural selection and claim in short "that is good what increases fitness". You can find more on my blog at http://jame5.com

comment by Tiiba2 · 2007-11-14T05:58:41.000Z · LW(p) · GW(p)

You don't understand. I'm asking ELIEZER what he is thinking. His homepage says that he has some fresh ideas about AI that are not yet published, yet he continues to write about evolution, rehashing the same idea every day. That is what I said. I don't even know what question you're answering.

comment by Daniel_Humphries · 2007-11-14T06:32:55.000Z · LW(p) · GW(p)

"Being a thousand shards of desire isn't always fun, but at least it's not boring."

I like that. I have a feeling Lord Gautama would have liked it too.

I will venture to say that Eliezer's habit (this isn't the first instance) of teasing out the same subject again and again from slightly different angles is highly illuminating for me, at least. (And, I suspect, for him as well... though that's conjecture).

I'm a bit slower than your average Overcoming Bias lurker, it would seem from the level of discourse here. Sometimes I think I barely grasp what everyone is even talking about, though I try to read the background links people provide. But I'm an intelligent person in general, and I have an interest in the concepts and methods Robin, Eliezer and the rest hash out in this space. You could argue that all humans do, whether they realize it or not. Either Eliezer added something new to this post, or reading post after post on this has finally hammered the point through my brain. But this evening I feel like I finally get it, and by "it" I mean merely the most basic concepts.... I grasped them abstractly right away, but more important for overcoming one's own operating biases is really getting it in a way that will allow one to spot one's faulty reasoning in the past and the future.

comment by Stefan_Pernar · 2007-11-14T06:39:15.000Z · LW(p) · GW(p)

@Tiiba, trust me - I am quite certain that I do, but this is not the right forum - PM me if you want to continue off this blog.

comment by TGGP4 · 2007-11-14T07:20:21.000Z · LW(p) · GW(p)

@Eliezer, you are slowly changing your point of view and are on a path to rethink old thoughts.

I didn't notice any evidence of that. He said that he had greatly changed his view in the past, but that was before he started blogging here. What have you seen since then that makes you think that?

comment by Stefan_Pernar · 2007-11-14T08:33:40.000Z · LW(p) · GW(p)

@TGGP: This forum really is not the right place to get into details. It would not be fair towards Eliezer and that I posted something at all is an embarrassing revelation in regards to my intellectual vanity. Mea culpa.

comment by The_Monster_from_Polaris · 2007-11-14T09:07:35.000Z · LW(p) · GW(p)

Consider the Laestadians (look them up in Wikipedia if you haven't heard of them). They tend to have lots of children; one TV program some years ago mentioned that families with 10 children are common among them.

Unless a lot of those children abandon (or at least modify) their parents' faith, the future belongs to them and similar groups.

Religion can be a powerful fitness maximizer.

Replies from: tlhonmey
comment by tlhonmey · 2022-06-07T00:01:56.472Z · LW(p) · GW(p)

Alternatively, consider the various sects in history which have thought that the world was evil and therefore bringing children into it was doing them great harm.  Needless to say, the majority of them seem to have died out...

comment by Lake · 2007-11-14T11:02:02.000Z · LW(p) · GW(p)

@ Tiiba # 1: Without wishing to second-guess Eliezer, I'd suggest that his prolonged examination of the buggy, ad-hoc character of human intelligence may be intended to preface a discussion AI, its goals and methods. After all, the contrast with human intelligence could be illuminating.

comment by Lake · 2007-11-14T11:05:14.000Z · LW(p) · GW(p)

That missing word: "of".

comment by Stuart_Armstrong · 2007-11-14T11:11:51.000Z · LW(p) · GW(p)

The blind idiot god's single monomaniacal goal splintered into a thousand shards of desire.

This would explain why our formalised moral systems are either hideously complicated, or fail to capture important parts of our morality... We just have far more urges, wants and needs than we realise.

comment by Unknown · 2007-11-14T12:33:05.000Z · LW(p) · GW(p)

As many comments have suggested, now that evolution has produced creatures that can consciously seek goals, and also has instilled in some of these creatures, to some extent, the goal of bearing and raising children, all that evolution needs to do is to reinforce this desire, and in time it will manage to produce a conscious fitness maximizer.

Abandoning biology is not a way to avoid this result, since biology is not the problem, but reproduction and its historical consequences. Leaving behind biology could even speed up the process dramatically.

Maybe the alien god, despite being blind, slow, and stupid, will get its way in the end. In the distant future, intelligent fitness maximizers might laugh at the ridiculous idea, now long extinct, that it is better to have a random collection of unrelated desires for no reason except historical accident, than to seek the unified goal of fitness. After all, they'll say, obviously nothing is worth seeking except fitness. And besides, seeking anything else is self-destructive.

If this comes to pass, what would be wrong with it? If the wrongness is only from our point of view, why should our point of view have more validity than theirs? Since we have no reason to think they would find their goal boring or miserable, we have no obvious reason to be horrified at the idea of this society.

Replies from: EniScien
comment by EniScien · 2022-02-04T13:48:03.998Z · LW(p) · GW(p)

It's wrong because our function is different. Functions are wrong or true only for other functions.

comment by J_Thomas · 2007-11-14T12:53:08.000Z · LW(p) · GW(p)

Just as an aside, fitness maximizers usually have to accept a finite population size in a finite biome with a finite carrying capacity. There's the possible goal of expanding into the galaxy and neighboring galaxies, but in the short run we have a finite carrying capacity.

And a fitness maximizer that is too successful has to accept it needs to preserve a lot of diversity in its gene pool or else face problems that would essentially reduce carrying capacity.

A conscious fitness maximizer at some point must realise that it survives by maintaining its numbers in a diverse population, rather than maximizing the frequency of its genes.

comment by Lake · 2007-11-14T13:12:50.000Z · LW(p) · GW(p)

@ Unknown: Well, one reason why our point of view is more valid than theirs is that we exist and they don't.

In addition, it is probably worth stressing that inclusive fitness is not, strictly speaking, the goal of anything at all. Goals only make sense relative to intentions, values and so forth - the usual accoutrements of mentality. These are all things that we humans (and perhaps some other creatures) possess, but which evolution, and our genes, do not. No minds, you see. Despite appearances.

This said, there might be something to be said for engineering or breeding descendants whose drives are more harmonious than our own. For instance, they might be happier. Still, there's no particular reason why we should choose to make inclusive fitness the goal of all their striving, as opposed to something else.

comment by Stefan_Pernar · 2007-11-14T13:14:15.000Z · LW(p) · GW(p)

@J Thomas, the trick lies in ensuring continued co-existence.

comment by Recovering_irrationalist · 2007-11-14T13:33:48.000Z · LW(p) · GW(p)

@Stefan: I enjoyed your book and was fascinated by your FAI perspective, but your comments here could be read as overly self-promoting, which would be counterproductive. An evil, paranoid maniac might even imagine you write comments to maximize how many links to your blog you can cram onto a page! Maybe limiting the links to yourself might curb such insanity in your audience.

comment by Stefan_Pernar · 2007-11-14T13:47:13.000Z · LW(p) · GW(p)

@Recovering irrationalist, good points, thank you - I just wanted to save time and space by linking to relevant stuff on my blog without repeating myself over and over. My apologies for overdoing it. I guess I feel like talking to a wall or being deliberately ignored due to the lack of feedback. I shall curb my enthusiasm and let things take their course. You know where to find me.

comment by brendon · 2007-11-14T18:27:37.000Z · LW(p) · GW(p)

[anecdote] Is this surprising? I've always been able to tell whether or not I need proteins/carbohydrates/fat (usually acting accordingly)....

sorry, guys, this wisdom-of-the-body stuff hasn't held up that well. i've given the link below for a lengthy but thorough account of studies that were done on rats, for the two or fewer people here who might be interested. while there is some evidence for behavioral changes based on mineral deficiency, it's extremely complicated and the changes in the animal's behavior are not that "accurate" (in the sense that the animal truly seeks out the depleted nutrient). bottom line, "In many experimental situations, animals do not choose an optimal diet. This is especially the case for omnivores." i hope this wasn't entirely off-topic; just wanted to clean out this little rafter....

http://ajpregu.physiology.org/cgi/content/full/279/2/R357

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2007-11-14T19:34:30.000Z · LW(p) · GW(p)

This would explain why our formalised moral systems are either hideously complicated, or fail to capture important parts of our morality... We just have far more urges, wants and needs than we realise.

Congratulations to Stuart Armstrong on nailing my hidden subtext.

(Albeit even the hideously complicated moral systems still don't capture a fraction of our morality.)

@Tiiba: You seem to think I can just blurt out my AI ideas. I've tried that. It doesn't work.

Having watched other AIfolk "explaining" their ideas, I know very well how to convince someone that you've just conveyed an AI theory - just pick a word like "complexity", "emergence", or "Bayesian" and call it the secret of the universe; or draw a big diagram full of connected boxes with suggestive names drawn from cognitive science. Unlike these other AIfolk, I've actually learned a little about how intelligence works, and so I know this would be unhelpful and dishonest.

Bayes is the secret of the universe, but believing this statement will not help you.

If you seek enlightenment upon this matter of AI, then I must ask whether you've read existing textbooks such as Machine Learning by Mitchell, Probabilistic Reasoning in Intelligent Systems, Artificial Intelligence: A Modern Approach (2nd Ed), and Elements of Statistical Learning. Recommended in that order.

@Unknown: I am horrified by the thought of humanity evolving into beings who have no art, have no fun, and don't love one another. There is nothing in the universe that would likewise be horrified, but I am. Morality is subjectively objective: It feels like an unalterable objective fact that love is more important than maximizing inclusive fitness, and the one who feels this way is me. And since I know that goals, no matter how important, need minds to be goals in, I know that morality will never be anything other than subjectively objective.

With all that said, I hope you won't mind if I use objective language to say:

"Evolving into obsessive replicators would be a waste of humanity's potential. They might not mind, just as sociopaths don't mind killing, but I mind. I will avoid such a future with every power of my intelligence."

comment by J_Thomas · 2007-11-14T20:56:49.000Z · LW(p) · GW(p)

Brendon, you can't expect a learning system to quickly get an exact solution to a problem in N simultaneous equations. But when improvements result in a sense of well-being, they might tend to gradually zero in on solutions. So for nutrition you need sufficient energy and your body might have pre-programmed goals for repair and growth, and whatever helps meet those targets could provide that sense of well-being that announces something worked.

Simpler than having thousands of individual goals programmed in.

comment by Richard_Hollerith2 · 2007-11-14T21:08:10.000Z · LW(p) · GW(p)

"Being a thousand shards of desire isn't always fun, but at least it's not boring."

I like that. I have a feeling Lord Gautama would have liked it too.

I always thought the exact opposite, that Lord Gautama had a profound experience that made him relatively indifferent to the thousand shards. Specifically, a full-blown ecstatic or mystical experience is a million times more pleasurable than any other experience the mystic has had or will have, which I always thought would make one less attached to ordinary pleasures and ordinary reinforcers. Once a religion becomes a popular movement or part of the ruling class's justification, its leaders are tempted to modify it to broaden its appeal, which is how I always thought Buddhism acquired the habit of promising an end to suffering. One of my friends had a profound mystical experience, personally attests to the "a million times more pleasurable", is very scrupulous and truthful and derives no reputational benefit from having had the experience. (He says that I am practically the only one with whom he has ever discussed his experience in any detail.)

Moreover, it is my hypothesis that being indifferent to the thousand shards is a powerful enhancer of mental and moral clarity in the right conditions. One of these conditions is that the indifference be not so total or so early in onset that it extinguishes curiosity during the person's youth, which of course is just totally pernicious in an environment as rich in true scientific information as our environment is. Another adherent of this hypothesis is academician John Stewart. Mystical experience is quite risky and dangerous; competent supervision is recommended and I would suppose is available at low or no cost to students who show high potential. Another common adverse outcome of mystical experience seems to be to make the person more confident in his beliefs, especially about the moral and political environment, and of course people tend to be too confident about their beliefs already.

comment by brendon2 · 2007-11-14T22:50:00.000Z · LW(p) · GW(p)

J Thomas : "So for nutrition you need sufficient energy and your body might have pre-programmed goals for repair and growth, and whatever helps meet those targets could provide that sense of well-being that announces something worked."

this sort of system might work for thirst or even carbs and protein but would be pretty bad at things like getting you to eat balanced amounts of vitamins and minerals. for instance, your diet could be vitamin b12 poor for months or maybe longer before you would feel the pinch (your body stores the vitamin pretty well, i'm told), and i doubt you would then start 'craving' vitamin b12 rich foods - particularly because there had never been an appetite-satiety relationship you could have picked up on, even unconsciously.

comment by douglas · 2007-11-15T01:49:32.000Z · LW(p) · GW(p)

Some new info re: evolution you might want to consider before taking the gene view of evolution to its logical conclusions:

http://www.springerlink.com/content/qh67113u60887314/ "Although we agree that evolutionary theory is not undergoing a Kuhnian revolution, the incorporation of new data and ideas about hereditary variation, and about the role of development in generating it, is leading to a version of Darwinism that is very different from the gene-centred one that dominated evolutionary thinking in the second half of the twentieth century."

http://www.sciencedaily.com/releases/2003/09/030929054959.htm how new thinking applies to societies

Replies from: themusicgod1
comment by themusicgod1 · 2013-10-31T18:13:50.658Z · LW(p) · GW(p)

Is not your second link dealt with by http://lesswrong.com/lw/iv/the_futility_of_emergence/ or am I misreading one of the two? It seems to leave the main causal mechanism abstract enough to prove anything.

comment by Pyramid_Head2 · 2007-11-15T02:34:36.000Z · LW(p) · GW(p)

That still doesn't explain why Eliezer has been using the expression "high-falutin'" so much. Is it from some recently read book, perhaps?

comment by J_Thomas · 2007-11-15T02:52:05.000Z · LW(p) · GW(p)

Brendon, I find your reasoning plausible. I don't know how true it is. I don't want to give myself pernicious anemia to test it, so I'll settle for saying it looks plausible.

If you have a vitamin deficiency, and you get a dose of the vitamin that makes you somewhat less deficient, will you feel better within a few hours? If so then it might be reinforced. On the other hand, one single experience of nerve poisoning a few hours after eating a particular new food can be enough to establish a lifelong distaste for that food.

comment by Caledonian2 · 2007-11-15T04:56:46.000Z · LW(p) · GW(p)

Specifically, a full-blown ecstatic or mystical experience is a million times more pleasurable than any other experience the mystic has had or will have

This seems unlikely - it's far more probable that mystical experiences are highly satisfying rather than so intensely pleasurable.

comment by Daniel_Humphries · 2007-11-15T07:49:11.000Z · LW(p) · GW(p)

Richard:

I suppose this counts as threadjacking, but this thread seems about played out, so I'll respond to your response to my off-topic aside.

I'm interested in what you say. I don't think it's necessarily off base. But my little cheeky comment was in reference to the Buddhist concept of anatta, or non-self. That is, Eliezer's insistence that there is no purposeful unifying force behind what we experience as "our" desires reminded me of an analogous teaching of the Buddha. Evolution can be seen as a unifying force, I suppose, since it is the common wellspring of our desires, but as Eliezer is rightly at pains to point out, it is decidedly not purposeful. "A thousand shards of desire" is what we are left with.

One of the key concepts of Buddhist meditation and scholarship is that desires are ultimately independent of the desirer. [Note: I differentiate serious, classical Buddhism, which has a ridiculously large set of founding texts and canonical commentaries, from pop Buddhism, or the selective Western brand of Buddhism which takes the concepts that have appeal for people brought up in a society where the dominant religious traditions are monotheistic and authoritarian (the West, that is) while leaving behind the less sexy teachings which are in fact the core of the practice.] In the first stages of serious meditation, before you achieve any mystical bliss or whatnot, it becomes quite clear that the thoughts and desires that we take for granted as "our own" are in fact caused by specific conditions and fall away when those conditions cease. That's the practice-based observation. The theoretical concept that springs from that is that, in fact, we build our mistaken sense of a unified "I" out of these falsely-apprehended experiences. (I say theoretical because my personal inquiries have not yet fully borne this out... perhaps they will, perhaps not... there are Buddhist scholars and monks who claim to know this to be ontologically true... I have reasons to doubt them, but I also have reasons to believe them... further inquiry is required).

Of course this teaching comes from a time before any understanding of evolutionary theory, and is practiced today by people who, broadly speaking, still don't have any real understanding of such (yours truly included!). I don't want to throw around too much sloppy thinking here, but I will suggest that there may be more than one angle at which to come to an understanding. Both disciplined scientific inquiry and disciplined meditational inquiry are (properly) undertaken with a desire to get at an understanding of reality while systematically eliminating misapprehensions and biases as they arise.

Anyway, all that is not to refute what you said, but to explain my comment.

I will take issue with your positing that the teachings on the end of suffering were added by later theocrats or rulers who wanted to broaden its appeal for the masses. In the oldest texts we have (written down around 2200 BC, after 300 or so years surviving in an oral tradition the fidelity of which has been shown in other contexts to be remarkable), the Buddha teaches again and again about suffering. Several places in the sutras he is quoted as saying, "I teach one thing: suffering and its end." The teaching on the Four Noble Truths (said to be the first teaching he ever gave though admittedly that's pretty hard to ascertain for sure) is the central teaching of the Buddhist canon. Many, many, many of the Buddha's teachings came in for debate, abandonment and wholesale distortion as they spread to various different societies with their own cultural norms and mores and institutions and languages. But the teachings on suffering and its end are the same in Tibet as they are in Sri Lanka as they are in Japan. You might argue that the original teaching was somehow a cynical appeal to the masses (I am very much inclined to say it was not), but it's clearly not a later corruption.

I'm very interested in parallels between the kind of ruthlessly rational inquiry displayed by the thinkers on this blog and that displayed by the early Buddhists, including the Buddha himself. I find myself looking for ways to reconcile the two. Of course, in even admitting that, I'm busting myself! If I have my desired conclusion in mind as I sift through the evidence, I have already forgotten the central teachings of Overcoming Bias! ... I'll press on though, catching myself where I can! ;)

comment by Richard_Hollerith2 · 2007-11-15T19:04:10.000Z · LW(p) · GW(p)

Nit: surely you mean "220 BC," not "2200 BC".

I will take issue with your positing that the teachings on the end of suffering were added by later theocrats or rulers who wanted to broaden its appeal for the masses.

I stand corrected. Thank you for your thoughtful reply.

I find myself looking for ways to reconcile the two. Of course, in even admitting that, I'm busting myself! If I have my desired conclusion in mind as I sift through the evidence, I have already forgotten the central teachings of Overcoming Bias!

Hmm. I wonder whether in ordinary cases it is okay to construct tentative models of reality at a profligate pace provided one remains sufficiently eager to revise and discard. I'm pretty sure that I derive pleasure when one of the tentative models I have constructed is destroyed by a counterexample or counter-evidence (and that this pleasure is caused by the same mechanism that causes the pleasure I get when I learn a new fact) and that that pleasure outweighs the pleasure I derive from feeling certain that I am right. In particular, I hypothesize that my early experience desensitized me to doubt including doubt about my own morality -- feelings that most people who did not have my experience seem to find quite aversive.

I believe that our environment is "awash in evidence" in that most hypotheses we need to entertain to lead a very effective and very ethical life have the property that if a person ignores evidence for the hypothesis, the only thing he sacrifices is time because the mere passage of time will bring more evidence for the hypothesis. Now of course I recognize exceptions to this general observation. I am willing to believe for example that in competitive situations like military combat or wheeling and dealing in business or simply in buying and selling, the person who pays closer attention to scarce evidence can have a decisive advantage. (Hmm: these situations also seem to share the property that denying the opponent information about one's situation is often decisive.) But in the main it remains true IMO.

In summary, the worst cognitive biases seem to me to be those in which the person is actively motivated by the human reward circuitry to ignore certain classes of evidence in a consistent manner. I propose that in comparison, merely ignoring most but not all evidence on some point and profligately building causal models on scant evidence are minor sins. Consequently, I advise paying close attention to one's emotional responses around belief formation and belief rejection.

Since that proposition seems to contradict a point Eliezer has made several times, I will counter the possibility that I will be misunderstood by saying that I agree with him at least 98% of the time and have personally learned far more from his writings than I have from any other author since 2001, when I discovered his writings. How much to trust or to give our loyalty to our emotions might be the biggest place he and I disagree, with my maintaining that it is critical for a person who aspires to be a culture leader to ignore as much as practical species-typical emotional associations when choosing one's beliefs and terminal values.

I advise a young person who wishes to become a mature adult who is not an arrant slave to species-typical cognitive biases to pay copious attention to what thoughts and beliefs cause him pleasure and which cause discomfort. I suggest that over the long term, if a person begins the project while still a teenager, a person has quite a bit of control over his emotional responses -- can for example probably cause himself to become an adult who takes great pleasure in learning new scientific information.

Two hints on that one. First, being rewarded (with e.g. money or grades) for learning will tend to extinguish the "intrinsic" motivation to learn which is so valuable. So if you must undergo the formal educational system, be as indifferent to grades as practical. Second, the pleasure to be derived from learning or from exercising scientific or technical creativity is minor compared to the pleasure a teenager can derive from success in the popularity game that high school is famous for, sex and perhaps dominating opponents on the athletic field. If you can manage to derive most of your pleasure from learning during the critical age from about 14 to 17 -- by making a point not to develop the habit of getting your pleasure from the three more powerful reinforcers I just mentioned, then you will have gone a long way to setting yourself up for "good emotional responses" throughout your adulthood. (Before the age of 14, most people will not have sufficient executive skills to engage in such a program of "emotional shaping", but if you think you do have the skills or if you have adults you trust helping you, I say go for it.)

Buddhist pursuits of the type Humphries engages in seem to be a fine aid to becoming a relatively-unbiased adult, particularly what the Buddhists have to say about cultivating an observing self.

Let me counter the possibility I will be misunderstood by saying that I have no practical experience educating young people except what I have learned from observing myself and listening to the recollections of a handful of friends. Still so much of what I read about pedagogy strikes me as misguided that I chose to speak out.

I am threadjacking of course, but I consider it not worth the costs to try to keep the conversation in neat little boxes especially once a thread has aged for a few days. I'll of course defer to the judgement of original poster and the owner of the blog.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2007-11-16T20:10:25.000Z · LW(p) · GW(p)

Humphries and Hollerith, your comments would be too long even if they were on-topic. However you can resubmit the comments to an Open Thread, after which they will be deleted here. Thank you.

comment by Richard_Hollerith2 · 2007-11-17T18:18:30.000Z · LW(p) · GW(p)

If they're too long for this page, I suggest that they're too long for an Open Thread, too. I have copied Humphries' latest and my two comments to my web site and emailed Humphries with a notification of what I did (followed by an offer to delete his words from my site if that is his preference).

comment by Tim_Tyler · 2008-08-09T16:44:00.000Z · LW(p) · GW(p)

Deep Blue has many desires too. It knows that a knight is three times as desirable as a pawn - unless the pawn is well advanced. It knows about the value of the centre, and the importance of quiescence - and so on.

The important point to realise is that these desires all represent imperfections. They are not useful features to be retained and deliberately implemented in future designs, but rather simple heuristics intended to deal with hardware and software limitations - and in the future their preservation may well lead to mistakes, errors, and losses.
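
Concretely, such 'desires' are just weighted terms in a hand-written static evaluation function. A rough sketch of the shape of such a function follows; the weights, bonuses, and board representation are conventional textbook illustrations, not Deep Blue's actual code.

```python
# Sketch of a hand-tuned static evaluation heuristic of the kind described
# above.  Assumed representation: "board" maps (file, rank) squares, both
# 0-7, to piece letters - uppercase for White ("N"), lowercase for Black ("p").

PIECE_VALUE = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9, "K": 0}
CENTER = {(3, 3), (3, 4), (4, 3), (4, 4)}            # d4, d5, e4, e5

def evaluate(board):
    """Static evaluation in pawns, from White's point of view."""
    score = 0.0
    for (file, rank), piece in board.items():
        sign = 1 if piece.isupper() else -1
        value = PIECE_VALUE[piece.upper()]
        if piece.upper() == "P":
            advance = rank if sign == 1 else 7 - rank
            value += 0.1 * max(0, advance - 1)       # a well-advanced pawn is worth more
        if (file, rank) in CENTER:
            value += 0.1                             # small bonus for occupying the centre
        score += sign * value
    return score

# Example: a white knight on d4 versus a black pawn on a7.
print(evaluate({(3, 3): "N", (0, 6): "p"}))          # positive: White is ahead
```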

Also: nature gave us brains. Brains help organisms deal with a varying environment. Part of the purpose of the brain, I claim, is to reassemble your desires into evolution's intended "target" - a.k.a. making grandchildren. The target itself cannot be built in directly - because of the limited space in the genome, and because of the varying nature of the environment.

The brain does this reassembly successfully in some individuals. They realise that high calorie foods are not good for them - because their environment is not the one their ancestors evolved in. They wake up to the idea that advertisers are trying to play the imperfections of their mind like an organ, for their own ends, and make efforts to compensate. And they understand the consequences of the use of contraceptive devices. The in-built desires are subjugated in favour of higher level goals. If your brain has not managed such a reconstruction, you may want to consider the hypothesis that it is broken or malfunctioning - and thus to wonder if there is anything you can do to fix it.

comment by PhilGoetz · 2010-05-08T04:15:16.171Z · LW(p) · GW(p)

The Mote in God's Eye by Niven and Pournelle depicts an intelligent species that stayed biological a little too long, slowly becoming truly enslaved by evolution, gradually turning into true fitness maximizers obsessed with outreproducing each other. But thankfully that's not what happened. Not here on Earth. At least not yet.

This is an interesting and important paragraph, and it explains some things about Eliezer's views. It's important enough to deserve justification. But I don't see evidence for the idea that evolution gets more oppressive as time passes. Is this a trend in the historical data? No; organisms acquire more degrees of freedom as they become more complex.

The unspoken assumption is that organisms continue to evolve, yet don't increase in complexity - imagining humans to continue to evolve, yet without passing beyond the human stage. Perhaps moties would be the result. We see here in the US that evolution very rapidly rewards cultures and religions that forbid birth control and/or encourage large families.

On the other hand, these cultures' and religions' growth in number of humans does not result in an equal growth in money and power and control of resources.

I don't have an answer; but this idea, almost skipped over, that evolution will inevitably lead to bad things, is a powerful motivator of FAI, CEV, and all such take-over-the-universe schemes. So it needs much more explication than a one-sentence reference to a Niven novel.

Replies from: Will_Newsome
comment by Will_Newsome · 2010-07-22T22:09:37.551Z · LW(p) · GW(p)

The question is indeed interesting, but the presumed answer is a powerful motivator for whom? Even if human evolution will lead to a super-amazing future of greatness, I doubt that future would be as super-amazing as a correctly implemented FAI; avoiding dystopian evolutionary existential catastrophes has never been listed as the main reason for wanting to build a Friendly, really powerful optimization process by anyone I've talked to. Most don't think humanity will even get that far.

But I'm curious as to what your intuitions are regarding the probably counterfactual world where humans continue evolving for a long, long time.

Replies from: PhilGoetz
comment by PhilGoetz · 2010-07-28T17:23:51.297Z · LW(p) · GW(p)

Eliezer has a bias against evolution, and a bias against randomness, as exhibited in his series ending in Worse than Random - a series which is factually correct in the details but misleading in the real world, as demonstrated by the repeated occasions on which his acolytes have used it to attack probabilistic search, probabilistic models, etc.

My take all along has been that something about evolution has caused it to reliably make the world a more complicated, more interesting, and better place; and evolution, with randomness, is the only process that can be trusted to continue this. Any attempt to control and direct the course of change will just lock in the values of the controller.

I see E's story about the moties as being one possible source of his bias against evolution, and hence against randomness.

Replies from: NancyLebovitz, Jonii
comment by NancyLebovitz · 2010-07-29T08:02:20.936Z · LW(p) · GW(p)

My assumption is that it isn't really possible to take charge of evolution. You might be able to have less undirected biological evolution, but only by having memetically-driven evolution. Things are still going to have random influences.

comment by Jonii · 2010-07-29T08:49:22.497Z · LW(p) · GW(p)

Any attempt to control and direct the course of change will just lock in the values of the controller.

Exactly. This is obviously what we need to do.

.....

Evolution blindly optimizes for those that produce more offspring. Eventually, those specifically aiming for this would do so more effectively than those who didn't, meaning that eventually only those whose main goal is to mate would dominate. Evolution marches on.

Why this has not happened before is related to the fact that there have not previously been human-level, scheming animals on this planet. Animals that can't plan years ahead would benefit very little from an urge towards fitness maximization. Adaptations to be executed are what need to be optimized, and they matter vastly more at that level.

comment by Liron · 2010-06-01T03:23:19.871Z · LW(p) · GW(p)

<3

Replies from: Nisan
comment by Nisan · 2010-06-01T03:57:20.018Z · LW(p) · GW(p)

?

comment by octopod · 2010-08-18T18:08:01.567Z · LW(p) · GW(p)

I'm mostly in agreement with this, but feel I must point out that from the perspective of social primate evolution, the "sex only when it will result in offspring" paradigm is a perversion invented (or at least reinvented) by modern humans. Sex is primarily a bonding mechanism, as evidenced by the fact that sexual desire is mediated as much by social circumstances as by other considerations. Of course, social standing is ultimately directed at improving genetic fitness, but sex has been repurposed by the primate social system so that, essentially, it improves fitness in two ways rather than just the one you seem to be seeing. Given this, and the fact that the important number is (as some evolutionary biologists have pointed out) not the number of children born but the number of one's children who themselves reproduce, you have a perfectly good reason why humans in every place and time have been trying like hell to invent reliable birth control, for those numerous times when the "social bonding" part is desired but not the "potentially getting pregnant" part.

comment by TheOtherDave · 2010-10-29T21:25:33.892Z · LW(p) · GW(p)

I agree with the thrust here, but it does seem that you're conflating two different distinctions.

More specifically: you contrast explicit cognitive representations with implicit genetic representations (1), and it's not always clear when you are talking about the distinction between implicit and explicit representations, and when you are talking about the difference between cognitive and genetic ones.

And it seems to matter: if I ask why my genetic representations aren't recapitulated as cognitive ones, the kind of answer you give here is a fine one, but if I ask why my implicit representations aren't recapitulated as explicit ones, that answer is insufficient. I am ignorant not only of what "my genes want," but also of much of what "my brain wants," and the stubborn implicitness of that second kind of information is not proximally due to evolution's inability to quickly refactor code.

I don't think any of that actually alters your main point, which is primarily about genetic vs. cognitive representations. Still, it's worth emphasizing that not all cognitive representations are explicit ones, and there are good reasons for that over and above the genetic "godshatter" effect.

(1) I'm using "representation" here in a very loose way, admittedly.

comment by Polymeron · 2011-02-25T19:42:29.724Z · LW(p) · GW(p)

This is possibly the best creation myth I've ever read. Possibly because, unlike other creation myths, this one is actually true.

You've found amazing poetry in this grand cycle of gene warfare. But now I must wonder: how self-contained are all these desires? Will we evolve some of them to extinction? It is very hard, and somewhat disconcerting, to think of what today is human as only a passing phase on an endless continuum. Yet to assume humanity will always remain as it is seems both unrealistic and unsatisfying - we want to see growth and novelty. So I guess I hope we will become more complex, more interesting... rather than be narrowed toward a less fragmented sense of purpose.

comment by Varan · 2013-05-08T17:40:52.097Z · LW(p) · GW(p)

It just seems that evolution has failed to build a Friendly (to evolution) AI.

comment by [deleted] · 2015-02-24T16:28:13.574Z · LW(p) · GW(p)

Why not become a pure reproductive consequentialist?

Reading these posts, I notice a preference for altruism, utilitarianism, and rejecting some of the intuitions that natural selection gave us. Moreover, almost everyone working on evolutionary psychology goes to a lot of effort to avoid the naturalistic fallacy: not confusing what is with what ought to be (see Richard Dawkins - "The Selfish Gene" or Steven Pinker - "The Blank Slate").

Still, I am wondering what is so "good" about altruism, knowing that our preference for altruism also developed by natural selection, because it either benefits our genes in other humans (W.D. Hamilton - kin altruism) or leads to reciprocal benefits for ourselves (Robert Trivers - reciprocal altruism), or at least it did in small hunter-gatherer tribes in the ancestral environment. Utilitarianism is now projecting this altruism that we naturally feel towards friends and family (which was good for our genes) onto humanity as a whole (which probably isn't). Usually there is the assumption that every human life is worth the same.

I agree that you can't take your values from evolution, but why assume that there are any (objective) values at all? Why not embrace nihilism? Why not become a pure reproductive consequentialist?

Some practical consequences of this value system (some of them pretty weird):

  • Valuing your family, esp. your children, more than strangers (anybody does that intuitively anyway)

  • Valuing your friends more than strangers (because they reciprocate; people also naturally do this)

  • Sacrificing your life for your children if that improves their survival more than it reduces your chance of future surviving children (you also see this very often in the real world: fathers drowning to save their children, etc.; of course, you could do the math much better than our adaptation, which basically says "save drowning children" - see the sketch after this list)

  • Switching to full altruism after your chance (or plan) to have future children has fallen to zero, while of course still caring more about your relatives than about others (grandparents do this a lot; Bill Gates would also be an example)

  • Ignoring your desire for sex unless you plan to have children, or unless it becomes so distracting that it reduces your ability to achieve your other goals.

  • Going to the sperm bank (spreading your genes and getting paid for it. That's what I call a win-win situation.)

  • Avoiding fatty and sugary food and following the paleo diet (to improve your direct fitness and sexual attractiveness).

  • Not having any higher moral values whatsoever. Following your moral intuitions only when they are useful to other goals.

  • Basically acting like Gordon Gekko from Wall Street, except that you would try to turn your money and power into a lot of children, likely from different women (like the Aztec or Inca emperors, who had thousands of women; unfortunately for the inclusive fitness of today's powerful men, this has become nearly impossible - it's better for the average man, I guess).
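For the "do the math" aside in the sacrifice item above, here is a minimal sketch using Hamilton's rule (relatedness times benefit versus cost). The function name and all the numbers are illustrative assumptions, not anything from the post.

```python
# Hamilton's rule: a self-sacrificing act pays off in inclusive-fitness terms
# when r * B > C, where r is genetic relatedness to the beneficiaries,
# B is the reproductive benefit to them, and C is the reproductive cost to you.
# The example numbers below are purely illustrative.

def sacrifice_pays_off(relatedness: float, benefit: float, cost: float) -> bool:
    """Return True if relatedness * benefit exceeds cost."""
    return relatedness * benefit > cost

# A parent (r = 0.5 to each child) rescuing two children who would otherwise
# drown (B = 2 expected future reproducers gained), at the price of, say,
# 0.6 expected future children of their own (C = 0.6):
print(sacrifice_pays_off(relatedness=0.5, benefit=2.0, cost=0.6))  # True
```

The adaptation "save drowning children" approximates this inequality without ever representing it explicitly, which echoes the post's point about adaptations standing in for the explicit calculation.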

I am not planning to act out this slightly silly idea in my life. Still, I am astonished at how well it approximates what people actually do, considering how much our environment has changed from the ancestral one. I would like to hear your thoughts.

Replies from: mikhail-larionov
comment by Mervinkel (mikhail-larionov) · 2024-07-10T17:56:34.479Z · LW(p) · GW(p)

I was heavily thinking about this topic in the past few weeks before stumbling across this post and your comment, and I appreciate both.

Ultimately, I agree with your conclusion. What’s more, I think this (becoming a pure reproductive consequentialist) is also inevitable from the evolutionary standpoint.

It’s already clear that pure hedonistic societies (“shards of desire” et al) are on a massive decline. The collective West, with an average US fertility rate of something like 1.6 per woman, is going to die off quickly.

But the gap will be filled, and it will be filled with the programming that re-enables higher reproductive fitness.

My take, though, is that you don't have to be radical about either of those strategies. You don't have to maximize your fertility to the absolute limit by sacrificing all joy. I think you just have to maximize it to some reasonable subjective degree. Arguably, having fun should have a positive impact on your gene propagation - as long as you propagate efficiently!

So my personal choice is to follow all the strategies from your comment, and some more, except the ones that are not fun - and to treat the rest of my activities (fun but pointless) as an inevitable cost of slow evolution, without blaming myself for them, since this is not really my fault.

This excludes sperm banks but includes maximizing offspring by various other joyous ways.

This poses some interesting challenges, though. Even if you brute-force the problem of limited resources to pass on to your offspring, you still have the challenge of limited bonding opportunities with the mothers, which may be detrimental to the children and hurt their own reproduction (which is critical, as also mentioned in the comments).

I wonder what the optimal number of human offspring for one male is, given that at some higher number further increases seem to be detrimental to the sum of the group's fitness.

comment by Rafael Harth (sil-ver) · 2021-11-24T10:32:43.414Z · LW(p) · GW(p)

14 years later, I notice that Eliezer missed the other reason why evolution didn't design organisms that have fitness maximization as an explicit motivation. It's not just that it can't plan well enough to get there; it's also that such a motivation would have a disadvantage compared to a set of heuristics: higher computational cost. A hypothetical mind concerned only with fitness maximization would probably have to rediscover a bunch of heuristics like "excessive pain is bad" just to survive in practice. (At that point, it would indeed have an advantage in that it could avoid many of the failure modes of heuristics.)

Replies from: Q Home, martin-randall
comment by Q Home · 2022-07-15T23:51:09.325Z · LW(p) · GW(p)

Reading the post, I didn't understand this:

  • Could evolution really build a consequentialist? The post itself kind of contradicts that.
  • Could a consequentialist really foresee all consequences without having any drives (such as curiosity)?

I think your critique about computational complexity is related to the 1st point.

comment by Martin Randall (martin-randall) · 2024-06-21T15:23:31.202Z · LW(p) · GW(p)

Sort of covered here ("along with"):

Why wasn't a concept of "inclusive genetic fitness" programmed into us, along with a library of explicit strategies?

comment by tlhonmey · 2022-06-06T21:23:00.762Z · LW(p) · GW(p)

I would submit that most other species on the planet, were they to rise to our level of intelligence, would not bother inventing condoms.  In most other species, the females generally have no particular interest in sex unless they want babies.

Humans, though, are weird. Because of our long phase of immaturity, and the massive amount of work involved in raising a child, we need really strong social bonds. Evolution, being a big fan of "the first thing I stumble across that gets the job done is the solution", repurposed sex into a pair-bonding trigger, and then, as our ancestors' offspring required longer and longer care, divorced it from any specific attempt to make a baby at that particular moment.

Now fast forward to the point where infant mortality drops and churning out babies as fast as possible is no longer the best strategy.  But we still need the pair bonding because the length of childhood hasn't gotten any shorter, and it still goes way better with two sets of hands to look after the little one.  Evolution would probably come up with another quick hack for this...  (One might suggest that it already has in the form of oral sex.)  But it will take a while.  Our brains are faster.

Evolution now will simply need to favor genes that introduce an explicit desire for children, rather than the other behaviours that used to inevitably lead to them. And indeed, there are a lot of people out there for whom not wanting children is a dealbreaker when looking for a potential spouse, so it seems like evolution is already on top of that one too.