In Praise of Boredom

post by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-01-18T09:03:29.000Z · LW · GW · Legacy · 104 comments

If I were to make a short list of the most important human qualities—

—and yes, this is a fool's errand, because human nature is immensely complicated, and we don't even notice all the tiny tweaks that fine-tune our moral categories, and who knows how our attractors would change shape if we eliminated a single human emotion—

—but even so, if I had to point to just a few things and say, "If you lose just one of these things, you lose most of the expected value of the Future; but conversely if an alien species independently evolved just these few things, we might even want to be friends"—

—then the top three items on the list would be sympathy, boredom and consciousness.

Boredom is a subtle-splendored thing.  You wouldn't want to get bored with breathing, for example—even though it's the same motions over and over and over and over again for minutes and hours and years and decades.

Now I know some of you out there are thinking, "Actually, I'm quite bored with breathing and I wish I didn't have to," but then you wouldn't want to get bored with switching transistors.

According to the human value of boredom, some things are allowed to be highly repetitive without being boring—like obeying the same laws of physics every day.

Conversely, other repetitions are supposed to be boring, like playing the same level of Super Mario Brothers over and over and over again until the end of time.  And let us note that if the pixels in the game level have a slightly different color each time, that is not sufficient to prevent it from being "the same damn thing, over and over and over again".

Once you take a closer look, it turns out that boredom is quite interesting.

One of the key elements of boredom was suggested in "Complex Novelty":  If your activity isn't teaching you insights you didn't already know, then it is non-novel, therefore old, therefore boring.

But this doesn't quite cover the distinction.  Is breathing teaching you anything?  Probably not at this moment, but you wouldn't want to stop breathing.  Maybe you'd want to stop noticing your breathing, which you'll do as soon as I stop drawing your attention to it.

I'd suggest that the repetitive activities which are allowed to not be boring fall into two categories:

- Things so extremely low-level, or with such a small volume of possibilities, that you couldn't avoid repeating them even if you tried; but which are required to support other non-boring activities.  (Breathing, or switching transistors.)
- Things so high-level that, although the deep principle stays constant, each application of it looks different on the surface.  (Changing your beliefs based on observation, or attaching value to sentient minds rather than paperclips.)

Let me talk about that second category:

Suppose you were unraveling the true laws of physics and discovering all sorts of neat stuff you hadn't known before... when suddenly you got bored with "changing your beliefs based on observation".  You are sick of anything resembling "Bayesian updating"—it feels like playing the same video game over and over.  Instead you decide to believe anything said on 4chan.

Or to put it another way, suppose that you were something like a sentient chessplayer—a sentient version of Deep Blue.  Like a modern human, you have no introspective access to your own algorithms.  Each chess game appears different—you play new opponents and steer into new positions, composing new strategies, avoiding new enemy gambits.  You are content, and not at all bored; you never appear to yourself to be doing the same thing twice—it's a different chess game each time.

But now, suddenly, you gain access to, and understanding of, your own chess-playing program.  Not just the raw code; you can monitor its execution.  You can see that it's actually the same damn code, doing the same damn thing, over and over and over again.  Run the same damn position evaluator.  Run the same damn sorting algorithm to order the branches.  Pick the top branch, again.  Extend it one position forward, again.  Call the same damn subroutine and start over.
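For concreteness, here is a minimal sketch of that kind of loop. This is not Deep Blue's actual code; evaluate, legal_moves, and apply_move are hypothetical stand-ins supplied by the caller. The point is the shape of the repetition: every node runs the same evaluator, the same sorting step, and the same recursion.

```python
# A toy sketch of the loop described above -- not Deep Blue's real code.
# evaluate, legal_moves, and apply_move are hypothetical stand-ins;
# evaluate scores a position for the side to move.

def search(position, depth, evaluate, legal_moves, apply_move):
    """Negamax-style search: the same steps repeated at every node."""
    moves = legal_moves(position)
    if depth == 0 or not moves:
        return evaluate(position)            # run the same position evaluator, again
    # Run the same sorting step to order the branches, again.
    # (After our move the opponent is to move, so a low score is good for us.)
    moves.sort(key=lambda m: evaluate(apply_move(position, m)))
    best = float("-inf")
    for move in moves:
        # Pick a branch, extend it one position forward, call the same subroutine, again.
        score = -search(apply_move(position, move), depth - 1,
                        evaluate, legal_moves, apply_move)
        best = max(best, score)
    return best
```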

I have a small unreasonable fear, somewhere in the back of my mind, that if I ever do fully understand the algorithms of intelligence, it will destroy all remaining novelty—no matter what new situation I encounter, I'll know I can solve it just by being intelligent, the same damn thing over and over.  All novelty will be used up, all existence will become boring, the remaining differences no more important than shades of pixels in a video game.  Other beings will go about in blissful unawareness, having been steered away from studying this forbidden cognitive science.  But I, having already thrown myself on the grenade of AI, will face a choice between eternal boredom, or excision of my forbidden knowledge and all the memories leading up to it (thereby destroying my existence as Eliezer, more or less).

Now this, mind you, is not my predictive line of maximum probability.  To understand abstractly what rough sort of work the brain is doing, doesn't let you monitor its detailed execution as a boring repetition.  I already know about Bayesian updating, yet I haven't become bored with the act of learning.  And a self-editing mind can quite reasonably exclude certain levels of introspection from boredom, just like breathing can be legitimately excluded from boredom.  (Maybe these top-level cognitive algorithms ought also to be excluded from perception—if something is stable, why bother seeing it all the time?)

No, it's just a cute little nightmare, which I thought made a nice illustration of this proposed principle:

That the very top-level things (like Bayesian updating, or attaching value to sentient minds rather than paperclips) and the very low-level things (like breathing, or switching transistors) are the things we shouldn't get bored with.  And the mid-level things between, are where we should seek novelty.  (To a first approximation, the novel is the inverse of the learned; it's something with a learnable element not yet covered by previous insights.)
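As a toy codification of that proposed principle (my own paraphrase, with made-up names, not anything from the post itself): low-level support work and top-level principles are exempt, and a mid-level activity is boring exactly when it has no learnable element left.

```python
# Toy codification of the proposed principle above (a paraphrase, made-up names).

def should_be_boring(level: str, has_unlearned_insight: bool) -> bool:
    """level: 'low' (breathing, switching transistors),
              'top' (Bayesian updating, caring about sentient minds),
              'mid' (everything in between)."""
    if level in ("low", "top"):
        return False                      # exempt: support work and deep stable principles
    return not has_unlearned_insight      # mid-level: non-boring only if something remains to learn

# Replaying a fully mastered Mario level: should_be_boring("mid", False) -> True.
```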

Now this is probably not exactly how our current emotional circuitry of boredom works.  That, I expect, would be hardwired relative to various sensory-level definitions of predictability, surprisingness, repetition, attentional salience, and perceived effortfulness.

But this is Fun Theory, so we are mainly concerned with how boredom should work in the long run.

Humanity acquired boredom the same way as we acquired the rest of our emotions: the godshatter idiom whereby evolution's instrumental policies became our own terminal values, pursued for their own sake: sex is fun even if you use birth control.  Evolved aliens might, or might not, acquire roughly the same boredom in roughly the same way.

Do not give in to the temptation of universalizing anthropomorphic values, and think:  "But any rational agent, regardless of its utility function, will face the exploration/exploitation tradeoff, and will therefore occasionally get bored with exploiting, and go exploring."

Our emotion of boredom is a way of exploring, but not the only way for an ideal optimizing agent.

The idea of a steady trickle of mid-level novelty is a human terminal value, not something we do for the sake of something else.  Evolution might have originally given it to us in order to have us explore as well as exploit.  But now we explore for its own sake.  That steady trickle of novelty is a terminal value to us; it is not the most efficient instrumental method for exploring and exploiting.

Suppose you were dealing with something like an expected paperclip maximizer—something that might use quite complicated instrumental policies, but in the service of a utility function that we would regard as simple, with a single term compactly defined.

Then I would expect the exploration/exploitation tradeoff to go something like this:  The paperclip maximizer would assign some resources to cognition that searched for more efficient ways to make paperclips, or harvest resources from stars.  Other resources would be devoted to the actual harvesting and paperclip-making.  (The paperclip-making might not start until after a long phase of harvesting.)  At every point, the most efficient method yet discovered—for resource-harvesting, or paperclip-making—would be used, over and over and over again.  It wouldn't be boring, just maximally instrumentally efficient.

In the beginning, lots of resources would go into preparing for efficient work over the rest of time.  But as cognitive resources yielded diminishing returns in the abstract search for efficiency improvements, less and less time would be spent thinking, and more and more time spent creating paperclips.  By whatever the most efficient known method, over and over and over again.
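To make the shape of that schedule concrete, here is a toy numeric sketch with entirely made-up numbers (nothing from the post): a fixed per-period budget is split between thinking and making, and because the marginal payoff of research decays, the allocation drifts toward pure exploitation.

```python
# Toy explore/exploit schedule with made-up numbers -- illustrative only.
efficiency = 1.0        # paperclips per unit of resource spent making
total_clips = 0.0
budget = 100.0          # resources available each period

for period in range(10):
    # Marginal payoff of research decays as the easy improvements get used up.
    gain_per_thought = 0.5 / (1 + period)   # fractional efficiency gain per unit of budget spent thinking
    think = budget * gain_per_thought       # crude rule: think in proportion to the remaining payoff
    make = budget - think
    efficiency *= 1 + gain_per_thought * (think / budget)
    total_clips += make * efficiency
    print(f"period {period}: think {think:5.1f}  make {make:5.1f}  "
          f"efficiency {efficiency:5.2f}  clips so far {total_clips:9.1f}")
```

In this sketch the thinking share falls from half the budget to a few percent within ten periods, after which nearly everything goes into running the best-known method, over and over.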

(Do human beings get less easily bored as we grow older, more tolerant of repetition, because any further discoveries are less valuable, because we have less time left to exploit them?)

If we run into aliens who don't share our version of boredom—a steady trickle of mid-level novelty as a terminal preference—then perhaps every alien throughout their civilization will just be playing the most exciting level of the most exciting video game ever discovered, over and over and over again.  Maybe with nonsentient AIs taking on the drudgework of searching for a more exciting video game.  After all, without an inherent preference for novelty, exploratory attempts will usually have less expected value than exploiting the best policy previously encountered.  And that's if you explore by trial at all, as opposed to using more abstract and efficient thinking.

Or if the aliens are rendered non-bored by seeing pixels of a slightly different shade—if their definition of sameness is more specific than ours, and their boredom less general—then from our perspective, most of their civilization will be doing the human::same thing over and over again, and hence, be very human::boring.

Or maybe if the aliens have no fear of life becoming too simple and repetitive, they'll just collapse themselves into orgasmium.

And if our version of boredom is less strict than that of the aliens, maybe they'd take one look at one day in the life of one member of our civilization, and never bother looking at the rest of us.  From our perspective, their civilization would be needlessly chaotic, and so entropic, lower in what we regard as quality; they wouldn't play the same game for long enough to get good at it.

But if our versions of boredom are similar enough—a terminal preference for a stream of mid-level novelty, defined relative to learning insights not previously possessed—then we might find our civilizations mutually worthy of tourism.  Each new piece of alien art would strike us as lawfully creative, high-quality according to a recognizable criterion, yet not like the other art we've already seen.

It is one of the things that would make our two species ramen rather than varelse, to invoke the Hierarchy of Exclusion.  And I've never seen anyone define those two terms well, including Orson Scott Card who invented them; but it might be something like "aliens you can get along with, versus aliens for which there is no reason to bother trying".

 

Part of The Fun Theory Sequence

Next post: "Sympathetic Minds"

Previous post: "Dunbar's Function"

104 comments

Comments sorted by oldest first, as this post is from before comment nesting was available (around 2009-02-27).

comment by Robin_Hanson2 · 2009-01-18T12:57:33.000Z · LW(p) · GW(p)

Are you sure this isn't the Eliezer concept of boring, instead of the human concept? There seem to be quite a few humans who are happy to keep winning using the same approach day after day year after year. They keep getting paid well, getting social status, money, sex, etc. To the extent they want novelty it is because such novelty is a sign of social status - a new car every year, a new girl every month, a promotion every two years, etc. It is not because they expect or want to learn something from it.

Replies from: diegocaleiro, verec, waveman
comment by diegocaleiro · 2010-11-24T08:42:40.492Z · LW(p) · GW(p)

An easy way to differentiate the two kinds, for those who like games: people who can play Mario Kart thousands of times and have a lot of fun, versus people who must play the new Final Fantasy.

There are those who do both, and those who only enjoy games designed for doing the same thing, better and better, every five minutes.

Compare the complexity of handball with the complexity of bowling.

Maybe bowling is Eliezer::boring but it isn't boring for a lot of people.

It would be a waste of energetic resources if FAI gave those people Final Fantasy 777 instead of just letting them play Mario Kart 9.

The tough question then becomes: Are those of us who enjoy Mario Kart and bowling willing to concede the kind of fun that the Eliezer/Final Fantasy, pro-increasing-rate-of-complexity crowd finds desirable? They will be consuming soooo much energy for their fun.

Isn't it fair that we share the pie half and half, and they consume theirs exponentially, while we enjoy ours for subjectively longer?

comment by verec · 2011-01-01T16:07:40.426Z · LW(p) · GW(p)

The argument that you would lose interest if you could explain boredom away -- which is what I have to conclude from your stance:

All novelty will be used up, all existence will become boring, the remaining differences no more important than shades of pixels in a video game.

seems a bit thin to me. Does a magician lose interest because he knows every single trick that wows the audience?

Does the musician who has spent a lifetime studying the intricacies of Bach's Partita No. 2 lose interest just because he can deconstruct it entirely?

Douglas Hofstadter expressed a similar concern a decade or so ago when he learnt of some "computer program" able to "generate Mozart music better than Mozart himself", only to recant a bit later when facing the truism that there is more to the entity than the sum of its parts.

I do not know that we will someday be able to "explain magic away", and if that makes me irrational (and no, I don't need to bring any kind of god into the picture: I'm perfectly happy being godless and irrational :) so be it.

comment by waveman · 2016-07-17T01:38:25.707Z · LW(p) · GW(p)

sex

Maybe for some people more shallow forms of novelty suffice, e.g. sex with new women.

comment by Anonymous50 · 2009-01-18T15:04:00.000Z · LW(p) · GW(p)

Perhaps consistent with Robin's comment, I don't see any reason not to "collapse ... into orgasmium," at least after our other utilitarian obligations (e.g., preventing suffering by others in the multiverse) are completed.

comment by tcpkac · 2009-01-18T15:05:57.000Z · LW(p) · GW(p)

An appropriate post: I've come to find EY's posts very boring. Subtle, intelligent, all that, sure. A mind far finer than my own, sure. But it never gets anywhere, never goes anywhere. He spends so much time posting he's clearly not moving AI forward. His book is still out of sight, two years down the line. I can understand the main thrust of his posts, and the comments, if I invest enough; my intelligence and knowledge are just about up to that. But why bother? It's sterile. Boredom = sterility. As for Robin's comment, which is pertinent and bears on the real world of lived emotions, the connection is that boredom is not a result of what you are doing, it's a result of what you're not doing. Think about it.

comment by Carl_Shulman · 2009-01-18T15:31:08.000Z · LW(p) · GW(p)

Interest in previously boring (due to repetition) things regenerates over time. Eating strawberries every six months may not be as good as the first time (although nostalgia may make it better), but it's not obvious that it declines in utility.

We may also actively value non-boredom in some mid-level contexts, e.g. in sexual fidelity, or for desires that we consider central to our identity/narratives.

Replies from: pnrjulius
comment by pnrjulius · 2012-06-06T21:27:47.551Z · LW(p) · GW(p)

"Eating strawberries every six months may not be as good as the first time (although nostalgia may make it better), but it's not obvious that it declines in utility."

Isn't "not being as good" just what "declines in utility" means?

Replies from: smk, CarlShulman
comment by smk · 2012-06-13T13:04:04.744Z · LW(p) · GW(p)

Maybe they meant that it doesn't continue getting less and less good. I dunno.

comment by CarlShulman · 2012-06-13T23:48:02.565Z · LW(p) · GW(p)

Read that as "continues to decline in utility every six months."

Replies from: pnrjulius
comment by pnrjulius · 2012-06-19T04:20:54.627Z · LW(p) · GW(p)

Maybe the problem is trying to assign utility to "eat strawberries" rather than to the whole timeless state "ate strawberries Tuesday, blueberries Wednesday, bananas Thursday" etc.

We do seem to run into some weird problems if we say that the marginal utility of strawberries declines each time you eat a strawberry... though maybe we could say that this is true, everything else held constant or something like that.

comment by paniq · 2009-01-18T15:46:00.000Z · LW(p) · GW(p)

Cool stuff. It's philosophy for our present times. I like all the cultural references.

comment by Bored_Billionaire · 2009-01-18T15:54:26.000Z · LW(p) · GW(p)

I wonder what kinds of boredom, unfamiliar to us of the resource-limited kind, billionaires suffer from. It seems it's the same things all over and over again, only on a different scale - probably the closest to the boredom experienced widely in the Culture.

Science, it seems, is ultimately the only reliable escape from boredom. Until everything is solved - any estimate of when we might call the project called 'Science' "Done!"?

"All science is either physics or stamp collecting." -Ernest Rutherford

Replies from: pnrjulius
comment by pnrjulius · 2012-06-06T21:28:16.335Z · LW(p) · GW(p)

If you get bored of being a billionaire, give away all your money.

In fact, give it to me.

comment by Aron · 2009-01-18T16:02:47.000Z · LW(p) · GW(p)

I always think of boredom as the chorus of brain agents crying out that 'whatever you are doing right now, it has not recently helped ME to achieve MY goals'. Boredom is the emotional reward circuit to keep us rotating contributions towards our various desired goals. It also applies even if we are working on a specific goal, but not making progress.

I think as we age our goals get fewer, narrower and a bit less vocal about needing pleasing, thus boredom recedes. In particular, we accept fewer goals that are novel, which means the goals we do have tend to be more practical with existing known methods of achieving them such that we are more often making progress.

comment by Benya_Fallenstein (Benja_Fallenstein) · 2009-01-18T16:24:51.000Z · LW(p) · GW(p)

Robin, I suspect that despite how it may look from a high level, the lives of most of the people you refer to probably do differ enough from year to year that they will in fact have new experiences and learn something new, and that they would in fact find it unbearable if their world were so static as to come even a little close to being video game repetitive.

That said, I would agree that many people seem not to act day-to-day as if they put a premium on Eliezer-style novelty, but that seems like it could be better explained by Eliezer's boredom being a FAR value than by the concept being specific to Eliezer :-)

comment by Zaphod · 2009-01-18T16:30:00.000Z · LW(p) · GW(p)

My Greek Philosophy professor claims that Americans invented boredom.

Replies from: pnrjulius
comment by pnrjulius · 2012-06-06T21:28:27.743Z · LW(p) · GW(p)

He is wrong.

comment by Emile · 2009-01-18T17:07:10.000Z · LW(p) · GW(p)

I'm not sure breathing needs a special exclusion from boredom, for the same reasons people don't get bored from jumping in Mario: we don't get bored with something if it's only a means to something else.

This blog post talks about that a bit.

You could also say that you only get bored with conscious activities, and that breathing is unconscious, just as jumping is in Mario. I'm not sure which explanation is the best way of putting things: "not boring because unconscious" or "not boring because it's a means to a goal".

But anyway, those explanations seem to fit reality closer than "Things so extremely low-level, or with such a small volume of possibilities, that you couldn't avoid repeating them even if you tried; but which are required to support other non-boring activities."  (I don't think "low-level" and "small volume of possibilities" are necessary conditions. Some pretty high-level and complex things, like driving a car, can still be non-boring if it's unconscious / used with another goal in mind.)

comment by Emile · 2009-01-18T17:21:05.000Z · LW(p) · GW(p)

... and if you consider the class of "subconscious activities done in order to reach another goal", you'll see that it covers both "low-level" stuff like breathing, and "high-level" stuff like thinking (or at least, the mechanics of thinking - retrieving memories, updating beliefs, etc.). So you get one category instead of two.

Replies from: Jade
comment by Jade · 2011-03-28T21:50:12.656Z · LW(p) · GW(p)

Another way to get one category instead of two... Think of boredom as a signal of not incorporating new, useful physical info. Breathing and thinking (usefully) are not boring because those processes facilitate the body's exploitation or incorporation of physical info. In other words, boredom arises from a lack of novelty on the level of physics, though the process of breathing may seem repetitive or non-novel on the level of biomechanics.

comment by PJ_Eby · 2009-01-18T18:40:37.000Z · LW(p) · GW(p)

You can't just be "intelligent over and over", because discovery and insight are essentially random processes. You can't just find insight, you have to look for it, in the same way that evolution searches the option space.

Yes, you can always have better heuristics or search algorithms. But those heuristics are not themselves intelligence. And there are always new heuristics to discover...

So, I don't think mere insight into the process of intelligence would allow you to be bored, since the things to be discovered by intelligence would still be "out there" rather than "in here", if you get my drift. And it's those subjects of discovery that are the intended targets of novelty and interest, anyway.

Replies from: DanielLC
comment by DanielLC · 2013-02-05T06:35:35.049Z · LW(p) · GW(p)

because discovery and insight are essentially random processes.

In that case, roll a die over and over, and you'll have to worry about boredom.

But those heuristics are not themselves intelligence.

True, but one of those heuristics is you, and you can't stop doing that.

Of course, I don't think any of us think this is very likely to be a problem. I'm basically just playing devil's advocate.

I'm pretty certain you'd have to do some significant self-modification to understand it all on a level that makes it boring, and at that point, you could just self-modify so that it isn't boring.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-01-18T20:41:41.000Z · LW(p) · GW(p)

Robin, do they eat the same foods every day? Drive to the same places every day? Buy the same things every time they shop? Have sex in the same position every time? Watch the same movie each time they go to the theater? Since you're standing back, you see them at a level of abstraction from which their life looks mostly "the same" to you, but I doubt they're playing the same level of the same video game over and over again "every time they sit at the computer".

Replies from: diegocaleiro
comment by diegocaleiro · 2010-11-24T08:47:38.325Z · LW(p) · GW(p)

An easy way to differentiate the two kinds, for those who like games: people who can play Mario Kart thousands of times and have a lot of fun, versus people who must play the new Final Fantasy.

There are those who do both, and those who only enjoy games designed for doing the same thing, better and better, every five minutes.

Compare the complexity of handball with the complexity of bowling.

Maybe bowling is Eliezer::boring but it isn't boring for a lot of people.

It would be a waste of energetic resources if FAI gave those people Final Fantasy 777 instead of just letting them play Mario Kart 9.

The tough question then becomes: Are those of us who enjoy Mario Kart and bowling willing to concede the kind of fun that the Eliezer/Final Fantasy, pro-increasing-rate-of-complexity crowd finds desirable? They will be consuming soooo much energy for their fun.

Isn't it fair that we share the pie half and half, and they consume theirs exponentially, while we enjoy ours for subjectively longer?

Replies from: Ghatanathoah, Ghatanathoah
comment by Ghatanathoah · 2012-06-13T07:25:21.642Z · LW(p) · GW(p)

People who can play Mario Kart thousands of times and have a lot of fun. People who must play the new final fantasy.

Do you really play Mario Kart thousands of times because you love repeating the same thing? Or do you love it because you have a finer eye for small detail than the new FF player, and so are noticing new novelty each time you play? I know I can watch "King Kong" or "Halloween" over and over again partly because I notice something new each time I watch those films.

That being said, I think a proper fun theory would probably have some sort of error bars around the level of boredom that is acceptable. In other words, a creature that gets bored x% less easily than the median human might still be a worthwhile creature, but creating a creature x+1% less easily bored would be bad (unless such a creature is instrumentally useful, obviously).

Replies from: Vulture
comment by Vulture · 2014-01-20T19:33:00.193Z · LW(p) · GW(p)

I know I can watch "King Kong" or "Halloween" over and over again partly because I notice something new each time I watch those films.

If we're defining "novelty" such that it encompasses literally watching the same movie over and over again, then maybe it's time to step back and consider whether we need a new word, since we're moving so far away from the intuitive definition of "novelty".

comment by Ghatanathoah · 2012-06-14T21:56:14.011Z · LW(p) · GW(p)

Another question came to my mind while thinking about this today. How often do you play Mario Kart alone vs. with friends? Adding social interaction to the game vastly increases its complexity. Probably part of the reason it's more enduring than the FF games is that most of those games are single player, so the complexity is limited by your inability to play against other humans. Good multiplayer games are probably so replayable partly because they are venues for social interaction, which is a very, very complex form of novelty.

comment by Gwern_Branwen · 2009-01-18T20:43:16.000Z · LW(p) · GW(p)

Zaphod: kind of funny, given the many foreign words in English - ennui, weltschmerz, melancholy etc.

comment by Emile · 2009-01-18T21:02:57.000Z · LW(p) · GW(p)

Robin: It is not because they expect or want to learn something from it.

A major component of fun in video games is the emotional reward when the brain has learnt something; that probably generalizes to why we find a lot of activities enjoyable, even though we might not label them as "learning", which is often associated with "memorizing useless facts because you're forced to".

Replies from: pnrjulius
comment by pnrjulius · 2012-06-06T21:30:00.546Z · LW(p) · GW(p)

Right. Football players don't think of themselves as "learning" when they perfect their field goal kicks, but neurologically that is exactly what they are doing.

comment by Bored_2_Bits · 2009-01-19T00:43:58.000Z · LW(p) · GW(p)

What bores me is that we live in a binary universe. Sort of limits your options.

Replies from: DanielLC
comment by DanielLC · 2013-02-05T06:38:02.250Z · LW(p) · GW(p)

What's a binary universe?

comment by Letitia_Sweitzer · 2009-01-19T15:47:06.000Z · LW(p) · GW(p)

If most of us have a "terminal preference for a stream of mid-level novelty" those with ADD/ADHD find that unstimulating. They require a stream of high level novelty or, at least, a much faster stream of mid-level. See ADHD posts on ThePowerOfBoredom.com

comment by Caledonian2 · 2009-01-19T19:07:50.000Z · LW(p) · GW(p)

Few people become bored with jumping in SMB because 1) becoming skilled at it is quite hard, 2) it's used to accomplish specific tasks and is quite useful in that context, 3) it's easier to become bored with the game as a whole than with that particular part of it.

comment by yo_dawg · 2009-01-20T22:12:01.000Z · LW(p) · GW(p)

The time when we gain enough experience to find everything boring will be near to the time we can eliminate boredom as an emotion.

If you still wish to feel novelty you are free to wallow in ignorance; all will be new to you then. If you wish to move forward, then remember that self-modification is an option.

Replies from: DanielLC
comment by DanielLC · 2013-02-05T06:37:46.203Z · LW(p) · GW(p)

If there's still a forward to move in, then it would seem we don't have enough experience to find everything boring.

comment by Jotaf · 2009-01-21T03:09:17.000Z · LW(p) · GW(p)

Emile and Caledonian are right. Eliezer should've defined exceptions to boredom instead (and more simply) as "activities that work towards your goal". Those are exempt from boredom and can even be quite fun. No need to distinguish between high, low and mid-level.

The page at Lostgarden that Emile linked to is a bit long, so I'll try to summarize the proposed theory of fun, with some of my own conclusions:

You naturally find activities that provide you with valuable insights fun (the "aha!" moment, or "fun"). Tolerance to repetition (actually, finding a repetitive act "fun" as well) is roughly proportional to your expectation of how it will provide you with a future fun moment.

There are terminal fun moments. Driving a car is repetitive, but at high speeds adrenaline makes up for that. Seeing Mario jump for the first time is fun (you've found a way of impacting the world [the computer screen] through your own action). I'm sure you can think of other examples of activities chemically wired to being fun, of course ;)

Working in the financial business might be repetitive and boring (or at least it seems that way at first), but if it yields good paychecks, which give you the opportunity to buy nice things, gain social status, etc, you'll keep doing it.

Jumping in Mario is repetitive, and if jumping didn't do anything, you'd never touch that button again after 10 jumps (more or less). But early on it allows you to get to high platforms, which kinda "rekindles" the jumping activity, and the expectation that it will be useful in the future/yield more fun. Moving from platform to platform gets repetitive, unless it serves yet another purpose.

(The above is all described in Lostgarden and forms the basis of their theory of fun, and how to build a fun game. Following are some of my own conclusions.)

The highest goal of all is usually to "beat the game"/"explore the game world"/"have the highest score", and you set it upon yourself naturally. This is like the goal of jumping over a ledge, even if you don't know what's beyond it (in the Mario world). You ran out of goals, so you're exploring, which usually means thinking up an "exploratory goal", i.e., trying something new.

You can say that finding goals is fun in itself. If you start in a blank state, nothing will seem fun at first. You might as well just sit down and wither away! So a good strategy is to set yourself a modest goal (an exploratory goal), and the total fun had will be greater than the fun you assigned earlier to the goal in itself, which might be marginal. A more concrete example: The fun in reading "You win!" is marginal, but you play through Mario just to read those words. So I guess that the journey is more important than getting to the end.

comment by consider2 · 2009-01-21T12:56:10.000Z · LW(p) · GW(p)

Ben Goertzel's patternist philosophy of mind is suggestive here. It is boring when the same patterns repeat themselves. It's not about whether the pixels change or stay the same. It's about whether you can detect a unique pattern each time.

This may explain Eliezer's preference for the mid-level. If you are too specific, you don't see any patterns. If you are too general you miss many lower-level patterns. And also the balance between triviality and intractability of problems. If it's trivial you already have the pattern, if it is intractable you can't see any.

Intelligence may be the same algorithm each time it is applied, but as long as it generates/detects new patterns in the problems it encounters, there is no cause for boredom. Intelligent people get bored more quickly, because they can assimilate more patterns per unit time. But I like to think we all experience the same 'subjective novelty-time', so there is no need to make yourself stupid to extend your experience of novelty.

"it's easier to become bored with the game as a whole than with that particular part of it."

That has to do with our limited working memory capacity. When we conceive of "the game as a whole" we don't download the whole game into working memory. There is no space. Since the game really isn't in working memory, the conscious you does not detect any pattern. Playing the game, though, is fun. Not because jumping in Mario is instrumental to scoring points, but because arriving at a particular goal through particular means constitutes a pattern. Of course, mere permutations on fragments of the journey aren't as exciting as techniques within strategies within stories, because, well, 'permutation of elements in a set' is a pretty common pattern...

What about detecting patterns in the clouds, in coincidences, in astrology, in pi, in the noise? Very quickly boring because when we go meta, we see there are no patterns linking these disparate atomic patterns.

So the recipe for interestingness is: objects, patterns, recursion. This is best demonstrated in axiomatic systems constructed by mathematicians. But it is also the same recipe with an interesting life in general.

comment by verec · 2011-01-01T16:08:45.351Z · LW(p) · GW(p)

The argument that you would lose interest if you could explain boredom away, which is what I have to conclude from your stance:

All novelty will be used up, all existence will become boring, the remaining differences no more important than shades of pixels in a video game.

seems a bit thin to me. Does a magician lose interest because he knows every single trick that wows the audience?

Does the musician who has spent a lifetime studying the intricacies of Bach's Partita No. 2 lose interest just because he can deconstruct it entirely?

Douglas Hofstadter expressed a similar concern a decade or so ago when he learnt of some "computer program" able to "generate Mozart music better than Mozart himself", only to recant a bit later when facing the truism that there is more to the entity than the sum of its parts.

I do not know that we will someday be able to "explain magic away", and if that makes me irrational (and no, I don't need to bring any kind of god into the picture: I'm perfectly happy being godless and irrational :) so be it.

comment by timtyler · 2011-02-07T21:41:29.199Z · LW(p) · GW(p)

You get boredom when you are attempting to efficiently explore a mostly-unexplored search space - or at least you get a tendency to avoid repeatedly sampling the same region of the search space - which is boredom's primary behavioural manifestation.

In the example in the post - of an optimiser that doesn't get bored - that happens because the search space it is exploring has become exhausted.

That is simply a property of the particular example which was selected. It isn't a general property of efficient optimisers and it doesn't mean that efficient optimisers don't exhibit boredom. They do exhibit boredom - when exploring mostly-unexplored search spaces.

IMO, boredom is best seen as being a universal instrumental value - and not as an unfortunate result of "universalizing anthropomorphic values".

Update 2011-07-27: Yudkowsky responds to roughly this point - 39 minutes in here. He claims

If you look at an ideal Bayesian decision maker and ask: "what kind of boredom is a convergent instrumental value?" then there is a convergent solution to the exploration-exploitation tradeoff and that solution leads to what we would regard as a boring, worthless, valueless future.

He doesn't give any references, or much of a supporting argument - it is more of an assertion. Maybe he thinks the material about paperclips in this post is sufficiently convincing. Our civilisation maximises entropy - not paperclips - which hardly seems much more interesting. Also, it isn't clear that Yudkowsky realises the extent to which boredom is implemented as an instrumental value in modern humans. Nature can't easily wire in some kind of universal "boredom rate" - since boredom is task-specific (we mostly don't get bored of sex and food) and context-specific. That's not to say it is entirely instrumental - but it is partly instrumental in humans.

Yudkowsky then goes on immediately to say:

this is why we have to solve the Friendly AI problem

...so it seems to be a significant point.

My position is that we had better wind up approximating the instrumental value of boredom (which we probably do pretty well today anyway - by the wonder of natural selection) - or we are likely to be building a rather screwed-up civilisation. There is no good reason why this would lead to a "worthless, valueless future" - which is why Yudkowsky fails to provide one.

Replies from: Ghatanathoah
comment by Ghatanathoah · 2012-06-13T07:16:49.858Z · LW(p) · GW(p)

Our civilisation maximises entropy - not paperclips - which hardly seems much more interesting.

No it doesn't. It tries to maximize fun. It might maximize entropy as a side effect, but saying that we act to maximize entropy is as ludicrous as saying cows act to maximize atmospheric methane content. You're confusing a side-effect with the real goal.

A constant theme I've noticed in your posts is that you take some (usually evolutionary) trend that is occurring in our society or history and then act as if that trend is an actual conscious goal of human beings, and life. You then exhibit confusion when someone makes a statement that their real conscious goals are in opposition to this trend, and tell them that the fact that this trend is occurring means they must really want it to occur, and that it is part of their real goals. This demonstrates rather confused thinking, both on how human minds work, and on what a "goal" really is.

Scientists often metaphorically describe trends as having goals and "maximizing" things, you seem to have taken these metaphors excessively literally and act like these trends literally have goals and literally maximize things. Terms like "goals" and "maximization" only refer, in the literal sense, to the computations of consequentialist thinking beings (consequentialist in this case meaning a being that can forecast the future, not the moral theory). A goal is a forecast of the future a consequentialist assigns favorable utility to. Maximizing refers to a consequentialist that values a certain property in the future so much it assigns very favorable utility to increasing it as much as possible. This is the only appropriate time to literally use the terms "goal" and "maximize," all other times are metaphorical.

Evolution and other trends do not literally maximize anything. They have no goals. It is just sometimes useful to metaphorically pretend they do because it makes thinking about it easier for human brains, which are better at modeling other consequentialists than they are at modeling abstract trends. Any claim that evolution has a goal in a non-metaphorical sense is a blatant falsehood. Until you realize this fact you will fail to understand virtually everything that Eliezer is talking about.

There is no good reason why this would lead to a "worthless, valueless future" - which is why Yudkowsky fails to provide one.

Yes there is. Such a future would be boring. It is bad for things to be boring and good for things to be interesting, so such a future would be bad. And don't ask me why boringness is bad; that's like asking why water is H2O. You are asking Eliezer to provide some meaning of good separate from things like truth, fun, novelty, life, etc, when he has clearly explained that there is no meaning of good separate from those things. Those things are good, so a world where they don't exist, or are reduced, would be bad, full stop.

Replies from: timtyler
comment by timtyler · 2012-06-13T10:09:59.161Z · LW(p) · GW(p)

No it doesn't. It tries to maximize fun. It might maximize entropy as a side effect, but saying that we act to maximize entropy is as ludicrous as saying cows act to maximize atmospheric methane content. You're confusing a side-effect with the real goal.

Really? How do you know that? Are plants trying to maximise "fun"? Is "fun" even a measurable quantity?

If "fun" is being maximised, why is there so much suffering in the world? If two systems are in contention, is it really the one that is having the most fun that will win? The "fun-as-maximand" theory seems trivially refuted by the facts.

"Fun" - if we are trying to treat the concept seriously - is better characterised as the proxy that brains use for the inclusive fitness of their associated organism.

There's a scientific literature on the subject of what God's utility function is. Entire books have been written about the topic. I'm familiar with this literature, are you?

Terms like "goals" and "maximization" only refer, in the literal sense, to the computations of consequentialist thinking beings (consequentialist in this case meaning a being that can forecast the future, not the moral theory). A goal is a forecast of the future a consequentialist assigns favorable utility to. Maximizing refers to a consequentialist that values a certain property in the future so much it assigns very favorable utility to increasing it as much as possible. This is the only appropriate time to literally use the terms "goal" and "maximize," all other times are metaphorical.

We had better talk about "optimization" then, or we will talk past each other.

Evolution and other trends do not literally maximize anything.

Really? How do you know that? Evolution is a gigantic optimization process with a maximand. You claimed above that it is "fun" - and my claim is that it is entropy. As I say, there's a substantial scientific literature on the topic - have you looked at it?

Replies from: Richard_Kennaway, Ghatanathoah
comment by Richard_Kennaway · 2012-06-13T11:47:07.001Z · LW(p) · GW(p)

Evolution is a gigantic optimization process with a maximand

Success for the fox is failure for the rabbit; success for the rabbit is failure for the fox. What is the maximand?

Replies from: TheOtherDave, timtyler
comment by TheOtherDave · 2012-06-13T13:32:56.304Z · LW(p) · GW(p)

OTOH, as rabbits become better fox-evaders, foxes become better rabbit-hunters. If there exists some thing X that fox-evasion and rabbit-hunting have in common, it's possible (I would even say likely) that X is increasing throughout this process.

Replies from: timtyler
comment by timtyler · 2012-06-13T23:22:36.226Z · LW(p) · GW(p)

Increasing != maximising, though. Methane is increasing in both cases - but evolution doesn't maximise methane production.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-06-14T02:36:25.525Z · LW(p) · GW(p)

Not sure why it's relevant, but certainly true.

comment by timtyler · 2012-06-13T23:18:03.723Z · LW(p) · GW(p)

So: entropy, as far as we can tell. See the works of Dewar, referenced here. Or, for a popular version from someone other than me, try Whitfield, John, "Survival of the Likeliest?"

comment by Ghatanathoah · 2012-06-14T02:09:36.381Z · LW(p) · GW(p)

Really? How do you know that?

You said: "Our civilisation maximises entropy." Our civilization consists of all the humans in the world. When you're asking what our civilization is trying to maximize you're asking what the humans of the world are trying to maximize. Humans try to do things they enjoy, things that are fun. Therefore our civilization tries to maximize fun.

I know that because that's basic human psychology 101. Humans want to be happy and have fulfilled preferences.

Are plants trying to maximise "fun"?

We're talking about our civilization. In other words, all the humans in the world. Plants aren't human, so whether they maximize fun is irrelevant. I suppose if you regarded human tools and artifacts as part of our civilization then agricultural plants could be regarded as part of it. But they aren't the part of our civilization that makes decisions on what to maximize, humans are.

Plants aren't trying to maximize anything. They're plants, they don't have minds. If I was to use the word maximize as liberally as you do I could actually argue that agricultural plants do try to maximize fun, because humans grow them for the purpose of eating, and eating is fun. But that wouldn't be strictly accurate; plants just execute their genetically coded behaviors, and any purpose they have is really the purpose of the consequentialist minds that grow them, not of the plants. Saying that agricultural plants have any purpose at all is the mind-projection fallacy.

If "fun" is being maximised, why is there so much suffering in the world?

Because some humans are selfish and try to maximize their own fun at the expense of the fun of others. And sometimes we make big mistakes when trying to make the world more fun. But still, most of the time we try to work together to have fun. We aren't that good at it yet, but we're trying and keep improving. The world is getting progressively more fun.

If two systems are in contention, is it really the one that is having the most fun that will win?

Yes. Humans who are enjoying life the most are generally regarded as being more successful at life than humans who are not. This is a basic and easily observable fact.

The "fun-as-maximand" theory seems trivially refuted by the facts.

It's easily confirmed by the facts. As humans have grown richer and more technologically advanced they have devoted more and more of their resources to having fun. Look at the existence of places like Disneyworld for evidence.

"Fun" - if we are trying to treat the concept seriously - is better characterised as the proxy that brains use for the inclusive fitness of their associated organism.

No it isn't. Brains don't care about inclusive genetic fitness. At all. They never have. If you want evidence for that, note the fact that humans do things like use condoms. Also note that the growth of the world's population is slowing and will probably stop by the end of the 21st century if trends continue.

There's a scientific literature on the subject of what God's utility function is. Entire books have been written about the topic. I'm familiar with this literature, are you?

That literature has exactly zero relevance to our current discussion, which is what human beings value, care about, and try to maximize. You learn about that by studying basic psychology. Evolutionary theory may give us insights into how humans came to have our current values, but it has no relevance to what we should do now that we have them.

Our values are what we value; how we came to have them is irrelevant. If our values were bestowed on us by an alien geneticist rather than evolution we would behave exactly the same as we do now. Humans don't give a crap about "god's utility function." If they end up increasing entropy it is as a side effect of obtaining their real goals.

We had better talk about "optimization" then, or we will talk past each other.

Optimization has the same problem. Optimization literally refers to a consequentialist creature using its future forecasting abilities to determine how an object or meme would better suit its goals and altering that thing accordingly. Evolution can be metaphorically said to optimize, but that isn't strictly true. It's just a form of personification to make thinking about evolution easier.

Strictly speaking, evolution is just a description of a series of trends. Since human minds are bad at modeling trends, but good at modeling other consequentialists, sometimes it's useful to pretend that evolution is a consequentialist with "goals" and a "utility function" to help people understand it. It's less scientifically accurate than modeling evolution as a series of trends, but it makes up for it by being easier for a human brain to compute. The problem is that, while most scientists understand this, there are some people who misinterpret this to mean that evolution literally has goals, desires, and utility functions. You appear to be one of these people.

Really? How do you know that?

Because literally speaking, only consequentialist minds maximize things. You might be able to say evolution maximizes things as a useful metaphor, but literally speaking it isn't true.

Evolution is a gigantic optimization process with a maximand.

No it isn't. It is useful to pretend that it is because doing so makes it a little easier for the human mind to think about evolution. But really, evolution is just an abstract series of mindless trends.

You claimed above that it is "fun" - and my claim is that it is entropy.

I never claimed evolution tries to maximize fun. I claimed our civilization does. In other words, that the consequentialist minds making up human civilization use their forecasting abilities to foresee possible futures, and then steer the universe towards the one where they are having the most fun.

As I say, there's a substantial scientific literature on the topic - have you looked at it?

I'm familiar with some of the literature, and I've looked at your website. You constantly confuse the metaphorical "goals" evolution has with the real goals that consequentialist minds such as human beings have. For instance you say:

Another example: currently, researchers at ITER in France are working on an enormous fusion reactor, to allow us to accelerate the conversion of order into entropy still further.

This is trivially false: the reason researchers are working on a fusion reactor is to secure human beings cheap renewable energy to have more fun with. The fact that it increases entropy is a side-effect. The consequentialist human minds do not foresee a future with more entropy and take action in order to secure that future. They foresee a future where humans are using cheap energy to have more fun and take actions to secure that future. The entropy increase is an unfortunate, but acceptable, side effect.

What you remind me of is one of those theologians who describe God as an "unmoved mover" or something like that and suggest such a thing must exist (which was a reasonable hypothesis at one point in history, even if it isn't now). They then make the ridiculous leap of logic that because an unmoved mover must exist, and you can call such a thing "God," that therefore a God with all the ludicrously specific human-like properties described in the Bible must exist.

Similarly, you take some basic facts about evolution and physics that every educated person agrees are true. Then you make bizarre leaps of logic to conclude that human beings care about maximizing IGF and maximizing entropy and other obvious falsehoods. I am not objecting to the evolutionary biology research you cite, I am objecting to the bizarre and unjustified inferences about human psychology and moral philosophy you use that research to make.

Replies from: timtyler, timtyler, timtyler, timtyler, timtyler, timtyler
comment by timtyler · 2012-06-14T23:38:13.149Z · LW(p) · GW(p)

Then you make bizarre leaps of logic to conclude that human beings care about maximizing IGF and maximizing entropy and other obvious falsehoods.

To clarify, many humans fail to maximise their own inclusive fitnesses - largely because they are malfunctioning - with many of the most common malfunctions being caused by parasites - and the most common parasites being responsible for memetic hijacking. Humans and the ecosystems they are part of really do maximise entropy (subject to constraints) - or at least the MEP (maximum entropy production) is a deep and powerful explanatory principle - when it comes to CAS (complex adaptive systems) and living systems.

comment by timtyler · 2012-06-14T23:49:39.254Z · LW(p) · GW(p)

You said: "Our civilisation maximises entropy." Our civilization consists of all the humans in the world. When you're asking what our civilization is trying to maximize you're asking what the humans of the world are trying to maximize. Humans try to do things they enjoy, things that are fun. Therefore our civilization tries to maximize fun.

Brains are hedonic maximisers. They're only about 2% of human body mass, though. There are plenty of other optimisation processes to consider as well - machines, corporations, stock markets also maximise. The picture of civilization as a bunch of human brains is deeply mistaken.

Hedonism is a means to an end. Pleasure is there for a reason. The reason is that it helps organisms reproduce, and organisms reproduce - ultimately - because that's the best way to maximise entropy - according to the deep principle of the MEP.

Think you can more accurately characterise nature's maximand? Go right ahead. If David Pearce has his way, hedonism will play a more significant role in the future - until aliens eat our lunch - but whether David Pearce's future comes to pass remains to be seen.

Replies from: CarlShulman, Ghatanathoah
comment by CarlShulman · 2012-06-15T00:13:53.760Z · LW(p) · GW(p)

They're only about 2% of human body mass, though.

Closer to 20% than 2% in energy use.

organisms reproduce - ultimately - because that's the best way to maximise entropy - according to the deep principle of the MEP.

This "because" doesn't seem like a meaningful answer to a real question. Life on Earth makes use of some solar and geothermal energy before it heads off into space. Does Earth generate much more entropy than Venus? ETA: it seems to me that in the long-run you get the same effects. In the short run local life can use up free energy more quickly, but it can also stockpile resources for later extraction (fossil fuels, acorns, stellar engineering).

Think you can more accurately characterise nature's maximand?

Thermodynamics tells us that doing most anything at all increases entropy. Calling that a utility function looks like talking about how the utility functions of falling objects value being closer to large masses.

Replies from: timtyler
comment by timtyler · 2012-06-15T00:32:25.961Z · LW(p) · GW(p)

Does Earth generate much more entropy than Venus?

I think that's comparing apples and cheese.

it seems to me that in the long-run you get the same effects. In the short run local life can use up free energy more quickly, but it can also stockpile resources for later extraction (fossil fuels, acorns, stellar engineering).

Your point here isn't clear. Organisms stockpile, but they also eat their stockpiles. Ecosystems ultimately leave nothing behind, to the best of their ability. Life produces maximal devastation.

Think you can more accurately characterise nature's maximand?

Thermodynamics tells us that doing most anything at all increases entropy. Calling that a utility function looks like talking about how the utility functions of falling objects value being closer to large masses.

Except that that particular effect can be explained as a manifestation of the MEP principle - which is much more general. So the idea that objects like to be close to other objects is redundant, unnecessary - and can be discarded on Occamian grounds.

Replies from: CarlShulman
comment by CarlShulman · 2012-06-15T01:46:08.895Z · LW(p) · GW(p)

Your point here isn't clear. Organisms stockpile, but they also eat their stockpiles. Ecosystems ultimately leave nothing behind, to the best of their ability. Life produces maximal devastation.

At any given time, much of the grasslands and fertile ocean are not engaged in photosynthesis because herbivores have cropped the primary producers, reducing short-term entropy production. You can swallow that problem with a catch-all "best of their ability" clause, but now "ability" needs to talk about the ability of herbivores to compete in a sea of ill-defended plants, selfish genes, and so forth.

The move to biological and social systems is an attempt at empirical generalization with some success, since untapped free energy has the potential to power living creatures' reproduction, and mutants that tap such sources proliferate. Humans can use free energy to power machinery as well as their own bodies, so they tap available resources. Great, you have a correlate for the proliferation of life.

But this isn't enough to power accurate predictions about the portion of Earth's surface performing photosynthesis, or whether humanity (or successors) will use up the available resources in the Solar System as quickly as possible, or as quickly as will maximize interstellar colonization and energy use, or much more slowly to increase the total computation that can be performed, or slowly so as to sustain a smaller population with longer lifespans.

Replies from: timtyler
comment by timtyler · 2012-06-15T10:50:22.931Z · LW(p) · GW(p)

Your point here isn't clear. Organisms stockpile, but they also eat their stockpiles. Ecosystems ultimately leave nothing behind, to the best of their ability. Life produces maximal devastation.

At any given time, much of the grasslands and fertile ocean are not engaged in photosynthesis because herbivores have cropped the primary producers, reducing short-term entropy production. You can swallow that problem with a catch-all "best of their ability" clause, but now "ability" needs to talk about the ability of herbivores to compete in a sea of ill-defended plants, selfish genes, and so forth.

Herbivores cause massive devastation and destruction to plant life. They extend life's reach underground - where plants cannot live. They led to oil drilling, international flights, global warming and nuclear power. If you want to defend the thesis that the planet would be a better dissipator without them, you have quite a challenge on your hands, it seems to me.

But this isn't enough to power accurate predictions about the portion of Earth's surface performing photosynthesis, or whether humanity (or successors) will use up the available resources in the Solar System as quickly as possible, or as quickly as will maximize interstellar colonization and energy use, or much more slowly to increase the total computation that can be performed, or slowly so as to sustain a smaller population with longer lifespans.

MEP is a statistical principle. It illuminates these issues, but doesn't make them trivial. Compare with natural selection - which also illuminates these areas without trivializing them.

comment by Ghatanathoah · 2012-06-15T00:37:04.161Z · LW(p) · GW(p)

Brains are hedonic maximisers. They're only about 2% of human body mass, though. There are plenty of other optimisation processes to consider as well - machines, corporations, stock markets also maximise. The picture of civilization as a bunch of human brains is deeply mistaken.

All those things are controlled by brains. They execute the brains' commands, which aim at optimizing the world for fun. They are extensions of the human brains. Now, they might increase entropy or something as a side effect, but everything they do, they do because a brain commanded it.

Hedonism is a means to an end. Pleasure is there for a reason.

Life doesn't give us reason and purpose. We give life reason and purpose. Speculating on what sort of metaphorical "purposes" life and nature might have might be a fun intellectual exercise, but ultimately it's just a game. Our purposes come from the desires of our brains, not from some mindless abstract trend. Your tendency to think otherwise is the major intellectual error that keeps you from grokking Eliezer's arguments.

The reason is that it helps organisms reproduce, and organisms reproduce - ultimately - because that's the best way to maximise entropy - according to the deep principle of the MEP.

Here's a question for you: Suppose some super-advanced aliens show up that offer to detonate a star for you. That will generate huge amounts of entropy, far more than you ever could by yourself. All you have to do in return is torture some children to death for the aliens' amusement. They'll make sure the police and your friends never find out you did it.

Would you torture those children? No, of course you wouldn't. Because you care about being moral and doing good and don't give a crap about entropy. You just think you do because you have a tendency to confuse real human goals with metaphorical, fake "goals" that abstract natural trends have.

Think you can more accurately characterise nature's maximand?

Why would I need to do that? My main point is that human civilization doesn't and shouldn't give a crap about nature's worthless maximand. When you post comments on Less Wrong, a lot of the time you seem to act like maximizing IGF and entropy are good things that organisms ought to do. You get upset at Eliezer for suggesting we should do something better with our lives. This is because you're deeply mistaken about the nature of goodness, progress, and values.

But just for fun, I'll take up your challenge. Nature doesn't have a maximand. It isn't sentient. And even if Nature were sentient and did have a maximand, the proper response for the human race would be to ignore Nature and follow our own desires instead of its stupid, evil commands.

That being said, even if you instead asked me to answer the more reasonable question "What trends in evolution sort of vaguely resemble the maximand of an intelligent creature?" I still wouldn't say entropy maximization. The idea that evolution tends to do that is an illusion created by the Second Law of Thermodynamics. Because of the way that 2LTD works, doing anything for any reason tends to increase entropy. So obviously if an evolved organism does anything at all, it will end up increasing entropy. This creates an illusion that organisms are trying to maximize entropy. Carl Shulman is right: calling entropy nature's maximand is absurd; you might as well say "being attracted by gravity" or "being made of matter" are what nature commands.

A better (metaphorical) maximand might actually be local entropy minimization. It's obviously impossible to minimize total entropy, but life has a tendency to decrease the entropy in its local area. Life tends to use energy to remove entropy from its local area by building complex cellular structures. It's sort of an entropy pump, if you will. So if we metaphorically pretended that evolution had a purpose, it would actually be the reverse of what you claim.

But again, that's not my main point. My main point is that while you have a lot of good sources for your biology references, you don't have nearly as good a grasp of basic psychology and philosophy. This causes you to make huge errors when discussing what good, positive ways for life to develop in the future are.

Replies from: timtyler, asparisi, timtyler, timtyler, timtyler, timtyler
comment by timtyler · 2012-06-15T00:43:05.493Z · LW(p) · GW(p)

Brains are hedonic maximisers. They're only about 2% of human body mass, though. There are plenty of other optimisation processes to consider as well - machines, corporations, stock markets also maximise. The picture of civilization as a bunch of human brains is deeply mistaken.

All those things are controlled by brains. They execute the brains' commands, which aim at optimizing the world for fun. They are extensions of the human brains. Now, they might increase entropy or something as a side effect, but everything they do, they do because a brain commanded it.

Nope. For instance, look at Kevin Kelly's book "Out of Control". Or look into memetics. Human brains are an important force, but there are other maximisation processes going on, in culture, with genes and inside machines.

Replies from: Ghatanathoah
comment by Ghatanathoah · 2012-06-19T00:37:57.061Z · LW(p) · GW(p)

"Out of Control" appears to be primarily about decentralized decision making processes like democracy and capitalism. I never said that brains controlled the artifacts of civilization in a centralized fashion, I just said that they control them. Obviously human beings use all sorts of decentralized methods to help coordinate with each other.

That being said, while systems are not controlled in a centralized manner, they are restricted in a centralized manner. For instance, capitalism only works properly if people are prevented from killing and stealing. Even if there is no need to centrally control everything to get positive results, there is a need to centrally control some things.

There seems to be a later section in "Out of Control" where Kelly suggests giving up control to our machines is good in the same way that dictators giving up central control to democracy and capitalism is good. This seems short-sighted, especially in light of things like Bostrom's orthogonality thesis. The reasons democracy and capitalism do so much good are that:

  1. Human minds are an important component of them, and (most) humans care about morality, so the systems tend to be steered towards morally good results.
  2. There are some centralized restrictions on what these decentralized systems are able to do.

Unless you are somehow able to program the machines with moral values (i.e. make an FAI), turning control over to them seems like a bad idea. Creating moral machines isn't impossible, but the main point of Eliezer's writing is that it is much, much harder than it seems. I think he's quite correct.

As for memetics, the idea impressed me when I first came across it, but there doesn't seem to have been much development in the field since then. I am no longer impressed. In any case, the main reason memes "propagate" is that they help a brain fulfill its desires in some way, so really ever-evolving memes are just another one of the human mind's tools in its continuing quest for universal domination.

Replies from: timtyler
comment by timtyler · 2012-06-19T01:03:43.784Z · LW(p) · GW(p)

As for memetics, the idea impressed me when I first came across it, but there doesn't seem to have been much development in the field since then. I am no longer impressed. In any case, the main reason memes "propagate" is that they help a brain fulfill its desires in some way, so really ever-evolving memes are just another one of the human mind's tools in its continuing quest for universal domination.

From a biological perspective, brains are seen as being a way for genes and memes to make more copies of themselves.

That this is a valuable point of view is illustrated by some sea squirts - which digest their own brains to further their own reproductive ends.

In nature, genes are fundamental, while brains are optional and expendable.

Replies from: Ghatanathoah
comment by Ghatanathoah · 2012-06-19T15:46:34.252Z · LW(p) · GW(p)

From a biological perspective, brains are seen as being a way for genes and memes to make more copies of themselves....

...In nature, genes are fundamental, while brains are optional and expendable.

Genes are biologically fundamental, certainly. You will get no argument from me there (although the fact that brains are biologically expendable does not imply that it is moral to expend them). The evidence that memes are more fundamental than brains, however, is not nearly as strong.

It is quite possible to model memes as "reproducing," by being passed from one brain to another. But most of the time a meme is passed from one brain to another because it aids the brain in fulfilling its desires in some way. The memes associated with farming, for instance, spread because they helped the brain fulfill its desire to not starve. In instances where brains stopped needing the farming memes to obtain food (such as when the Plains Indians acquired horses and were suddenly able to hunt bison more efficiently) those memes promptly died out.

There are parasitic memes, cult ideologies for instance, that reproduce by exploiting flaws in the brain's cognitive architecture. But the majority of memes "reproduce" by demonstrating their usefulness to the brain carrying them. You could say that a meme's "fitness" is measured by its usefulness to its host.
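As a rough illustration of that claim - a toy model only, with invented meme names and usefulness numbers - imagine hosts copying memes from one another with a probability that scales with how useful the meme is to the listener; the useful memes crowd out the rest:

```python
import random

# Toy model (illustrative only): memes spread between hosts with a probability
# proportional to how useful the listener finds them.  The names and numbers
# are made up for the sketch, not taken from any real data.
USEFULNESS = {"farming": 0.9, "bison_hunting": 0.6, "chain_letter": 0.1}

def simulate(n_hosts=1000, n_steps=50000, seed=0):
    rng = random.Random(seed)
    memes = list(USEFULNESS)
    population = [rng.choice(memes) for _ in range(n_hosts)]
    for _ in range(n_steps):
        listener = rng.randrange(n_hosts)
        speaker = rng.randrange(n_hosts)
        candidate = population[speaker]
        # Adoption is more likely the more useful the meme is to its new host.
        if rng.random() < USEFULNESS[candidate]:
            population[listener] = candidate
    return {m: population.count(m) / n_hosts for m in memes}

# The highest-usefulness meme tends to take over the population.
print(simulate())
```

On this picture a meme's "fitness" just is its usefulness to hosts; parasitic memes would need some extra exploit of the brain's architecture (not modelled here) in order to persist.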

Replies from: timtyler
comment by timtyler · 2012-06-20T00:15:42.238Z · LW(p) · GW(p)

You could say that a meme's "fitness" is measured by its usefulness to its host.

That wouldn't be terribly accurate, though. Smoking memes, obesity memes, patriotism memes, and lots of advertising and marketing memes are not good for their hosts, but rather benefit those attempting to manipulate them. However, there's usually a human somewhere at the end of the chain today.

That probably won't remain the case, though. After the coming memetic takeover we are likely to have an engineered future - and then it will be memes all the way down.

Replies from: Ghatanathoah
comment by Ghatanathoah · 2012-06-20T22:03:38.626Z · LW(p) · GW(p)

That probably won't remain the case, though. After the coming memetic takeover we are likely to have an engineered future - and then it will be memes all the way down.

The memetic takeover you describe would just consist of intelligences running on computer-like substrates instead of organic substrates. That isn't morally relevant to me, I don't care if the creatures of the future are made of carbon or silicon. I care about what sort of minds they have, what they value and believe in.

I'm not sure referring to an intelligent creature that is made of computing code instead of carbon as a "meme" is true to the common definitions of the term. I always thought of memes as contagious ideas and concepts, not as a term to describe an entire intellect.

After the memetic takeover there would still be intelligent creatures, they'd just run on a different substrate. Many of them could possibly be brain-like in design or have human-like values. They would continue to exchange memes with each other just as they did before, and those memes would spread or die depending on their usefulness to the intelligent creatures. Just like they do now.

Replies from: timtyler
comment by timtyler · 2012-06-21T23:45:05.540Z · LW(p) · GW(p)

I'm not sure referring to an intelligent creature that is made of computing code instead of carbon as a "meme" is true to the common definitions of the term.

People don't call the works of Shakespeare a "meme" either. Conventionally, such things are made of memes - and meme products.

comment by asparisi · 2012-06-15T00:46:36.437Z · LW(p) · GW(p)

you might as well say "being attracted by gravity" or "being made of matter" are what nature commands.

This makes me want to start a religion where the Creator of the Universe gives points to things that behave like a member of the universe. "Thou shalt be made of matter." "Thou shalt be attracted by gravitational force." "Thou shalt increase entropy." etc. Too bad 'Scientology' is taken as a name. Physianity, maybe?

Replies from: Dolores1984
comment by Dolores1984 · 2012-06-15T01:18:25.233Z · LW(p) · GW(p)

In the beginning, there was nothing. The cosmos were void - timeless, and without form. And, lo, God pointed upon the abyss, and said 'LET THERE BE ENERGY' And there was energy. And God pointed to the energy, and said, 'and let you be bound among yourselves that you may wander the void together, proton to neutron, and proton to proton, and let the electrons always seek their opposite number, within the appropriate energy barrier, and let the photons wander where they will.' Lo, and god spoke to the stranger particles, for some time, but what He said was secret. And God saw hydrogen, and saw that it was good.

And God saw the particles moving at all different speeds, away from one another, and saw that it was bad, and God said 'and let the cosmos be bent and cradle the particles, that they may always be brought back together, though they be one billion kilometers apart, within the appropriate energy barrier, of course. And let the curvature of space rise without end with the energy of velocity, that they all be bound by a common yoke.' And god looked upon the spirals of gas, and saw that it was good.

And god took the gas and energy above, and the gas and energy below, and said 'and you shall be matter, and you shall be antimatter, and your charges shall ever be in conflict, and never the twain shall meet, except in very small quantities.' And so there was the matter and the antimatter.

And God saw the cosmos stretching out to a single future, and said 'And let you all be amplitude configurations, that you may not know thyself from thy neighbor, and that the future may expand without end.' And God saw the multiverse, and saw that it was good.

comment by timtyler · 2012-06-15T00:46:56.672Z · LW(p) · GW(p)

Hedonism is a means to an end. Pleasure is there for a reason.

Life doesn't give us reason and purpose. We give life reason and purpose. Speculating on what sort of metaphorical "purposes" life and nature might have might be a fun intellectual exercise, but ultimately it's just a game.

The game seems to involve willfully misunderstanding me. Talking about the "reason" for adaptations is biology 101. If you can't grasp such talk, just ignore people like me. I would prefer to talk to those who are capable of understanding what I mean instead.

Replies from: Ghatanathoah
comment by Ghatanathoah · 2012-06-18T04:40:48.640Z · LW(p) · GW(p)

Talking about the "reason" for adaptations is biology 101.

I know that. Most of the time I use the same language. But that's because I trust the people I'm talking to to know that I'm speaking metaphorically. I also trust them to understand enough basic morality to know that just because something is extremely common in nature, doesn't mean it's morally good. The reason I am not doing that when talking to you is that I am not convinced that I should extend you that trust. You constantly confuse the descriptive with the normative and the "common in nature" with the "morally good."

I do not have issue with the majority of factual statements you make. What I have issue with is the appalling moral statements you make. I get the impression that you are upset at Eliezer because he wants to preserve the values that make us morally significant beings, even if doing so will stop us from evolving. You act like evolving is our "real" purpose and that things that people actually value, like creativity, novelty, love, art, friendship, etc. are not important. This is the exact opposite of the truth. Evolution is useful only so far as it preserves and enhances our values such as creativity, novelty, love, art, friendship, etc.

Again, if you really think maximizing entropy is your real purpose in life, would you torture 50 children to death if it would get some sadistic aliens to make a far-off star go nova for you? Detonating one star would produce far more entropy than those children would over their lifetimes, but I still bet you wouldn't torture them, because you know it's wrong. The fact that you wouldn't do this proves you think doing the right thing is more important than maximizing entropy.

Replies from: timtyler
comment by timtyler · 2012-06-18T10:25:27.467Z · LW(p) · GW(p)

I do not have issue with the majority of factual statements you make. What I have issue with is the appalling moral statements you make.

Gee, thanks for that.

Again, if you really think maximizing entropy is your real purpose in life, would you torture 50 children to death if it would get some sadistic aliens to make a far-off star go nova for you?

People often seem to think that entropy maximisation principles imply that organisms should engage in wanton destruction, blowing things up. However, that is far from the case. Causing explosions is usually a very bad way of maximising entropy in the long term - since it tends to destroy the world's best entropy maximisers, living systems. Living systems go on to cause far more devastation than exploding a sun ever could. So wanton destruction of a sun is bad - not good - from this perspective.

Replies from: asparisi, nshepperd, Ghatanathoah, army1987
comment by asparisi · 2012-06-18T11:08:54.954Z · LW(p) · GW(p)

So, if the nova's explosion did not destroy any living systems, you would happily trade the 50 kids for the nova explosion?

comment by nshepperd · 2012-06-18T13:53:14.034Z · LW(p) · GW(p)

This is ridiculous. Are you actually proposing entropy maximisation as a reduction of "should", normative ethical theory, etc., or do you just find it humorous to waste our time?

comment by Ghatanathoah · 2012-06-18T17:49:37.689Z · LW(p) · GW(p)

Causing explosions is usually a very bad way of maximising entropy in the long term - since it tends to destroy the world's best entropy maximisers, living systems.

That's why I said "far-off" star. I was trying to imply that the star was so far away its destruction would not harm any living things. Please don't fight the hypothetical.

In any case, the relevant part of the question isn't "Would you blow up a star?" That was just an attempt to give the hypothetical some concrete details so it sounded less abstract. The relevant question is "Would you torture fifty children to death in order to greatly increase the level of entropy in the universe?" Assume that the increase would be greater than what the kids would be able to accomplish themselves if you allowed them to live.

comment by A1987dM (army1987) · 2012-06-20T14:56:55.065Z · LW(p) · GW(p)

Causing explosions is usually a very bad way of maximising entropy in the long term - since it tends to destroy the world's best entropy maximisers, living systems. Living systems go on to cause far more devastation than exploding a sun ever could.

Are you sure? A black hole is the system with the most possible entropy among those with a given mass. Your point would only be valid if interstellar civilizations are easy to achieve, and given that we don't see any of those around I don't think they are.
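(For reference, the standard Bekenstein-Hawking result behind this claim - for a Schwarzschild black hole of mass M with horizon area A - is

$$ S_{\mathrm{BH}} = \frac{k_B c^3 A}{4 G \hbar} = \frac{4 \pi G k_B}{\hbar c}\, M^2, $$

so black-hole entropy grows with the square of the mass and dwarfs the entropy of any ordinary matter configuration of the same mass.)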

comment by timtyler · 2012-06-15T00:50:21.479Z · LW(p) · GW(p)

The idea that evolution tends to do that is an illusion created by the Second Law of Thermodynamics. Because of the way that 2LTD works, doing anything for any reason tends to increase entropy. So obviously if an evolved organism does anything at all, it will end up increasing entropy. This creates an illusion that organisms are trying to maximize entropy.

This should be misunderstanding #1 in the MEP FAQ. MEP is not the same as the second law. It's a whole different idea, which you don't appear to know anything about.
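A rough way to keep the two ideas apart (my paraphrase, not a quotation from the MEP literature): the second law only says entropy production is non-negative, while MEP is the much stronger conjecture that, among the steady states compatible with a system's constraints, the one actually realised is the one with the greatest entropy production rate:

$$ \frac{dS}{dt} \geq 0 \quad \text{(second law)}, \qquad \sigma_{\text{selected}} \approx \max_{\text{compatible steady states}} \frac{dS}{dt} \quad \text{(MEP, conjectural)}. $$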

Replies from: pragmatist
comment by pragmatist · 2012-06-15T02:15:37.970Z · LW(p) · GW(p)

Extremum rate principles like MEP have proven very useful for describing the behavior of certain systems, but the extrapolation of the principle into a general law of nature remains hugely speculative. In fact, at this point I think the status of MEP can be described as "not even wrong", because we do not yet have a rigorous notion of thermodynamic entropy that extends unproblematically to nonequilibrium states. The literature on entropy production usually relies on equations for the entropy production rate that are compatible with our usual definition of thermodynamic entropy when we are dealing with quasistatic transformations, but if we use these rate equations as the basis for deriving a non-equilibrium conception of entropy we get absurd results (like ascribing infinite entropy to non-equilibrium states).

Dewar's work, which you link below, is an improvement, in that it operates with a notion of entropy that is clearly defined both in and out of equilibrium, derived from the MaxEnt formalism. But the relationship of this entropy to thermodynamic entropy when we're out of equilibrium is not obvious. Also, Dewar's derivation of MEP relies on applying some very specific and nonstandard constraints to the problem, constraints whose general applicability he does not really justify. If I were permitted to jury-rig the constraints, I could derive all kinds of principles using MaxEnt. But of course, that wouldn't be enough to establish those principles as natural law.
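For context, the MaxEnt recipe Dewar builds on (standard Jaynes, sketched from memory rather than from his paper) is: choose the probability distribution that maximizes information entropy subject to whatever constraints you impose, which yields an exponential-family form:

$$ \max_{p}\; -\sum_i p_i \ln p_i \quad \text{subject to} \quad \sum_i p_i = 1,\; \sum_i p_i f_k(x_i) = F_k \;\;\Longrightarrow\;\; p_i \propto \exp\!\Big(-\sum_k \lambda_k f_k(x_i)\Big). $$

The catch is visible right in the formalism: everything depends on which constraint functions f_k you feed in, so with enough freedom in choosing them you can derive almost any "principle".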

Replies from: timtyler
comment by timtyler · 2012-06-15T10:02:53.808Z · LW(p) · GW(p)

Extremum rate principles like MEP have proven very useful for describing the behavior of certain systems, but the extrapolation of the principle into a general law of nature remains hugely speculative. In fact, at this point I think the status of MEP can be described as "not even wrong", because we do not yet have a rigorous notion of thermodynamic entropy that extends unproblematically to nonequilibrium states.

Entropy and MEP are statistical phenomena. Thermodynamics is an application of statistics. This has been understood since Boltzmann's era. Most of the associated "controversy" just looks like ignorance to me.

Entropy maximisation in living systems has been around since Lotka 1922. Universal Darwinism applies it to all CAS (complex adaptive systems). Lots of people don't understand it - but that isn't really much of an argument.

comment by timtyler · 2012-06-15T00:58:53.893Z · LW(p) · GW(p)

A better (metaphorical) maximand might actually be local entropy minimization. It's obviously impossible to minimize total entropy, but life has a tendency to decrease the entropy in its local area. Life tends to use energy to remove entropy from its local area by building complex cellular structures. It's sort of an entropy pump, if you will. So if we metaphorically pretended that evolution had a purpose, it would actually be the reverse of what you claim.

Prigogine actually came up with a genuine entropy minimization principle once (in contrast to your idea - which has never been formalised as a real entropy minimization principle - AFAIK). He called it the theorem of "minimum entropy production". However, in "Maximum entropy production and the fluctuation theorem" Dewar explained it as a special case of his "MaxEnt" formalism.

comment by timtyler · 2012-06-15T01:09:30.500Z · LW(p) · GW(p)

Carl Shulman is right: calling entropy nature's maximand is absurd; you might as well say "being attracted by gravity" or "being made of matter" are what nature commands.

Some books on the topic:

Still think it is "absurd"?

Replies from: Ghatanathoah
comment by Ghatanathoah · 2012-06-15T05:51:20.848Z · LW(p) · GW(p)

Still think it is "absurd"?

Wow, just wow. I'm extremely disappointed with Schneider and Sagan. Not because of their actual research, which looks like some interesting and useful stuff on thermodynamics. No, what's disappointing and embarrassing is the deceitful way they pretend that they've discovered life's "purpose." Like many words, the word "purpose" has multiple referents; sometimes it refers to profound concepts, other times to trivial ones. Schneider and Sagan have discovered some insights into one of the more trivial concepts the word "purpose" can refer to, but are using verbal sleight of hand to pretend they've found the answer to one of the word's more profound referents.

When someone says they are looking for "life's purpose" what they mean is that they are looking for values and ideals to live their life around. A very profound concept. When Schneider and Sagan say they have found life's purpose what they are saying is, "We pretended that the laws of physics were a person with a utility function and then deduced what that make-believe utility function was based on how the laws of physics caused life to develop."

Now, doing that has its place; it's easier for human brains to model other people than it is for them to model physics, so sometimes it is useful to personify physics. But the "purpose" you discover from that is ultimately trivial. It doesn't give you values and ideals to live your life around. It just describes forces of nature in an inaccurate, but memorable way.

I'm not saying it's absurd to say that entropy tends to increase; that's basic physics. But it's absurd to pretend that entropy is the deep, meaningful purpose of human life. Purpose is something humans give themselves, not something that mindless physical laws bestow upon them. Schneider and Sagan may be onto something when they suggest that life has a tendency to destroy gradients. But if they claim that is the "purpose" of human life in any meaningful sense they are dead wrong.

Replies from: pragmatist, timtyler
comment by pragmatist · 2012-06-15T06:39:18.641Z · LW(p) · GW(p)

I read Into the Cool a while ago, and it's a bad book. Schneider and Sagan posit a law of nonequilibrium thermodynamics: "nature abhors a gradient". They go on to explain pretty much everything in the universe -- from fluid dynamics to abiogenesis to evolution to human aging to the economy to the purpose of life to... -- as a consequence of this law. The thing is, all of this is done in a very hand-wavey fashion, without any math.

Now, there is definitely something interesting about the fact that when there are gradients in thermodynamic parameters we often see the emergence of stable, complex structures that can be seen as directed towards driving the system to equilibrium. But when the authors start claiming that this is basically the origin of all macroscopic structure, even when the "gradient" involved isn't really a thermodynamic gradient, things start getting crazy. Benard convection occurs when there is a temperature gradient in a fluid; arbitrage occurs when there is a price gradient in an economy. These are both, according to the authors, consequences of the same universal law: nature abhors a gradient.

Perhaps Schneider has worked his ideas out with greater rigor elsewhere (if he has, I would like to see it), but Into the Cool is in the same category as Per Bak's How Nature Works and Mark Buchanan's Ubiquity, a popular book that extrapolates useful insights to such an absurd extent that it ventures into mild crackpot territory.

Replies from: timtyler
comment by timtyler · 2012-06-15T10:11:31.409Z · LW(p) · GW(p)

But when the authors start claiming that this is basically the origin of all macroscopic structure, even when the "gradient" involved isn't really a thermodynamic gradient, things start getting crazy. Benard convection occurs when there is a temperature gradient in a fluid; arbitrage occurs when there is a price gradient in an economy. These are both, according to the authors, consequences of the same universal law: nature abhors a gradient.

That's right - MEP is a statistical characterisation of universal Darwinism, which explains a lot about CAS - including why water flows downhill, turbulence, crack propagation, crystal formation, and lots more.

Schneider's original work on the topic is Life as a manifestation of the second law of thermodynamics.

Replies from: Ghatanathoah
comment by Ghatanathoah · 2012-06-15T17:11:28.846Z · LW(p) · GW(p)

Of course, while this work has some scientific interest (a fact I never denied), it is worthless for determining what the purpose of intelligent life and civilization should be. All it does is explain where life came from; it has no value in determining what we want to do now and what we should do next.

Your original statement that started this discussion was a claim that our civilization maximizes entropy. That claim was based on a trivial map-territory confusion, confounding two different referents of the word "maximize": Referent 1 being "Is purposefully designed to greatly increase something by intelligent beings" and Referent 2 being "Has a statistical tendency to greatly increase something."

When Eliezer claimed that intelligent creatures and their civilization would only be interesting if they purposefully acted to maximize novelty, you attempted to refute his claim by saying that our civilization is not purposefully acting to maximize novelty because it has a statistical tendency to greatly increase entropy. In other words, you essentially said "Our civilization does not maximize(1) novelty because it maximizes(2) entropy." Your entire argument is based on map-territory confusion.

Replies from: timtyler
comment by timtyler · 2012-06-15T22:04:27.310Z · LW(p) · GW(p)

Your comment is a blatant distortion of the facts. Eliezer's only references to maximizing are to an "expected paperclip maximizer". He never talks about "purposeful" maximisation. Nor did I attempt the refutation you are attributing to me. You've been reduced to making things up :-(

Replies from: Ghatanathoah
comment by Ghatanathoah · 2012-06-18T04:23:05.787Z · LW(p) · GW(p)

Eliezer's only references to maximizing are to an "expected paperclip maximizer".

Eliezer never literally referred to the word "maximize," but the thrust of his essay is that a society that purposefully maximizes, or at least greatly increases novelty, is far more interesting than one that doesn't. He claimed that, for this reason, a paperclip maximizing civilization would be valueless, because paperclips are all the same.

Nor did I attempt the refutation you are attribting to me.

You said:

Our civilisation maximises entropy - not paperclips - which hardly seems much more interesting.

In this instance you are using "maximize" to mean "Has a statistical tendency to increase something." You are claiming that everything humans do is uninteresting because it has a statistical tendency to increase entropy and destroy entropy gradients, and entropy is uninteresting. You're ignoring the fact that when humans create, we create art, socialization, science, literature, architecture, history, and all sorts of wonderful things. Paperclip maximizers just create the same paperclip, over and over again. It doesn't matter how much entropy gets made in the process; humans are a quadrillion times more interesting because there is so much diversity in what we do.

Claiming that all the wonderful, varied, and diverse things humans do are no more interesting than paperclipping, just because you could describe them as "entropy maximization", is ridiculous. You might as well say that all events are equally uninteresting because you can describe all of them as "stuff happening."

So yes, Eliezer never used the word "maximize" but he definitely claimed that creatures that didn't value novelty would be boring. And you did attempt to refute his claim by claiming that our civilization's statistical tendency to increase entropy means that creating art, conversation, science, etc. is no different from paperclipping. I think my objection stands.

Replies from: timtyler
comment by timtyler · 2012-06-19T00:29:29.262Z · LW(p) · GW(p)

You said:

Our civilisation maximises entropy - not paperclips - which hardly seems much more interesting.

In this instance you are using "maximize" to mean "Has a statistical tendency to increase something." You are claiming that everything humans do is uninteresting because it has a statistical tendency to increase entropy and destroy entropy gradients, and entropy is uninteresting.

You're in a complete muddle about my position. 'Maximize' doesn't mean 'increase'. The maximum entropy principle isn't just "a statistical tendency to increase entropy". You are apparently thinking about the second law of thermodynamics - which is a completely different idea. Nor was I arguing that human activity was "uninteresting". Since you so obviously don't have a clue what I am talking about, I see little point in continuing. Perhaps look into the topic, and get back to us when you know something about it.

Replies from: Ghatanathoah
comment by Ghatanathoah · 2012-06-19T00:58:15.750Z · LW(p) · GW(p)

Nor was I arguing that human activity was "uninteresting".

You said:

Our civilisation maximises entropy - not paperclips - which hardly seems much more interesting.

What am I supposed to interpret that as besides "Human activity is uninteresting"? Or at least, "Human activity is as uninteresting as paperclip making"?

Since you so obviously don't have a clue what I am talking about, I see little point in continuing. Perhaps look into the topic, and get back to us when you know something about it.

Stop trying to pretend that this is just a discussion about physics and evolution. You derive all sorts of horrifying moral positions from the science you are citing and when someone calls you out on it you act like the problem is that they don't understand the science properly. I have some problems with your science - you seem to like talking about big ideas that aren't that strongly supported - but my main objection is to your ethical positions. You constantly act like what is common in nature is what is morally good.

The whole reason I have been so hard on you about personifying forces of nature is that you constantly switch between the descriptive and the normative. You act like humans have a moral duty to maximize entropy and that we're bad, bad people if we don't keep evolving. I think that if you stopped personifying natural forces it would make it easier for you to spot when you do this.

Again, answer my moral dilemma: "Would you torture fifty children to death in order to greatly increase the level of entropy in the universe?" Assume that the increase would be greater than what the kids would be able to accomplish themselves if you allowed them to live.

I doubt you even consider moral dilemmas like this because you are interested in talking about big cool ideas, not about challenging them or considering them critically. MEP might have originally been a useful scientific theory when it was first formulated, but you've turned it into a Fake Utility Function.

Replies from: timtyler
comment by timtyler · 2012-06-19T01:28:57.403Z · LW(p) · GW(p)

Stop trying to pretend that this is just a discussion about physics and evolution. You derive all sorts of horrifying moral positions from the science you are citing and when someone calls you out on it you act like the problem is that they don't understand the science properly.

I don't know what you are talking about. What "horrifying moral positions"?

Replies from: Ghatanathoah
comment by Ghatanathoah · 2012-06-19T16:27:05.687Z · LW(p) · GW(p)

I don't know what you are talking about. What "horrifying moral positions"?

This whole conversation started because you denigrated human values, saying that all the glorious and wonderful things our civilization does "hardly seems much more interesting" than tiling the universe with paperclips.

You have frequently implied that the metaphorical "goals" of abstract statistical processes like MEP and natural selection are superior to human values like love, compassion, freedom, morality, happiness, novelty, etc. For instance, here you say:

Similarly with human values: those are a bunch of implementation details - not the real target.

The moral position you keep implicitly arguing for, again and again, is that the metaphorical "goals" of abstract natural processes like MEP and natural selection represent real objective morality, while the values and ideals that human beings base their lives around are just a bunch of "implementation details" that it's perfectly okay to discard if they get in the way. This is exactly backwards. Joy, love, sympathy, curiosity, compassion, novelty, art, etc. are what is really valuable. Preserving these things is what morality is all about. The solemn moral duty of the human race is to make sure that a sizable portion of the future creatures that will exist share these values, even if they do not physically resemble humans.

I was also extremely horrified by your response to the dilemma I posed you. I attempted to prove that MEP is a terrible moral rule by asking you if you would torture children to death in order to greatly increase entropy. The correct response was: "Of course I wouldn't, the lives and happiness of children are more important than MEP." Instead of saying that, you changed the subject by saying that the method of entropy production I suggested was inefficient because it might destroy living systems. This implies, as asparisi put it:

So, if the nova's explosion did not destroy any living systems, you would happily trade the 50 kids for the nova explosion?

Just to be clear, I don't think that you would ever torture children. I think the beliefs you write about are, thankfully, completely divorced from your behavior. MEP is your Fake Utility Function, not your real one. But that doesn't change the fact that it's horrifying to read about. It's discouraging that I try to tell people that studying science won't destroy your moral sense, that it won't turn you into a Hollywood Rationalist, but then encounter someone to whom it's done precisely that.

Replies from: TheOtherDave, wedrifid, timtyler
comment by TheOtherDave · 2012-06-19T16:39:17.329Z · LW(p) · GW(p)

It's discouraging that I try to tell people that studying science won't destroy your moral sense, that it won't turn you into a Hollywood Rationalist, but then encounter someone to whom it's done precisely that.

Can you expand on your reasons for believing that studying science was causal to what you categorize here as the destruction of Tim's moral sense? (I'm not asking why you believe his moral sense has been destroyed; I think I understand your reasoning there. I'm asking why you believe studying science was the cause.)

Replies from: Ghatanathoah
comment by Ghatanathoah · 2012-06-19T17:09:25.933Z · LW(p) · GW(p)

Because he constantly uses scientific research to justify his moral positions, and then when I challenge them he accuses me of just not understanding the science well enough. He switches back and forth between statements about science and normative statements about what would make the future of humanity good without seeming to notice. Learning about evolutionary science seems to have put him in an Affective Death Spiral around evolution. (I know the symptoms; I used to be in one around capitalism after I started studying economics.) It's one of the more extreme examples of the naturalistic fallacy I've ever seen.

Now, since you've read some of my other posts you know that I don't necessarily accept the strong naturalistic fallacy, the idea that ethical statements cannot be reduced to naturalistic ones at all. But I definitely believe in the weaker form of the naturalistic fallacy, the idea that things that are common in nature are not necessarily good. And that is the form of the fallacy Tim makes when he says absurd things like our civilization maximizes entropy or that our values are not precious things that need to be preserved if they get in evolution's way.

Studying the science of evolution certainly wasn't the sole cause, maybe not even the main cause, of Tim's ethical confusion, but it certainly contributed to it.

comment by wedrifid · 2012-06-19T18:05:53.212Z · LW(p) · GW(p)

Just to be clear, I don't think that you would ever torture children.

I totally would. Then - if the situation demanded it and if I didn't have a fat guy available - I'd throw them all in front of a trolley. Because not torturing children is evil when the alternative to the contrived torturing is a contrived much worse thing.

Replies from: Ghatanathoah
comment by Ghatanathoah · 2012-06-19T19:13:53.136Z · LW(p) · GW(p)

I meant that I didn't ever think he'd torture children for no reason other than to increase the level of entropy in the universe (in my original contrived hypothetical the entropy increase was accomplished by having sadistic aliens make a star go nova in return for getting to watch the torture. The star was far enough away from inhabited systems that the radiation wouldn't harm any living things).

I wasn't meaning to set up "not torturing children" as a deontological rule. Obviously there are some circumstances where it is necessary, such as torturing one child to prevent fifty more children from being tortured for an equal amount of time per child. What I was trying to do was illustrate that Tim's Maximum Entropy Principle was a really, really bad "maximand" to follow by creating a hypothetical where following it would make you do something insanely evil. I think we can both agree that entropy maximization (at least as an end in itself rather than as a byproduct of some other end) is far less important than preventing the torture of children.

Tim responded to my question by sidestepping the issue, instead of engaging the hypothetical he said that a nova was a bad way to maximize entropy because it might kill living things that would go on to produce more entropy, even though I tried to constrain the hypothetical so that that wasn't a possibility.

comment by timtyler · 2012-06-19T23:50:42.468Z · LW(p) · GW(p)

This whole conversation started because you denigrated human values, saying that all the glorious and wonderful things our civilization does "hardly seems much more interesting" than tiling the universe with paperclips.

But that's complete nonsense. I already explained this by saying here:

Nor was I arguing that human activity was "uninteresting"

Given a bunch of negentropy, modern ecosystems reduce it to a maximum-entropy state, as best as they are able - they don't attempt to leave any negentropy behind. A paperclipper would (presumably) attempt to leave paperclips behind. This is not some kind of moral assertion; it's just a straightforward description of how these systems would behave. Entropy is, I claimed, not much more interesting than paperclips.

The intended lesson here was NOT that human civilization is somehow uninteresting, but rather that optimisation processes with simple targets can produce vast complexity (machine intelligence, space travel, nanotechnology, etc).

This is really just a particular case of the "simple rules, complex dynamics" theme that we see in complex systems theory (e.g. game of life, rule 30, game of go, etc).
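As one concrete instance (a minimal sketch of my own, not anything from the original discussion): Rule 30 updates every cell of a one-dimensional binary array from its three-cell neighbourhood using a single fixed rule, yet the pattern it produces from one live cell is famously irregular.

```python
def rule30(width=79, steps=30):
    """Print the evolution of the Rule 30 cellular automaton from one live cell."""
    cells = [0] * width
    cells[width // 2] = 1
    for _ in range(steps):
        print("".join("#" if c else "." for c in cells))
        # Rule 30: new cell = left XOR (centre OR right), with wrap-around edges.
        cells = [
            cells[(i - 1) % width] ^ (cells[i] | cells[(i + 1) % width])
            for i in range(width)
        ]

rule30()
```

A few lines of code, and the output is irregular enough that the centre column of Rule 30 has been used as a source of pseudo-randomness.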

So: this whole "horrifying moral position" business is your own misunderstanding.

Failure to address your other points is not a sign of moral weakness - it just doesn't look as though the discussion is worth my time.

Replies from: None, Ghatanathoah
comment by [deleted] · 2012-06-20T00:20:50.657Z · LW(p) · GW(p)

It may be worth your time to explicitly disclaim the whole "torturing children to blow up stars" position (instead of appearing to dodge it a second or third time), particularly seeing as if it is a misunderstanding, it is not uniquely Ghatanathoah's.

comment by Ghatanathoah · 2012-06-20T22:22:29.430Z · LW(p) · GW(p)

But that's complete nonsense. I already explained this by saying here:

Nor was I arguing that human activity was "uninteresting"

That wasn't an explanation, it was an assertion. I was not satisfied that that assertion was supported by the rest of your statements.

Given a bunch of negentropy, modern ecosystems reduce it to a maximum-entropy state, as best as they are able - they don't attempt to leave any negentropy behind. A paperclipper would (presumably) attempt to leave paperclips behind.

That is a much better explanation of your position. You are correct that that is not a moral assertion. However, before that you said:

IMO, boredom is best seen as being a universal instrumental value - and not as an unfortunate result of "universalizing anthropomorphic values".

And also:

....My position is that we had better wind up approximating the instrumental value of boredom (which we probably do pretty well today anyway - by the wonder of natural selection) - or we are likely to be building a rather screwed-up civilisation. There is no good reason why this would lead to a "worthless, valueless future" - which is why Yudkowsky fails to provide one.

Saying something is "screwed up" is a moral judgement. Saying that a future where boredom has no terminal value and exists purely instrumentally is not valueless is a moral judgement. Any time you compare different scenarios and argue that one is more desirable than the others you are making a moral judgement. And the ones you made were horrifying moral judgements because they advocate passively standing in the way of creatures that would destroy everything human beings value.

Given a bunch of negentropy, modern ecosystems reduce it to a maximum-entropy state, as best as they are able - they don't attempt to leave any negentropy behind. A paperclipper would (presumably) attempt to leave paperclips behind.

Even if that's true, a lot more fun and complexity would be generated by a human-like civilization on the way to that end than by paperclippers making paperclips.

Besides, humans are often seen making a conscious effort to prevent things from being reduced to a maximum entropy state. We make a concerted effort to preserve places and artifacts of historical significance, and to prevent ecosystems we find beautiful from changing. Human civilization would not reduce the world to a maximum entropy state if it retains the values it does today.

The intended lesson here was NOT that human civilization is somehow uninteresting, but rather that optimisation processes with simple targets can produce vast complexity (machine intelligence, space travel, nanotechnology, etc).

Complexity is not necessarily a goal in itself. People want a complex future because we value many different things, and attempting to implement those values all at once leads to a lot of complexity. For instance, we value novelty, and novelty is more common in a complex world, so we generate complexity as an instrumental goal toward the achievement of novelty.

The fact that paperclip maximizers would build big, cool machines does not make a future full of them almost as interesting as a civilization full of intelligences with human-like values. Big cool machines are not nearly as interesting as the things people do, and I say that as someone who finds big cool machines far more interesting than the average person.

Failure to address your other points is not a sign of moral weakness - it just doesn't look as though the discussion is worth my time.

My other points are the core of my objection to your views. Besides, it would take like, ten seconds to write "I wouldn't torture children to increase the entropy levels"; I think that, at least, would be worth your time. Looking at your website, particularly your essay on Nietzscheanism, I think I see the wrong turn you made in your thought processes.

When you discuss W. D. Hamilton you state, quite correctly, that:

Hamilton has suggested that the best way for selfish individuals to fool everyone into thinking that they are nice is to actually believe it themselves (and practice a sort of hypocritical double-think to either self-justify or forget about any non-nice behaviour). ... Here, Hamilton is suggesting that merely pretending to be a selfless altruist is not good enough - you actually have to believe it yourself to avoid being detected by all the smart psychologists in the rest of society - since they are experts in looking for signs of selfishness.

You then go on to argue that in the more transparent future such self-deception will be impossible and people will be forced to become proud Nietzscheans. You say:

Once humanity becomes a little bit more enlightened, things like recognising your nature and aspiring to fulfill the potential of your genes may not be regarded in such a negative light.

Your problem is that you didn't take the implications of Hamilton's work far enough. There's an even more efficient way to convince people you are an altruist than self-deception. Actually be an altruist! Human beings are not closet IGF maximizers tricking ourselves into thinking we are altruists. We really are altruistic! Being an altruist to the core might harm your IGF occasionally, but it also makes you so trustworthy to potential allies that the IGF gain is usually a net positive.

Now, why then do people do so many nasty things if we evolved to be genuine altruists? Well, evolution, being the amoral monster it is, metaphorically "realized" that being an altruist all the time might decrease our IGF, so it metaphorically "cursed" us with akrasia and other ego-dystonic mental health problems that prevent us from fulfilling our altruistic potential. Self-deception, in this account, does not exist to make us think we're altruists when we're really IGF maximizers; it exists to prevent us from recognizing our akrasia and fighting it.

This theory has much more predictive power than your self-deception theory; it explains things like why there is a correlation between conscientiousness (willpower) and positive behavior. But it also has implications for the moral positions you take. If humans evolved to cherish values like altruism for their own sake (and be sabotaged from achieving them by akrasia), rather than to maximize IGF and deceive ourselves about it, then it is a very bad thing if those values are destroyed and replaced by something selfish and nasty like what you call "Nietzscheanism".

Replies from: timtyler
comment by timtyler · 2012-06-21T23:33:12.679Z · LW(p) · GW(p)

Your problem is that you didn't take the implications of Hamilton's work far enough.

I do say in my essay: "I think Hamilton's points are good ones".

There's an even more efficient way to convince people you are an altruist than self-deception. Actually be an altruist! Human beings are not closet IGF maximizers tricking ourselves into thinking we are altruists. We really are altruistic! Being an altruist to the core might harm your IGF occasionally, but it also makes you so trustworthy to potential allies that the IGF gain is usually a net positive.

You need to look up "altruism" - since you are not using the term properly. An "altruist", by definition, is an agent that takes a fitness hit for some other agent with no hope of direct or indirect repayment. You can't argue that altruists exhibit a net fitness gain - unless you are doing fancy footwork with your definitions of "fitness".

Your account of human moral hypocrisy doesn't look significantly different from mine to me. However, you don't capture my own position - which may help to explain your perceived difference. I don't think most humans are "really IGF maximizers". Instead, they are victims of memetic hijacking. They do reap some IGF gains, though - witness the 7 billion humans.

I find your long sequence of arguments that I am mistaken on this issue to be tedious and patronising. I don't share your values is all. Big deal: rarely do two humans share the same values.

comment by timtyler · 2012-06-15T22:13:16.536Z · LW(p) · GW(p)

When Schneider and Sagan say they have found life's purpose what they are saying is, "We pretended that the laws of physics were a person with a utility function and then deduced what that make-believe utility function was based on how the laws of physics caused life to develop."

When biologists say "the purpose of a nose is smelling things" you don't have to personify mother nature to make sense of what they mean. Personifying the organism is often enough. Since the organism may not be so very different from a person, this is often an easier step.

Replies from: Ghatanathoah
comment by Ghatanathoah · 2012-06-18T03:50:42.135Z · LW(p) · GW(p)

When biologists say "the purpose of a nose is smelling things" you don't have to personify mother nature to make sense of what they mean. Personifying the organism is often enough.

That doesn't change the fact that personification is a way to help people think about reality more easily at the expense of accurately describing it. Noses don't literally have a purpose. It's just that organisms that are good at smelling things tend to reproduce more.

The problem with Schneider and Sagan is that they confound this metaphorical meaning of the word purpose (the utility function of a personified entity) with a different meaning (ideals to live your life around). Hence their second book makes an absurd statement* which, when you unpack the word "purpose", basically says "knowing that decreasing entropy gradients is a major reason life arose will give you ideals to live your life around." That's ridiculous.

*To be fair that statement was a cover blurb, so it's possible that it was written by the publisher, not Schneider and Sagan.

comment by timtyler · 2012-06-15T00:03:01.347Z · LW(p) · GW(p)

Another example: currently, researchers at ITER in France are working on an enormous fusion reactor, to allow us to accelerate the conversion of order into entropy still further.

This is trivially false: the reason researchers are working on a fusion reactor is to secure human beings cheap renewable energy to have more fun with. The fact that it increases entropy is a side-effect. The consequentialist human minds do not foresee a future with more entropy and take action in order to secure that future. They foresee a future where humans are using cheap energy to have more fun and take actions to secure that future. The entropy increase is an unfortunate, but acceptable side effect.

This line of reasoning is intuitive, but, I believe, wrong. Destroying energy gradients is actively selected for in lots of ways. For example, it actively deprives competitors of resources. Organisms compete to dissipate sources of order by reaching them quickly and eliminating them before others can. The picture of entropy as an inconvenient side effect seems attractive initially, but doesn't withstand close inspection.

I don't deny that properly functioning brains act like hedonic maximisers. Hedonic maximisation is a much weaker explanatory principle than entropy maximisation, though. The latter explains why water flows downhill. Hedonic maximisation is a narrow and weak idea - by comparison.

comment by timtyler · 2012-06-15T00:12:03.536Z · LW(p) · GW(p)

"Fun" - if we are trying to treat the concept seriously - is better characterised as the proxy that brains use for the inclusive fitness of their associated organism.

No it isn't. Brains don't care about inclusive genetic fitness. At all. They never have. If you want evidence for that, note the fact that humans do things like use condoms. Also note that the growth of the world's population is slowing and will probably stop by the end of the 21st century if trends continue.

You are misunderstanding me. Pleasure is literally evolution's way of getting organisms to do things that help them increase their inclusive fitness. This idea is true, and is in no way refuted by condoms or the demographic transition.

Replies from: Ghatanathoah
comment by Ghatanathoah · 2012-06-18T04:46:28.920Z · LW(p) · GW(p)

You are misunderstanding me. Pleasure is literally evolution's way of getting organisms to do things that help them increase their inclusive fitness.

It is evolution's (metaphorical) way. When you said that brains use it as a proxy for genetic fitness, that gave the impression that you thought brains literally cared about fitness and were optimizing for it.

comment by timtyler · 2012-06-15T00:19:59.742Z · LW(p) · GW(p)

Strictly speaking, evolution is just a description of a series of trends. Since human minds are bad at modeling trends, but good at modeling other consequentialists, sometimes it's useful to pretend that evolution is a consequentialist with "goals" and a "utility function" to help people understand it. It's less scientifically accurate than modeling evolution as a series of trends, but it makes up for it by being easier for a human brain to compute. The problem is that, while most scientists understand this, there are some people who misinterpret this to mean that evolution literally has goals, desires, and utility functions. You appear to be one of these people.

Feel free to substitute "maximisation" terminology if my preferred lingo causes you conceptual problems. Selfishness, progress and optimisation can all be "cashed out" in more long-winded terms. Remember: teleonomy is just teleology in new clothes.

comment by timtyler · 2012-06-15T00:22:50.655Z · LW(p) · GW(p)

We had better talk about "optimization" then, or we will talk past each other.

Optimization has the same problem. Optimization literally refers to a consequentialist creature using its future forecasting abilities to determine how an object or meme would better suit its goals and altering that thing accordingly.

Nonsense. Look it up.

comment by Ghatanathoah · 2012-10-12T07:16:27.239Z · LW(p) · GW(p)

Now this is probably not exactly how our current emotional circuitry of boredom works. That, I expect, would be hardwired relative to various sensory-level definitions of predictability, surprisingness, repetition, attentional salience, and perceived effortfulness.

It is interesting to read this after reading Yvain's classic essay on wanting, liking, and approving. In Yvain's terms, the value of boredom could be construed as an instance where our "wanting," "liking," and "approving" systems are in relative harmony.

Couched in Yvain's terms, Eliezer praises boredom because he and most other people approve very strongly of the things that boredom motivates us to do, such as exploring, engaging in personal growth, learning new things, seeking out new experiences, etc. In addition, the "wanting" and "liking" aspects of our characters also motivate us to engage in these positive behaviors, because as those systems become accustomed to certain experiences, they make us like them less and want to do them less frequently. This means that in addition to approving of seeking out new experiences, we also want to and like to.

But this is Fun Theory, so we are mainly concerned with how boredom should work in the long run.

Based on Yvain's work, it would seem that the way to create the improved boredom of the future would be to greatly enhance the power of the "approving" parts of our minds, so that they can more easily override the "wanting" and "liking" parts. If done properly, this would give us free rein to improve the "liking" parts of our minds so that we can feel wonderful and happy all the time without fear of losing our motivation to do awesome things with our lives. The people of the future could engage in all sorts of complex and challenging activities and feel like orgasmium while doing them.

This reminds me a little of how Eliezer speculates elsewhere that we might ultimately find a way to improve pain so that the more negative aspects of it are removed without getting rid of other facets of it that we approve of, such as enjoying Serious Stories.

comment by waveman · 2016-07-17T01:37:20.220Z · LW(p) · GW(p)

EY: (Do human beings get less easily bored as we grow older, more tolerant of repetition, because any further discoveries are less valuable, because we have less time left to exploit them?)

Anonymous: For a 3 year old, every day is like your first day in Paris.

Sample of one, but my personal experience in my seventh decade is that it just gets damn hard to find interesting new things, especially easily accessible interesting new things.

My tolerance for boredom actually seems far lower than it used to be, but I have to work a lot harder to access good new stuff.