Emotional Involvement

post by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-01-06T22:23:24.000Z · LW · GW · Legacy · 53 comments

Followup to: Evolutionary Psychology, Thou Art Godshatter, Existential Angst Factory

Can your emotions get involved in a video game?  Yes, but not much.  Whatever sympathetic echo of triumph you experience on destroying the Evil Empire in a video game, it's probably not remotely close to the feeling of triumph you'd get from saving the world in real life.  I've played video games powerful enough to bring tears to my eyes, but they still aren't as powerful as the feeling of significantly helping just one single real human being.

Because when the video game is finished, and you put it away, the events within the game have no long-term consequences.

Maybe if you had a major epiphany while playing...  But even then, only your thoughts would matter; the mere fact that you saved the world, inside the game, wouldn't count toward anything in the continuing story of your life.

Thus fails the Utopia of playing lots of really cool video games forever.  Even if the games are difficult, novel, and sensual, this is still the idiom of life chopped up into a series of disconnected episodes with no lasting consequences.  A life in which equality of consequences is forcefully ensured, or in which little is at stake because all desires are instantly fulfilled without individual work—these likewise will appear as flawed Utopias of dispassion and angst.  "Rich people with nothing to do" syndrome.  A life of disconnected episodes and unimportant consequences is a life of weak passions, of emotional uninvolvement.

Our emotions, for all the obvious evolutionary reasons, tend to associate to events that had major reproductive consequences in the ancestral environment, and to invoke the strongest passions for events with the biggest consequences:

Falling in love... birthing a child... finding food when you're starving... getting wounded... being chased by a tiger... your child being chased by a tiger... finally killing a hated enemy...

Our life stories are not now, and will not be, what they once were.

If one is to be conservative in the short run about changing minds, then one can get at least some mileage from changing the environment.  A windowless office filled with highly repetitive, non-novel challenges isn't any more conducive to emotional involvement than video games; it may be part of real life, but it's a very flat part.  The occasional exciting global economic crash that you had no personal control over does not particularly modify this observation.

But we don't want to go back to the original savanna, the one where you got a leg chewed off and then starved to death once you couldn't walk.  There are things we care about tremendously, in the sense of hating them so much that we want to drive their frequency down to zero, not in the most interesting way but just as quickly as possible, whatever the means.  If you drive the thing an emotion binds to down to zero, where is the emotion after that?

And there are emotions we might want to think twice about keeping, in the long run.  Does racial prejudice accomplish anything worthwhile?  I pick this as a target, not because it's a convenient whipping boy, but because unlike e.g. "boredom" it's actually pretty hard to think of a reason transhumans would want to keep this neural circuitry around.  Readers who take this as a challenge are strongly advised to remember that the point of the question is not to show off how clever and counterintuitive you can be.

But if you lose emotions without replacing them, whether by changing minds, or by changing life stories, then the world gets a little less involving each time; there's that much less material for passion.  And your mind and your life become that much simpler, perhaps, because there are fewer forces at work—maybe even threatening to collapse you into an expected pleasure maximizer.  If you don't replace what is removed.

In the long run, if humankind is to make a new life for itself...

We, and our descendants, will need some new emotions.

This is the aspect of self-modification in which one must above all take care—modifying your goals.  Whatever you want becomes more likely to happen; to ask what we ought to make ourselves want is to ask what the future should be.

Add emotions at random—bind positive reinforcers or negative reinforcers to random situations and ways the world could be—and you'll just end up doing what is prime instead of what is good.  So adding a bunch of random emotions does not seem like the way to go.

Asking what happens often, and binding happy emotions to that, so as to increase happiness—or asking what seems easy, and binding happy emotions to that—making isolated video games artificially more emotionally involving, for example—

At that point, it seems to me, you've pretty much given up on eudaimonia and moved to maximizing happiness; you might as well replace brains with pleasure centers, and civilizations with hedonium plasma.

I'd suggest, rather, that one start with the idea of new major events in a transhuman life, and then bind emotions to those major events and the sub-events that surround them.  What sort of major events might a transhuman life embrace?  Well, this is the point at which I usually stop speculating.  "Science!  They should be excited by science!" is a bit too obvious and, I dare say, "nerdy" an answer, as is "Math!" or "Money!"  (Money is just our civilization's equivalent of expected utilon balancing anyway.)  Creating a child—as in my favored saying, "If you can't design an intelligent being from scratch, you're not old enough to have kids"—is one candidate for a major transhuman life event, and anything you had to do along the way to creating a child would be a candidate for new emotions.  This might or might not have anything to do with sex—though I find that thought appealing, being something of a traditionalist.  All sorts of interpersonal emotions carry over for as far as my own human eyes can see—the joy of making allies, say; interpersonal emotions get more complex (and challenging) along with the people, which makes them an even richer source of future fun.  Falling in love?  Well, it's not as if we're trying to construct the Future out of anything other than our preferences—so do you want that to carry over?

But again—this is usually the point at which I stop speculating.  It's hard enough to visualize human Eutopias, let alone transhuman ones.

The essential idiom I'm suggesting is something akin to how evolution gave humans lots of local reinforcers for things that, in the ancestral environment, related to evolution's overarching goal of inclusive reproductive fitness.  Today, office work might be highly relevant to someone's sustenance; but even leaving aside the lack of high challenge and complex novelty, and the fact that it's not sensually involving because we don't have native brainware to support the domain, office work is not emotionally involving, because office work wasn't ancestrally relevant.  If office work had been around for millions of years, we'd find it a little less hateful, and experience a little more triumph on filling out a form, one suspects.

Now you might run away shrieking from the dystopia I've just depicted—but that's because you don't see office work as eudaimonic in the first place, one suspects, and because of the lack of high challenge and complex novelty involved.  In an "absolute" sense, office work would seem somewhat less tedious than gathering fruits and eating them.

But the idea isn't necessarily to have fun doing office work, just as the idea isn't necessarily to have your emotions activate for video games instead of real life.

The idea is that once you construct an existence / life story that seems to make sense, then it's all right to bind emotions to the parts of that story, with strength proportional to their long-term impact.  The anomie of today's world, where we simultaneously (a) engage in office work and (b) lack any passion in it, does not need to carry over: you should either fix one of those problems, or the other.

On a higher, more abstract level, this carries over the idiom of reinforcement over instrumental correlates of terminal values.  In principle, this is something that a purer optimization process wouldn't do.  You need neither happiness nor sadness to maximize expected utility.  You only need to know which actions result in which consequences, and update that pure probability distribution as you learn through observation; something akin to "reinforcement" falls out of this, but without the risk of losing purposes, without any pleasure or pain.  An agent like this is simpler than a human and more powerful—if you think that your emotions give you a supernatural advantage in optimization, you've entirely failed to understand the math of this domain.  For a pure optimizer, the "advantage" of starting out with one more emotion bound to instrumental events is like being told one more abstract belief about which policies maximize expected utility, except that the belief is very hard to update based on further experience.
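
As a side note on mechanism, the agent described above can be made concrete in a few lines.  What follows is a minimal sketch of my own, not anything specified in the post; the class, the toy actions, and the utility numbers are all hypothetical.  The agent keeps observation counts, turns them into a probability distribution over consequences, and picks whatever maximizes expected utility; something like "reinforcement" falls out of pure counting, with no pleasure or pain anywhere in the loop.

    class ExpectedUtilityMaximizer:
        def __init__(self, actions, outcomes, utility):
            self.actions = list(actions)
            self.utility = dict(utility)  # outcome -> fixed terminal value
            # Uniform pseudo-counts: a crude prior over P(outcome | action).
            self.counts = {a: {o: 1 for o in outcomes} for a in self.actions}

        def expected_utility(self, action):
            counts = self.counts[action]
            total = sum(counts.values())
            # EU(a) = sum over outcomes of P(outcome | action) * U(outcome).
            return sum((n / total) * self.utility[o] for o, n in counts.items())

        def choose(self):
            # No felt triumph on a good pick, no pain on a bad one.
            return max(self.actions, key=self.expected_utility)

        def observe(self, action, outcome):
            # "Reinforcement" is just updating: the distribution shifts
            # toward what actually happened, without any reward signal.
            self.counts[action][outcome] += 1

    agent = ExpectedUtilityMaximizer(
        actions=["hunt", "gather"],
        outcomes=["fed", "hungry"],
        utility={"fed": 1.0, "hungry": 0.0},
    )
    agent.observe("gather", "fed")  # pure observation, nothing enjoyed
    print(agent.choose())           # "gather"

In these terms, starting out with one more emotion bound to an instrumental event is like initializing one of this agent's counts with a large pseudo-count: one more built-in belief about which actions pay off, and one that further observations can only slowly wash out.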

But it does not seem to me that a mind which has the most value is the same kind of mind that most efficiently optimizes values outside it.  The interior of a true expected utility maximizer might be pretty boring, and I even suspect that you can build them not to be sentient.

For as far as my human eyes can see, I don't know what kind of mind I should value, if that mind lacks pleasure and happiness and emotion in the everyday events of its life.  Bearing in mind that we are constructing this Future using our own preferences, not having it handed to us by some inscrutable external author.

If there's some better way of being (not just doing) that stands somewhere outside this, I have not yet understood it well enough to prefer it.  But if so, then all this discussion of emotion would be as moot as it would be for an expected utility maximizer—one which was not valued at all for itself, but only valued for that which it maximized.

It's just hard to see why we would want to become something like that, bearing in mind that morality is not an inscrutable light handing down awful edicts from somewhere outside us.

At any rate—the hell of a life of disconnected episodes, where your actions don't connect strongly to anything you strongly care about, and nothing that you do all day invokes any passion—this angst seems avertible, however often it pops up in poorly written Utopias.

53 comments

Comments sorted by oldest first, as this post is from before comment nesting was available (around 2009-02-27).

comment by infotropism · 2009-01-06T23:20:16.000Z · LW(p) · GW(p)

Some argue that death gives meaning to life, and that life without death would be duller...

Well, I've often thought of how my life would be easier if I could regard it with the same feelings I'd regard a video game with. Being too involved can make things stressful and unpleasant, and prevent action; but being relaxed and knowing that the consequences of my actions won't mean the difference between life and death - would that add meaning to my life? What amount of challenge and responsibility do we need?

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-01-07T00:51:04.000Z · LW(p) · GW(p)

Now you're asking about pain, which is quite a different thing from passion. And I also remark that many religious dupes lived their lives without thinking themselves in danger of true death, and don't seem to have suffered much from it in their daily passions.

comment by TGGP4 · 2009-01-07T04:21:03.000Z · LW(p) · GW(p)

Asking what happens often, and binding happy emotions to that, so as to increase happiness - or asking what seems easy, and binding happy emotions to that - making isolated video games artificially more emotionally involving, for example -

At that point, it seems to me, you've pretty much given up on eudaimonia and moved to maximizing happiness; you might as well replace brains with pleasure centers, and civilizations with hedonium plasma.

Well, why not? What makes changing the external stimulus more worthwhile than the subjective experience of it? It can't be that you hold the emotions evolution gave us as sacred, or you wouldn't want to eliminate racial prejudice.

comment by PJ_Eby · 2009-01-07T06:09:31.000Z · LW(p) · GW(p)

I'm surprised you think that removing negative emotions would remove depth from life.

In my personal experience, eliminating negative emotional responses increases the depth of life experience, because of the richer opportunity to experience positive emotions in the same circumstance.

comment by pnrjulius · 2012-06-06T23:23:17.311Z · LW(p) · GW(p)

But we have those negative emotions for a reason (an evolutionary reason). Are you so sure you understand the mechanism that you're prepared to junk that piece entirely?

comment by nazgulnarsil3 · 2009-01-07T06:41:24.000Z · LW(p) · GW(p)

So would you be for or against an AI that inserted us into an experience machine programmed to provide a life of maximum self-expression, without our knowledge?

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-01-07T06:59:31.000Z · LW(p) · GW(p)

NGN, definitely Against.

comment by steven · 2009-01-07T08:48:28.000Z · LW(p) · GW(p)

For alliances to make sense it seems to me there have to be conflicts; do you expect future people to get in each other's way a lot? I guess people could have conflicting preferences about what the whole universe should look like that couldn't be satisfied in just their own corner, but I also guess that this sort of issue would be only a small percentage of what people cared about.

comment by steven · 2009-01-07T08:51:51.000Z · LW(p) · GW(p)

I think putting it as "eudaimonia vs simple wireheading" is kind of rhetorical; I agree eudaimonia is better than complex happy mind states that don't correspond to the outside world, but I think complex happy mind states that don't correspond to the outside world are a lot better than simple wireheading.

comment by pdf23ds · 2009-01-07T10:28:09.000Z · LW(p) · GW(p)

For as far as my human eyes can see, I don't know what kind of mind I should value, if that mind lacks pleasure and happiness and emotion in the everyday events of its life. ... If there's some better way of being (not just doing) that stands somewhere outside this, I have not yet understood it well enough to prefer it.

There are some pleasures I would already readily give up, even lacking the perspective of a transhuman, like the pleasures of the taste/smell of food and being full, in preference for a more sensible system of acquiring energy and nutrients. This seems to be an instance of me standing outside my own system and seeing a shortcoming--a part of my emotional system that is thoroughly archaic and counterproductive, despite the real pleasure it brings--leading to a real preference to change what I find pleasurable. I imagine other humans will prefer to keep these emotions (as is their choice), but they will probably outgrow them as they mature as transhumans, just as I will likely outgrow many more of mine.

comment by krfhwe · 2009-01-07T10:37:01.000Z · LW(p) · GW(p)

You can have alliances without conflict - they're called clubs.

comment by Vizikahn2 · 2009-01-07T13:18:05.000Z · LW(p) · GW(p)

Eliezer: "Thus fails the Utopia of playing lots of really cool video games forever... the hell of a life of disconnected episodes"

Better to reign in hell, no?

comment by gaffa2 · 2009-01-07T15:09:16.000Z · LW(p) · GW(p)

Eliezer, if you suddenly woke up in a lab and people in white coats told you "We're so sorry, the experience machine has malfunctioned" would you want them to fix the problem and re-connect you to this virtual reality where you are an intelligent AI researcher with a popular blog and a quest to save the world, or would you just pull the wires out and return to reality?

comment by jb5 · 2009-01-07T15:35:08.000Z · LW(p) · GW(p)

Some quick observations:

  1. Playing MMOs gives me a sense of emotional accomplishment that single-player videogames do not.
  2. Playing videogames competitively against (or in teams with) other people is also far more satisfying.
  3. I have found my purpose in life - do my part to make the human race and (to a lesser extent) the Earth essentially immortal (i.e. catastrophe-proof and generally self-sustaining).

#3 is such a big problem that I can't hope to attack it directly - I simply focus on improving productivity and technical innovation in such ways as I can, and I spread the gospel of 'the asteroid will not care how clean the Earth's water is' to people who consider technical degradation as an acceptable strategy for human survival.

(For example, in the remake of The Day The Earth Stood Still, Klaatu claims that 'if humanity dies, the Earth survives', which is such a pile of horsecrap that I still shake my head in wonder at the 'faux-long-term' thinking it represents.)

Now, let's say that in the future, #3 is solved - humanity is essentially immortal, as are the creatures of the Earth. What then? Well, in that case, find a way to remove the 'essentially' from that equation. After that? Dunno. One thought would be: Find a way to edit your memories and place a copy of yourself (or some limited version of yourself) in a simulation of the Earth in the far past, so you can see how you'd live, love and learn in a more primitive time.

comment by General_Optimizer · 2009-01-07T16:44:11.000Z · LW(p) · GW(p)

I guess I'm sort of living the life of an expected utility maximizer. All I do all day long, year in year out, is optimize. Every night I go to sleep having optimized more problems out of existence, or having learned about ways that fail. Some days I find more problems. I don't believe I can ever be done with it, because I've chosen problems with high challenges and complex novelty.

I've optimized my emotional landscape: it's barren when it's not filled with the radiant joy of successful optimization. I don't care how my mind or body feels. It's all in the service of optimization. Sleep, defecate, gym, eat, study, optimize, study, optimize. Repeat forever. Adjust parameters so that I always feel at peak performance. What about entertainment? Some time off? I enjoy some comedy, music, and perhaps the occasional video at the gym. No vacations. That's my life. All of it. I basically never meet anybody in RL, because anybody with thoughts relevant to my work doesn't live nearby.

In case you think that I'm missing out on some essential "human experience" and wasting my life with nerdy stuff, I disagree. I think I'm living one of the best possible lives ever lived - and I'm including the whole universe. There are a few billion thoroughly human lives being lived at the moment so I think that part of the experience space is pretty well covered already and needs no help from me. The part that is not well covered is my experience space. In that space, I find thoughts never thought before. I find deeds never done before. I never could get a kick from anything else, except maybe from creating an AI - but that's for someone else to explore (I hear it's not too crowded in there). I have no need to even briefly visit any other experience space as long as my experience space is brimming with novel and challenging experiences.

The meaning of life, as I see it:

In everything, achieve 99.99999999999999999999999999999999999999999999999999999999999...%

comment by Tim_Tyler · 2009-01-07T16:59:14.000Z · LW(p) · GW(p)

General Optimizer, you seem like a prospect for responding to this question: "in the interests of transparency, would anyone else like to share what they think their utility function is?"

comment by billswift · 2009-01-07T18:16:40.000Z · LW(p) · GW(p)

Has anyone run an experiment with someone having their pleasure center stimulated regularly for a substantial time and asked what they experienced? I was wondering, because of the tests where someone's arm was stimulated to move and they reported that they did it on purpose, and all of the other results where someone did something because they were manipulated into it and then reported why they chose to do it. Has anyone run the test to see if a "wirehead" would feel and report "complex happy mind states"?

comment by Marshall · 2009-01-07T19:10:47.000Z · LW(p) · GW(p)

Tim: "would anyone else like to share what they think their utility function is?" I seem to have missed this question the first time around - and it looks like a good question. My timid answer is thus: to maximise the quality of (my) time. This is no trivial task and requires a balance between achieving things in the world, acquiring new information (thanks OB), and achieving new things in the world, peppered with a little bit of good biological feelings. Repeat.

comment by Richard10 · 2009-01-07T19:17:06.000Z · LW(p) · GW(p)

Racial prejudice encourages large-scale cooperation. Racially prejudiced individuals will happily sacrifice their personal well-being for the good of the (racial) group, even if the group is much larger than the Dunbar number. While this can certainly be useful under ordinary conditions of nonviolent competition, it becomes vital when the group faces an existential threat as a group.

A rogue paperclipper in a mostly Friendly world can probably only be stopped by racial prejudice--to a rational creature, it's always easier to feed him your neighbor than it is to fight him.

comment by Patrick_(orthonormal) · 2009-01-07T20:02:48.000Z · LW(p) · GW(p)

A rogue paperclipper in a mostly Friendly world can probably only be stopped by racial prejudice--to a rational creature, it's always easier to feed him your neighbor than it is to fight him.

A couple of problems with this statement, as I see it:

  1. The word "only". Forget five minutes—think for five seconds about Third Alternatives. At the very least, wouldn't an emotion for human-favoritism serve the goal better than an emotion for race-favoritism? Then everyone could cooperate more fully, not just each race by itself.

You could be using "racial prejudice" to mean "species prejudice" or something even wider, but that's not what the question's about. Your argument gives no reason for maintaining the current brain architecture, which creates these divisions of allegiance within the normal human race.

  2. Rational agents are doomed to fail because they won't cooperate enough? I stand with Eliezer: rational agents should WIN. If the inevitable result of noncooperation is eventual destruction, genuinely rational agents WILL find ways to cooperate; the Prisoner's Dilemma doesn't operate within every conceivable cooperative enterprise.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-01-07T20:03:08.000Z · LW(p) · GW(p)

Gaffa: PULL OUT THE WIRES.

comment by Tim_Tyler · 2009-01-07T20:13:52.000Z · LW(p) · GW(p)

As I recall, Arnold's character faced pretty-much this dilemma in Total Recall.

There's a broadly-similar episode of Buffy the Vampire Slayer.

Both characters wind up going on with their mission of saving the planet.

comment by General_Optimizer · 2009-01-07T20:54:22.000Z · LW(p) · GW(p)

I like well-defined problems with a definite solved-state. I seek out problems that lie within my capacities. But most of the problems I've (often unsuccessfully) dealt with earlier have been beyond my then inadequate skills. After failure I've put them on the back burner, with the idea of revisiting them perhaps in a decade or two, to see if I'm skilled enough by then.

Part of the problem of problem-solving is of course acquiring the requisite skills without wasting time on skills you have no use for, ever. Schools, I'm looking at you with a disappointed sigh. I estimate that about nine-tenths of my time at school could have been spent doing something more useful. I hate to think about it. These days I try to compensate by being hyper-efficient.

You really don't want to acquire any more skills than the problems require; if your goal is to play good rock music, better not hang around people with bow ties. No matter how much knowledge and experience you have, if it's not applicable, it's worse than useless; it has wasted your time and may have sidetracked you for years - even decades. Some persistent scientific paradigms are merely social clubs with social conventions, i.e. not science at all - wonderfully entertaining ones, too, unless you're actually looking for the inconvenient truth, in which case they become an abomination, and you, an outcast.

Most things in the human sphere of affairs are unfortunately merely time sinks - though undeniably pleasurable ones, us being evolved for that sort of thing. In terms of problem solving, having a social life is an efficiency killer. Efficiency matters until this life extension business really gets going.

I actually don't enjoy the act of problem solving that much, because the process is mostly tedious and rarely rewarding. An endless number of dead-ends await. Any progress is hard won. At best I fail to experience being physically present and don't notice the passage of time - or the fact that I need to eat and take bathroom breaks - which was easily solved with a few timed beeps. To become pure thought for extended periods of time, I've found it helps to have no natural light, a silent, air-conditioned space, no net, no phones.

I only like the solutions. The moments of 1. Most moments are 0. Sometimes there's a -1, when I've mistaken a 0 for a 1 - a moment of "oh well, back to the drawing board". A surprisingly large number of the brain states generated and utilized during the process are one-time-use-only, at least so they seem to me. Maybe it's just my failure to generalize.

I prioritize my to-do list by long-term impact, preferably indefinite utility. At the top of the list I find problems of the long-standing, hard kind which require anything from years to decades of effort and deep specialization embedded within a network of cross-disciplinary knowledge. For this reason, some problems I identify could go unrecognized by the world community of experts even if explicitly pointed out. I suspect that's not awfully rare; there probably are people here who possess such incommunicable skills and knowledge. If you're part of a handful of top people in your field, you're actually having a rich and varied professional and social life compared to those at the vanguard of knowledge, the ones who write the books. (I'm not saying I'm one, but I'm working on becoming one.)

The 99.999...% means that I don't like partial solutions. I like to go all the way. I guess it's simply the brief nice feeling I get when I can provably check a task as Done - for eternity - that I'm after, after all. The Smile. The widest grin in the world. That's the wire I'm trying to attach to my head, again and again. I've had it on a few times, but it just won't stick.

Back to the business of reattaching it.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-01-07T21:17:37.000Z · LW(p) · GW(p)

Tim:

(1) Logical fallacy of generalization from fictional evidence.

(2) Having read the plot of that episode, I have to say that the 'reality' there is pretty unambiguous from Buffy's epistemic standpoint. She did the wrong thing.

(3) "TECH SUPPORT! TECH SUPPORT!" (Or as in the three-times-better foreign-language version, Abre Los Ojos, "I want to wake up! I want to wake up!")

comment by Kenny · 2013-08-06T18:57:38.185Z · LW(p) · GW(p)

I agree with everything except that Abre Los Ojos is better than Vanilla Sky. In the former, the acting seemed so wooden, particularly of the male lead and the actor playing his best friend, and the production was too cheap for me to readily suspend disbelief. [I saw Vanilla Sky first.]

comment by Peter_de_Blanc · 2009-01-07T21:54:27.000Z · LW(p) · GW(p)

Eliezer:

There's no logical fallacy because Tim did not make any generalization.

comment by gaffa2 · 2009-01-07T22:08:17.000Z · LW(p) · GW(p)

RE: Eliezer: "PULL OUT THE WIRES"

Ok.

If anyone is interested, one of the people from the Experimental Philosophy crew did a study on people's (people with no philosophical training) intuitions about experience machines, using a number of slightly different scenarios (including this "backward-looking experience machine"). The author's interpretation of the results is that people's intuitions are largely an effect of status quo bias: they don't care if they're in reality or not, they care about maintaining the status quo.

the paper: http://homepage.uab.edu/angner/SWB/DeBrigard.pdf

comment by Kaj_Sotala · 2009-01-07T22:37:24.000Z · LW(p) · GW(p)

General Optimizer:

So... what exactly do you optimize? Problems, but what problems? One could describe any kind of life as "only optimizing problems", if you chose the problems right.

comment by Joe6 · 2009-01-08T00:26:52.000Z · LW(p) · GW(p)

This discussion reminds me of "Infinite Jest" by David Foster Wallace. One of the themes of the book is something like "be careful what you choose to love", which could be re-phrased "be careful what emotions you bind where".

comment by General_Optimizer · 2009-01-08T00:46:23.000Z · LW(p) · GW(p)

Theoretical thoughts and experimental deeds involving computation, physics, and electronics - The Stuff Dreams Undreamt Are Made Of. :)

I've never learned to think "That looks awfully hard... I don't think I can do it in a year, not even a decade, so I won't even give it a try." For many people, hearing that something is hard equals "Don't bother, it's been attempted by people far smarter than you with far more resources; it's impossible." They have resigned themselves to the "fact" that it's not going to be solved during their lifetime. Not by them, not by anyone. That may mean that not that many people are even trying to solve the "hard" problems. (Will the AGI-during-our-lifetime-for-sure crowd please raise their hands?) I target such problems specifically "not because they're easy, but because they're hard". I don't care if they're called hard. I need to find out for myself why they're called hard. Maybe I don't find them so hard. At least not impossible-hard. Maybe I just find them fascinatingly challenging.

My brain just won't let go of problems it finds interesting and solvable when it's offered no other reason than "too hard to even try". It has to get seriously stuck for years to give up. But even then, if it finds no logical impossibilities, just lack of skills and knowledge, it will not let go entirely. It mutters under its breath, "I'll be back."

It's like there's a meat hook stuck in it with a cord attached to the solution far above, and the only way to pry it off is to pull myself inch by inch to the solution, which hopefully does not lie beyond the end of my health span. I can never solve everything I'd like to, but if I do have the time to solve something, I want it Solved for good.

The trip may be a barren landscape of few sights and sounds, but it's better than the alternative of "normal life", whatever that means by your cultural standards, which I find simply garishly decorated emptiness. I don't particularly enjoy the trip, I enjoy the thought of the destination. It's the first and last thing on my mind every day. I savor each precious drop of knowledge that quenches my thirst in this endless desert of drudgery. Now I'm hanging by these multiple meat hooks and they won't let my feet touch the ground until I've climbed up and parachuted down - if I ever do choose to come down...

From the Manhattan Project, I learned that there's at least one valid, socially acceptable reason to grin, laugh, and cry uncontrollably, and that is scientific discovery. But sometimes, silence is all you can come up with when face to face with matters vastly bigger than you are.

comment by phane2 · 2009-01-08T07:04:17.000Z · LW(p) · GW(p)

Eliezer,

Your argument that people do not get emotionally attached "enough" to videogames is due, I think, to your oversimplification of what a "videogame" is. Not that I think you don't know the first thing about videogames (clearly, if you've been brought to tears from a game, you respect them at least somewhat). I think it's more that you're simplifying for the sake of argument and throwing out too much. Basically, what you're saying is that difficult, novel, and sensual experiences are not enough: they also have to "count." Our lives will be dispassionate without the "meaning" of "real" experience with lasting consequences, as opposed to games that don't matter. A few points to make:

  1. Humans get an enormous amount out of games already, especially competitive ones, and there's a lot to be said for self-improvement through games. There are many people whose golf performance is one of the most important things in their lives. Some people are the same way with Smash Brothers. Many of my close friends and I take games very seriously. I would consider gaming to be the most enriching thing I do. I don't think that's a sickness on my part, and I don't think I'd be having more fun if I made money doing it (or if I had to do it in order to keep civilization moving, or whatever other rubric of "lasting importance" you want to use).
  2. Doesn't your argument work equally well against basically all art? Writing, music, movies, anything? Hell, most of those are even less valuable since they're not interactive. Essentially, you're saying "why should you bother doing anything if it doesn't trigger your fundamental survival buttons that make you feel awesome?" Except that, people dedicate huge amounts of their lives to stuff like writing. And games, too. Will posthumans not create and enjoy art? Why not?
  3. In the posthuman world, what's left to matter? We already don't get chased by tigers unless we really want to, and presumably posthumans will have the choice to not eat unless they really want to. Even our stances on making children or upholding social relationships aren't so sacred that we wouldn't tweak them. In the end, staying tied to "biologically attractive" things seems no different than any other "bored rich person" hobby.
  4. The line between being a pleasure center and being a eudaimonic civilization participant seems dubious to me. On the one hand, a giant super-efficient orgasm-brain is not the most admirable thing I can imagine. But on the other, we're talking about a posthuman future in which the kinds of outcomes that would constitute "progress" are up for considerable debate. What kinds of things should we like? Some things are "admirable" to like, and others aren't, I guess. Should posthumans like food? Sex? Mathematics? Having bigger and awesomer brains? I don't know. No matter what you name, they all seem vulnerable to falling into an "orgasmic pit trap" from which there may be no return. Among all the things that posthumans might be interested in, only a few stand out (to me) as being too valuable to accept their rejection by our future selves. Among them is an interest in designing and appreciating experiences; upholding a culture to discuss what is beautiful, what is fun, what is aesthetically pleasing. This is what you've labeled "videogames." As you've said, it doesn't press our fundamental survival buttons. Is that enough to discredit it as making our lives worthwhile? Why does achieving greatness in this endeavor not "count?" Why isn't it a meaningful thing for intelligent, experiential beings to do?

I would appreciate your thoughts/comments.

comment by Ben_Jones · 2009-01-08T11:57:51.000Z · LW(p) · GW(p)

Thus fails the Utopia of playing lots of really cool video games forever.

Not convinced. Based on my experience of what people are like, from the moment games are immersive enough and we have the technology to plug in for good, the problem of 'no lasting consequences' will vanish for people who want it to. There are already plenty of people willing to plug into WoW for arbitrary amounts of time if they are able. Small increases in immersiveness and catheter technology will lead to large increases in uptime.

phane touches on something interesting just above. One shouldn't talk about video games or even VR as a special case; one should talk about non-magical-boundary sensory input and our reactions. I'm fully in agreement that you should YANK OUT THE WIRES, but I'm having trouble generalizing exactly why. Something to do with 'the more real your achievements the better'? Doesn't feel right. If this has come up implicitly in the Fun Theory series, apologies for not having spotted it.

Also, seconding Peter dB. Saying 'that reminds me of an episode where...' doesn't deserve a ticking-off, particularly following such posts as 'Use the try harder, Luke'. In fact, it can actually be useful to ground things when thinking abstractly, as long as you take care not to follow the fictional logic.

comment by Jon_R · 2009-01-08T12:44:55.000Z · LW(p) · GW(p)

Eliezer, why do you say Buffy made the wrong choice? I've not seen the episode, but I read the summary, and it seems to me that Buffy couldn't conclusively determine which world was real. But choosing to stay with the 'hospital' world would mean that, if she was wrong, her friends would die. Choosing to stay with the 'Sunnyvale' world would mean that, if she was wrong, she'd be hallucinating for the rest of her life. I admit it's a bit like Pascal's Wager, but it seems to me that picking Sunnyvale is more moral, unless you have a really good reason for thinking the 'hospital' world is actually correct.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-01-08T16:47:02.000Z · LW(p) · GW(p)

Jon, she's got a really good reason for thinking the 'hospital' world is actually correct. See Excluding the Supernatural.

Ben, good point on the fiction thing - I was being unfair. I guess the dangers of fictional evidence are somehow much more salient when someone uses it to make a point you disagree with.

Phane, the idea that competing on more or less arbitrary games gives you an open-ended way to increase your strength and plenty of lasting consequences from that increase and the interactions you have with other people, is, of course, a doctrine of the Competitive Conspiracy. I have Competitive sympathies, but I'm not sure you can sustain a life just on games; a little science (Bayesian Conspiracy) and art (Bardic Conspiracy) and politics (Cooperative Conspiracy) might be helpful as well, not to mention that other conspiracy that Yin's part of.

comment by Jon_R · 2009-01-08T21:07:11.000Z · LW(p) · GW(p)

The Sunnyvale world doesn't strictly require supernatural explanations -- you could posit vampires as being a subspecies of humanity, etc etc etc. But as you said in that post, it doesn't matter; we don't really care if vampires are the product of demons or mutated genes. We just care 'does there exist a monster that appears human, and likes to drink blood? does Buffy exhibit superior strength and reaction time that is useful for fighting said monsters?' The reality presented by the Sunnyvale world appears to answer 'yes' to these questions, while the reality presented by the hospital answers 'no'. It's something Buffy can look at and see; the question is WHICH set of sensory inputs to trust.

comment by Kaj_Sotala · 2009-01-08T22:02:05.000Z · LW(p) · GW(p)

General Optimizer:

That's a very interesting way of living (and thinking). Thank you for sharing.

comment by Richard_Hollerith2 · 2009-01-08T22:14:57.000Z · LW(p) · GW(p)

Buffy lives in Sunnydale, not Sunnyvale.

comment by Jon_R · 2009-01-09T00:49:21.000Z · LW(p) · GW(p)

Richard: Whichever.

comment by Abigail · 2009-01-09T13:16:18.000Z · LW(p) · GW(p)

I want to Breed, with the most attractive possible real mate. I want to bring up my children to be the best they can be, and for them, my partner, and me to continue to improve our ideas of what the Best is.

This raises the likelihood of perhaps permanent unhappiness for many people - and perhaps, because of this, the possibility of happiness and fulfilment in whatever wonderful future. Choices about how to spend one's time, and how, if at all, to improve oneself, arise from the central problem of breeding.

"Pull out the wires" - the person is in a state of Cognitive Dissonance, with perceptions of reality which conflict with all previous perceptions. I think it a strong possibility that the person would feel a complete crisis, having no way of trusting either the current perceptions or previous perceptions of reality. If I did not collapse into a shivering heap, I hope I too would say, "Pull out the wires" - if it IS real, I want to be there.

comment by General_Optimizer · 2009-01-10T03:04:20.000Z · LW(p) · GW(p)

I don't think it would be such a bad idea to have some prerequisites for breeding, such as:

• Self-reliance
• Rationality
• Non-violence
• No criminal record
• And the most important of all: having already given birth to a brain child... or a few. Just one useful thing that didn't exist before you were born. Just one. You can come up with one in a lifetime, can't you?

comment by Frank_Lantz · 2009-01-10T18:14:40.000Z · LW(p) · GW(p)

Games are aesthetic experiences, like dance, music, painting, or poetry. So a Utopia of "playing lots of cool videogames forever" is similar to one in which you do those other things forever. Aesthetic experiences need to exist alongside, in contrast to, and in concert with the other aspects of life, not as a replacement for them. Loving to dance doesn't mean you want to eliminate walking.

comment by rand2 · 2009-01-17T21:14:35.000Z · LW(p) · GW(p)

jb: I would also say that MMOs are more emotionally engaging because they don't go away when you turn them off.

"I'd suggest, rather, that one start with the idea of new major events in a transhuman life, and then bind emotions to those major events and the sub-events that surround them."

It's interesting that in videogames, this is exactly what we do now with Achievements. And again, this is something that is shared, and does not go away when the machine is turned off.

comment by MaoShan · 2010-11-25T06:37:16.090Z · LW(p) · GW(p)

"Can your emotions get involved in a video game? Yes, but not much. Whatever sympathetic echo of triumph you experience on destroying the Evil Empire in a video game, it's probably not remotely close to the feeling of triumph you'd get from saving the world in real life. I've played video games powerful enough to bring tears to my eyes, but they still aren't as powerful as the feeling of significantly helping just one single real human being.Because when the video game is finished, and you put it away, the events within the game have no long-term consequences."

I remember when I was around twelve years old or so, playing F-Zero (a hovercraft racing game) for SNES. I was playing on Master Class, and very close to the end, my craft blew up suddenly, and my already racing heart leapt, and it was so emotionally wrenching that I was on the floor in a state of near-catatonia for several minutes. And this was with 16-bit graphics. A sort of voluntary psychosis, but isn't that what consciousness is? A psychosis that corresponds well to your environment? Even if you learn no specific intellectual lessons from videogames, the emotional experiences will still carry lasting effects on your personality--to the extent that you are involved. You can link the two worlds by investing more in the imaginary, which is at present mostly a voluntary procedure. Therefore videogames could be a useful tool for experimenting with the limits of human psyches with fewer permanent effects than reconfiguring whole systems of neurons.

comment by Strange7 · 2010-12-17T20:16:01.432Z · LW(p) · GW(p)

Excuse me, sir, but it seems you've pushed off into the distant incomprehensible transhuman future something which already happens on a routine basis today. There are already people who have come up with new emotions, and created art which teaches the viewer to share those emotions.

Most of them aren't very interesting, because most people aren't very innovative. Drastic innovation all too often leads to insanity, functional sterility, and early death, so of course it's selected against.

comment by TheOtherDave · 2010-12-17T20:29:03.400Z · LW(p) · GW(p)

I'd be interested in references.

comment by Strange7 · 2010-12-17T21:50:08.689Z · LW(p) · GW(p)

I am particularly thinking of weird porn. Exposure to a bizarre but compatible fetish stimulates further interest, and motivates action related to the fetish. People have developed fetishes for things which simply did not exist in the ancestral environment, such as washing machines.

comment by JGWeissman · 2010-12-17T21:53:32.492Z · LW(p) · GW(p)

Making existing emotions applicable to new targets is not the same as making new emotions.

comment by Strange7 · 2010-12-17T22:40:38.410Z · LW(p) · GW(p)

What would you consider a new emotion, then? Something that motivates a new kind of action, something on the level of 'seek,' 'avoid,' 'protect,' or 'destroy,' but without precedent?

There was a reference to getting rid of racial prejudice as an example of removing an emotion. Isn't that just the more general emotion of revulsion, applied to a specific target? Are you saying that zeroing out the natural predisposition toward that feeling would count as removing an emotion, but subsequently restoring it would not count as adding an emotion?

comment by TheOtherDave · 2010-12-18T01:13:18.830Z · LW(p) · GW(p)

If the only difference between two emotional experiences is the nature of the stimulus that triggers them, I would not call those different emotions.

For example, I would not consider a craving for Chinese food to be a different emotion from a craving for hamburgers. I would not consider being aroused by wearing leather chaps to be a different emotion from being aroused by wearing frilly silk underwear.

Neither would I consider being aroused by sex in public places a new emotion, though it may involve different combinations of emotions (e.g., fear of discovery).

Similarly, I would not say that being revolted by a particular skin color is a different emotion from being revolted by a particular gender, or a missing arm, or etc., though I would expect different scenarios to involve different combinations of emotions.

So if racial prejudice is nothing more than being revolted by a particular target, then I would say racial prejudice is not its own emotion. Removing racial prejudice, on this view, is not removing an emotion; restoring it is not adding an emotion.

All of that said, you may be correct that EY is saying that racial prejudice is its own emotion. His discussions of emotions throughout this Sequence don't make a whole lot of sense to me; I won't try to speak for him. And yeah, I agree with you that if removing it is removing an emotion (which, again, I don't think it is), then restoring it is adding an emotion.

comment by Strange7 · 2010-12-18T21:44:18.660Z · LW(p) · GW(p)

By those definitions, I would agree that creating a new emotion would more-or-less require creating a whole new category of potential action, or a new tier on Maslow's Hierarchy, and accordingly would be about as difficult to imagine (never mind actually attempt) as adding a new spatial dimension to a pre-existing universe.

comment by pnrjulius · 2012-06-06T23:24:15.816Z · LW(p) · GW(p)

I think show-off bias basically explains the entire field of analytic philosophy.

Seriously, guys: Grue? Brains in vats?

comment by InquilineKea · 2016-07-05T21:21:58.154Z · LW(p) · GW(p)

Now with people posting more of their gaming online, many of their gaming experiences don't necessarily go away once they quit the game. In fact, how one plays video games says a lot about one's personality.

I still stay emotionally involved with some of my old AOE2 games many years later (because I record them all), and I still sometimes reel over certain really irrational decisions I made in them.