Seduced by Imagination

post by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-01-16T03:10:22.000Z · LW · GW · Legacy · 20 comments

"Vagueness" usually has a bad name in rationality—connoting skipped steps in reasoning and attempts to avoid falsification.  But a rational view of the Future should be vague, because the information we have about the Future is weak.  Yesterday I argued that justified vague hopes might also be better hedonically than specific foreknowledge—the power of pleasant surprises.

But there's also a more severe warning that I must deliver:  It's not a good idea to dwell much on imagined pleasant futures, since you can't actually dwell in them.  It can suck the emotional energy out of your actual, current, ongoing life.

Epistemically, we know the Past much more specifically than the Future.  But also on emotional grounds, it's probably wiser to compare yourself to Earth's past, so you can see how far we've come, and how much better we're doing.  Rather than comparing your life to an imagined future, and thinking about how awful you've got it Now.

Having set out to explain George Orwell's observation that no one can seem to write about a Utopia where anyone would want to live—having laid out the various Laws of Fun that I believe are being violated in these dreary Heavens—I am now explaining why you shouldn't apply this knowledge to invent an extremely seductive Utopia and write stories set there.  That may suck out your soul like an emotional vacuum cleaner.

I briefly remarked on this phenomenon earlier, and someone said, "Define 'suck out your soul'."  Well, it's mainly a tactile thing: you can practically feel the pulling sensation, if your dreams wander too far into the Future.  It's like something out of H. P. Lovecraft:  The Call of Eutopia.  A professional hazard of having to stare out into vistas that humans were meant to gaze upon, and knowing a little too much about the lighter side of existence.

But for the record, I will now lay out the components of "soul-sucking", that you may recognize the bright abyss and steer your thoughts away:

Hope can be a dangerous thing.  And when you've just been hit hard—at the moment when you most need hope to keep you going—that's also when the real world seems most painful, and the world of imagination becomes most seductive.

It's a balancing act, I think.  One needs enough Fun Theory to truly and legitimately justify hope in the future.  But not a detailed vision so seductive that it steals emotional energy from the real life and real challenge of creating that future.  You need "a light at the end of the secular rationalist tunnel" as Roko put it, but you don't want people to drift away from their bodies into that light.

So how much light is that, exactly?  Ah, now that's the issue.

I'll start with a simple and genuine question:  Is what I've already said, enough?

Is knowing the abstract fun theory and being able to pinpoint the exact flaws in previous flawed Utopias, enough to make you look forward to tomorrow?  Is it enough to inspire a stronger will to live?  To dispel worries about a long dark tea-time of the soul?  Does it now seem—on a gut level—that if we could really build an AI and really shape it, the resulting future would be very much worth staying alive to see?

20 comments

Comments sorted by oldest first, as this post is from before comment nesting was available (around 2009-02-27).

comment by Andy_McKenzie2 · 2009-01-16T07:28:57.000Z · LW(p) · GW(p)

Yes. You and Eric Drexler and a few others have sufficiently convinced me that I absolutely look forward to the future. I'm not sure if I did already (I had vague hopes, though), but now I do. Thanks, I guess. :)

comment by Aron · 2009-01-16T08:50:14.000Z · LW(p) · GW(p)

My optimism about the future has always been induced from historical trends. It doesn't require the mention of AI for that, or of most of the fun topics discussed. I would define this precisely as having the justified expectation of pleasant surprise. I don't know the specifics of how the future looks, but I can generalize with some confidence that it is likely to be better than today (for people on average, if not necessarily me in particular). If you think the trend now is positive, but the result of this trend somewhere in the future is quite negative, then you have a story to tell about why. And with all stories about the future, you are likely wrong.

comment by Abigail · 2009-01-16T10:24:56.000Z · LW(p) · GW(p)

I find it hard to conceive of falling into misery because I do not live in a future society where an all-powerful FAI seeking the best interests of each individual and of the species governs perfectly. I am glad that I do not have to work as a subsistence peasant, at risk of starvation if the harvest is poor, and I have some envy of celebrities that I see.

I think a lot of misery comes from wanting the World to be other than it is, without the power to change it. Everybody knows it: I need courage to change what I can change, serenity to accept what I can't change, wisdom to know the difference. It is not easy, but it is simple (this last sentence comes from House MD).

Replies from: ikrase
comment by ikrase · 2013-02-03T16:38:32.111Z · LW(p) · GW(p)

I'd add that one of the strongest imagination-seducers possible is wanting the world to be different in a microcosmic, personal way that one is still not able to deal with. (For example, I have learned that while global-scale worldbuilding is fine, I need to stop worldbuilding on a subcultural or regional-cultural level unless I actually am going to publish fiction.)

comment by ShardPhoenix · 2009-01-16T15:43:26.000Z · LW(p) · GW(p)

I feel that a lot of your discussion about Fun Theory is a bit too abstract to have an emotional appeal in terms of looking forward to the future. I think for at least some people (even smart, rational ones), it may be more effective to point out the possibility of more concrete, primitive, "monkey with a million bananas" type scenarios, even if those are not the most likely to actually occur.

Even if you know that the future probably won't be specifically like that, you can imagine how good that would be in a more direct and emotionally compelling way, and then reason that a Fun Theory compatible future would be even better than that, even if you can't visualize what it would be like so clearly.

Replies from: diegocaleiro, Error
comment by diegocaleiro · 2010-12-14T18:32:48.770Z · LW(p) · GW(p)

That is good for those who indirectly work on AI.

Those who do it directly cannot afford the cost of mis-representation.

Still, great idea.

comment by Error · 2012-10-02T14:34:18.323Z · LW(p) · GW(p)

"monkey with a million bananas"

Has anyone done this experiment? Actually put a monkey in an environment with the equivalent of a million bananas (unlimited food, uncontested mates, whatever puzzles we can think of to make life interesting in the absence of pain and conflict, etc.) and watched how it acted over a period of years for signs of boredom and despair?

Might be useful information about the real effects of certain kinds of "Utopias." Also might be horribly unethical, depending on how you feel about primate experimentation.

Replies from: thomblake
comment by thomblake · 2012-10-02T14:58:23.981Z · LW(p) · GW(p)

If giving a monkey some bananas is wrong, I don't want to be right.

Replies from: Error
comment by Error · 2012-10-02T15:17:56.185Z · LW(p) · GW(p)

I meant that in the context of the Fun Theory sequence, which I'm currently reading through. It seems to me to implicitly predict that a monkey given unlimited bananas, mates, etc., ought to turn out surprisingly unhappy, at least to the extent that its psychology is not too dissimilar from humans'. It would be interesting to see if that prediction is correct.

comment by Doug_S. · 2009-01-16T20:16:44.000Z · LW(p) · GW(p)

My soul got sucked out a long time ago.

[whine] I wanna be a wirehead! Forget eudaimonia, I just wanna feel good all the time and not worry about anything! [/whine]

Replies from: JohnWittle
comment by JohnWittle · 2013-04-17T16:12:12.190Z · LW(p) · GW(p)

This is an interesting thought. I started out a heroin addict with a passing interest in wireheading, which my atheist/libertarian/programmer/male brain could envision as being clearly possible, and as the 'perfect' version of heroin (which has many downsides even if you are able to sustain a 3-year habit without slipping into withdrawal a single time, as I was). I saw pleasure as being the only axiomatic good, and dreamed of co-opting this simple reward mechanism for arbitrarily large amounts of pleasure. This dream led me here (I believe the lesswrong wiki article is at least on the front page of the Google results for 'wireheading'), and when I first read the fun theory sequence, I was skeptical that we would end up actually wanting something other than wireheading. Oh, these foolish AI programmers who have never felt the sheer blaze of pleasure of a fat shot of heroin, erupting like an orgasmic volcano from their head to their toes... No, but I did at least realize that I could bring about wireheading sooner by getting off heroin and starting to study neuroscience at my local (luckily, neuroscience-specialized) university.

Once I got clean (which took about two weeks of a massively uncomfortable taper), I realized two things: the main difference between a life of heroin and a life without is having choices. A heroin addict satisfies his food and shelter needs in the cheapest way possible and then spends the rest of his money on heroin. The opportunity cost of something is readily available to your mind, "I could get this much heroin with the money instead", rather than being a vague notion of all the other things you could have bought. There is something to be said for this simplicity. Which leads me to the second realization: pleasure is definitely relative. We experience pleasure when we go from less pleasure to more pleasure, not as an absolute value of pleasure. The benefit of heroin is that it's a very sharp spike in pleasure for a minute or two, which then subsides into a state where you are probably experiencing larger absolute pleasure, but you can't actually tell the difference. Eventually, some 6-8 hours later, you start to feel cold, clammy, feverish; definitely you experience pain. I remember times when I'd be at 12 hours since my last shot, feeling very bad, but I would hold out a little longer just so that when I finally DID dose, the difference between the past state of pleasure and the current state would be as large as possible.

In fact, being in the absolute hell of day 2 withdrawal, 24-48 hours since last dose, puking everywhere and defecating everywhere and lying in a puddle of sweat, and then injecting a dose which brought me up to baseline over the course of five-ten seconds, without any pleasure in the absolute sense, was just as pleasurable as going from baseline to a near-overdose.

I am glad to be free of that terrible addiction, but it taught me such straightforward lessons about how pleasure actually works that I think studying the behavior of, say, heroin-addicted primates would be useful.

comment by Roko · 2009-01-17T01:04:23.000Z · LW(p) · GW(p)

"So how much light is that, exactly? Ah, now that's the issue.

I'll start with a simple and genuine question: Is what I've already said, enough?"

  • Enough for what purpose? There are two distinct purposes that I can think of. Firstly, there is the task of convincing some "elite" group of potential FAI coders that the task is worth doing. I think that enough has been said for this one. How likely is this strategy to work? Well,

Secondly, there is the task of convincing a nontrivial fraction of "ordinary" people in developed countries that the humanity+ movement is worth getting excited about, worth voting for, worth funding. This might be a worthy goal if you think that the path of technological development will be significantly influenced by public opinion and politics. For this task, abstract descriptions are not enough; people will need specifics. If you tell John and Jane Public that the AI will implement their CEV, they'll look at you like you're nuts. If you tell them that this will, as a special case, solve almost all of the problems that they currently worry about - like their health, their stressed lifestyles, the problems that they have with their marriage, the dementia that grandpa is succumbing to, etc. - then you might be on to something.

comment by Vladimir_Nesov · 2009-01-17T01:59:50.000Z · LW(p) · GW(p)

I always got emotionally invested in abstract causes, so it was enough for me to perceive the notion of a way to get things better, and not just somewhat better, but as good as it gets. About two years ago, when the exhausting routine of University was at an end, I got generally bored, and started idly exploring various potential hobbies, learning Japanese, piano and foundations of mathematics. I was preparing to settle down in the real world. The idea of AGI, and later FAI (understood and embraced only starting this summer, despite availability of all the material), as a perceived ideal target gave focus to my life and linked the intrinsic worth of the cause to natural enjoyment in the process of research. A new perspective didn't suck out my soul, but nurtured it. I don't spend time contemplating specific stories of the better; I need to understand more of the basic concepts in order to have a chance of seeing any specifics about the structure of goodness. For now, whenever I see a specific story, I prefer an abstract expectation of there being a surprising better way quite unlike the one depicted.

comment by Michael_G.R. · 2009-01-17T06:02:21.000Z · LW(p) · GW(p)

I'm currently reading Global Catastrophic Risks by Nick Bostrom and Cirkovic, and it's pretty scary to think of how arbitrarily everything could go bad and we could all live through very hard times indeed.

That kind of reading usually keeps me from having my soul sucked into this imagined great future...

comment by Nick_Tarleton · 2009-01-17T07:19:00.000Z · LW(p) · GW(p)

Firstly, there is the task of convincing some "elite" group of potential FAI coders that the task is worth doing.

Not all the object-level work that needs to be done is (or requires the same skills as) FAI programming – not to mention the importance of donors and advocates.

This might be a worthy goal if you think that the path of technological development will be significantly influenced by public opinion and politics.

...in a desirable way. Effective SL3 "pro-technology" activism seems like it would be very dangerous. I doubt that advocacy (or any activity other than donation) by people who need detailed predictions to sustain their motivation (not just initiate it) has any significant chance of being useful.

comment by Roko · 2009-01-17T12:24:10.000Z · LW(p) · GW(p)

@ nick t: I'd be interested to see the justification for the claim that pro technology activism would be very dangerous. Personally, I'm not convinced either way. If it turns out that you're right, then I'd say that this little series on fun theory has probably gone far enough.

One argument in favor of pro-rationalist/technology activism is that we cannot rely upon technology that is conducive to SIAI or some other small group being able to keep control of things. Robin has argued for a "distributed" singularity based on economic interdependence, probably via a whole host of BCI and/or uploading efforts, with the main players being corporations and governments. In this scenario, a small elite group of singularitarian activists would basically be spectators. A much larger global h+ movement would have influence. A possible counterargument is that such a large organization would make bad decisions and have a negative influence due to the poor average quality of its members.

comment by Patri_Friedman · 2009-01-17T19:14:36.000Z · LW(p) · GW(p)

I really liked this post. Not sure if you meant it this way, but for me it mostly applies to imagining / fantasizing about the future. Some kinds of imagining are motivating, and they tend to be more general. The ones you describe as "soul-sucking" are more like an Experience Machine, or William Shatner's Tek (if you've had the misfortune to read any of his books).

For me this brings up the distinction between happiness (Fun) and pleasure. Soul-sucking is very pleasurable, but it is not very Fun. There is no richness, no striving, no intricacy - just getting what you want is boring.

ShardPhoenix - I agree that concreteness is important, but there is still a key distinction between concrete scenarios that motivate people to work to bring them about, and concrete scenarios that people respond to by drifting off into imagination and thinking "yeah, that would be fun."

comment by taelor · 2011-11-22T21:30:59.156Z · LW(p) · GW(p)

I briefly remarked on this phenomenon earlier, and someone said, "Define 'suck out your soul'." Well, it's mainly a tactile thing: you can practically feel the pulling sensation, if your dreams wander too far into the Future. It's like something out of H. P. Lovecraft: The Call of Eutopia. A professional hazard of having to stare out into vistas that humans were meant to gaze upon, and knowing a little too much about the lighter side of existence.

Interestingly enough, Lovecraft wrote a story that I think captures this phenomenon quite well. See also this story, in which Lovecraft briefly revisits the protagonist of the original story and elaborates on his fate. Of note, both stories deal with seduction by memories of an idealized past, rather than imaginings of an idealized future, but I think that the same general principle applies.

comment by Delta · 2012-09-14T12:09:54.831Z · LW(p) · GW(p)

Very interesting article, and a real "ouch" moment for me when I realised that all my escapism growing up had exactly this effect. By becoming engaged with fictional worlds through films, books and games you can start to disengage from the world, finding nothing so interesting and vibrant in it (this is a particular risk if you are young and haven't found activities and people you value in reality yet). The scary thing was when I realised the characters in my books felt more real than people in reality. If you have trouble connecting with people, books offer ready-made connections that can distract you from getting the social skills you need to form meaningful relationships in real life.

To an extent I think I am still prey to this, so does anyone have advice on ways to balance your escapist pleasures so you can still enjoy them without losing the vibrancy of real life?

Replies from: ikrase
comment by ikrase · 2013-02-03T16:53:43.374Z · LW(p) · GW(p)

It occurs to me that even more seductive than a future world might be a plausible, more formidable self. (It suddenly occurs to me why many video game player characters are either conspicuously characterless, like Valve protagonists, or rather unlikable people (the 'why do I have to play as this jerk?' problem).)