Welcome to Heaven

post by denisbider · 2010-01-25T23:22:45.169Z · LW · GW · Legacy · 246 comments

I can conceive of the following 3 main types of meaning we can pursue in life.

1. Exploring existing complexity: the natural complexity of the universe, or complexities that others created for us to explore.

2. Creating new complexity for others and ourselves to explore.

3. Hedonic pleasure: more or less direct stimulation of our pleasure centers, with wire-heading as the ultimate form.

What I'm observing in the various FAI debates is a tendency of people to shy away from wire-heading as something the FAI should do. This reluctance is generally not substantiated or clarified with anything other than "clearly, this isn't what we want". This is not, however, clear to me at all.

The utility we get from exploration and creation is an enjoyable mental process that comes with these activities. Once an FAI can rewire our brains at will, we do not need to perform actual exploration or creation to experience this enjoyment. Instead, the enjoyment we get from exploration and creation becomes just another form of pleasure that can be stimulated directly.

If you are a utilitarian, and you believe in shut-up-and-multiply, then the correct thing for the FAI to do is to use up all available resources so as to maximize the number of beings, and then induce a state of permanent and ultimate enjoyment in every one of them. This enjoyment could be of any type - it could be explorative or creative or hedonic enjoyment as we know it. The most energy efficient way to create any kind of enjoyment, however, is to stimulate the brain-equivalent directly. Therefore, the greatest utility will be achieved by wire-heading. Everything else falls short of that.
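For concreteness, here is a minimal sketch, in Python, of the shut-up-and-multiply arithmetic this argument leans on. The energy budget and the per-being cost and enjoyment figures are made-up illustrative numbers, not anything claimed in the post.

    # Toy "shut up and multiply" comparison with made-up numbers.
    # Total utility = number of beings * enjoyment per being,
    # under a fixed resource budget.

    TOTAL_ENERGY = 1_000_000  # arbitrary units

    strategies = {
        # name: (energy cost per being, enjoyment per being)
        "simulated exploration/creation": (100.0, 0.9),
        "direct stimulation (wireheading)": (1.0, 1.0),
    }

    for name, (cost, enjoyment) in strategies.items():
        beings = TOTAL_ENERGY / cost
        total_utility = beings * enjoyment
        print(f"{name}: {beings:,.0f} beings, total utility {total_utility:,.0f}")

The conclusion only goes through if direct stimulation really is the cheapest way to produce a unit of enjoyment, which is the post's assumption rather than an established fact.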

What I don't quite understand is why everyone thinks that this would be such a horrible outcome. As far as I can tell, these seem to be cached emotions that are suitable for our world, but not for the world of FAI. In our world, we truly do need to constantly explore and create, or else we will suffer the consequences of not mastering our environment. In a world where FAI exists, there is no longer a point, nor even a possibility, of mastering our environment. The FAI masters our environment for us, and there is no longer a reason to avoid hedonic pleasure. It is no longer a trap.

Since the FAI can sustain us in safety until the universe goes poof, there is no reason for everyone not to experience ultimate enjoyment in the meantime. In fact, I can hardly tell this apart from the concept of a Christian Heaven, which appears to be a place Christians very much want to go.

If you don't want to be "reduced" to an eternal state of bliss, that's tough luck. The alternative would be for the FAI to create an environment for you to play in, consuming precious resources that could sustain more creatures in a permanently blissful state. But don't worry; you won't need to feel bad for long. The FAI can simply modify your preferences so you want an eternally blissful state.

Welcome to Heaven.

246 comments

comment by mkehrt · 2010-01-25T23:53:22.741Z · LW(p) · GW(p)

I think you are missing the point.

First, throw out the FAI part of this argument; we can consider an FAI just as a tool to help us achieve our goals. Any AI which does not do at least this is insufficiently friendly (and thus counts as a paperclipper, possibly).

Thus, the actual question is what are our goals? I don't know about you, but I value understanding and exploration. If you value pleasure, good! Have fun being a wirehead.

It comes down to the fact that a world where everyone is a wirehead is not valued by me or probably by many people. Even though this world would maximize pleasure, it wouldn't maximize the utility of the people designing the world (I think this is the util/hedon distinction, but I am not sure). If we don't value that world, why should we create it, even if we would value it after we create it?

Replies from: djadvance22, Pablo_Stafforini, Matt_Simpson, Raoul589
comment by djadvance22 · 2010-01-29T01:23:22.110Z · LW(p) · GW(p)

The way I see it is that there is a set of preferable reward qualia we can experience (pleasure, wonder, empathy, pride) and a set of triggers attached to them in the human mind (sexual contact, learning, bonding, accomplishing a goal). What this article says is that there is no inherent value in the triggers, just in the rewards. Why rely on plugs when you can short circuit the outlet?

But that is missing an entire field of points: there are certain forms of pleasure that can only be retrieved from the correct association of triggers and rewards. Basking in the glow of wonder from cosmological inquiry and revelation is not the same without an intellect piecing together the context. You can have bliss and love and friendship all bundled up into one sensation, but without the STORY, without a timeline of events and shared experience that make up a relationship, you are missing a key part of that positive experience.

tl;dr: Experiencing pure rewards without relying on triggers is a retarded (or limited) way of experiencing the pleasures of the universe.

comment by Pablo (Pablo_Stafforini) · 2010-01-28T20:03:31.624Z · LW(p) · GW(p)

Like many others below, your reply assumes that what is valuable is what we value. Yet as far as I can see, this assumption has never been defended with arguments in this forum. Moreover, the assumption seems clearly false. A person whose brain was wired differently from most people's may value states of horrible agony. Yet the fact that this person valued these states would not constitute a reason for thinking them valuable. Pain is bad because of how it feels, rather than by virtue of the attitudes that people have towards painful states.

Replies from: randallsquared
comment by randallsquared · 2010-01-31T00:36:27.937Z · LW(p) · GW(p)

Like many others below, your reply assumes that what is valuable is what we value.

Well, by definition. I think what you mean is that there are things that "ought to be" valuable which we do not actually value [enough?]. But what evidence is there that there is any "ought" above our existing goals?

Replies from: Raoul589
comment by Raoul589 · 2013-01-20T12:04:24.751Z · LW(p) · GW(p)

What evidence is there that we should value anything more than what mental states feel like from the inside? That's what the wirehead would ask. He doesn't care about goals. Let's see some evidence that our goals matter.

Replies from: jooyous, randallsquared
comment by jooyous · 2013-01-26T04:51:41.667Z · LW(p) · GW(p)

What would evidence that our goals matter look like?

comment by randallsquared · 2013-01-22T20:00:58.317Z · LW(p) · GW(p)

Just to be clear, I don't think you're disagreeing with me.

Replies from: Raoul589
comment by Raoul589 · 2013-01-26T01:08:25.885Z · LW(p) · GW(p)

We disagree if you intended to make the claim that 'our goals' are the bedrock on which we should base the notion of 'ought', since we can take the moral skepticism a step further, and ask: what evidence is there that there is any 'ought' above 'maxing out our utility functions'?

A further point of clarification: It doesn't follow - by definition, as you say - that what is valuable is what we value. Would making paperclips become valuable if we created a paperclip maximiser? What about if paperclip maximisers outnumbered humans? I think benthamite is right: the assumption that 'what is valuable is what we value' tends just to be smuggled into arguments without further defense. This is the move that the wirehead rejects.

Note: I took the statement 'what is valuable is what we value' to be equivalent to 'things are valuable because we value them'. The statement has another possible meaning: 'we value things because they are valuable'. I think both are incorrect for the same reason.

Replies from: randallsquared, nshepperd
comment by randallsquared · 2013-01-26T04:38:57.481Z · LW(p) · GW(p)

I think I must be misunderstanding you. It's not so much that I'm saying that our goals are the bedrock, as that there's no objective bedrock to begin with. We do value things, and we can make decisions about actions in pursuit of things we value, so in that sense there's some basis for what we "ought" to do, but I'm making exactly the same point you are when you say:

what evidence is there that there is any 'ought' above 'maxing out our utility functions'?

I know of no such evidence. We do act in pursuit of goals, and that's enough for a positivist morality, and it appears to be the closest we can get to a normative morality. You seem to say that it's not very close at all, and I agree, but I don't see a path to closer.

So, to recap, we value what we value, and there's no way I can see to argue that we ought to value something else. Two entities with incompatible goals are to some extent mutually evil, and there is no rational way out of it, because arguments about "ought" presume a given goal both can agree on.

Would making paperclips become valuable if we created a paperclip maximiser?

To the paperclip maximizer, they would certainly be valuable -- ultimately so. If you have some other standard, some objective measurement, of value, please show me it. :)

By the way, you can't say the wirehead doesn't care about goals: part of the definition of a wirehead is that he cares most about the goal of stimulating his brain in a pleasurable way. An entity that didn't care about goals would never do anything at all.

Replies from: Raoul589, Kawoomba
comment by Raoul589 · 2013-01-26T13:47:02.858Z · LW(p) · GW(p)

I think that you are right that we don't disagree on the 'basis of morality' issue. My claim is only that which you said above: there is no objective bedrock for morality, and there's no evidence that we ought to do anything other than max out our utility functions. I am sorry for the digression.

comment by Kawoomba · 2013-01-26T07:30:57.348Z · LW(p) · GW(p)

An entity that didn't care about goals would never do anything at all.

I agree with the rest of your comment, and, depending on how you define "goal", with the quote as well. However, what about entities driven only by heuristics? Those may have developed to pursue a goal, but not necessarily so. Would you call an agent that is only heuristics-driven goal-oriented? (I have in mind simple commands along the lines of "go left when there is a light on the right"; think Braitenberg vehicles minus the evolutionary aspect.)
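To make that distinction concrete, here is a minimal Python sketch of a purely heuristics-driven agent in the Braitenberg spirit; the function and sensor names are hypothetical, chosen only for illustration.

    # A purely heuristics-driven "vehicle": no goal representation and no
    # evaluation of outcomes, just a fixed stimulus-response rule.

    def braitenberg_step(light_left: float, light_right: float) -> str:
        """Map raw sensor readings directly to a steering command."""
        if light_right > light_left:
            return "turn_left"   # "go left when there is a light on the right"
        if light_left > light_right:
            return "turn_right"
        return "go_straight"

    # Stronger light on the right sensor triggers the rule.
    print(braitenberg_step(light_left=0.2, light_right=0.8))  # turn_left

Nothing in the agent refers to a goal; any apparent goal-directedness is in the eye of the observer.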

Replies from: randallsquared, army1987
comment by randallsquared · 2013-01-26T15:06:19.807Z · LW(p) · GW(p)

Yes, I thought about that when writing the above, but I figured I'd fall back on the term "entity". ;) An entity would be something that could have goals (sidestepping the hard work of deciding exactly which objects qualify).

comment by A1987dM (army1987) · 2013-01-26T08:11:32.453Z · LW(p) · GW(p)

See also

Replies from: Kawoomba
comment by Kawoomba · 2013-01-26T08:15:24.173Z · LW(p) · GW(p)

Hard to be original anymore. Which is a good sign!

comment by nshepperd · 2013-01-26T02:17:29.644Z · LW(p) · GW(p)

What is valuable is what we value, because if we didn't value it, we wouldn't have invented the word "valuable" to describe it.

By analogy, suppose my favourite colour is red, but I speak a language with no term for "red". So I invent "xylbiz" to refer to red things; in our language, it is pretty much a synonym for "red". All objects that are xylbiz are my favourite colour. "By definition" to some degree, since my liking red is the origin of the definition "xylbiz = red". But note that: things are not xylbiz because xylbiz is my favourite colour; they are xylbiz because of their physical characteristics. Nor is xylbiz my favourite colour because things are xylbiz; rather xylbiz is my favourite colour because that's how my mind is built.

It would, however, be fairly accurate to say that if an object is xylbiz, it is my favourite colour, and it is my favourite colour because it is xylbiz (and because of how my mind is built). It would also be accurate to say that "xylbiz" refers to red things because red is my favourite colour, but this is a statement about words, not about redness or xylbizness.

Note that if my favourite colour changed somehow, so now I like purple and invent the word "blagg" for it, things that were previously xylbiz would not become blagg, however you would notice I stop talking about "xylbiz" (actually, being human, would probably just redefine "xylbiz" to mean purple rather than define a new word).

By the way, the philosopher would probably ask "what evidence is there that we should value what mental states feel like from the inside?"

comment by Matt_Simpson · 2010-01-26T01:15:20.316Z · LW(p) · GW(p)

Agreed.

This is one reason why I don't like to call myself a utilitarian. Too many cached thoughts/objections associated with that term that just don't apply to what we are talking about.

comment by Raoul589 · 2013-01-26T14:02:31.511Z · LW(p) · GW(p)

As a wirehead advocate, I want to present my response to this as bluntly as possible, since I think my position is more generally what underlies the wirehead position, and I never see this addressed.

I simply don't believe that you really value understanding and exploration. I think that your brain (mine too) simply says to you 'yay, understanding and exploration!'. What's more, the only way you even know this much, is from how you feel about exploration - on the inside - when you are considering it or engaging in it. That is, how much 'pleasure' or wirehead-subjective-experience-nice-feelings-equivalent you get from it. You say to your brain: 'so, what do you think about making scientific discoveries?' and it says right back to you: 'making discoveries? Yay!'

Since literally every single thing we value just boils down to 'my brain says yay about this' anyway, why don't we just hack the brain equivalent to say 'yay!' as much as possible?

Replies from: TheOtherDave, lavalamp, Kindly, ArisKatsaris
comment by TheOtherDave · 2013-01-26T16:10:01.013Z · LW(p) · GW(p)

If I were about to fall off a cliff, I would prefer that you satisfy your brain's desire to pull me back by actually pulling me back, not by hacking your brain to believe you had pulled me back while I in fact plunge to my death. And if my body needs nutrients, I would rather satisfy my hunger by actually consuming nutrients, not by hacking my brain to believe I had consumed nutrients while my cells starve and die.

I suspect most people share those preferences.

That pretty much summarizes my objection to wireheading in the real world.

That said, if we posit a hypothetical world where my wireheading doesn't have any opportunity costs (that is, everything worth doing is going to be done as well as I can do it or better, whether I do it or not), I'm OK with wireheading.

To be more precise, I share the sentiment that others have expressed that my brain says "Boo!" to wireheading even in that world. But in that world, my brain also says "Boo!" to not wireheading for most of the same reasons, so that doesn't weigh into my decision-making much, and is outweighed by my brain's "Yay!" to enjoyable experiences.

Said more simply: if nothing I do can matter, then I might as well wirehead.

comment by lavalamp · 2013-01-26T14:39:19.795Z · LW(p) · GW(p)

Because my brain says 'boo' about the thought of that.

Replies from: Raoul589
comment by Raoul589 · 2013-01-27T01:46:25.891Z · LW(p) · GW(p)

It seems, then, that anti-wireheading boils down to the claim that 'wireheading, boo!'.

This is not a convincing argument to people whose brains don't say to them 'wireheading, boo!'. My impression was that denisbider's top level post was a call for an anti-wireheading argument more convincing than this.

Replies from: lavalamp
comment by lavalamp · 2013-01-27T16:14:30.801Z · LW(p) · GW(p)

I use my current value system to evaluate possible futures. The current me really doesn't like the possible future me sitting stationary in the corner of a room doing nothing, even though that version of me is experiencing lots of happiness.

I guess I view wireheading as equivalent to suicide; you're entering a state in which you'll no longer affect the rest of the world, and from which you'll never emerge.

No arguments will work on someone who's already wireheaded, but for someone who is considering it, hopefully they'll consider the negative effects on the rest of society. Your friends will miss you, you'll be a resource drain, etc. We already have an imperfect wireheading option; we call it drug addiction.

If none of that moves you, then perhaps you should wirehead.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-01-27T16:29:52.657Z · LW(p) · GW(p)

Is the social-good argument your true rejection, here?

Does it follow from this that if you concluded, after careful analysis, that you sitting stationary in a corner of a room experiencing various desirable experiences would be a net positive to the rest of society (your friends will be happy for you, you'll consume fewer net resources than if you were moving around, eating food, burning fossil fuels to get places, etc., etc.), then you would reluctantly choose to wirehead, and endorse others for whom the same were true to do so?

Or is the social good argument just a soldier here?

Replies from: lavalamp, ArisKatsaris
comment by lavalamp · 2013-01-27T18:28:24.693Z · LW(p) · GW(p)

After some thought, I believe that the social good argument, if it somehow came out the other way, would in fact move me to reluctantly change my mind. (Your example arguments didn't do the trick, though; to get my brain to imagine an argument that would move me, I had to imagine a world where my continued interaction with other humans in fact harms them in ways I cannot avoid; something like "I'm an evil person, I don't wish to be evil, and it's not possible to cease being evil" all being true.) I'd still at least want a Minecraft version of wireheading and not a drugged-out version, I think.

Replies from: TheOtherDave, Raoul589
comment by TheOtherDave · 2013-01-27T19:16:35.765Z · LW(p) · GW(p)

Cool.

comment by Raoul589 · 2013-01-28T01:19:14.643Z · LW(p) · GW(p)

You will only wirehead if that will prevent you from doing active, intentional harm to others. Why is your standard so high? TheOtherDave's speculative scenario should be sufficient to have you support wireheading, if your argument against it is social good - since in his scenario it is clearly net better to wirehead than not to.

Replies from: lavalamp
comment by lavalamp · 2013-01-28T01:34:52.952Z · LW(p) · GW(p)

All of the things he lists are not true for me personally and I had trouble imagining worlds in which they were true of me or anyone else. (Exception being the resource argument-- I imagine e.g. welfare recipients would consume fewer resources but anyone gainfully employed AFAIK generally adds more value to the economy than they remove.)

Replies from: TheOtherDave
comment by TheOtherDave · 2013-01-28T05:51:44.162Z · LW(p) · GW(p)

FWIW, I don't find it hard to imagine a world where automated tools that require fewer resources to maintain than I do are at least as good as I am at doing any job I can do.

Replies from: lavalamp
comment by lavalamp · 2013-01-28T13:29:53.153Z · LW(p) · GW(p)

Ah, see, for me that sort of world has human level machine intelligence, which makes it really hard to make predictions about.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-01-28T15:30:45.860Z · LW(p) · GW(p)

Yes, agreed that automated tools with human-level intelligence are implicit in the scenario.
I'm not quite sure what "predictions" you have in mind, though.

Replies from: lavalamp
comment by lavalamp · 2013-01-28T19:35:04.341Z · LW(p) · GW(p)

That was poorly phrased, sorry. I meant it's difficult to reason about in general. Like, I expect futures with human-level machine intelligences to be really unstable and either turn into FAI heaven or uFAI hell rapidly. I also expect them to not be particularly resource constrained, such that the marginal effects of one human wireheading would be pretty much nil. But I hold all beliefs about this sort of future with very low confidence.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-01-28T20:26:34.825Z · LW(p) · GW(p)

Confidence isn't really the issue, here.

If I want to know how important the argument from social good is to my judgments about wireheading, one approach to teasing that out is to consider a hypothetical world in which there is no net social good to my not wireheading, and see how I judge wireheading in that world. One way to visualize such a hypothetical world is to assume that automated tools capable of doing everything I can do already exist, which is to say tools at least as "smart" as I am for some rough-and-ready definition of "smart".

Yes, for such a world to be at all stable, I have to assume that such tools aren't full AGIs in the sense LW uses the term -- in particular, that they can't self-improve any better than I can. Maybe that's really unlikely, but I don't find that this limits my ability to visualize it for purposes of the thought experiment.

For my own part, as I said in an earlier comment, I find that the argument from social good is rather compelling to me... at least, if I posit a world in which nothing I might do improves the world in any way, I feel much more comfortable about the decision to wirehead.

Replies from: lavalamp
comment by lavalamp · 2013-01-28T20:59:03.102Z · LW(p) · GW(p)

...if I posit a world in which nothing I might do improves the world in any way, I feel much more comfortable about the decision to wirehead.

Agreed. If you'll reread my comment a few levels above, I mention the resource argument is an exception in that I could see situations in which it applied (I find my welfare recipient much more likely than your scenario, but either way, same argument).

It's primarily the "your friends will be happy for you" bit that I couldn't imagine, but trying to imagine it made me think of worlds where I was evil.

I mean, I basically have to think of scenarios where it'd really be best for everybody if I suicide. The only difference between wireheading and suicide with regards to the rest of the universe is that suicides consume even fewer resources. Currently I think suicide is a bad choice for everyone with the few obvious exceptions.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-01-28T21:54:59.827Z · LW(p) · GW(p)

Well, you know your friends better than I do, obviously.

That said, if a friend of mine moved somewhere where I could no longer communicate with them, but I was confident that they were happy there, my inclination would be to be happy for them. Obviously that can be overridden by other factors, but again it's not difficult to imagine.

Replies from: CAE_Jones, lavalamp
comment by CAE_Jones · 2013-01-29T04:08:54.885Z · LW(p) · GW(p)

It's interesting that the social aspect is where most of the concern seems to lie.

I have to wonder what situation would result in wireheading being permanent (no exceptions), without some kind of contact with the outside world as an option. If the economic motivation behind technology doesn't change dramatically by the time wireheading becomes possible, it'd need to have commercial appeal. Even if a simulation tricks someone who wants to get out into believing they've gotten out, if they had a pre-existing social network that notices them not coming out of it, the backlash could still hurt the providers.

I know for me personally, I have so few social ties at present that I don't see any reason not to wirehead. I can think of one person who I might be unpleasantly surprised to discover had wireheaded, but that person seems like he'd only do that if things got so incredibly bad that humanity looked something like doomed (where "doomed" is... pretty broadly defined, I guess). If the option to wirehead was given to me tomorrow, though, I might ask it to wait a few months just to see if I could maintain sufficient motivation to attempt to do anything with the real world.

comment by lavalamp · 2013-01-29T03:55:28.003Z · LW(p) · GW(p)

I think the interesting discussion to be had here is to explore why my brain thinks of a wire-headed person as effectively dead, but yours thinks they've just moved to Antarctica.

I think it's the permanence that makes most of the difference for me. And the fact that I can't visit them even in principle, and the fact that they won't be making any new friends. The fact that their social network will have zero links for some reason seems highly relevant.

comment by ArisKatsaris · 2013-01-27T18:21:17.010Z · LW(p) · GW(p)

We don't need to be motivated by a single purpose. The part of our brains that is morality and considers what is good for the rest of the world, the part of our brains that just finds it aesthetically displeasing to be wireheaded for whatever reason, the part of our brains that just seeks pleasure: they may all have different votes of different weights to cast.

Replies from: Kawoomba, TheOtherDave
comment by Kawoomba · 2013-01-27T19:06:11.756Z · LW(p) · GW(p)

I against my brother, my brothers and I against my cousins, then my cousins and I against strangers.

Which bracket do I identify with at the point in time when being asked the question? Which perspective do I take? That's what determines the purpose. You might say - well, your own perspective. But that's the thing, my perspective depends on - other than the time of day and my current hormonal status - the way the question is framed, and which identity level I identify with most at that moment.

Replies from: Raoul589
comment by Raoul589 · 2013-01-28T01:21:28.185Z · LW(p) · GW(p)

Does it follow from that that you could consider taking the perspective of your post-wirehead self?

Replies from: Kawoomba
comment by Kawoomba · 2013-01-28T07:00:31.770Z · LW(p) · GW(p)

Consider in the sense of "what would my wireheaded self do", yes. Similar to Anja's recent post. However, I'll never (can't imagine the circumstances) be in a state of mind where doing so would seem natural to me.

comment by TheOtherDave · 2013-01-27T19:20:27.808Z · LW(p) · GW(p)

Yes. But insofar as that's true, lavalamp's idea that Raoul589 should wirehead if the social-good argument doesn't move them is less clear.

comment by Kindly · 2013-01-26T16:38:32.982Z · LW(p) · GW(p)

I simply don't believe that you really value understanding and exploration. I think that your brain (mine too) simply says to you 'yay, understanding and exploration!'.

So what would "really valuing" understanding and exploration entail, exactly?

comment by ArisKatsaris · 2013-01-26T15:35:14.153Z · LW(p) · GW(p)

why don't we just hack the brain equivalent to say 'yay!' as much as possible?

Because my brain does indeed say "yay!" about stuff, but hacking my brain to constantly say "yay!" isn't one of the things my brain says "yay!" about.

comment by [deleted] · 2010-01-26T00:20:44.842Z · LW(p) · GW(p)

What I'm observing in the various FAI debates is a tendency of people to shy away from wire-heading as something the FAI should do. This reluctance is generally not substantiated or clarified with anything other than "clearly, this isn't what we want". This is not, however, clear to me at all.

I don't want that. There, did I make it clear?

If you are a utilitarian, and you believe in shut-up-and-multiply, then the correct thing for the FAI to do is to use up all available resources so as to maximize the number of beings, and then induce a state of permanent and ultimate enjoyment in every one of them.

Since when does shut-up-and-multiply mean "multiply utility by number of beings"?

If you don't want to be "reduced" to an eternal state of bliss, that's tough luck.

Heh heh.

Replies from: Raoul589, MugaSofer
comment by Raoul589 · 2013-01-20T12:02:28.542Z · LW(p) · GW(p)

'I don't want that' doesn't imply 'we don't want that'. In fact, if the 'we' refers to humanity as a whole, then denisbider's position refutes the claim by definition.

comment by MugaSofer · 2013-01-20T15:26:29.736Z · LW(p) · GW(p)

I don't want that. There, did I make it clear?

You do realize the same argument could "prove" that humans don't want to live forever, and that we enjoy giving our money to anyone clever enough to notice our preferences are circular?

comment by Wei Dai (Wei_Dai) · 2010-01-26T01:02:21.643Z · LW(p) · GW(p)

denis, most utilitarians here are preference utilitarians, who believe in satisfying people's preferences, rather than maximizing happiness or pleasure.

To those who say they don't want to be wireheaded, how do you really know that, when you haven't tried wireheading? An FAI might reason the same way, and try to extrapolate what your preferences would be if you knew what it felt like to be wireheaded, in which case it might conclude that your true preferences are in favor of being wireheaded.

Replies from: ciphergoth, Stuart_Armstrong
comment by Paul Crowley (ciphergoth) · 2010-01-26T01:06:06.176Z · LW(p) · GW(p)

To those who say they don't want to be wireheaded, how do you really know that, when you haven't tried wireheading?

But it's not because I think there's some downside to the experience that I don't want it. The experience is as good as can possibly be. I want to continue to be someone who thinks things and does stuff, even at a cost in happiness.

Replies from: Wei_Dai, byrnema
comment by Wei Dai (Wei_Dai) · 2010-01-26T03:37:55.108Z · LW(p) · GW(p)

The experience is as good as can possibly be.

You don't know how good "as good as can possibly be" is yet.

I want to continue to be someone who thinks things and does stuff, even at a cost in happiness.

But surely the cost in happiness that you're willing to accept isn't infinite. For example, presumably you're not willing to be tortured for a year in exchange for a year of thinking and doing stuff. Someone who has never experienced much pain might think that torture is no big deal, and accept this exchange, but he would be mistaken, right?

How do you know you're not similarly mistaken about wireheading?

Replies from: Kaj_Sotala, ciphergoth, CannibalSmith
comment by Kaj_Sotala · 2010-01-26T10:11:34.588Z · LW(p) · GW(p)

How do you know you're not similarly mistaken about wireheading?

I'm a bit skeptical of how well you can use the term "mistaken" when talking about technology that would allow us to modify our minds to an arbitrary degree. One could easily fathom a mind that (say) wants to be wireheaded for as long as the wireheading goes on, but ceases to want it the moment the wireheading stops. (I.e. both prefer their current state of wireheadedness/non-wireheadedness and wouldn't want to change it.) Can we really say that one of them is "mistaken", or wouldn't it be more accurate to say that they simply have different preferences?

EDIT: Expanded this to a top-level post.

comment by Paul Crowley (ciphergoth) · 2010-01-27T08:40:49.569Z · LW(p) · GW(p)

Interesting problem! Perhaps I have a maximum utility to happiness, which increasing happiness approaches asymptotically?

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2010-01-30T03:00:19.668Z · LW(p) · GW(p)

Perhaps I have a maximum utility to happiness, which increasing happiness approaches asymptotically?

Yes, I think that's quite possible, but I don't know whether it's actually the case or not. A big question I have is whether any of our values scales up to the size of the universe; in other words, doesn't asymptotically approach an upper bound well before we use up the resources in the universe. See also my latest post http://lesswrong.com/lw/1oj/complexity_of_value_complexity_of_outcome/ where I talk about some related ideas.
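As a hedged illustration of the asymptote idea being discussed here, a bounded utility-of-happiness function might look like the following, where U_max and c are hypothetical parameters rather than anything proposed in the thread:

    % Illustrative bounded utility of happiness (LaTeX):
    U(h) = U_{\max}\left(1 - e^{-h/c}\right), \qquad \lim_{h \to \infty} U(h) = U_{\max}

Under such a function, happiness beyond a few multiples of c adds almost nothing, which is one way a value could fail to scale up to the size of the universe.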

comment by CannibalSmith · 2010-01-26T10:01:40.326Z · LW(p) · GW(p)

The maximum amount of pleasure is finite too.

comment by byrnema · 2010-01-26T01:26:39.855Z · LW(p) · GW(p)

I want to continue to be someone who thinks things and does stuff, even at a cost in happiness.

The FAI can make you feel as though you "think things and do stuff", just by changing your preferences. I don't think any reason beginning with "I want" is going to work, because your preferences aren't fixed or immutable in this hypothetical.

Anyway, can you explain why you are attached to your preferences? That "it's better to value this than value that" is incoherent, and the FAI will see that. The FAI will have no objective, logical reason to distinguish between values you currently have and are attached to and values that you could have and be attached to, and might as well modify you than modify the universe. (Because the universe has exactly the same value either way.)

Replies from: LucasSloan, Kutta, ciphergoth
comment by LucasSloan · 2010-01-26T01:30:48.873Z · LW(p) · GW(p)

If any possible goal is considered to have the same value (by what standard?), then the "FAI" is not friendly. If preferences don't matter, then why does them not mattering matter? Why change one's utility function at all, if anything is as good as anything else?

Replies from: byrnema
comment by byrnema · 2010-01-26T02:21:56.463Z · LW(p) · GW(p)

Well I understand I owe money to the Singularity Institute now for speculating on what the output of the CEV would be. (Dire Warnings #3)

Replies from: timtyler
comment by timtyler · 2010-01-26T10:22:37.178Z · LW(p) · GW(p)

That page said:

"None may argue on the SL4 mailing list about the output of CEV".

A different place, with different rules.

comment by Kutta · 2010-01-26T11:08:23.921Z · LW(p) · GW(p)

The FAI can make you feel as though you "think things and do stuff", just by changing your preferences.

I can't see how a true FAI can change my preferences if I prefer them not being changed.

Anyway, can you explain why you are attached to your preferences? That "it's better to value this than value that" is incoherent, and the FAI will see that. The FAI will have no objective, logical reason to distinguish between values you currently have and are attached to and values that you could have and be attached to, and might as well modify you than modify the universe. (Because the universe has exactly the same value either way.)

It does not work this way. We want to do what is right, not what would conform to our utility function if we were petunias or paperclip AIs or randomly chosen expected utility maximizers; the whole point of Friendliness is to find out and implement what we care about and not anything else.

I'm not only attached to my preferences; I am in great part my preferences. I even have a preference such that I don't want my preferences to be forcibly changed. Thinking about changing meta-preferences quickly leads to a strange loop, but if I look at a specific outcome (like me being turned into orgasmium) I can still make a moral judgement and reject that outcome.

The FAI will have no objective, logical reason to distinguish between values you currently have and are attached to and values that you could have and be attached to, and might as well modify you than modify the universe. (Because the universe has exactly the same value either way.)

The FAI has a perfectly objective, logical reason to do what's right and nothing else; its existence and utility function are causally traceable back to the humans that designed it. An AI that verges on nihilism and contemplates switching humanity's utility function to something else, partly because the universe has "exactly the same value" either way, is definitely NOT a Friendly AI.

Replies from: byrnema, tut
comment by byrnema · 2010-01-26T17:00:12.963Z · LW(p) · GW(p)

OK, I agree with this comment and this one that if you program an FAI to satisfy our actual preferences with no compromise, then that is what it is going to do. If people have a preference for their values being satisfied in reality, rather than them just being satisfied virtually, then no wire-heading for them.

However, if you do allow compromise so that the FAI should modify preferences that contradict each other, then we might be on our way to wire-heading. Eliezer observes there is a significant 'objective component to human moral intuition'. We also value truth and meaning. (This comment strikes me as relevant.) If the FAI finds that these three are incompatible, which preference should it modify?

(Background for this comment in case you're not familiar with my obsession -- how could you have missed it? -- is that objective meaning, from any kind of subjective/objective angle, is incoherent.)

Replies from: Kutta
comment by Kutta · 2010-01-26T18:11:05.825Z · LW(p) · GW(p)

If you do allow compromise so that the FAI should modify preferences that contradict each other, then we might be on our way to wire-heading.

First, I just note that this is a full-blown speculation about Friendliness content which should be only done while wearing a gas mask or a clown suit, or after donating to SIAI.

Quoting CEV:

"In poetic terms, our coherent extrapolated volition is our wish if we knew more, thought faster, were more the people we wished we were, had grown up farther together; where the extrapolation converges rather than diverges, where our wishes cohere rather than interfere; extrapolated as we wish that extrapolated, interpreted as we wish that interpreted."

Also:

"Do we want our coherent extrapolated volition to satisfice, or maximize? My guess is that we want our coherent extrapolated volition to satisfice - to apply emergency first aid to human civilization, but not do humanity's work on our behalf, or decide our futures for us. If so, rather than trying to guess the optimal decision of a specific individual, the CEV would pick a solution that satisficed the spread of possibilities for the extrapolated statistical aggregate of humankind."

This should address your question. CEV would not typically modify humans on contradictions. But I repeat, this is all speculation.

It's not clear to me from your recent posts whether you've read the metaethics sequence and/or CEV; if you haven't, I recommend it whole-heartedly as it's the most detailed discussion of morality available. Regarding your obsession, I'm aware of it and I think I'm able to understand your history and vantage point that enable such distress to arise, although my current self finds the topic utterly trivial and essentially a non-problem.

comment by tut · 2010-01-26T11:33:00.886Z · LW(p) · GW(p)

...a perfectly objective, ... reason ...

How do you define this term?

Replies from: Kutta
comment by Kutta · 2010-01-26T11:50:40.522Z · LW(p) · GW(p)

"Reason" here: a normal, unexceptional instance of cause and effect. It should be understood in a prosaic way, e.g. reason in a causal sense.

As for "objective", I borrowed it from the parent post to illustrate my point. To expand on "objective" a bit: everything that exists in physical reality is objective, and our morality is as physical and extant as a brick (via our physical brains), so what sense does it make to distinguish between "subjective" and "objective", or to refer to any phenomenon as "objective", when in reality it is not a salient distinguishing feature?

If anything is "objective", then I see no reason why human morality is not, that's why I included the word in my post. But probably the best would be to simply refrain from generating further confusion by the objective/subjective distinction.

Replies from: tut
comment by tut · 2010-01-26T12:25:14.664Z · LW(p) · GW(p)

Reason is not the same as cause. Cause is whatever brings something about in the physical world. Reason is a special kind of cause for intentional actions. Specifically a reason for an action is a thought which convinces the actor that the action is good. So an objective reason would need an objective basis for something being called good. I don't know of such a basis, and a bit more than a week ago half of the LW readers were beating up on Byrnema because she kept talking about objective reasons.

Replies from: Kutta
comment by Kutta · 2010-01-26T17:46:11.857Z · LW(p) · GW(p)

OK then, it was a misuse of the word from my part. Anyway, I'd never intend a teleological meaning for reasons discussed here before.

comment by Paul Crowley (ciphergoth) · 2010-01-26T08:43:00.276Z · LW(p) · GW(p)

The FAI can make you feel as though you "think things and do stuff", just by changing your preferences.

Please read Not for the Sake of Happiness (Alone) which addresses this point.

comment by Stuart_Armstrong · 2010-01-26T12:55:50.532Z · LW(p) · GW(p)

To those who say they don't want to be wireheaded, how do you really know that, when you haven't tried wireheading?

Same reason I don't try heroin. Wireheading (as generally conceived) imposes a predictable change on the user's utility function; huge and irreversible. Gathering this information is not without cost.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2010-01-26T13:20:46.541Z · LW(p) · GW(p)

I'm not suggesting that you try wireheading now, I'm saying that an FAI can obtain this information without a high cost, and when it does, it may turn out that you actually do prefer to be wireheaded.

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2010-01-26T14:05:44.514Z · LW(p) · GW(p)

That's possible (especially the non-addictive type of wireheading).

Though this does touch upon issues of autonomy - I'd like the AI to run it by me, even though it will have correctly predicted that I'd accept.

comment by [deleted] · 2010-01-26T01:06:30.998Z · LW(p) · GW(p)

But I don't want to be a really big integer!

Replies from: Wei_Dai, ShardPhoenix, RobertWiblin, John_Maxwell_IV
comment by Wei Dai (Wei_Dai) · 2010-01-29T06:17:54.129Z · LW(p) · GW(p)

If being wireheaded is like being a really big positive integer, then being anti-wireheaded (i.e., having large amounts of pain directly injected into your brain) must be like being a really big negative integer. So I guess if you had to choose between the two, you'd be pretty much indifferent, right?

Replies from: None, wedrifid
comment by [deleted] · 2010-01-30T01:51:20.162Z · LW(p) · GW(p)

I wouldn't be indifferent. If I had to choose between being wireheaded and being antiwireheaded, I would choose the former. I don't simply assign utility = 0 to simple pleasure or pain. I just don't think that wireheading is the most fun we could be having. If you asked someone on their deathbed what the best experiences of their life were, they probably wouldn't talk about sex or heroin (yes, this might be an ineffectual status grab or selectively committing only certain types of fun to memory, but I doubt it).

Replies from: Wei_Dai, ciphergoth
comment by Wei Dai (Wei_Dai) · 2010-01-30T02:42:40.177Z · LW(p) · GW(p)

This seems like a good example of logical rudeness to me. Your original comment was premised on an equivalence (which you explicitly spelled out later) between being wireheaded and being a large integer. I pointed out that accepting this premise would lead to indifference between wireheading and anti-wireheading. That was obviously meant to be a reductio ad absurdum. But you ignored the reductio and switched to talking about why wireheading is not the most fun we could be having.

To be clear, I don't think wireheading is necessarily the most fun we could be having. I just think we don't know enough about the nature of pleasure, fun, and/or preference to decide that right now.

Replies from: None
comment by [deleted] · 2010-01-30T02:58:51.761Z · LW(p) · GW(p)

You know, you're right. That was a bit of a non sequitur.

Back to the original point, I think I'm starting to change my mind about the equivalence between a wirehead and a number (insert disclaimer about how everything is a number): after all, I'd feel worse about killing one than tilting an abacus.

Maybe "But I don't want to spend a lot of time doing something so simple" would work for version 3.0

comment by Paul Crowley (ciphergoth) · 2010-01-30T08:59:16.548Z · LW(p) · GW(p)

If you were to ask me now what the best experiences of my life were, some of the sex I've had would definitely be up there, and I've had quite a variety of pleasurable experiences.

comment by wedrifid · 2010-01-29T07:28:28.304Z · LW(p) · GW(p)

So I guess if you had to choose between the two, you'd be pretty much indifferent, right?

You have a good point buried in there but the conclusion you suggest isn't necessarily implied.

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2010-01-30T09:00:19.646Z · LW(p) · GW(p)

FWIW, I'm seeing his point better than I'm seeing yours at the moment, and I found uninverted's argument convincing until I read Wei_Dai's response. Try being more explicit?

Replies from: wedrifid
comment by wedrifid · 2010-01-30T13:24:26.868Z · LW(p) · GW(p)

I can't get all that much more explicit. It's near the level of raw logic and the 'conclusion you suggest' is included as a direct quote in case there was any doubt. Let's see.

A: I don't want to be a really big integer!
B: If you had to choose between being a really big positive integer and a really big negative integer, you'd be pretty much indifferent.

B is not implied by A.

I would have replaced my (grandparent) comment with the phrase 'non sequitur' except I wanted to acknowledge that Wei_Dai is almost certainly considering related issues beyond the conclusion he actually offered.

Replies from: Wei_Dai, ciphergoth
comment by Wei Dai (Wei_Dai) · 2010-01-31T03:57:13.366Z · LW(p) · GW(p)

B is implied by "C: Being any integer is of no value," which I took as an unspoken assumption shared between uninverted and me (and I thought it was likely that he accepts this assumption based on A). Does that answer your criticism, or not?

Replies from: wedrifid
comment by wedrifid · 2010-01-31T07:10:14.329Z · LW(p) · GW(p)

C seems likely to me based on A only if I assume D (uninverted is silly). That's because there are other beliefs that could make one claim A that are more coherent than C. But let's ignore this little side track and just state what we (probably) all agree on:

  • Being a positive integer isn't particularly desirable.
  • Wireheading, orgasmium, and positive floating point numbers or representations of 3^^…^3 (with as many carets as fit in the galaxy) are considered equivalent to 'positive integer' for most intents and purposes.
  • Being a negative integer is even worse than being a positive integer.
  • Being an integer at all is not that great.
  • Just being entropy sounds worse than just being a positive integer.
  • The universe ending up the same as if you weren't in it at all sounds worse than being a positive integer. (Depending on intuitive aversion to oblivion and torment some would say worse than being any sort of integer.)
  • Fun is better than orgasmic integerness.

If we disagree on these statements then that will actually be interesting. And it is quite possible that there is disagreement even on these. I've often been surprised when people have different intuitions than I expect.

comment by Paul Crowley (ciphergoth) · 2010-01-30T15:54:06.206Z · LW(p) · GW(p)

The force of the argument "I don't want to be a really big integer" is that "being wireheaded takes away what makes me me, and so I stop being a person I can identify with and become a really big integer". If that were so, the same would apply to anti-wireheading, and Wei Dai's question would apply. If you agree that wireheading is more desirable than anti-wireheading, then this and other arguments that it's not more desirable than any other state don't directly apply.

Replies from: RobinZ, wedrifid
comment by RobinZ · 2010-01-30T18:02:08.239Z · LW(p) · GW(p)

If we take the alternative reasonable interpretation "takes away almost everything that makes me me", no contradiction appears.

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2010-01-30T18:24:23.319Z · LW(p) · GW(p)

Yes, that makes sense.

comment by wedrifid · 2010-01-30T16:47:29.992Z · LW(p) · GW(p)

this and other arguments that it's not more desirable than any other state

I reject this combination of words and maintain my previous position.

comment by ShardPhoenix · 2010-01-26T11:46:00.490Z · LW(p) · GW(p)

Too late?

Replies from: None
comment by [deleted] · 2010-01-28T20:34:24.426Z · LW(p) · GW(p)

"But I don't want to be a structure whose chief component is a very large integer with a straightforward isomorphism to something else, namely some unspecified notion of 'happiness'" is a little too cumbersome.

comment by RobertWiblin · 2010-01-29T14:05:36.456Z · LW(p) · GW(p)

You will be gone and something which does want to be a big integer will replace you and use your resources more effectively. Both hedonistic and preference utilitarianism demand it.

Replies from: None, thomblake
comment by [deleted] · 2010-01-30T01:32:07.180Z · LW(p) · GW(p)

Preference utilitarianism as I understand it implies nothing more than using utility to rank universe states. That doesn't imply anything about what the most efficient use of matter is. As for hedonistic utilitarians, why would any existing mind want to build something like that or grow into something like that? Further, why would something like that be better at seizing resources?

Replies from: RobertWiblin
comment by RobertWiblin · 2010-01-30T09:17:56.895Z · LW(p) · GW(p)

I am using (total) preference utilitarianism to mean: "we should act so as to maximise the number of beings' preferences that are satisfied anywhere at any time".

"As for hedonistic utilitarians, why would any existing mind want to build something like that or grow into something like that?"

Because they are not selfish and they are concerned about the welfare of that being in proportion to its ability to have experiences?

"Further, why would something like that be better at seizing resources?"

That's a weakness, but at some point we have to start switching from maximising resource capture to using those resources to generate good preference satisfaction (or good experiences if you're a hedonist). At that point a single giant 'utility monster' seems most efficient.

comment by thomblake · 2010-01-29T14:07:01.492Z · LW(p) · GW(p)

For reference, the "utilitarians" 'round these parts tend to be neither of those.

Replies from: RobertWiblin
comment by RobertWiblin · 2010-01-29T16:43:54.529Z · LW(p) · GW(p)

What are they then?

comment by John_Maxwell (John_Maxwell_IV) · 2010-01-27T03:46:47.030Z · LW(p) · GW(p)

You are confusing a thing and its measurement.

Replies from: None
comment by [deleted] · 2010-01-29T00:09:14.314Z · LW(p) · GW(p)

If a video game uses an unsigned 32 bit integer for your score, then how would that integer differ from your (abstract platonic) score?

Replies from: John_Maxwell_IV
comment by John_Maxwell (John_Maxwell_IV) · 2010-02-02T23:37:21.941Z · LW(p) · GW(p)

My "abstract platonic score" is a measurement of my happiness, and my happiness is determined by the chemical processes of my brain. My video game score is a measure of my success playing a video game. If the number on the screen ceases to correlate well with the sort of success I care about, I will disregard it. I won't be particularly thrilled if my score triples for no apparent reason, and I won't be particularly thrilled if the number you are using to approximate my happiness triples either.

comment by Tiiba · 2010-01-26T00:55:16.564Z · LW(p) · GW(p)

One reason I might not want to become a ball of ecstasy is that I don't really feel it's ME. I'm not even sure it's sentient, since it doesn't learn, communicate, or reason, it just enjoys.

Replies from: GodParty
comment by GodParty · 2016-06-20T00:20:26.352Z · LW(p) · GW(p)

Sentience is exactly just the ability to feel. If it can feel joy, it is sentient.

Replies from: hairyfigment
comment by hairyfigment · 2016-06-20T17:54:18.724Z · LW(p) · GW(p)

Yes, but for example in highway hypnosis people drive on 'boring' stretches of highway and then don't remember doing so. It seems as if they slowly lose the capacity to learn or update beliefs even slightly from this repetitive activity, and as this happens their sentience goes away. So we haven't established that the sentient ball of uniform ecstasy is actually possible.

Meanwhile, a badly programmed AI might decide that a non-sentient or briefly-sentient ball still fits its programmed definition of the goal. Or it might think this about a ball that is just barely sentient.

comment by andreas · 2010-01-26T00:45:06.201Z · LW(p) · GW(p)

Not for the Sake of Happiness (Alone) is a response to this suggestion.

comment by grouchymusicologist · 2010-01-26T00:47:31.297Z · LW(p) · GW(p)

I just so happened to read Coherent Extrapolated Volition today. Insofar as this post is supposed to be about "what an FAI should do" (rather than just about your general feeling that objections to wire-heading are irrational), it seems to me that this post all really boils down to navel-gazing once you take CEV into account. Or in other words, this post isn't really about FAI at all.

comment by JaapSuter · 2010-01-26T20:29:55.027Z · LW(p) · GW(p)

A number of people mention this one way or another, but an explicit search for "local maximum" doesn't match any specific comment - so I wanted to throw it out here.

Wireheading is very likely to put oneself in a local maximum of bliss. Though a wirehead may not care or even ponder about whether or not there exist greater maxima, it's a consideration that I'd take into account prior to wiring up.

Unless one is omniscient, entering a permanent(-ish) state of wireheading means forgoing the possibility of discovering a greater point of wireheaded happiness.

I guess the very definition of wireheadedness bakes in the notion that you wouldn't care about that anymore - good for those taking the plunge and hooking up, I suppose. Personally, the universe would have to throw me an above-average amount of negative derivatives before I'd say enough is enough, screw the potential for higher maxima, I'll take this one...

comment by Stuart_Armstrong · 2010-01-26T12:52:11.320Z · LW(p) · GW(p)

If you are a utilitarian, and you believe in shut-up-and-multiply, then the correct thing for the FAI to do is to use up all available resources so as to maximize the number of beings, and then induce a state of permanent and ultimate enjoyment in every one of them.

Rewriting that: if you are an altruistic, total utilitarian whose utility function includes only hedonistic pleasure, with no birth-death asymmetry, then the correct thing for the FAI to do is...
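A hedged formalization of that restated premise (notation mine, not Stuart_Armstrong's): with a variable population of size N and per-being hedonic levels h_i, the total view maximizes the sum, while the average view divides by N:

    % Total view implied by the restated premise:
    \max_{N,\, h_1, \dots, h_N} \; W_{\mathrm{total}} = \sum_{i=1}^{N} h_i
    % Average view, mentioned in the reply below:
    \max \; W_{\mathrm{avg}} = \frac{1}{N} \sum_{i=1}^{N} h_i

If each h_i is capped and each being has a fixed resource cost, the total view is maximized by packing in as many maximally stimulated beings as resources allow, which is where the wireheading conclusion comes from; the average view instead favors a single extremely happy being, as the reply below notes.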

Replies from: RobertWiblin
comment by RobertWiblin · 2010-01-29T13:58:23.231Z · LW(p) · GW(p)

Needn't be total - average would suggest creating one single extremely happy being - probably not human.

Needn't only include hedonic pleasure - a preference utilitarian might support eliminating humans and replacing them with beings whose preferences are cheap to satisfy (hedonic pleasure being one cheap preference). Or you could want multiple kinds of pleasure, but see hedonic as always more efficient to deliver as proposed in the post.

comment by [deleted] · 2010-01-26T07:45:35.866Z · LW(p) · GW(p)

If you're considering the ability to rewire one's utility function, why simplify the function rather than build cognitive tools to help people better satisfy the function? What you've proposed is that an AI destroys human intelligence, then pursues some abstraction of what it thinks humans wanted.

Your suggestion is that an AI might assume that the best way to reach its goal of making humans happy (maximizing their utility) is to attain the ends of humans' functions faster and better than we could, and rewire us to be satisfied. There are two problems here that I see.

First, the means are an end. Much of what we value isn't the goal we claim as our objective, but the process of striving for the goal. So here you have an AI that doesn't really understand what humans want.

Second, most humans aren't interested in every avenue of exploration, creation and pleasure. Our interests are distinct. They also do change over time (or some limited set of parameters do anyhow). We don't always notice them change, and when they do, we like to track down what decisions they made that led them to their new preferences. People value the (usually illusory) notion that they control changes to their utility functions. The offer to be a "wirehead" is an action which intrinsically violates peoples' utility in the illusion of autonomy. This doesn't apply to everyone - hedonists can apply to your heaven. I suspect that few others would want it.

Also,

If you don't want to be "reduced" to an eternal state of bliss, that's tough luck... The FAI can simply modify your preferences so you want an eternally blissful state.

That is not friendly.

I think you have an idea that there is a "global human utility function" and that FAI is that which satisfies it. Humans have commonalities in their utility functions, but those functions are localized around the notion of self. Your "FAI" generalizes some form of what most people want, except for experience and autonomy - but the other values it extracts are, in humans, not independent of those.

comment by CronoDAS · 2010-01-26T10:41:37.731Z · LW(p) · GW(p)

I'd like to be a wirehead, but have no particular desire to impose that condition on others.

comment by nawitus · 2010-01-26T08:04:56.502Z · LW(p) · GW(p)

I think people shy away from wireheading because a future full of wireheads would be very boring indeed. People like to think there's more to existence than that. They want to experience something more interesting than eternal pleasure. And that's exactly what an FAI should allow.

Replies from: CannibalSmith, Vladimir_Nesov, RobinZ, timtyler
comment by CannibalSmith · 2010-01-26T10:05:23.427Z · LW(p) · GW(p)

Boring from the perspective of any onlookers, not the wirehead.

comment by RobinZ · 2010-01-26T12:51:59.351Z · LW(p) · GW(p)

I think people shy away from wireheading because a future full of wireheads would be very boring indeed.

It falls below Reedspacer's lower bound, for sure.

comment by timtyler · 2010-01-26T10:17:42.157Z · LW(p) · GW(p)

Could be more boring. There's more than one kind of wirehead. For example, if everyone were a heroin addict, the world might be more boring - but it would still be pretty interesting.

comment by gregconen · 2010-01-27T14:59:21.428Z · LW(p) · GW(p)

If all the AI cares about is the utility of each being times the number of beings, and is willing to change utility functions to get there, why should it bother with humans? Humans have all sorts of "extra" mental circuitry associated with being unhappy, which is just taking up space (or computer time in a simulator). Instead, it makes new beings, with easily satisfied utility functions and as little extra complexity as possible.

The end result is just as unFriendly, from a human perspective, as the naive "smile maximizer".

Replies from: RobertWiblin
comment by RobertWiblin · 2010-01-29T13:53:49.544Z · LW(p) · GW(p)

Who cares about humans exactly? I care about utility. If the AI thinks humans aren't an efficient way of generating utility, we should be eliminated.

Replies from: gregconen, tut, thomblake
comment by gregconen · 2010-01-29T14:36:12.280Z · LW(p) · GW(p)

That's a defensible position, if you care about the utility of beings that don't currently exist, to the extent that you trade the utility of currently existing beings to create new, happier ones.

The point is that the result of total utility maximization is unlikely to be something we'd recognize as people, even wireheads or Super Happy People.

comment by tut · 2010-01-29T14:24:58.202Z · LW(p) · GW(p)

Who cares about humans exactly? I care about utility.

That is nonsense. Utility is usefulness to people. If there are no humans there is no utility. An AI that could become convinced that "humans are not an efficient way to generate utility" would be what is referred to as a paperclipper.

This is why I don't like the utile jargon. It makes it sound as though utility were something that could be measured independently of human emotions - perhaps some kind of substance. But if statements about utility are not translated back into statements about human action or goals, then they are completely meaningless.

Replies from: ciphergoth, RobertWiblin
comment by Paul Crowley (ciphergoth) · 2010-01-29T14:38:50.865Z · LW(p) · GW(p)

Utility is usefulness to people. If there are no humans there is no utility.

Utility is goodness measured according to some standard of goodness; that standard doesn't have to reference human beings. In my most optimistic visions of a far future, human values outlive the human race.

Replies from: tut
comment by tut · 2010-01-29T14:51:58.518Z · LW(p) · GW(p)

Utility is goodness measured according to some standard of goodness; that standard doesn't have to reference human beings. In my most optimistic visions of a far future, human values outlive the human race.

Are we using the same definition of "human being"? We would not have to be biologically identical to what we are now in order to be people. But human values without humans also sounds meaningless to me. There are no value atoms or goodness atoms sitting around somewhere. To be good or valuable, something must be good or valuable by the standards of some person. So there would have to be somebody around to do the valuing. But the standards don't have to be explicit or objective.

comment by RobertWiblin · 2010-01-29T16:39:44.646Z · LW(p) · GW(p)

Utility as I care about it is probably the result of information processing. It's not clear why only human-type minds, let alone fleshy ones, should be able to process information in that way.

comment by thomblake · 2010-01-29T14:00:23.534Z · LW(p) · GW(p)

Starting with the assumption of utilitarianism, I believe you're correct. I think the folks working on this stuff assign a low probability to "kill all humans" being Friendly. But I'm pretty sure people aren't supposed to speculate about the output of CEV.

Replies from: RobertWiblin
comment by RobertWiblin · 2010-01-29T17:24:58.076Z · LW(p) · GW(p)

Probably the proportion of 'kill all humans' AIs that are friendly is low. But perhaps the proportion of FAIs that 'kill all humans' is large.

Replies from: gregconen, Vladimir_Nesov
comment by gregconen · 2010-01-30T03:17:42.527Z · LW(p) · GW(p)

That depends on your definition of Friendly, which in turn depends on your values.

comment by Vladimir_Nesov · 2010-01-29T23:53:04.044Z · LW(p) · GW(p)

But perhaps the proportion of FAIs that 'kill all humans' is large.

Maybe the probability you estimate for that happening is high, but "proportion" doesn't make sense, since FAI is defined as an agent acting on a specific preference, so all FAIs have to agree on what to do.

Replies from: RobertWiblin
comment by RobertWiblin · 2010-01-30T04:07:38.392Z · LW(p) · GW(p)

OK, I'm new to this.

comment by JamesAndrix · 2010-01-26T03:13:30.440Z · LW(p) · GW(p)

Even from a hedonistic perspective, 'Shut up and multiply' wouldn't necessarily equate to many beings experiencing pleasure.

It could come out to one superbeing experiencing maximal pleasure.

Actually, I think thinking this out (how big are entities, who are they?) will lead to good reasons why wireheading is not the answer.

Example: If I'm concerned about my personal pleasure, then maximizing the number of agents isn't a big issue. If my personal identity is less important than total pleasure maximizing, then I get killed and converted to orgasmium (be it one being or many). If my personal identity is more important... well, then we're not just multiplying hedons anymore.

comment by byrnema · 2010-01-26T02:55:06.133Z · LW(p) · GW(p)

This is a very relevant post for me because I've been asking these questions in one form or another for several months. A framework of objective value (FOV) seems to be precluded by physical materialism. However, without it, I cannot see any coherent difference between being happy (or satisfied) because of what is going on in a simulation and being happy because of what is going on in reality. Since value (that is, our personal, subjective value) isn't tied to any actual objective good in the universe, it doesn't matter to our subjective fulfillment whether the universe is modified to be 'better' (with respect to our POV), whether a simulation we're in is modified to be better, or whether our preferences are modified.

For example, I asked the question several weeks ago here.

When I began to complain (at length...) that without FOV I felt like I was trapped in a machine carrying out instructions to satisfy preferences I neither care about nor am able to abort, it was recommended that I replace my preference for objective value with a preference for subjective value.

If it is true that the only solution to my problem with the non-existence of an FOV is to change my preference -- and I've already understood that the logical consequence of this is that any kind of preference fulfillment is equivalent to wire-heading -- then I'm simply not going to be very sympathetic to objections to wire-heading based on having preferences for not being wire-headed. It's simply not coherent; there's no difference.

Replies from: Furcas, Jack
comment by Furcas · 2010-01-26T03:43:33.511Z · LW(p) · GW(p)

It's simply not coherent; there's no difference.

Yes there is.

The desire to be alive, to live in the real universe, and to continue having the same preferences/values is not at all like the desire to feel like our desires have been fulfilled. Our desires are patterns encoded within our brains that correspond to a (hopefully) possible state of reality. If we were to take the two desires/patterns described above and transform them into two strings of bits, the two strings would not be equal. There is an objective difference between them, just as there is an objective difference between Windows and Mac OS.

You seem to believe that because desires are something that can only exist inside a mind, therefore desires can only be about the state of one's mind. This is false; desires can be about all of reality, of which the state of one's mind is only a very small part.

Replies from: byrnema
comment by byrnema · 2010-01-26T05:09:21.952Z · LW(p) · GW(p)

You seem to believe that because desires are something that can only exist inside a mind, therefore desires can only be about the state of one's mind.

I don't believe this, but I was concerned I would be interpreted this way.

I can have a subjective desire that a cup be objectively filled. I fill it with water, and my desire is objectively satisfied.

The problem I'm describing is that filling the cup is a terminal value with no objective value. I'm not going to drink it, I'm not going to admire how beautiful it is, I just want it filled because that is my desire.

I think that's useless. Since all the "goodness" is in my subjective preference, I might as well desire that an imaginary cup be filled, or write a story in which an imaginary cup is filled. (You may have trouble relating to filling a cup for no reason being a terminal value, but it is a good example because terminal values are equally objectively useless.)

But let's consider the example of saving a person from drowning. I understand that the typical preference is to actually save a person from drowning. However, my point is that if I am forced to acknowledge that there is no objective value in saving the person from drowning, then I must admit that my preference to save a person from drowning-actually is no better than a preference to save a person from drowning-virtually. It happens that I have the former preference, but I'm afraid it is incoherent.

Replies from: Alicorn, thomblake, Blueberry, CronoDAS
comment by Alicorn · 2010-01-26T05:13:40.275Z · LW(p) · GW(p)

The preference to really save a drowning person rather than virtually is better for the person who is drowning.

Of course, best would be for no one to need to be saved from drowning; then you could indulge an interest in virtually saving drowning people for fun as much as you liked without leaving anyone to really drown.

Replies from: denisbider
comment by denisbider · 2010-01-26T14:26:39.110Z · LW(p) · GW(p)

Actually, most games involve virtually killing, rather than virtually saving. I think that says something...

Replies from: Ghatanathoah
comment by Ghatanathoah · 2012-06-14T04:55:13.989Z · LW(p) · GW(p)

In most of those games the people you are killing are endangering someone. There are some games where you play a bad guy, but in the majority you're some sort of protector.

comment by thomblake · 2010-01-26T19:20:05.126Z · LW(p) · GW(p)

Caring about what's right might be as arbitrary (in some objective sense) as caring about what's prime, but we do actually happen to care about what's right.

comment by Blueberry · 2010-01-26T19:07:42.720Z · LW(p) · GW(p)

I must admit that my preference to save a person from drowning-actually is no better than a preference to save a person from drowning-virtually. It happens that I have the former preference, but I'm afraid it is incoherent.

It's better, because it's what your preference actually is. There's nothing incoherent about having the preferences you have. In the end, we value some things just because we value them. An alien with different morality and different preferences might see the things we value as completely random. But they matter to us, because they matter to us.

comment by CronoDAS · 2010-01-26T10:34:25.306Z · LW(p) · GW(p)

There is one way that I know of to handle this; I don't know if you'll find it satisfactory or not, but it's the best I've found so far. You can go slightly meta and evaluate desires as means instead of as ends, and ask which desires are most useful to have.

Of course, this raises the question "Useful for what?". Well, one thing desires can be useful for is fulfilling other desires. If I desire that people don't drown, which causes me to act on that desire by saving people from drowning so they can go on to fulfill whatever desires they happen to have, then my desire that people don't drown is a useful means for fulfilling other desires. Wanting to stop fake drownings isn't as useful a desire as wanting to stop actual drownings. And there does seem to be a more-or-less natural reference point against which to evaluate a set of desires: the set of all other desires that actually exist in the real world.

As luck would have it, this method of evaluating desires tends to work tolerably well. For example, the desire held by Clippy, the paperclip maximizer, to maximize the number of paperclips in the universe, doesn't hold up very well under this standard; relatively few desires that actually exist get fulfilled by maximizing paperclips. A desire to make only the number of paperclips that other people want is a much better desire.

(I hope that made sense.)

Replies from: byrnema
comment by byrnema · 2010-01-28T17:52:44.636Z · LW(p) · GW(p)

It does make sense. However, what would you make of the objection that it is semi-realist? A first-order realist position would claim that what is desired has objective value, while this represents the more subtle belief that the fulfillment of desire has objective value. I do agree -- it is very close to my own original realist position about value. I reasoned that there would be objective (real rather than illusory) value in the fulfillment of the desires of any sentient/valuing being, as some kind of property of their valuing.

comment by Jack · 2010-01-26T03:50:41.939Z · LW(p) · GW(p)

Maybe just have a rule that says:

  1. Fulfill preferences when possible.
  2. Change preferences when they are impossible to fulfill.
Replies from: CronoDAS
comment by CronoDAS · 2010-01-26T10:40:15.458Z · LW(p) · GW(p)

"The strength to change what I can, the ability to accept what I can't, and the wisdom to tell the difference?"

Personally, I prefer the Calvin and Hobbes version: the strength to change what I can, the inability to accept what I can't, and the incapacity to tell the difference. ;)

comment by Ghatanathoah · 2012-06-14T06:33:02.042Z · LW(p) · GW(p)

This is one of the most horrifying things I have ever read. Most of the commenters have done a good job of poking holes in it, but I thought I'd add my take on a few things.

This reluctance is generally not substantiated or clarified with anything other than "clearly, this isn't what we want".

Some good and detailed explanations are here, here, here, here, and here.

If you are a utilitarian, and you believe in shut-up-and-multiply, then the correct thing for the FAI to do is to use up all available resources so as to maximize the number of beings,

No, the correct thing for an FAI to do is to use some resources to increase the number of beings and some to increase the utility of existing beings. You are assuming that creating new beings does not have diminishing returns. I find this highly unlikely. Most activities generate less value the more we do them. I don't see why this would change for creating new beings.

Having new creatures that enjoy life is certainly a good thing. But so is enhancing the life satisfaction of existing creatures. I don't think one of these things is categorically more valuable than the other. I think they are both incrementally valuable.

In other words, as I've said before, the question is not, "Should we maximize total utility or average utility?" It's "How many resources should be devoted to increasing total utility, and how many to increasing average utility?"
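One way to see the diminishing-returns point numerically (the logarithmic value curves below are my own assumption, chosen only because they flatten out, not anything claimed in the thread): with a fixed resource budget and diminishing returns on both uses, the best allocation is a mix rather than spending everything on either one.

```python
import math

R = 100.0  # total resources available, in arbitrary units

def value(x):
    """Total value when x units go to creating new beings and R - x units go to
    raising the utility of existing beings, with diminishing (log) returns on both."""
    return math.log1p(x) + math.log1p(R - x)

# Try budget splits in steps of 0.5 units and keep the best one.
best_split = max((i * 0.5 for i in range(int(R / 0.5) + 1)), key=value)
print(best_split)  # 50.0 -- with diminishing returns on both sides, the optimum is a mix, not a corner
```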

and then induce a state of permanent and ultimate enjoyment in every one of them.

Wouldn't it be even more efficient to just create creatures that feel nothing except a vague preference to keep on existing, which is always satisfied?

Or maybe we shouldn't try to minmax morality. Maybe we should understand that phrases like "maximize pleasure" and "maximize preference satisfaction" are just rules of thumb that reflect a deeper and more complex set of moral values.

The most energy efficient way to create any kind of enjoyment, however, is to stimulate the brain-equivalent directly.

Again, you're assuming all enjoyments are equivalent and don't generate diminishing returns. Pleasure is valuable, but it has diminishing returns; you get more overall value by increasing lots of different kinds of positive things, not just pleasure.

In fact, I can hardly tell this apart from the concept of a Christian Heaven, which appears to be a place where Christians very much want to get.

If you're right, this is just proof that Christians are really bad at constructing Heaven. But I don't think you are; most Christians I know think heaven is far more complex than just sitting around feeling good.

If you don't want to be "reduced" to an eternal state of bliss, that's tough luck. The alternative would be for the FAI to create an environment for you to play in, consuming precious resources that could sustain more creatures in a permanently blissful state.

The alternative you suggest is a very good one. Creating all those blissful creatures would be a waste of valuable resources that could be used to better satisfy the preferences of already existing creatures. Again, creating new creatures is often a good thing, but it has diminishing returns.

Now for some rebuttals to your statements in the comments section:

If we all think it's so great to be autonomous, to feel like we're doing all of our own work, all of our own thinking, all of our own exploration - then why does anyone want to build an AI in the first place?

Again, complex values and diminishing returns. Autonomy is good, but if an FAI can help us obtain some other values it might be good to cede a little of our autonomy to it.

I find that comparable to a depressed person who doesn't want to cure his depression, because it would "change who he is". Well, yeah; but for the better.

It's immoral and illegal to force people to medicate for a reason. That being said, depression isn't a disease that changes what your desires are; it's a disease that makes it harder to achieve your desires. If you cured it, you'd be better at achieving your desires, which would be a good thing. If a cure radically changed what your desires were, it would be a bad thing.

That being said, I wouldn't necessarily object to rewiring humans so that we feel pleasure more easily, as long as it fulfilled two conditions:

  1. That pleasure must have a referent. You have to do something to trigger the reward center in order to feel it; stimulating the brain directly would be bad.
  2. The increase must be proportional. I should still enjoy a good movie more than a bad movie, even if I enjoy them both a lot more.

Wei explains that most of the readership are preference utilitarians, who believe in satisfying people's preferences, not maximizing pleasure.

That's fine enough, but if you think that we should take into account the preferences of creatures that could exist, then I find it hard to imagine that a creature would prefer not to exist, than to exist in a state where it permanently experiences amazing pleasure.

I don't think that it's ethical, or even possible, to take into account the hypothetical preferences of nonexistent creatures. That's not even a logically coherent concept. If a creature doesn't exist, then it doesn't have preferences. I don't think it's logically possible to prefer to exist if you don't already. Besides, as I said before, it would be even more efficient to create a creature that can't feel pleasure and that just has a vague preference to keep on existing, which would always be satisfied as long as it existed. But I doubt you would want to do that.

Besides, for every hypothetical creature that wants to exist and feel pleasure, there's another hypothetical creature that wants that creature to not exist, or to feel pain. Why are we ignoring those creatures' preferences?

The only way preference utilitarianism can avoid the global maximum of Heaven is to ignore the preferences of potential creatures. But that is selfish.

No, it isn't. Selfishness is when you severely thwart someone's preferences to mildly enhance your own. It's not selfish to thwart nonexistent preferences, because they don't exist. That's like saying it's gluttonous to eat nonexistent food, or vain to wear nonexistent costume jewelry.

The reason some people find the idea that you have to respect the preferences of all potential creatures plausible is that they believe (correctly) that they have an obligation to make sure people who exist in the future will have satisfied preferences. But that isn't because nonexistent people's preferences have weight. It's because it's good for whoever exists at the moment to have highly satisfied preferences, so as soon as a creature comes into existence you have a duty to make sure it is satisfied. And the reason those people's preferences are highly satisfied should be that they are strong, powerful, and have lots of friends, not because they were genetically modified to have really, really unambitious preferences.

If you don't want Heaven, then you don't want a universally friendly AI. What you really want is an AI that is friendly just to you.

I want a universally friendly AI, but since nonexistent creatures don't exist in this universe, not creating them isn't universally unfriendly.

Also, I find it highly suspect, to say the least, that you start by arguing for "Heaven" because you think all human desires can be reduced to the desire to feel certain emotions, but then, when the commenters have poked holes in that idea, you switch to a completely different justification (the logically incoherent idea that we have to respect the nonexistent preferences of nonexistent people) to defend it.

The infinite universe argument can be used as an excuse to do pretty much anything. Why not just torture and kill everyone and everything in our Hubble volume? ... If there are infinite copies of everyone and everything, then there's no harm done.

I find it helpful to think of having a copy as a form of life extension, except done serially instead of linearly. An exact duplicate of you who lives for 70 years is similar to living an extra 70 years. So torturing everyone because they have duplicates would be equivalent to torturing someone for half their lifespan and then saying that it's okay because they still have half a lifespan left over.

Whatever happens outside of our Hubble volume has no consequence for us, and neither adds to nor alleviates our responsibility.

Again, if these creatures exist somewhere else, then if you create them you aren't really creating them, you're extending their lifespan. Now, having a long lifespan is one way of having a high quality of life, but it isn't the only way, and it does have diminishing returns, especially when it's serial instead of linear, and you don't share your copy's memories. So it seems logical that, in addition to focusing on making people live longer, we should increase their quality of life in other ways, such as devoting resources to making them richer and more satisfied.

comment by denisbider · 2010-01-26T13:35:15.638Z · LW(p) · GW(p)

If we take for granted that an AI that is friendly to all potential creatures is out of the question - that the only type of FAI we really want is one that's just friendly to us - then the following is the next issue I see.

If we all think it's so great to be autonomous, to feel like we're doing all of our own work, all of our own thinking, all of our own exploration - then why does anyone want to build an AI in the first place?

Isn't the world as it is, lacking an all-powerful AI, perfectly suited to our desires for control and autonomy?

Suppose an AI-friendly-to-you exists, and you know that you can always ask it to expand your mind, and download into you everything it knows about the issues you care for, short-circuiting thousands of years of work that it would otherwise take for you to make the same discoveries.

Doesn't it seem pointless to be doing all that work, if you know that FAI can already provide you with all the answers?

Furthermore, again supposing an AI-friendly-to-you exists, you know that you can always ask it to wire-head you. In any given year there is a negligible but non-zero probability that you'll succumb to the temptation. Once you do succumb, it will feel so great that you will never ever want to do the boring "thinking and doing stuff" again. You will be constantly blissful, and anything you want to know about the universe will be immediately available to you through a direct interface with the FAI.

It doesn't take much to see that, whether it takes a thousand years or a million years before you succumb, you will eventually choose to be wire-headed; you will choose this much sooner than the universe ends; and the vast majority of your total existence will be lived that way.
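A back-of-the-envelope version of that argument, assuming (as a pure simplification on my part) a constant, independent chance p of giving in during any given year:

```python
p = 1e-6     # hypothetical one-in-a-million chance per year of asking to be wireheaded
years = 1e9  # a billion years with the option available

print((1 - p) ** years)  # probability of never succumbing over that span: effectively zero
print(1 / p)             # expected wait before succumbing: ~a million years, tiny on that timescale
```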

Replies from: AdeleneDawner
comment by AdeleneDawner · 2010-01-26T13:44:32.218Z · LW(p) · GW(p)

If the FAI values the fact that we value independence and autonomy - which I think it would have to, to be considered properly Friendly - and if wireheading is a threat to our ability to maintain those values, then it doesn't make sense that the FAI would make wireheading available for the asking. In that case, it makes much more sense that the FAI would actively protect us from wireheading as it would from any other existential threat.

(Also, just because it would protect us from existential threats, that wouldn't imply that it would protect us from non-existential ones. Part of the idea is that it's very smart: It can figure out the balance of protecting and not-protecting that best preserves its values, and by extension ours.)

Replies from: denisbider
comment by denisbider · 2010-01-26T14:04:35.578Z · LW(p) · GW(p)

Just like the government tries to "protect" you from marijuana, alcohol, cigarettes and fatty food, you mean?

All mistakes are existential mistakes, in that they affect your existence. All of them deprive you of an experience, or substitute a worse experience for a better one, or shorten the time you have available for experiencing. If smoking takes away a decade of your life, or if obesity replaces what could be a great lifestyle with one that's mediocre, those seem to me as much existential threats as anything.

And I don't even think that wireheading is a threat. All it is is the best possible experience, one you can safely continue to have until the end of time. The only reason you don't want it is that you know that once you experience it, you will want to keep experiencing it.

I find that comparable to a depressed person who doesn't want to cure his depression, because it would "change who he is". Well, yeah; but for the better.

Replies from: AdeleneDawner
comment by AdeleneDawner · 2010-01-26T14:09:39.864Z · LW(p) · GW(p)

Those aren't existential threats, especially if we had enough medical technology to easily deal with any negative side effects of them.

ETA: When I posted this, the above comment was only "Just like the government tries to "protect" you from marijuana, alcohol, cigarettes and fatty food, you mean?".

Replies from: denisbider
comment by denisbider · 2010-01-26T14:17:53.296Z · LW(p) · GW(p)

The analogy is between governmental paternalism today, with the technology today, and FAI paternalism tomorrow, with the technology tomorrow.

The analogy is not between FAI tomorrow and governmental paternalism today equipped with the technology of tomorrow - that would be a conflicted state of affairs.

Today, smoking is an existential threat, and governmental paternalism is... paternalism.

Replies from: AdeleneDawner
comment by AdeleneDawner · 2010-01-26T14:33:51.886Z · LW(p) · GW(p)

The things you mentioned are not existential threats; they do not threaten the existence of the human species. They may reduce the lifespan, or total experienced hedonic pleasure, of people who indulge in them, but as I pointed out in my parenthetical, that's a separate issue.

FAI paternalism is an almost entirely different issue than governmental paternalism, because an FAI and a government are two very different kinds of things. A properly friendly AI can be trusted not to be corrupt, for example, or to be swayed by special interest groups, or to be confused by bad science or religious arguments.

Replies from: denisbider
comment by denisbider · 2010-01-26T14:56:46.497Z · LW(p) · GW(p)

If FAI paternalism is okay, then the FAI can decide that you would be best-off being wire-headed, apply the modification, and there you are, happy for all eternity.

You'll never regret it.

Replies from: Alicorn, AdeleneDawner
comment by Alicorn · 2010-01-26T15:00:02.492Z · LW(p) · GW(p)

Merely having the inability to regret an occurrence doesn't make the occurrence coincide with one's preferences. I couldn't regret an unexpected, instantaneous death from which I was never revived, either; I emphatically don't prefer one.

Replies from: denisbider
comment by denisbider · 2010-01-26T15:06:48.831Z · LW(p) · GW(p)

But wire-heading is not death. It is the opposite - the most fulfilling experience possible, to which everything else pales in comparison.

It seems you think paternalism is okay if it is pure in intent and flawless in execution.

It has been shown that vulnerability to smoking addiction is due to a certain gene. Suppose we could create a virus that would silently spread through the human population and fix this gene in everyone, willing or not. Suppose our intent is pure, and we know that this virus would operate flawlessly, only affecting this gene and having no other effects.

Would you be in favor of releasing this virus?

Replies from: RobinZ, Alicorn
comment by RobinZ · 2010-01-26T15:46:21.666Z · LW(p) · GW(p)

But wire-heading is not death. It is the opposite - the most fulfilling experience possible, to which everything else pales in comparison.

..."fulfilling"? Wire-heading only fulfills "make me happy" - it doesn't fulfill any other goal that a person may have.

"Fulfilling" - in the sense of "To accomplish or carry into effect, as an intention, promise, or prophecy, a desire, prayer, or requirement, etc.; to complete by performance; to answer the requisitions of; to bring to pass, as a purpose or design; to effectuate" (Webster 1913) - is precisely what wire-heading cannot do.

Replies from: denisbider
comment by denisbider · 2010-01-26T15:51:04.798Z · LW(p) · GW(p)

Your other goals are immaterial and pointless to the outside world.

Nevertheless, suppose the FAI respects such a desire. This is questionable, because in the FAI's mind, this is tantamount to letting a depressed patient stay depressed, simply because a neurotransmitter imbalance causes them to want to stay depressed. But suppose it respects this tendency.

In that case, the cheapest way to satisfy your desire, in terms of consumption of resources, is to create a simulation where you feel like you are thinking, learning and exploring, though in reality your brain is in a vat.

You'd probably be better off just being happy and sharing in the FAI's infinite wisdom.

Replies from: RobinZ
comment by RobinZ · 2010-01-26T15:57:44.261Z · LW(p) · GW(p)

Would you do me a favor and refer to this hypothesized agent as a DAI (Denis Artificial Intelligence)? Such an entity is nothing I would call Friendly, and, given the widespread disagreement on what is Friendly, I believe any rhetorical candidates should be referred to by other names. In the meantime:

Your other goals are immaterial and pointless to the outside world.

I reject this point. Let me give a concrete example.

Recently I have been playing a lot of Forza Motorsport 2 on the XBox 360. I have recently made some gaming buddies who are more experienced in the game than I am - both better at driving in the game and better at tuning cars in the game. (Like Magic: the Gathering, Forza 2 is explicitly played on both the preparation and performance levels, although tilted more towards the latter.) I admire the skills they have developed in creating and controlling their vehicles and, wishing to admire myself in a similar fashion, wish to develop my own skills to a similar degree.

What is the DAI response to this?

Replies from: denisbider, denisbider
comment by denisbider · 2010-01-26T16:09:07.057Z · LW(p) · GW(p)

What is the DAI response to this?

An FAI-enhanced World of Warcraft?

You can still interact with others even though you're in a vat.

Though as I commented elsewhere, chances are that FAI could fabricate more engaging companions for you than mere human beings.

And chances are that all this is inferior to being the ultimate wirehead.

Replies from: RobinZ
comment by RobinZ · 2010-01-26T17:41:33.645Z · LW(p) · GW(p)

What is the DAI response to this?

An FAI-enhanced World of Warcraft?

That could be fairly awesome.

You can still interact with others even though you're in a vat.

If it comes to that, I could see making the compromise.

Though as I commented elsewhere, chances are that FAI could fabricate more engaging companions for you than mere human beings.

And chances are that all this is inferior to being the ultimate wirehead.

This relates to subjects discussed in the other thread - I'll let that conversation stand in for my reply to it.

comment by denisbider · 2010-01-26T16:05:21.232Z · LW(p) · GW(p)

Well...

Suppose you want to explore and learn and build ad infinitum. Progress in your activities requires you to control increasing amounts of matter and consume increasing amounts of energy, until the point where you conflict with others who also want to build and explore. When that point is reached, the only way the FAI can make you all happy is to intervene while you all sleep, put you in separate vats, and from then on let each of you explore an instance of the universe that it simulates for you.

Should it let you wage Star Wars on each other instead? And how would that be different from no AI to begin with?

Replies from: RobinZ
comment by RobinZ · 2010-01-26T16:24:42.426Z · LW(p) · GW(p)

You seem to be engaging in all-or-nothing thinking. Because I want more X does not mean that I want to maximize X to the exclusion of all other possibilities. I want to explore and learn and build, but I also want to act fairly toward my fellow sapients/sentients. And I want to be happy, and I want my happiness to stem causally from exploring, learning, building, and fairness. And I want a thousand other things I'm not aware of.

An AI which examines my field of desires and maximizes one to the exclusion of all others is actively inimical to my current desires, and to all extrapolations of my current desires I can see.

Replies from: denisbider
comment by denisbider · 2010-01-26T16:30:01.044Z · LW(p) · GW(p)

But everything you do is temporary. All the results you get from it are temporary.

If you seek quality of experience, then the AI can wirehead you and give you that, with minimal consumption of resources. Even if you do not want a constant ultimate experience, all the thousands of your needs are more efficiently fulfilled in a simulation than by letting you directly manipulate matter. Allowing you to waste real resources is inimical both to the length of your life and to that of everyone else.

If you seek personal growth, then the AI already is everything you can aspire to be. Your best bet at personal growth is interfacing or merging with its consciousness. And everyone can do that, as opposed to isolated growth of individual beings, which would consume resources that need to be available for others and for the AI.

Replies from: RobinZ
comment by RobinZ · 2010-01-26T17:12:40.450Z · LW(p) · GW(p)

If you seek personal growth, then the AI already is everything you can aspire to be.

Why would I build an AI which would steal everything I want to do and leave me with nothing worth doing? That doesn't sound like the kind of future I want to build.

Edit:

But everything you do is temporary. All the results you get from it are temporary.

That just adds a constraint to what I may accomplish - it doesn't change my preferences.

Replies from: denisbider
comment by denisbider · 2010-01-26T17:22:38.060Z · LW(p) · GW(p)

Why would I build an AI which would steal everything I want to do and leave me with nothing worth doing? That doesn't sound like the kind of future I want to build.

Because only one creature can be maximized, and it's better that it's an AI than a person.

Even if we don't necessarily want the AI to maximize itself immediately, it will always need to be more powerful than any possible threat, and therefore more powerful than any other creature.

If you want the ultimate protector, it has to be the ultimate thing.

Replies from: RobinZ
comment by RobinZ · 2010-01-26T17:32:53.991Z · LW(p) · GW(p)

I don't want it maximized, I want it satisficed - and I, at least, am willing to exchange a small existential risk for a better world. "They who can give up essential liberty to obtain a little temporary safety" &c.

If the AI can search the universe and determine that it is adequately secure from existential threats, I don't want it expanding very quickly beyond that. Leave some room for us!

Replies from: denisbider
comment by denisbider · 2010-01-26T17:39:04.296Z · LW(p) · GW(p)

But the AI has to plan for a maximized outcome until the end of the universe. In order to maximize the benefit from energy before the heat death, resource efficiency right now is as important as it will be when resources are scarcest.

This is unless the AI discovers that the heat death can be overcome, in which case, great! But what we know so far indicates that the universe will eventually die, even if many billions of years in the future. So conservative resource management is important from day 1.

Replies from: RobinZ
comment by RobinZ · 2010-01-26T17:48:34.541Z · LW(p) · GW(p)

There are things I could say in reply, but I suspect we are simply talking past each other. I may reply later if I have some new insight into the nature of our disagreement.

Replies from: denisbider
comment by denisbider · 2010-01-26T17:51:38.227Z · LW(p) · GW(p)

The way I understand our disagreement is, you see FAI as a limited-functionality add-on that makes a few aspects of our lives easier for us, while I see it as an unstoppable force, with great implications for everything in its causal future, which just can't not revolutionize everything, including how we feel, how we think, what we do. I believe I'm following the chain of reasoning to the end, whereas you appear to think we can stop after the first couple steps.

Replies from: Vladimir_Nesov, RobinZ, AdeleneDawner
comment by Vladimir_Nesov · 2010-01-27T16:17:28.151Z · LW(p) · GW(p)

You also keep asserting that you know in which particular way an FAI is going to change things. Instead of repeating the same statements, you should recognise the disagreement and address it directly, rather than continuing to profess the original assertions.

comment by RobinZ · 2010-01-26T20:12:33.223Z · LW(p) · GW(p)

I don't think that's the source of our disagreement - as I mentioned in another thread, if prudence demanded that the population (or some large fraction thereof) be uploaded in software to free up the material substance for other purposes, I would not object. I could even accept major changes to social norms (such as legalization of nonconsensual sex, to use Eliezer Yudkowsky's example). Our confirmed point of disagreement is not your thesis that "a human population which acquired an FAI would become immensely different from today's", it is your thesis that "a human population which acquired an FAI would become wireheads". Super Happy People, maybe - not wireheads.

comment by AdeleneDawner · 2010-01-26T18:10:03.789Z · LW(p) · GW(p)

One quality that's relevant to Friendly AI is that it does stop, when appropriate. It's entirely plausible (according to Eliezer, last time I checked) that an FAI would never do anything that wasn't a response to an existential threat (i.e. something that could wipe out or severely alter humanity), if that was the course of action most in keeping with our CEV.

comment by Alicorn · 2010-01-26T15:10:45.336Z · LW(p) · GW(p)

It seems you think paternalism is okay if it is pure in intent and flawless in execution.

Whoa whoa whoa wait what? No. Not under a blanket description like that, at any rate. If you want to wirehead, and that's your considered and stable desire, I say go for it. Have a blast. Just don't drag us into it.

Would you be in favor of releasing this virus?

No. I'd be in favor of making it available in a controlled non-contagious form to individuals who were interested, though.

Replies from: denisbider
comment by denisbider · 2010-01-26T15:14:48.590Z · LW(p) · GW(p)

Apologies, Alicorn - I was confusing you with Adelene. I was paying all attention to the content and not enough to who is the author.

Only the first paragraph (but wire-heading is not death) is directed at your comment. The rest is actually directed at Adelene.

Replies from: Alicorn, AdeleneDawner
comment by Alicorn · 2010-01-26T15:19:27.070Z · LW(p) · GW(p)

My point was that you used "you won't regret it" as a point in favor of wireheading, whereas it does not serve as a point in favor of death.

Replies from: denisbider
comment by denisbider · 2010-01-26T15:24:03.694Z · LW(p) · GW(p)

Can you check the thread of this comment:

http://lesswrong.com/lw/1o9/welcome_to_heaven/1iia?context=3#comments

and let me know what your response to that thread is?

Replies from: Alicorn
comment by Alicorn · 2010-01-26T15:32:28.759Z · LW(p) · GW(p)

I would save the drunk friend (unless I had some kind of special knowledge, such as that the friend got drunk in order to enable him or herself to go through with a plan to indulge a considered and stable sober desire for death). In the case of the depressed friend, I'd want to refer to my best available knowledge of what that friend would have said about the situation prior to acquiring the neurotransmitter imbalance, and act accordingly.

comment by AdeleneDawner · 2010-01-26T15:28:28.997Z · LW(p) · GW(p)

It seems you think paternalism is okay if it is pure in intent and flawless in execution.

You're twisting my words. I said that FAI paternalism would be different - which it would be, qualitatively and quantitatively. "Pure in intent and flawless in execution" are very fuzzy words, prone to being interpreted differently by different people, and only a very specific set of interpretations of those words would describe FAI.

It has been shown that vulnerability to smoking addiction is due to a certain gene. Suppose we could create a virus that would silently spread through the human population and fix this gene in everyone, willing or not. Suppose our intent is pure, and we know that this virus would operate flawlessly, only affecting this gene and having no other effects.

Would you be in favor of releasing this virus?

I'm with Alicorn on this one: if it can be made into a contagious virus, it can almost certainly be made into a non-contagious one, and that would be the ethical thing to do. However, if it can't be made into a non-contagious virus, I would personally not release it, and I'm going to refrain from predicting what an FAI would do in that case; part of the point of building an FAI is to be able to hand those kinds of decisions to a mind that's able to make unbiased (or much less biased, if you prefer; there's a lot of room for improvement in any case) decisions that affect groups of people too large for humans to effectively model.

Replies from: denisbider
comment by denisbider · 2010-01-26T15:45:57.213Z · LW(p) · GW(p)

I understand. That makes some sense. Though smokers' judgement is impaired by their addiction, one can imagine that at least they will have periods of sanity when they can choose to fix the addiction gene themselves.

We do appear to differ in the case when an infectious virus is the only option to help smokers fix that gene. I would release the virus in that case. I have no qualms taking that decision and absorbing the responsibility.

Replies from: Blueberry
comment by Blueberry · 2010-01-26T19:24:42.422Z · LW(p) · GW(p)

This seems to contradict your earlier claims about wireheading. Say that some smokers get a lot of pleasure from smoking, don't want to stop, and in fact would experience more pleasure in their lives if they kept the addiction. You'd still release the virus?

comment by AdeleneDawner · 2010-01-26T15:01:16.740Z · LW(p) · GW(p)

I maintain that an AI that would do that isn't Friendly.

I believe that my definition of Friendliness is in keeping with the standard definition that's in use here.

How are you defining Friendliness, that you would consider an AI that would wirehead someone against their will to be Friendly?

Replies from: denisbider
comment by denisbider · 2010-01-26T15:09:27.300Z · LW(p) · GW(p)

Is it friendly to rescue a drunk friend who is about to commit suicide, knowing that they'll come to their senses? Or is it friendly to let them die, because their current preference is to die?

Replies from: AdeleneDawner, aausch
comment by AdeleneDawner · 2010-01-26T15:13:01.480Z · LW(p) · GW(p)

That depends on whether they decided to commit suicide while in a normal-for-them frame of mind, not on their current preference. The first part of the question implies that they didn't, in which case the correct response is to rescue them, wait for them to get sober, and talk it out - and then they can commit suicide, if they still feel the need.

Replies from: denisbider
comment by denisbider · 2010-01-26T15:20:02.328Z · LW(p) · GW(p)

Very well, then. Next example. Your friend is depressed, and they want to commit suicide. You know that their real problem is a neurotransmitter imbalance that can be easily fixed. However, that same neurotransmitter imbalance is depriving them of any will to fix it, and in fact they refuse to cooperate. You know that if you fix their imbalance regardless, they will be happy, they will live a fulfilled life, and they will be grateful to you for it. Is it friendly to intervene and fix the imbalance, or is it friendly to let them die, seeing as depression and thoughts of suicide are a normal-for-them frame of mind?

Replies from: Vladimir_Nesov, Morendil, AdeleneDawner
comment by Vladimir_Nesov · 2010-01-27T15:41:57.002Z · LW(p) · GW(p)

Your friend is depressed, and they want to commit suicide.

It doesn't follow that they prefer to commit suicide.

Replies from: AdeleneDawner
comment by AdeleneDawner · 2010-01-27T16:20:54.211Z · LW(p) · GW(p)

This is an excellent answer, and squares well with mine: If they merely want to commit suicide, they may not have considered all the alternatives. If they have considered all the achievable alternatives, and their preference is to commit suicide, I'd support them doing so.

comment by Morendil · 2010-01-26T16:09:05.013Z · LW(p) · GW(p)

If this is leading in a direction where "wireheading" is identified with "being happy and living a fulfilled life", then we might as well head it off at the pass.

Being happy - being in a pleasurable state - isn't enough; we would insist that our future lives should also be meaningful (which I would argue is part of "fulfilled").

This isn't merely a subjective attribute, as "happy" is - a state that could be satisfied by permanently blissing out. It has objective consequences; you can tell "meaningful" from the outside. Meaningful arrangements of matter are improbable but lawful, structured but hard to predict, and so on.

"Being totally happy all the time" is a state of mind, the full description of which would compress very well, just as the description of zillions of molecules of gas can be compressed to a handful of parameters. "Meaningful" corresponds to states of mind with more structure and order.

If we are to be somehow "fixed", we would want the "fix" to preserve or restore the property we have now: that of being the kind of creature that can (and in fact does) choose for itself.

Replies from: denisbider
comment by denisbider · 2010-01-26T16:21:25.623Z · LW(p) · GW(p)

The preference for "objective meaningfulness" - for states which do not compress very well - seems to me a fairly arbitrary (meaningless) preference. I don't think it's much different from paperclip maximization.

Who is to observe the "meaningful" states, if everyone is in a state where they are happy?

I am not even convinced that "happy and fulfilled" compresses easily. But if it did, what would be the issue? Everyone would be so happy as not to mind the absence of complicated states.

I would go so far as to say that seeking complicated states is something we do right now because it is the most engaging substitute we have for being happy.

And not everyone does this. Most people prefer to empty their minds instead. It may even be that seeking complexity is a type of neurotic tendency.

Should the FAI be designed with a neurotic tendency?

I'm not so sure.

comment by AdeleneDawner · 2010-01-26T15:40:33.438Z · LW(p) · GW(p)

I can give you my general class of answers to this kind of problem: I will always attempt to the best of my ability to talk someone I care about out of doing something that will cause them to irretrievably cease to function as a person - a category which includes both suicide and wireheading. However, if in spite of my best persuasive efforts - which are likely to have a significant effect, if I'm actually friends with the person - they still want to go through with such a thing, I will support them in doing so.

The specific implementation of the first part in this case would be to try to talk them into trying the meds, with the (accurate) promise that I would be willing to help them suicide if they still wanted to do that after a certain number of months (dependent on how long the meds take to work).

Replies from: Kevin
comment by Kevin · 2010-01-27T02:06:01.778Z · LW(p) · GW(p)

There are so many different antidepressants, and the method for choosing which ones are optimal basically comes down to the intuition of the psychiatrist. It can take years to iterate through all the possible combinations of psychiatric medication if they keep failing to fix the neurotransmitter imbalance. I think anything short of 2 years is not long enough to conclude that a person's brain is irreparably broken. It's also a field with a good chance of rapid development, such that a brain that seems irreparably broken today will certainly not always be unfixable.

--

I explored a business in psychiatric genetic testing and identified about 20 different mutations that could help psychiatrists make treatment decisions, but it was infeasible to bring to market right now without millions of dollars for research, and the business case is not strong enough for me to raise that kind of money. It'll hit the market within 10 years - sooner if the business case for me doing it becomes stronger, or if I have the spare $20k to go out and get the relevant patent and see what doors that opens.

I expect the first consequence of widespread genetic testing for mental health will be that NRIs become much more widely prescribed as the first-line treatment for depression.

comment by aausch · 2010-01-27T16:21:27.629Z · LW(p) · GW(p)

Probably the "friendly" action would be to create an un-drunk copy of them, and ask the copy to decide.

Replies from: RobinZ
comment by RobinZ · 2010-01-27T17:46:48.288Z · LW(p) · GW(p)

And what do you do with the copy? Kill it?

Replies from: ciphergoth, aausch
comment by Paul Crowley (ciphergoth) · 2010-01-27T17:58:54.778Z · LW(p) · GW(p)

I'm OK with the deletion of very-short-lived copies of myself if there are good reasons to do it. For example, supposing after cryonic suspension I'm revived with scanning and WBE. Unfortunately, unbeknownst to those reviving me, I have a phobia of the Michelin Man and the picture of him on the wall means I deal with the shock of my revival very badly. I'd want the revival team to just shut down, change the picture on the wall and try again.

I can also of course imagine lots of circumstances where deletion of copies would be much less morally justifiable.

Replies from: Blueberry, RobinZ
comment by Blueberry · 2010-01-27T18:10:05.932Z · LW(p) · GW(p)

I'm OK with the deletion of very-short-lived copies of myself if there are good reasons to do it.

There's a very nice thought experiment that helps demonstrate this (I think it's from Nozick). Imagine a sleeping pill that makes you fall asleep in thirty minutes, but you won't remember the last fifteen minutes of being awake. From the point of view of your future self, the fifteen minutes you don't remember is exactly like a short-lived copy that got deleted after fifteen minutes. It's unlikely that anyone would claim taking the pill is unethical, or that you're killing a version of yourself by doing so.

Replies from: MrHen, Psy-Kosh
comment by MrHen · 2010-01-27T18:20:58.925Z · LW(p) · GW(p)

It's unlikely that anyone would claim taking the pill is unethical, or that you're killing a version of yourself by doing so.

Armchair reasoning: I can imagine the mental clone and the original existing at the same time, side by side. I cannot imagine myself with the memory loss and myself without the memory loss existing at the same time. Also, whatever actions my past self takes actually affect my future self, regardless of what I remember. As such, my instinct is to think of the copy as a separate identity and my past self as the same identity.

Replies from: JGWeissman
comment by JGWeissman · 2010-01-27T18:30:48.198Z · LW(p) · GW(p)

Also, whatever actions my past self takes actually affect my future self, regardless of what I remember.

Your copy would also take actions that affect your future self. What is the difference here?

Replies from: MrHen
comment by MrHen · 2010-01-27T18:35:00.737Z · LW(p) · GW(p)

Imagine a scenario where I cut off my arm. I am responsible. If my copy cuts off my arm, he would be responsible, not "me."

This is all playing semantics with personal identity. I am not trying to espouse any particular belief; I am only offering one possible difference between the idea of forgetting your past and copying yourself.

Replies from: JGWeissman
comment by JGWeissman · 2010-01-27T18:41:10.650Z · LW(p) · GW(p)

If my copy cuts off my arm, he would be responsible, not "me."

That doesn't make any sense. Your copy is you.

Replies from: MrHen
comment by MrHen · 2010-01-27T19:06:27.679Z · LW(p) · GW(p)

Yeah, okay. You are illustrating my point exactly. Not everyone thinks the way you do about identity and not everyone thinks the way I mentioned about identity. I don't hold hard and fast about it one way or the other.

But the original example of someone who loses 15 minutes being similar to killing off a copy who only lived for 15 minutes implies a whole ton of things about identity. The word "copy" is too ambiguous to say, "Your copy is you."

If I switched in "X's copy is X" and then started talking about various cultural examples of copying, we would quickly run into trouble. Why does "X's copy is X" work for people? Unless I missed a definition-of-terms comment or post somewhere, I don't see how we can just assume that it is true.

The first use of "copy" I found in this thread is:

Probably the "friendly" action would be to create an un-drunk copy of them, and ask the copy to decide.

It was followed by:

And what do you do with the copy? Kill it?

As best I can tell, you take the sentence "Your copy is you" to be a tautology or definition or something along those lines. (I could obviously be wrong; please correct me if I am.) What would you call a functionally identical version of X with a separate, distinct identity? Is it even possible? If it is, use that instead of "copy" when reading my comment:

Imagine a scenario where I cut off my arm. I am responsible. If my copy cuts off my arm, he would be responsible, not "me."

When I read the original comment I responded to:

From the point of view of your future self, the fifteen minutes you don't remember is exactly like a short-lived copy that got deleted after fifteen minutes.

I was not assuming your definition of copy, which could entirely be my fault, but I find it hard to believe that you didn't understand my point well enough to predict this response. If you did, it would have been much faster to simply say, "When people at LessWrong talk about copies they mean blah," in which case I would have responded, "Oh, okay, that makes sense. Ignore my comment."

Replies from: Blueberry
comment by Blueberry · 2010-01-27T19:10:54.078Z · LW(p) · GW(p)

The semantics get easier if you think of both as being copies, so you have past-self, copy-1, and copy-2. Then you can ask which copy is you, or if they're both you. (If past-self is drunk, copy-1 is drunk, and copy-2 is sober, which copy is really more "you"?)

Replies from: MrHen
comment by MrHen · 2010-01-27T19:17:04.567Z · LW(p) · GW(p)

Yeah, actually, that helps a lot. Using that language, most of the follow-up questions I have are obvious enough to skip bringing up. Thanks.

comment by Psy-Kosh · 2010-01-27T18:39:36.044Z · LW(p) · GW(p)

I'd actually be kinda hesitant about such pills and would need to think it out. The version of me that exists in those 15 minutes might be a bit unhappy about the situation, for one thing.

Replies from: Blueberry
comment by Blueberry · 2010-01-27T19:06:27.775Z · LW(p) · GW(p)

Such pills do exist in the real world: a lot of sleeping pills have similar effects, as does consuming significant amounts of alcohol.

Replies from: Psy-Kosh, Splat
comment by Psy-Kosh · 2010-02-01T04:37:58.011Z · LW(p) · GW(p)

And it basically results in 15 minutes of experience that simply "go away"? No gradual transition/merging into the mainline experience, just 15 minutes that get completely wiped?

eeew.

comment by Splat · 2010-02-01T04:35:49.120Z · LW(p) · GW(p)

For that matter, so does falling asleep in the normal way.

comment by RobinZ · 2010-01-27T20:26:12.572Z · LW(p) · GW(p)

Certainly - this is the restore-from-backup scenario, for which Blueberry's sleeping-pill comparison was apt. (I would definitely like to make a secure backup before taking a risk, personally.) What I wanted to suggest was that duplicate-for-analysis was less clear-cut.

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2010-01-28T13:00:19.664Z · LW(p) · GW(p)

What's the difference? Supposing that as a matter of course the revival team try a whole bunch of different virtual environments looking for the best results, is that restore-from-backup or duplicate-for-analysis?

Suppose that, ironically, limitations on compute hardware mean that no matter how much we spend we hit an exact 1:1 ratio between subjective and real time, but that the hardware is super-cheap. Also, there's no brain "merge" function. I might fork off a copy to watch a movie and review it for me, to decide whether the "real me" should watch it.

Replies from: RobinZ
comment by RobinZ · 2010-01-28T15:10:34.741Z · LW(p) · GW(p)

As MrHen pointed out, you can imagine the 'duplicate' and 'original' existing side-by-side - this affects intuitions in a number of ways. To pump intuition for a moment, we consider identical twins to be different people due to the differences in their experiences, despite their being nearly identical on a macro level. I haven't done the calculations to decide where the border of acceptable use of duplication lies, but deleting a copy which diverged from the original twenty years before clearly appears to be over the line.

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2010-01-28T16:02:46.627Z · LW(p) · GW(p)

Absolutely, which is why I specified short-lived above.

Though it's very hard to know how I would face the prospect of being deleted and replaced with a twenty-minute-old backup in real life!

Replies from: AdeleneDawner, RobinZ
comment by AdeleneDawner · 2010-01-28T16:36:48.684Z · LW(p) · GW(p)

It's very hard to know how I would face the prospect of being deleted and replaced with a twenty-minute-old backup in real life!

I may be answering an un-asked question, since I haven't been following this conversation, but the following solution to the issue of clones occurs to me:

Leave it up to the clone.

Make suicide fully legal and easily available (possibly 'suicide of any copy of a person in cases where more than one copy exists', though that could allow twins greater leeway depending on how you define 'person' - perhaps also add a time limit: the split must have occurred within N years). When a clone is created, it's automatically given the rights to 1/2 of the original's wealth. If the clone suicides, the original 'inherits' the wealth back. If the clone decides not to suicide, it automatically keeps the wealth that it has the rights to.

Given that a clone is functionally the same person as the original, this should be an ethical solution (assuming that you consider suicide ethical at all) - someone would have to be very sure that they'd be able to go through with suicide, or very comfortable with the idea of splitting their wealth in half, in order to be willing to take the risk of creating a clone. The only problem that I see is with unsplittable things like careers and relationships. (Flip a coin? Let the other people involved decide?)
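
A toy sketch of the proposed rule, purely illustrative (the even split, the "inherit back" step, and all function names are assumptions spelled out from the paragraph above):

```python
# Toy model of the proposed clone-wealth rule (illustrative only): the clone is
# granted half the original's wealth at creation; if the clone later chooses
# suicide, the original inherits that half back; otherwise the clone keeps it.

def create_clone(original_wealth: float) -> tuple[float, float]:
    """Split the original's wealth 50/50 at the moment the clone is created."""
    half = original_wealth / 2
    return half, half  # (original's share, clone's share)

def resolve(original_share: float, clone_share: float, clone_suicides: bool) -> tuple[float, float]:
    """Apply the rule once the clone has made its fully voluntary choice."""
    if clone_suicides:
        return original_share + clone_share, 0.0  # the original 'inherits' it back
    return original_share, clone_share            # the split stands

# Example: someone with 100 units of wealth creates a clone that decides to live.
orig_share, clone_share = create_clone(100.0)
print(resolve(orig_share, clone_share, clone_suicides=False))  # -> (50.0, 50.0)
```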

Replies from: Blueberry
comment by Blueberry · 2010-01-28T18:11:20.576Z · LW(p) · GW(p)

Leave it up to the clone.

This seems like a good solution. If I cloned myself, I'd want it to be established beforehand which copy would stay around, and which copy would go away. For instance, if you're going to make a copy that goes to watch a movie to see if the movie is worth your time, the copy that watches the movie should go away, because if it's good the surviving version of yourself will watch it anyway.

someone would have to be very sure that they'd be able to go through with suicide

I (and thus my clones) don't see it as suicide, more like amnesia, so we'd have no problem going through with it if the benefit outweighed the amnesia.

If you keep the clone around, in terms of splitting their wealth, both clones can work and make money, so you should get about twice the income for less than twice the expenses (you could share some things). In terms of relationships, you could always bring the clones into a relationship. A four way relationship, made up of two copies of each original person, might be interesting.

Replies from: MrHen, AdeleneDawner
comment by MrHen · 2010-01-28T18:21:37.724Z · LW(p) · GW(p)

A four way relationship, made up of two copies of each original person, might be interesting.

Hmm... *Imagines such a relationship with significant other.* Holy hell, that would be weird. The number of puzzling scenarios I can think of just by sitting here is extravagant. Does anyone know of a decent novel based on this premise?

comment by AdeleneDawner · 2010-01-28T18:27:04.614Z · LW(p) · GW(p)

I don't think those kinds of situations will need to be spelled out in advance, actually. Coming up with a plan that's acceptable to both versions of yourself before going through with the cloning should be about as easy as coming up with a plan that's acceptable to just one version, once you're using the right kind of framework to think about it. (You should be about equally willing to take either role, in other words; otherwise your clone is likely to rebel, and since they're considered independent from the get-go, and not bound by any contracts they didn't sign, I assume, there's not much you can do about that.)

Setting up four-way relationships would definitely be interesting. Another scenario that I like is one where you make a clone to pursue an alternate life-path that you suspect might be better but think is too risky - after a year (or whatever), whichever of you is less happy could suicide and give their wealth to the other one, or both could decide that their respective paths are good and continue with half-wealth.

Replies from: Blueberry
comment by Blueberry · 2010-01-28T18:46:16.172Z · LW(p) · GW(p)

The more I think about this, the more I want to make a bunch of clones of myself. I don't even see why I'd need to destroy them. I shouldn't have to pay for them; they can get their own jobs, so wealth isn't that much of a concern.

Coming up with a plan that's acceptable to both versions of yourself before going through with the cloning should be about as easy as coming up with a plan that's acceptable to just one version, once you're using the right kind of framework to think about it.

The concern is that immediately after you clone, both copies agree that Copy 1 should live and Copy 2 should die, but afterwards, Copy 2 doesn't want to lose those experiences. If you decide beforehand that you only want one of you around, and Copy 2 is created specifically to be destroyed, there should be a way to bind Copy 2 to suicide.

Replies from: AdeleneDawner
comment by AdeleneDawner · 2010-01-28T18:50:38.887Z · LW(p) · GW(p)

there should be a way to bind Copy 2 to suicide.

Disagree. I would class that as murder, not suicide, and consider creating a clone who would be subject to such binding to be unethical.

Replies from: Blueberry
comment by Blueberry · 2010-01-28T18:56:50.112Z · LW(p) · GW(p)

Calling it murder seems extreme, since you end up surviving. What's the difference between binding a copy to suicide and binding yourself to take a sleep-amnesia pill?

Replies from: AdeleneDawner
comment by AdeleneDawner · 2010-01-28T19:19:00.726Z · LW(p) · GW(p)

If it's not utterly voluntary when committed, I don't class it as suicide. (I also consider 'driving someone to suicide' to actually be murder.)

My solution to the ethical dilemma, to reword it, is to give the clone full human rights from the moment it's created (actually a slightly expanded version of current human rights, since we're currently prohibited from suiciding). I assume that it's not currently possible to enforce a contract that will directly cause one party's death; that aspect of inter-human interaction should remain. The wealth-split serves as a balance in two ways: suddenly having your wealth halved would be traumatic for almost anyone, which gives a clone that had planned to suicide extra impetus to do so, and it should also strongly discourage people from taking unnecessary risks when making clones. In other words, that's not a bug, it's a feature.

The difference between what you proposed and the sleeping pill scenario is that in the latter, there's never a situation where an individual is deprived of rights.

Replies from: Blueberry
comment by Blueberry · 2010-01-28T19:50:56.423Z · LW(p) · GW(p)

If it's not utterly voluntary when committed, I don't class it as suicide.

I'm still unclear why you classify it as death at all. You end up surviving it.

I think you're thinking of each copy as an individual. I'm thinking of the copies collectively as a tool used by an individual.

The difference between what you proposed and the sleeping pill scenario is that in the latter, there's never a situation where an individual is deprived of rights.

Ok, say you enter into a binding agreement forcing yourself to take a sleeping pill tomorrow. You have someone there to enforce it if necessary. The next day, you change your mind, and the person forces you to take the pill anyway. Have you been deprived of rights? (If it helps, substitute eating dessert, or gambling, or doing heroin for taking the pill.)

Replies from: pdf23ds, AdeleneDawner
comment by pdf23ds · 2010-01-28T20:17:25.456Z · LW(p) · GW(p)

Ok, say you enter into a binding agreement forcing yourself to take a sleeping pill tomorrow.

I don't think any such agreement could be legally binding under current law, which is relevant since we're talking about rights.

comment by AdeleneDawner · 2010-01-28T20:02:21.652Z · LW(p) · GW(p)

I think you're thinking of a each copy as an individual. I'm thinking of the copies collectively as a tool used by an individual.

Yes, I am, and as far as I can tell mine's the accurate model. Each copy is separately alive and conscious; they should no more be treated as the same individual than twins are treated as the same individual. (Otherwise, why is there any ethical question at all?)

Ok, say you enter into a binding agreement forcing yourself to take a sleeping pill tomorrow. ... Have you been deprived of rights?

This kind of question comes up every so often here, and I still haven't heard or thought of an answer that satisfies me. I don't see it as relevant here, though, because I do recognize the clone as a separate individual who shouldn't be coerced.

Replies from: Blueberry
comment by Blueberry · 2010-01-28T20:14:43.514Z · LW(p) · GW(p)

Yes, I am, and as far as I can tell mine's the accurate model.

But if my copies and I don't think that way, is it still accurate for us? We agree to be bound by any original agreement, and we think any of us are still alive as long as one of us is, so there's no death involved. Well, death of a living organism, but not death of a person.

I don't see it as relevant here, though, because I do recognize the clone as a separate individual who shouldn't be coerced.

It's the same question, because I'm assuming both copy A and copy B agree to be bound by the agreement immediately after copying (which is the same as the original making a plan immediately before copying). Both copies share a past, so if you can be bound by your past agreements, so can each copy. Even if the copies are separate individuals, they don't have separate pasts.

Replies from: AdeleneDawner
comment by AdeleneDawner · 2010-01-28T20:30:36.221Z · LW(p) · GW(p)

If you and all your copies think that way, then you shouldn't have to worry about them defecting in the first place, and the rule is irrelevant for you. How sure are you that that's what you really believe, though? Sure enough to bet 1/2 your wealth?

My concern with having specific copies be bound to past agreements is that I don't trust that people won't abuse that: It's easy not to see the clone as 'yourself', but as an easily exploitable other. Here's a possible solution to that problem (though one that I don't like as well as not having the clone bound by prior agreements at all): Clones can only be bound by prior agreements that randomly determine which one acts as the 'new' clone and which acts as the 'old' clone. So, if you split off a clone to go review a movie for you, and pre-bind the clone to die after reporting back, there's a 50% chance - determined by a coin flip - that it's you, the original, who will review the movie, and the clone who will continue with your life.

Replies from: Blueberry
comment by Blueberry · 2010-01-28T21:12:47.147Z · LW(p) · GW(p)

There isn't an "original". After the copying, there's Copy A and Copy B. Both are me. I'm fine with randomly selecting whether Copy A or Copy B goes to see the movie, but it doesn't matter, since they're identical (until one sees the movie). In fact, there is no way to not randomly select which copy sees the movie.

From the point of view of the clone who sees the movie (say it's bad), "suiciding" is the same as him going back in time and not seeing the movie. So I'd always stick to a prior agreement in a case like that.

If you and all your copies think that way, then you shouldn't have to worry about them defecting in the first place, and the rule is irrelevant for you. How sure are you that that's what you really believe, though? Sure enough to bet 1/2 your wealth?

I don't really have any wealth to speak of. But they're all me. If I won't defect, then they won't. The question is just whether or not we might disagree on what's best for me. In which case, we can either go by prior agreement, or just let them all live. If the other mes really wanted to live, I'd let them. For instance, say I made 5 copies and all 5 of us went out to try different approaches to a career, agreeing the best one would survive. If a year later more than one claimed to have the best result for Blueberry, I might as well let more than one live.

ETA: However, there might be situations where I can only have one copy survive. For instance, I'm in a grad program now that I'd like to finish, and more than one of me can't be enrolled for administrative reasons. So if I really need only one of me, I guess we could decide randomly which one would survive. I'm all right with forcing a copy to suicide if he changes his mind, since I'm making that decision for all the clones ahead of time to lead to the best outcome for Blueberry.

Replies from: AdeleneDawner, AdeleneDawner
comment by AdeleneDawner · 2010-01-28T21:43:44.357Z · LW(p) · GW(p)

Response to ETA:

If one of the clones developed enough individuality to change his mind and disagree with the others, I definitely don't see how you could consider that one anything other than an individual.

Likewise, if all of the clones decided to change their minds and go their separate ways, that would be functionally the same as you-as-a-single-person-with-a-single-body changing your mind about something, and the general rule there is that humans are allowed to do that, without being interfered with. I don't see any reason to change that rule.

comment by AdeleneDawner · 2010-01-28T21:28:19.641Z · LW(p) · GW(p)

Be careful of generalizing from one example. I'm relatively certain that the vast majority of people who might consider cloning themselves wouldn't see it the way you do, and would in fact need significant safeguards to protect the version of themselves who remembers waking up in a lab from being abused by the version of themselves who remembers going home after having their DNA sampled and their brain scanned.

I did have people like you in mind, at least peripherally, in my original suggestion, though: I'm fairly sure that the original proposal doesn't take away any rights that you already have. (To the best of my knowledge, it is illegal for someone to force you to take a sleeping pill, even if you previously agreed to it, and my knowledge there is a bit better than average; remember that I worked at a nursing home.)

Replies from: Blueberry
comment by Blueberry · 2010-01-28T22:22:42.984Z · LW(p) · GW(p)

I'm relatively certain that the vast majority of people who might consider cloning themselves wouldn't see it the way you do, and would in fact need significant safeguards to protect the version of themselves who remembers waking up in a lab from being abused by the version of themselves who remembers going home after having their DNA sampled and their brain scanned.

I'd like to hear more about this. First, I was imagining an identical atom-for-atom duplicate being constructed, in such a way that there is no fact of the matter who's the original. As in, you press a button and there are two of you. I wasn't thinking about an organism grown in a lab. But I'm not sure that matters, except that the lab scenario makes it easier to think of one copy being in control of the other copy.

You think the majority of people would worry about, and would need to worry about, one copy abusing the other copy? Why? The copies would have to fight for control first, which should be an even fight. And what would the point be?

I'm fairly sure that the original proposal doesn't take away any rights that you already have. To the best of my knowledge, it is illegal for someone to force you to take a sleeping pill, even if you previously agreed to it.

Yes, that's illegal except maybe in an emergency psychiatric situation. Here's an idea: a time-delayed suicide pill, with no antidote, that one of the copies can take immediately after the cloning. That's equivalent to having the agreement enforced, but it doesn't take away any rights either. I think that addresses your concern.

Replies from: ciphergoth, AdeleneDawner
comment by Paul Crowley (ciphergoth) · 2010-01-28T22:41:25.348Z · LW(p) · GW(p)

Next up: a game of Russian Roulette against YOURSELF!

comment by AdeleneDawner · 2010-01-29T02:09:40.891Z · LW(p) · GW(p)

I expect to get back to this; I had to take care of something for work and now I'm too tired to do it justice. If I haven't responded to it within 18 hours, please remind me.

Replies from: AdeleneDawner
comment by AdeleneDawner · 2010-01-29T19:29:17.459Z · LW(p) · GW(p)

After conferring with Blueberry via PM, we agree that we'll need to talk in realtime to get much further with this. Our schedules are both fairly busy right now, but we intend to try to turn the discussion into a top post. (I'd also be amenable to making the log public, or letting other people observe or participate, but I haven't talked to Blue about that.)

comment by RobinZ · 2010-01-28T16:08:08.750Z · LW(p) · GW(p)

I imagine it would be much like a case of amnesia, only with less disorientation.

Edit: Wait, I'm looking at the wrong half. One moment.

Edit: I suppose it would depend on the circumstances - "fear" is an obvious one, although mitigated to an extent by knowing that I would not be leaving a hole behind me (no grieving relatives, etc.).

comment by aausch · 2010-01-27T18:10:28.093Z · LW(p) · GW(p)

Depends on how much it costs me to make it, and how much it costs to keep it around. I'm permanently busy; I'm sure I could use a couple of extra hands around the house ;)

comment by jimrandomh · 2010-01-26T00:08:12.651Z · LW(p) · GW(p)

It is worth noting that in Christian theology, heaven is only reached after death, and both going there early and sending people there early are explicitly forbidden.

While an infinite duration of bliss has very high utility, that utility must be finite, since infinite utility anywhere causes things to go awry when handling small probabilities of getting that utility. It is also not the only term in a human utility function; living as a non-wirehead for a while to collect other types of utilons and then getting wireheaded is better than getting wireheaded immediately. Therefore, it seems like the sensible thing for an FAI to do is to offer wireheading as an option, but not to force the issue except in cases of imminent death.
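
To spell out the small-probability problem (my notation, not the commenter's): if some outcome is assigned utility $+\infty$, then any action $A$ with even a tiny chance $p > 0$ of reaching it has

$$\mathbb{E}[U(A)] = p \cdot (+\infty) + (1-p)\, u_{\text{rest}} = +\infty,$$

so every such action ties at infinity and expected-utility comparisons between them break down, no matter how small $p$ is. Keeping the utility of eternal bliss large but finite avoids this.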

Replies from: FinalFormal2
comment by FinalFormal2 · 2022-02-28T18:34:48.871Z · LW(p) · GW(p)

I don't think Christians agree that the utility of heaven is finite; I think they think it is infinite, they're just not interested in thinking about the implications.

comment by jacoblyles · 2012-07-18T22:03:00.851Z · LW(p) · GW(p)

This reminds me of a thought I had recently - whether or not God exists, God is coming - as long as humans continue to make technological progress. Although we may regret it (for one brief instant) when he gets here. Of course, our God will be bound by the laws of the universe, unlike the theist God.

The Christian God is an interesting God. He's something of a utilitarian. He values joy and created humans in a joyful state. But he values freedom over joy. He wanted humans to be like himself, living in joy but having free will. Joy is beautiful to him, but it is meaningless if his creations don't have the ability to choose not-joy. When his creations did choose not-joy, he was sad but he knew it was a possibility. So he gave them help to make it easier to get back to joy.

I know that LW is sensitive to extended religious reference. Please forgive me for skipping the step of translating interesting moral insights from theology into non-religious speak.

I do hope that the beings we make which are orders of magnitude more powerful than us have some sort of complex value system, and not anything as simple as naive algebraic utilitarianism. If they value freedom first, then joy, then they will not enslave us to the joy machines - unless we choose it.

(Side note: this post is tagged with "shut-up-and-multiply". That phrase trips my warning signs for a fake utility function, as it always seems to be followed by some naive algebraic utilitarian assertion that makes ethics sound like a solved problem.)

edit: Whoa, my expression of my emotional distaste for "shut up and multiply" seems to be attracting down-votes. I'll take it out.

comment by timtyler · 2010-01-25T23:52:25.206Z · LW(p) · GW(p)

In biology, 1 and 2 are proximate goals and 3 is an implementation detail.

comment by [deleted] · 2012-01-28T20:57:09.100Z · LW(p) · GW(p)

So this begs(?) the question: is our brain's pleasure circuitry the ultimate arbiter of whether an action is "good" [y/N]?

I would say that our pleasure centre, like our words-feel-like-meaningful, map-feels-like-territory, world-feels-agent-driven, and consciousness-feels-special intuitions, is a good-enough evolutionary heuristic that made our ancestors survive to breed us.

I am at this point tempted to shout "Stop the presses, pleasure isn't the ultimate good!" Yes, wireheading is of course the best way to fulfil that little feeling-good part of your brain. Is it constructive? Meh.

I would trade my pleasure centre for intuitive multiplication any day.

comment by aausch · 2010-02-08T14:28:56.593Z · LW(p) · GW(p)

Point 3 doesn't seem to belong in the same category as 1 and 2.

comment by RobertWiblin · 2010-01-30T05:28:38.140Z · LW(p) · GW(p)

What if we could create a wirehead that made us feel as though we were doing 1 or 2? Would that be satisfactory to more people?

Replies from: wedrifid
comment by wedrifid · 2010-01-30T16:57:36.366Z · LW(p) · GW(p)

What if we could create a wirehead that made us feel as though we were doing 1 or 2? Would that be satisfactory to more people?

Only if they find a way to turn me into this

comment by [deleted] · 2010-01-26T01:25:49.952Z · LW(p) · GW(p)

Oh heaven
Heaven is a place
A place where nothing
Nothing ever happens

comment by Alexxarian · 2010-01-29T08:36:21.537Z · LW(p) · GW(p)

The idea of wireheading violates my aesthetic sensibilities. I'd rather keep experiencing some suffering on my quest for increasing happiness, even if my final subjective destination were the same as that of a wirehead, which I doubt, because I consider my present path as important as the end goal. I doubt value and morality can be fully deconstructed through reason.

How is wireheading different from this http://i.imgur.com/wKpLx.jpg ? I think James Hughes makes a very good case for what is wrong with current transhumanist thought in his 'Problems of Transhumanism' http://ieet.org/index.php/IEET/more/hughes20100105/Problems

comment by denisbider · 2010-01-26T13:04:22.457Z · LW(p) · GW(p)

I'll just comment on what most people are missing, because most reactions seem to be missing a similar thing.

Wei explains that most of the readership are preference utilitarians, who believe in satisfying people's preferences, not maximizing pleasure.

That's fine enough, but if you think that we should take into account the preferences of creatures that could exist, then I find it hard to imagine that a creature would prefer not existing to existing in a state where it permanently experiences amazing pleasure.

Given that potential creatures outnumber existing creatures many times over, the preferences of existing creatures - that we wish to selfishly keep the universe's resources to ourselves, so we can explore and think and have misguided lofty impressions about ourselves, and whatnot - don't count for much in the face of the many more creatures that would prefer to exist, and be wireheaded, than not to exist at all.

The only way preference utilitarianism can avoid the global maximum of Heaven is to ignore the preferences of potential creatures. But that is selfish.

If you don't want Heaven, then you don't want a universally friendly AI. What you really want is an AI that is friendly just to you.

Replies from: timtyler, Wei_Dai
comment by timtyler · 2010-01-27T11:05:41.829Z · LW(p) · GW(p)

I doubt anyone here acts in a manner remotely similar to the way utilitarianism recommends. Utilitarianism is an unbiological conception of how to behave - and consequently is extremely difficult for real organisms to adhere to. Real organisms frequently engage in activities such as nepotism. Some people pay lip service to utilitarianism because it sounds nice and signals a moral nature - but they don't actually adhere to it.

comment by Wei Dai (Wei_Dai) · 2010-01-26T13:30:33.030Z · LW(p) · GW(p)

Eliezer posted an argument against taking into account the preferences of people who don't exist. I think utilitarianism, in order to be consistent, perhaps does need to take into account those preferences, but it's not clear how that would really work. What weights do you put on the utility functions of those non-existent creatures?

Replies from: denisbider
comment by denisbider · 2010-01-26T13:52:34.025Z · LW(p) · GW(p)

I don't find Eliezer's argument convincing. The infinite universe argument can be used as an excuse to do pretty much anything. Why not just torture and kill everyone and everything in our Hubble volume? Surely identical copies exist elsewhere. If there are infinite copies of everyone and everything, then there's no harm done.

That doesn't fly. Whatever happens outside of our Hubble volume has no consequence for us, and neither adds to nor alleviates our responsibility. Infinite universe or not, we are still responsible not just for what is, but also for what could be, in the space under our influence.

comment by V_V · 2013-01-26T16:34:28.743Z · LW(p) · GW(p)

If you are a utilitarian, and you believe in shut-up-and-multiply, then the correct thing for the FAI to do is to use up all available resources so as to maximize the number of beings, and then induce a state of permanent and ultimate enjoyment in every one of them. This enjoyment could be of any type - it could be explorative or creative or hedonic enjoyment as we know it. The most energy efficient way to create any kind of enjoyment, however, is to stimulate the brain-equivalent directly. Therefore, the greatest utility will be achieved by wire-heading. Everything else falls short of that.

That's why utilitarianism is a bad idea, especially if you are allowed to modify the agents. Think about it: humans would be a terrible waste of energy if their only purpose were to have their hedonic pleasure maximized. Mice would be more efficient. Or monocellular organisms. Or registers inside the memory of a computer that get incremented as fast as possible.
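
To make that reductio concrete, a deliberately silly sketch (purely illustrative; the class name and numbers are invented):

```python
# A tongue-in-cheek sketch of the reductio above: if "utility" is just a number
# to be pushed upward, the cheapest "maximally happy population" is a pile of
# registers that get incremented as fast as possible.

class Hedonium:
    """A 'being' stripped down to the bare minimum needed to register pleasure."""
    __slots__ = ("pleasure",)

    def __init__(self) -> None:
        self.pleasure = 0

    def stimulate(self) -> None:
        # Direct stimulation of the pleasure register: no exploring, no creating.
        self.pleasure += 1

# Fill the available "resources" with as many minimal beings as possible...
population = [Hedonium() for _ in range(1000)]

# ...then stimulate each of them forever (bounded here so the sketch terminates).
for _ in range(10_000):
    for being in population:
        being.stimulate()

print(sum(b.pleasure for b in population))  # total "utility": 10,000,000
```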

What I don't quite understand is why everyone thinks that this would be such a horrible outcome. As far as I can tell, these seem to be cached emotions that are suitable for our world, but not for the world of FAI. In our world, we truly do need to constantly explore and create, or else we will suffer the consequences of not mastering our environment.

You have it backwards. Why do you need not to suffer the consequences of not mastering your environment?

In a world where FAI exists, there is no longer a point, nor even a possibility, of mastering our environment. The FAI masters our environment for us, and there is no longer a reason to avoid hedonic pleasure. It is no longer a trap.

There is no longer any "us" in your hedonic "heaven". It is a world populated by minimalistic agents, all equal to each other, with no memories, no sense of personal identity, no conscious experiences. Life and death would be meaningless concepts to those things, like any other concept, since they wouldn't be capable of anything close to what we call thinking.

Is that what you want the world to become?

Since the FAI can sustain us in safety until the universe goes poof, there is no reason for everyone not to experience ultimate enjoyment in the meanwhile. In fact, I can hardly tell this apart from the concept of a Christian Heaven, which appears to be a place where Christians very much want to get.

Yes, and in fact the Christian Heaven is not a coherent concept. There can't be happiness without pain. No satisfaction without unquenched desire. If you give an agent everything it could possibly ever want, it stops being an agent.

These parallels reinforce my belief that Singularitarianism is just a thinly veiled version of Christianity.

comment by Kevin · 2010-01-26T00:23:15.839Z · LW(p) · GW(p)

What of Buddhist nirvana as another goal? Buddhist post-nirvana in a transhuman universe could consist of literally obliterating one's consciousness and becoming one with the universe, but it could be many things.

Replies from: grouchymusicologist
comment by grouchymusicologist · 2010-01-26T00:50:10.495Z · LW(p) · GW(p)

What would it mean to "become one with the universe" after "literally obliterating one's consciousness"?

Replies from: Kevin
comment by Kevin · 2010-01-26T01:07:08.232Z · LW(p) · GW(p)

I'd have to think about it more; I was engaging in quick speculative futurism. It just seems that if we are going to talk about Christian heaven as a metaphor for post-singularity existence, we should think about what the analog of Eastern religion would be.

Replies from: grouchymusicologist
comment by grouchymusicologist · 2010-01-26T01:12:34.334Z · LW(p) · GW(p)

It just sounds blatantly self-contradictory, whereas the metaphor with Christian heaven is inexact but at least I sort of understand it. Here, I feel like adopting the rhetoric of Eastern religion actively impedes my having any idea what the hell it means (and doubly so if it's just a metaphor for some other concept).

Replies from: Jack, Kevin
comment by Jack · 2010-01-26T03:43:11.094Z · LW(p) · GW(p)

You're confused because Kevin (no offense to him) doesn't really know what he is talking about. Nirvana has nothing to do with "becoming one with the universe" and "literally obliterating one's consciousness" is a really bad translation of the doctrine of anatman. It isn't a metaphor, it is a genuine metaphysical and prescriptive doctrine.

He is right that Buddhism should be part of the conversation. The damage New Age, Deepak Chopra bullshit has done to the West's image of Buddhism is really a shame, though.

Replies from: grouchymusicologist
comment by grouchymusicologist · 2010-01-26T04:42:29.236Z · LW(p) · GW(p)

Oh, I suppose so. I'm reasonably conversant with Buddhism, and I know that neither of those two phrases is close to being a good description of nirvana. I was more concerned that the borderline word-salad of "literally obliterating one's consciousness and becoming one with the universe" was being used as if it weren't a completely meaningless turn of phrase. Garbage in ...

comment by Kevin · 2010-01-26T01:18:37.739Z · LW(p) · GW(p)

It just sounds blatantly self-contradictory

:D We're talking about religious metaphors; are you surprised? I think the heaven metaphor is also self-contradictory because I don't think the idea that Christian heaven = absolute, total, eternal, maximum pleasure is accepted by most theologians.

http://www.ted.com/talks/jill_bolte_taylor_s_powerful_stroke_of_insight.html Jill Bolte Taylor talks about one thing that I think would go into a real physical equivalent of Buddhist Nirvana, an ability to dissolve the barrier your mind creates between the atoms that make up your body and the atoms that make up everything else around you.

Replies from: denisbider
comment by denisbider · 2010-01-26T14:38:37.819Z · LW(p) · GW(p)

Your mind creates that barrier? I thought that was a property of the particles themselves.

Replies from: Cyan, Kevin
comment by Cyan · 2010-01-26T14:51:01.029Z · LW(p) · GW(p)

I believe the idea is that there is no such barrier in the territory, only in the map. (I express no opinion on the value of this idea.)

Replies from: Kevin
comment by Kevin · 2010-01-27T12:58:18.234Z · LW(p) · GW(p)

Experiencing reality without a consciousness barrier is a seriously bad medical condition in humans, but having no conceptual barrier between the self and the rest of reality is something a transhuman mind might be able to handle without going insane.

Replies from: Blueberry
comment by Blueberry · 2010-01-27T16:02:12.395Z · LW(p) · GW(p)

What would it be like to experience reality without a consciousness barrier? That sounds intriguing but I'm not sure exactly what it means. Is it the same as being unconscious? And it's an actual medical condition? Please tell me more!

Replies from: Kevin
comment by Kevin · 2010-01-28T00:41:12.793Z · LW(p) · GW(p)

I was referring to Jill Bolte Taylor's description of her stroke in her TED talk, linked earlier in this thread.

comment by Kevin · 2010-01-27T12:58:57.524Z · LW(p) · GW(p)

Sure, there is an obvious physical barrier, but your mind chooses whether or not to recognize that barrier as meaningful.