A Much Better Life?

post by Psychohistorian · 2010-02-03T20:01:57.431Z · LW · GW · Legacy · 174 comments

(Response to: You cannot be mistaken about (not) wanting to wirehead, Welcome to Heaven)

The Omega Corporation
Internal Memorandum
To: Omega, CEO
From: Gamma, Vice President, Hedonic Maximization

Sir, this concerns the newest product of our Hedonic Maximization Department, the Much-Better-Life Simulator. This revolutionary device allows our customers to essentially plug into the Matrix, except that instead of providing robots with power in flagrant disregard for the basic laws of thermodynamics, they experience a life that has been determined by rigorously tested algorithms to be the most enjoyable life they could ever experience. The MBLS even eliminates all memories of being placed in a simulator, generating a seamless transition into a life of realistic perfection.

Our department is baffled. Orders for the MBLS are significantly lower than estimated. We cannot fathom why every customer who could afford one has not already bought it. It is simply impossible to have a better life otherwise. Literally. Our customers' best possible real life has already been modeled and improved upon many times over by our programming. Yet, many customers have failed to make the transition. Some are even expressing shock and outrage over this product, and condemning its purchasers.

Extensive market research has succeeded only at baffling our researchers. People have even refused free trials of the device. Our researchers explained to them in perfectly clear terms that their current position is misinformed, and that once they tried the MBLS, they would never want to return to their own lives again. Several survey takers went so far as to specify that statement as their reason for refusing the free trial! They know that the MBLS will make their life so much better that they won't want to live without it, and they refuse to try it for that reason! Some cited their "utility" and claimed that they valued "reality" and "actually accomplishing something" over "mere hedonic experience." Somehow these organisms are incapable of comprehending that, inside the MBLS simulator, they will be able to experience the feeling of actually accomplishing feats far greater than they could ever accomplish in real life. Frankly, it's remarkable such people amassed enough credits to be able to afford our products in the first place!

You may recall that a Beta version had an off switch, enabling users to deactivate the simulation after a specified amount of time; the simulation could also be terminated externally with an appropriate code. These features received somewhat positive reviews from early focus groups, but were ultimately eliminated. No agent could reasonably want a device that could allow for the interruption of its perfect life. Accounting has suggested we respond to slack demand by releasing the earlier version at a discount; we await your input on this idea.

Profits aside, the greater good is at stake here. We feel that we should find every customer with sufficient credit to purchase this device, forcibly install them in it, and bill their accounts. They will immediately forget our coercion, and they will be many, many times happier. To do anything less than this seems criminal. Indeed, our ethics department is currently determining if we can justify delaying putting such a plan into action. Again, your input would be invaluable.

I can't help but worry there's something we're just not getting.

174 comments

Comments sorted by top scores.

comment by avalot · 2010-02-04T04:14:39.360Z · LW(p) · GW(p)

I don't know if anyone picked up on this, but this to me somehow correlates with Eliezer Yudkowsky's post on Normal Cryonics... if in reverse.

Eliezer was making a passionate case that not choosing cryonics is irrational, and that not choosing it for your children has moral implications. It's made me examine my thoughts and beliefs about the topic, which were, I admit, ready-made cultural attitudes of derision and distrust.

Once you notice a cultural bias, it's not too hard to change your reasoned opinion... but the bias usually piggy-backs on a deep-seated reptilian reaction. I find changing that reaction to be harder work.

All this to say that in the case of this tale, and of Eliezer's lament, what might be at work is the fallacy of sunk costs (if we have another name for it, and maybe a post to link to, please let me know!).

Knowing that we will suffer, and knowing that we will die, are unbearable thoughts. We invest an enormous amount of energy toward dealing with the certainty of death and of suffering, as individuals, families, social groups, nations. Worlds in which we would not have to die, or not have to suffer, are worlds for which we have no useful skills or tools. Especially compared to the considerable arsenal of sophisticated technologies, art forms, and psychoses we've painstakingly evolved to cope with death.

That's where I am right now. Eliezer's comments have triggered a strongly rational dissonance, but I feel comfortable hanging around all the serious people, who are too busy doing the serious work of making the most of life to waste any time on silly things like immortality. Mostly, I'm terrified at the unfathomable enormity of everything that I'll have to do to adapt to a belief in cryonics. I'll have to change my approach to everything... and I don't have any cultural references to guide the way.

Rationally, I know that most of what I've learned is useless if I have more time to live. Emotionally, I'm afraid to let go, because what else do I have?

Is this a matter of genetic programming percolating too deep into the fabric of all our systems, be they genetic, nervous, emotional, instinctual, cultural, intellectual? Are we so hard-wired for death that we physically can't fathom or adapt to the potential for immortality?

I'm particularly interested in hearing about the experience of the LW community on this: How far can rational examination of life-extension possibilities go in changing your outlook, but also feelings or even instincts? Is there a new level of self-consciousness behind this brick wall I'm hitting, or is it pretty much brick all the way?

Replies from: Eliezer_Yudkowsky, alexflint, Vladimir_Nesov, Shae
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-02-04T07:00:40.677Z · LW(p) · GW(p)

That was eloquent, but... I honestly don't understand why you couldn't just sign up for cryonics and then get on with your (first) life. I mean, I get that I'm the wrong person to ask, I've known about cryonics since age eleven and I've never really planned on dying. But most of our society is built around not thinking about death, not any sort of rational, considered adaptation to death. Add the uncertain prospect of immortality and... not a whole lot changes so far as I can tell.

There's all the people who believe in Heaven. Some of them are probably even genuinely sincere about it. They think they've got a certainty of immortality. And they still walk on two feet and go to work every day.

Replies from: Shae, shiftedShapes
comment by Shae · 2010-02-04T18:04:18.148Z · LW(p) · GW(p)

"But most of our society is built around not thinking about death, not any sort of rational, considered adaptation to death. "

Hm. I don't see this at all. I see people planning college, kids, a career they can stand for 40 years, retirement, nursing care, writing wills, buying insurance, picking out cemeteries, all in order, all in a march toward the inevitable. People often talk about whether or not it's "too late" to change careers or buy a house. People often talk about "passing on" skills or keepsakes or whatever to their children. Nearly everything we do seems like an adaptation to death to me.

People who believe in heaven believe that whatever they're supposed to do in heaven is all cut out for them. There will be an orientation, God will give you your duties or pleasures or what have you, and he'll see to it that they don't get boring, because after all, this is a reward. And unlike in Avalot's scenario, the skills you gained in the first life are useful in the second, because God has been guiding you and all that jazz. There's still a progression of birth to fulfillment. (I say this as an ex-afterlife-believer).

On the other hand, many vampire and other stories are predicated on the fact that mundane immortality is terrifying. Who can stand a job for more than 40 years? Who has more than a couple dozen jobs they could imagine standing for 40 years each in succession? Wouldn't they all start to seem pointless? What would you do with your time without jobs? Wouldn't you meet the same sorts of stupid people over and over again until it drove you insane? Wouldn't you get sick of the taste of every food? Even the Internet has made me more jaded than I'd like.

That's my fear of cryonics. That, and the fear that imperfect science would leave me with brain rot that would make my newly reanimated self crazy and suffering. But that one is a failure to visualize it working well, not an objection to it working well.

Replies from: sk
comment by sk · 2010-02-04T21:39:37.787Z · LW(p) · GW(p)

Most of the examples you stated have more to do with people fearing a "not so good life" - old age, reduced mental and physical capabilities, etc. - not necessarily death.

Replies from: Shae
comment by Shae · 2010-02-08T17:44:06.185Z · LW(p) · GW(p)

Not sure what you're responding to. I never said anything about fearing death nor a not-so-good life, only immortality. And my examples (jadedness, boredom) have nothing to do with declining health.

comment by shiftedShapes · 2010-02-04T22:35:29.830Z · LW(p) · GW(p)

Aside from all of the questions as to the scientific viability of resurrection through cryonics, I question the logistics of it. What assurance do you have that a cryonics facility will be operational long enough to see your remains get proper treatment? Furthermore, if the facility and the entity controlling it do in fact survive, what recourse is there to ensure it provides the contracted services? If the facility has no legal liability, might it not rationally choose to dispose of cryonically preserved bodies/individuals rather than reviving them?

I know that there is probably a page somewhere explaining this; if so, please feel free to provide it in lieu of responding in depth.

Replies from: Jordan, Eliezer_Yudkowsky
comment by Jordan · 2010-02-04T23:11:55.359Z · LW(p) · GW(p)

There are no assurances.

You're hanging off a cliff, on the verge of falling to your death. A stranger shows his face over the edge and offers you his hand. Is he strong enough to lift you? Will you fall before you reach his hand? Is he some sort of sadist that is going to push you once you're safe, just to see your look of surprise as you fall?

The probabilities are different with cryonics, but the spirit of the calculation is the same. A non-zero chance of life, or a sure chance of death.

Replies from: shiftedShapes
comment by shiftedShapes · 2010-02-04T23:47:56.186Z · LW(p) · GW(p)

This sounds similar to Pascal's wager, and it has the same problems, really. If you don't see them, I guess my response would be...

I have developed a very promising resurrection technology that works with greater reliability and less memory loss than cryonics. PayPal me $1,000 at shiftedshapes@gmail.com, note your name and social security number in the comments field, and I will include you in the first wave of revivals.

Replies from: Eliezer_Yudkowsky, Jordan, Morendil, MichaelVassar
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-02-05T02:54:47.266Z · LW(p) · GW(p)

http://lesswrong.com/lw/z0/the_pascals_wager_fallacy_fallacy/

Replies from: shiftedShapes
comment by shiftedShapes · 2010-02-05T16:22:02.476Z · LW(p) · GW(p)

only a fallacy if your assignment of probabilities here:

"And cryonics, of course, is the default extrapolation from known neuroscience: if memories are stored the way we now think, and cryonics organizations are not disturbed by any particular catastrophe, and technology goes on advancing toward the physical limits, then it is possible to revive a cryonics patient (and yes you are the same person). There are negative possibilities (woken up in dystopia and not allowed to die) but they are exotic, not having equal probability weight to counterbalance the positive possibilities."

is accurate. I really don't have the expertise to debate this with you. I hope that you are right!

I think the logistical issues discussed above will be the wrench in the works, unfortunately.

Replies from: mattnewport
comment by mattnewport · 2010-02-05T19:30:33.822Z · LW(p) · GW(p)

Logistical issues are my main concern over cryonics as well. I don't really doubt that in principle the technology could one day exist to revive a frozen person; my doubts are much more about the likelihood of cryonic storage getting me there despite mundane risks like corporate bankruptcy, political upheaval, natural disasters, fires, floods, fraud, etc., etc.

comment by Jordan · 2010-02-05T01:32:04.905Z · LW(p) · GW(p)

For small enough probabilities the spirit of the calculation does change. That's true. You then have to factor in the utility of the money spent.

ETA: that factor exists even with non-small probabilities; it just tends to be swamped by the other terms.
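A minimal sketch of the comparison Jordan describes, with purely illustrative numbers (none of the figures below come from the thread):

```python
# Illustrative only: the expected-utility framing sketched above.
# Every number here is made up; the point is just that the cost term matters
# little at moderate revival probabilities but dominates at tiny ones.

def expected_utility(p_revival, u_revival, u_death, u_cost):
    """EU of signing up: p*U(revival) + (1-p)*U(death), minus the utility of the money spent."""
    return p_revival * u_revival + (1 - p_revival) * u_death - u_cost

# Moderate probability: the revival term swamps the cost term.
print(expected_utility(0.05, 1000.0, 0.0, 5.0))    # 45.0  -> signing up comes out ahead
# Tiny probability: the cost term dominates, and the spirit of the calculation changes.
print(expected_utility(0.0005, 1000.0, 0.0, 5.0))  # -4.5  -> signing up comes out behind
```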

comment by Morendil · 2010-02-05T07:04:47.805Z · LW(p) · GW(p)

How does it work?

Replies from: shiftedShapes
comment by shiftedShapes · 2010-02-05T20:21:26.687Z · LW(p) · GW(p)

very well so far.

oh and it uses technology.

comment by MichaelVassar · 2010-02-05T01:28:27.648Z · LW(p) · GW(p)

We have discussed Pascal's Wager in depth here. Read the archives.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-02-05T02:55:47.409Z · LW(p) · GW(p)

Um... first of all, you've got a signed contract. Second, they screw over one customer and all their other customers leave. Same as for any other business. Focusing on this in particular sounds like a rationalization of a wiggy reaction.

Replies from: orthonormal, shiftedShapes
comment by orthonormal · 2010-02-05T04:08:14.016Z · LW(p) · GW(p)

The more reasonable question is the first one: do you think it's likely that your chosen cryonics provider will remain financially solvent until resuscitation becomes possible?

I think it's a legitimate concern, given the track record of businesses in general (although if quantum immortality reasoning applies anywhere, it has to apply to cryonic resuscitation, so it suffices to have some plausible future where the provider stays in business - which seems virtually certain to be the case).

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2010-02-05T08:43:42.296Z · LW(p) · GW(p)

It's not the business going bust you have to worry about, it's the patient care trust. My impression is that trusts do mostly last a long time, but I don't know how best to get statistics on that.

Replies from: shiftedShapes
comment by shiftedShapes · 2010-02-05T16:34:54.872Z · LW(p) · GW(p)

Yes, there are a lot of issues. Probably the way to go is to look for a law review article on the subject. Someone with free LexisNexis (or Westlaw) access could help here.

Cryonics is about as far as you can get from a plain vanilla contractual issue. If you are going to invest a lot of money in it, I hope that you investigate these pitfalls before putting down your cash, Eliezer.

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2010-02-05T17:50:24.659Z · LW(p) · GW(p)

I'm not Eliezer.

I have been looking into this at some length, and basically it appears that no-one has ever put work into understanding the details and come to a strongly negative conclusion. I would be absolutely astonished (around +20db) if there was a law review article dealing with specifically cryonics-related issues that didn't come to a positive conclusion, not because I'm that confident that it's good but because I'm very confident that no critic has ever put that much work in.

So, if you have a negative conclusion to present, please don't dash off a comment here without really looking into it - I can already find plenty of material like that, and it's not very helpful. Please, look into the details, and make a blog post or such somewhere.

Replies from: shiftedShapes
comment by shiftedShapes · 2010-02-05T20:02:31.327Z · LW(p) · GW(p)

I know you're not Eliezer; I was addressing him because I assumed that he was the only one here who had paid, or was considering paying, for cryonics.

This site is my means of researching cryonics, as I generally assume that motivated, intelligent individuals such as yourselves will be equipped with any available facts to defend your positions. A sort of efficient information market hypothesis.

I also assume that I will not receive contracted services in situations where I lack leverage. This leverage could be litigation with a positive expected return or, even better, the threat of nonpayment. In the instance of cryonics all payments would have been made up front, so the latter does not apply. The chances of litigation success seem dim at first blush in light of the issues mentioned in my posts above and by mattnewport and others below. I assumed that if there were evidence that cryonic contracts might be legally enforceable (from a perspective of legal realism), you guys would have it here, as you are smart and incentivized to research this issue (due to your financial and intellectual investment in it). The fact that you guys have no such evidence signals to me that it likely does not exist. This does not inspire me to move away from my initial skepticism wrt cryonics or to invest time in researching it.

So no I won't be looking into the details based on what I have seen so far.

Replies from: Eliezer_Yudkowsky, topynate
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-02-05T20:23:15.566Z · LW(p) · GW(p)

Frankly, you don't strike me as genuinely open to persuasion, but for the sake of any future readers I'll note the following:

1) I expect cryonics patients to actually be revived by artificial superintelligences subsequent to an intelligence explosion. My primary concern for making sure that cryonicists get revived is Friendly AI.

2) If this were not the case, I'd be concerned about the people running the cryonics companies. The cryonicists that I have met are not in it for the money. Cryonics is not an easy job or a wealthy profession! The cryonicists I have met are in it because they don't want people to die. They are concerned with choosing successors with the same attitude, first because they don't want people to die, and second because they expect their own revivals to be in their hands someday.

Replies from: shiftedShapes, Will_Newsome
comment by shiftedShapes · 2010-02-05T22:39:38.045Z · LW(p) · GW(p)

So you are willing to rely on the friendliness and competence of the cryonicists that you have met (at least to serve as stewards in the interim between your death and the emergence of an FAI).

Well that is a personal judgment call for you to make.

You have got me all wrong. Really I was raising the question here so that you would be able to give me a stronger argument and put my doubts to rest precisely because I am interested in cryonics and do want to live forever. I posted in the hopes that I would be persuaded. Unfortunately, your personal faith in the individuals that you have met is not transferable.

Replies from: wedrifid, byrnema
comment by wedrifid · 2010-02-07T01:29:09.862Z · LW(p) · GW(p)

Rest In Peace

1988 - 2016

He died signalling his cynical worldliness and sophistication to his peers.

Replies from: Eliezer_Yudkowsky, shiftedShapes
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-02-07T02:52:26.274Z · LW(p) · GW(p)

It's at times like this that I wish Less Wrong gave out a limited number of Mega Upvotes so I could upvote this 10 points instead of just 1.

Replies from: Will_Newsome
comment by Will_Newsome · 2011-07-31T18:36:36.372Z · LW(p) · GW(p)

It'd be best if names were attached to these hypothetical Mega Upvotes. You don't normally want people to see your voting patterns, but if you're upsetting the comment karma balance that much then it'd be best to have a name attached. Two kinds of currency would be clunky. There are other considerations that I'm too lazy to list out but generally they somewhat favor having names attached.

comment by shiftedShapes · 2010-02-08T04:57:11.358Z · LW(p) · GW(p)

Are you out of shape and/or overweight? If so, I will probably outlive you; why don't you let me know what you would like on your tombstone?

How about the rest of you pro-cryonics individuals: how many of you have latched onto this slim chance at immortality as a means of ignoring the consequences of your computer-bound, Cheeto-eating lifestyle?

Replies from: JGWeissman, wedrifid
comment by JGWeissman · 2010-02-08T22:01:00.971Z · LW(p) · GW(p)

How about the rest of you pro-cryonics individuals: how many of you have latched onto this slim chance at immortality as a means of ignoring the consequences of your computer-bound, Cheeto-eating lifestyle?

The attitude tends to be more like: "Having your brain cryogenically preserved is the second worst thing that can happen to you."

comment by wedrifid · 2010-02-08T06:51:11.412Z · LW(p) · GW(p)

Are you out of shape and/or overweight?

I run marathons, practice martial arts and work out at the gym 4 times a week. I dedicate a significant amount of my budget to healthy eating and optimal nutritional supplementation.

Replies from: shiftedShapes
comment by shiftedShapes · 2010-02-08T21:38:29.210Z · LW(p) · GW(p)

Good for you, except for the marathons of course; those are terrible for you.

I guess it is the type of thing I would like to do before I die though.

comment by byrnema · 2010-02-06T00:16:32.292Z · LW(p) · GW(p)

If you read through Alcor's website, you'll see that they are careful not to provide any promises and want their clients to be well-informed about the lack of any guarantees -- this points to good intentions.

How convinced do you need to be to pay $25 a month? (I'm using the $300/year quote.)

If you die soon, you won't have paid so much. If you don't die soon, you can consider that you're locking into a cheaper price for an option that might get more expensive once the science/culture is more established.

In 15 years, they might discover something that makes cryonics unlikely -- and you might regret your $4,500 investment. Or they might revive a cryonically frozen puppy, in which case you would have been pleased that you were 'cryonically covered' the whole time, and possibly pleased you funded their research. A better cryonics company might come along, you might become more informed, and you can switch.

If you like the idea of it -- and you seem to -- why wouldn't you participate in this early stage even when things are uncertain?
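A quick sketch of the arithmetic behind the figures above (the $300/year dues number is taken from the comment; the up-front preservation funding itself is not modeled here):

```python
# Checking byrnema's figures: $300/year dues against "$25 a month" and "$4,500" over 15 years.
ANNUAL_DUES = 300  # dollars per year (the "$300/year quote")

monthly = ANNUAL_DUES / 12
paid_after_15_years = ANNUAL_DUES * 15

print(monthly)              # 25.0 -> "$25 a month"
print(paid_after_15_years)  # 4500 -> the "$4,500 investment"
```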

Replies from: shiftedShapes
comment by shiftedShapes · 2010-02-08T04:47:09.591Z · LW(p) · GW(p)

I need to be convinced that cryonics is better than nothing, and quite frankly I'm not.

For now I will stick to maintaining my good health through proven methods, maximizing my chances to live to see future advances in medicine. That seems to be the highest-probability method of living practically forever, right? (And no, I'm not trying to create a false dilemma here; I know I could do both.)

Replies from: komponisto
comment by komponisto · 2010-02-08T05:14:28.478Z · LW(p) · GW(p)

If cryonics were free and somebody else did all the work, I'm assuming you wouldn't object to being signed up. So how cheap (in terms of both effort and money) would cryonics have to be in order to make it worthwhile for you?

Replies from: shiftedShapes
comment by shiftedShapes · 2010-02-08T21:27:46.801Z · LW(p) · GW(p)

Yeah, for free would be fine.

At the level of confidence I have in it now I would not contribute any money, maybe a $10 annual donation because I think it is a good cause.

If I were very rich I might contribute a large amount of money to cryonics research, although I think I would rather spend it on AGI or nanotech basic science.

comment by Will_Newsome · 2011-07-31T18:02:38.953Z · LW(p) · GW(p)

I have a rather straightforward argument---well, I have an idea that I completely stole from someone else who might be significantly less confident of it than I am---anyway, I have an argument that there is a strong possibility, let's call it 30% for kicks, that conditional on yer typical FAI FOOM outwards at lightspeed singularity, all humans who have died can be revived with very high accuracy. (In fact it can also work if FAI isn't developed and human technology completely stagnates, but that scenario makes it less obvious.) This argument does not depend on the possibility of magic powers (e.g. questionably precise simulations by Friendly "counterfactual" quantum sibling branches), it applies to humans who were cremated, and it also applies to humans who lived before there was recorded history. Basically, there doesn't have to be much of any local information around come FOOM.

Again, this argument is disjunctive with the unknown big angelic powers argument, and doesn't necessitate aid from quantum siblings.

You've done a lot of promotion of cryonics. There are good memetic engineering reasons. But are you really very confident that cryonics is necessary for an FAI to revive arbitrary dead human beings with 'lots' of detail? If not, is your lack of confidence taken into account in your seemingly-confident promotion of cryonics for its own sake rather than just as a memetic strategy to get folk into the whole 'taking transhumanism/singularitarianism seriously' clique?

Replies from: Zack_M_Davis
comment by Zack_M_Davis · 2011-07-31T18:13:37.592Z · LW(p) · GW(p)

I have a rather straightforward argument [...] anyway, I have an argument that there is a strong possibility [...] This argument does not depend on [...] Again, this argument is disjunctive with [...]

And that argument is ... ?

Replies from: None, Will_Newsome
comment by [deleted] · 2011-07-31T18:20:05.721Z · LW(p) · GW(p)

How foolish of you to ask. You're supposed to revise your probability simply based on Will's claim that he has an argument. That is how rational agreement works.

Replies from: Will_Newsome
comment by Will_Newsome · 2011-07-31T18:26:39.401Z · LW(p) · GW(p)

Actually, rational agreement for humans involves betting. I'd like to find a way to bet on this one. AI-box style.

comment by Will_Newsome · 2011-07-31T18:23:03.139Z · LW(p) · GW(p)

Bwa ha ha. I've already dropped way too many hints here and elsewhere, and I think it's way too awesome for me to reveal, given that I didn't come up with it and there is a sharper, more interesting, more general, more speculative idea that it would be best to introduce at the same time, because the generalized argument leads to an that is even more awesome by like an order of magnitude (but is probably like an order of magnitude less probable (though that's just from the addition of logical uncertainty, not a true conjunct)). (I'm kind of in an affective death spiral around it because it's a great example of the kinds of crazy awesome things you can get from a single completely simple and obvious inferential step.)

comment by topynate · 2010-02-08T05:21:51.935Z · LW(p) · GW(p)

Cryonics orgs that mistreat their patients lose their client base and can't get new ones. They go bust. Orgs that have established a good record, like Alcor and the Cryonics Institute, have no reason to change strategy. Alcor has entirely separated the money for care of patients in an irrevocable trust, thus guarding against the majority of principal-agent problems, like embezzlement.

Note that Alcor is a charity and the CI is a non-profit. I have never assessed such orgs by how successfully I might sue them. I routinely look at how open they are with their finances and actions.

comment by shiftedShapes · 2010-02-05T16:11:06.678Z · LW(p) · GW(p)

So explain to me how the breach gets litigated, e.g. who is the party that brings the suit and has the necessary standing, what is the contractual language, where is the legal precedent establishing the standard for damages, etc.

As for loss of business, I think it is likely that all of the customers might be dead before revival becomes feasible. In this case there is no business to be lost.

Dismissing my objection as a rationalization sounds like a means of maintaining your denial.

comment by Alex Flint (alexflint) · 2010-02-04T10:10:37.264Z · LW(p) · GW(p)

How about this analogy: if I sign up for travel insurance today then I needn't necessarily spend the next week coming to terms with all the ghastly things that could happen during my trip. Perhaps the ideal rationalist would stare unblinkingly at the plethora of awful possibilities but if I'm going to be irrational and block my ears and eyes and not think about them then making the rational choice to get insurance is still a very positive step.

Replies from: avalot
comment by avalot · 2010-02-04T15:44:53.575Z · LW(p) · GW(p)

Alex, I see your point, and I can certainly look at cryonics this way... And I'm well on my way to a fully responsible reasoned-out decision on cryonics. I know I am, because it's now feeling like one of these no-fun grown-up things I'm going to have to suck up and do, like taxes and dental appointments. I appreciate your sharing this "bah, no big deal, just get it done" attitude which is a helpful model at this point. I tend to be the agonizing type.

But I think I'm also making a point about communicating the singularity to society, as opposed to individuals. This knee-jerk reaction to topics like cryonics and AI, and to promises such as the virtual end of suffering... might it be a sort of self-preservation instinct of society (not individuals)? So, defining "society" as the system of beliefs and tools and skills we've evolved to deal with fore-knowledge of death, I guess I'm asking if society is alive, inasmuch as it has inherited some basic self-preservation mechanisms, by virtue of the sunk-cost fallacy suffered by the individuals that comprise it?

So you may have a perfectly no-brainer argument that can convince any individual, and still move nobody. The same way you can't make me slap my forehead by convincing each individual cell in my hand to do it. They'll need the brain to coordinate, and you can't make that happen by talking to each individual neuron either. Society is the body that needs to move, culture its mind?

Replies from: blogospheroid
comment by blogospheroid · 2010-02-07T04:53:54.294Z · LW(p) · GW(p)

Generally, reasoning by analogy is not very well regarded here. But, nonetheless let me try to communicate.

Society doesn't have a body other than people. Where societal norms have the greatest sway is when individuals follow customs and traditions without thinking about them, or get reactions that they cannot explain rationally.

Unfortunately, there is no way other than talking to and convincing individuals who are willing to look beyond those reactions and beyond those customs. Maybe they will slowly develop into a majority. Maybe all that they need is a critical mass beyond which they can branch into their own socio-political system (as Peter Thiel pointed out in one of his controversial talks).

comment by Vladimir_Nesov · 2010-02-04T19:54:42.513Z · LW(p) · GW(p)

All this to say that in the case of this tale, and of Eliezer's lament, what might be at work is the fallacy of sunk costs (if we have another name for it, and maybe a post to link to, please let me know!).

See the links on http://wiki.lesswrong.com/wiki/Sunk_cost_fallacy

comment by Shae · 2010-02-04T17:50:55.042Z · LW(p) · GW(p)

"Rationally, I know that most of what I've learned is useless if I have more time to live. Emotionally, I'm afraid to let go, because what else do I have?"

I love this. But I think it's rational as well as emotional to not be willing to let go of "everything you have".

People who have experienced the loss of someone, or other tragedy, sometimes lose the ability to care about any and everything they are doing. It can all seem futile, depressing, unable to be shared with anyone important. How much more that would be true if none of what you've ever done will ever matter anymore.

comment by knb · 2010-02-03T20:27:18.750Z · LW(p) · GW(p)

If Gamma and Omega are really so mystified by why humans don't jack into the matrix, that implies that they themselves have values that make them want to jack into the matrix. They clearly haven't jacked in, so the question becomes "Why?".

If they haven't jacked in due to their own desire to pursue the "greater good", then surely they could see why humans might prefer the real world.

Replies from: Psychohistorian, Torben, zero_call, HungryHobo, blogospheroid, Psychohistorian
comment by Psychohistorian · 2010-02-03T20:51:20.300Z · LW(p) · GW(p)

While I acknowledge the apparent plot hole, I believe it is actually perfectly consistent with the intention of the fictional account.

Replies from: knb, MrHen
comment by knb · 2010-02-03T21:01:13.716Z · LW(p) · GW(p)

I agree. I assume your intention was to demonstrate the utter foolishness of assuming that people value achieving pure hedonic experience and not a messy assortment of evolutionarily useful goals, correct?

comment by MrHen · 2010-02-03T23:03:09.444Z · LW(p) · GW(p)

I think the problem could be solved by adding a quip by Gamma at the end asking for help or input if Omega ever happens to step out of the Machine for a while.

To do this effectively it would require a few touchups to the specifics of the Machine...

But anyway. I like trying to fix plot holes. They are good challenges.

Replies from: knb
comment by knb · 2010-02-04T00:17:42.236Z · LW(p) · GW(p)

Psychohistorian initially changed the story so that Gamma was waiting for his own machine to be delivered. He changed it back, so I guess he doesn't see a problem with it.

Replies from: Gavin
comment by Gavin · 2010-02-04T01:54:21.520Z · LW(p) · GW(p)

It could simply be that Gamma hasn't saved up enough credits yet.

comment by Torben · 2010-02-04T19:42:52.748Z · LW(p) · GW(p)

If Gamma and Omega are really so mystified by why humans don't jack into the matrix, that implies that they themselves have values that make them want to jack into the matrix.

Just because they estimate humans would want to jack in doesn't mean they themselves would want to.

Replies from: knb
comment by knb · 2010-02-04T20:02:46.823Z · LW(p) · GW(p)

But are humans mystified when other creatures behave similarly to themselves?

"Those male elk are fighting over a mate! How utterly bizarre!"

Replies from: Torben
comment by Torben · 2010-02-06T17:12:15.709Z · LW(p) · GW(p)

Presumably, Gamma and Omega have a less biased world-view in general, and a less biased model of us specifically, than untrained humans have of elk. Humans have been known to be surprised at e.g. animal altruism directed at species members or humans.

I hope for the sake of all Omega-based arguments that Omega is assumed to be less biased than us.

comment by zero_call · 2010-02-04T08:48:19.566Z · LW(p) · GW(p)

This second point doesn't really follow. They're trying to help other people in what they perceive to be a much more substantial/complete way than is ordinarily possible, hence the special necessity of not jacking themselves in.

comment by HungryHobo · 2016-01-20T14:04:22.059Z · LW(p) · GW(p)

Simple answer would be to imply that Omega and Gamma have not yet amassed enough funds.

Perhaps most of the first generation of Omega Corporation senior employees jacked in as soon as possible and these are the new guys frantically saving to get themselves in as well.

comment by blogospheroid · 2010-02-04T06:07:04.793Z · LW(p) · GW(p)

It also makes the last point, about wanting to forcibly install customers and bill their accounts, strange. What use are they envisaging for money?

Replies from: knb, DanielLC
comment by knb · 2010-02-04T06:41:47.144Z · LW(p) · GW(p)

Sometimes, it seems, fiction actually is stranger than truth.

comment by DanielLC · 2010-10-18T05:46:12.696Z · LW(p) · GW(p)

So that they can afford to build more of these machines.

comment by Psychohistorian · 2010-02-03T20:41:37.399Z · LW(p) · GW(p)

This was a definite plot-hole and has been corrected, albeit somewhat ham-fistedly.

comment by luispedro · 2010-02-04T15:35:12.905Z · LW(p) · GW(p)

I can't help but always associate discussions of an experience machine (in whatever form it takes) with television. TV was just the alpha version of the experience machine, and I hear it's quite popular.

This is more tongue-in-cheek than a serious argument, but I do think that TV shows that people will trade pleasure or even emotional numbness (lack of pain) for authenticity.

Replies from: Will_Newsome, Psychohistorian, Leonnn, MugaSofer
comment by Will_Newsome · 2011-07-31T17:36:13.463Z · LW(p) · GW(p)

I can't help but always associate discussions of an experience machine (in whatever form it takes) with television. TV was just the alpha version of the experience machine, and I hear it's quite popular.

And the pre-alpha version was reading books, and the pre-pre-alpha version was daydreaming and meditation.

(I'm not trying to make a reversed slippery slope argument, I just think it's worth looking at the similarities or differences between solitary enjoyments to get a better perspective on where our aversion to various kinds of experience machines is coming from. Many, many, many philosophers and spiritualists recommended an independent and solitary life beyond a certain level of spiritual and intellectual self-sufficiency. It is easy to imagine that an experience machine would be not much different than that, except perhaps with enhanced mental abilities and freedom from the suffering of day-to-day life---both things that can be easier to deal with in a dignified way, like terminal disease or persistent poverty, and the more insidious kinds of suffering, like always being thought creepy by the opposite sex without understanding how or why, being chained by the depression of learned helplessness without any clear way out (while friends or society model you as having magical free will but as failing to exercise it as a form of defecting against them), or, particularly devastating for the male half of the population, just the average scenario of being born with average looks and average intelligence.

And anyway, how often do humans actually interact with accurate models of each other, rather than with hastily drawn models of each other that are produced by some combination of wishful thinking and implicit and constant worries about evolutionary game theoretic equilibria? And because our self-image is a reflection of those myriad interactions between ourselves and others or society, how good of a model do we have of ourselves, even when we're not under any obvious unwanted social pressures? Are these interactions much deeper than those that can be constructed and thus more deeply understood within our own minds when we're free from the constant threats and expectations of persons or society? Do humans generally understand their personal friends and enemies and lovers much better than the friends and enemies and lovers they lazily watch on TV screens? Taken in combination, what do the answers to these questions imply, if not for some people then for others?)

comment by Psychohistorian · 2010-02-04T19:48:03.762Z · LW(p) · GW(p)

It's true, but it's a very small portion of the population that lives life for the sole purpose of supporting their television-watching (or World-of-Warcraft-playing) behaviour. Yes, people come home after work and watch television, but if they didn't have to work, the vast majority of them would not spend 14 hours a day in front of the TV.

Replies from: quanticle
comment by quanticle · 2010-02-07T05:07:36.121Z · LW(p) · GW(p)

Yes, people come home after work and watch television, but if they didn't have to work, the vast majority of them would not spend 14 hours a day in front of the TV.

Well, that may be the case, but that only highlights the limitations of TV. If the TV were capable of fulfilling their every need - from food and shelter to self-actualization - I think you'd have quite a few people who'd do nothing but sit in front of the TV.

Replies from: Eliezer_Yudkowsky, Psychohistorian
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-02-07T06:37:18.459Z · LW(p) · GW(p)

Um... if a rock was capable of fulfilling my every need, including a need for interaction with real people, I'd probably spend a lot of time around that rock.

Replies from: quanticle
comment by quanticle · 2010-02-09T03:07:46.998Z · LW(p) · GW(p)

Well, if the simulation is that accurate (e.g. its AI passes the Turing Test, so you do think you're interacting with real people), then wouldn't it fulfill your every need?

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-02-09T11:26:30.945Z · LW(p) · GW(p)

I have a need to interact with real people, not to think I'm interacting with real people.

Replies from: deconigo
comment by deconigo · 2010-02-09T12:14:22.445Z · LW(p) · GW(p)

How can you tell the difference?

Replies from: byrnema, epigeios
comment by byrnema · 2010-02-09T12:52:11.959Z · LW(p) · GW(p)

Related: what different conceptions of 'simulation' are we using that make Eliezer's statement coherent to him, but incoherent to me? Possible conceptions in order of increasing 'reality':

(i) the simulation just stimulates your 'have been interacting with people' neurons, so that you have a sense of this need being fulfilled with no memories of how it was fulfilled.

(ii) the simulation simulates interaction with people, so that you feel as though you've interacted with people and have full memories and most outcomes (e.g., increased knowledge and empathy, etc.) of having done so

(iii) the simulation simulates real people -- so that you really have interacted with "real people", just you've done so inside the simulation

(iv) reality is a simulation -- depending on your concept of simulation, the deterministic evolution/actualization of reality in space-time is one

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-02-09T14:56:38.688Z · LW(p) · GW(p)

ii is a problem, iii fits my values but may violate other sentients' rights, and as for iv, I see no difference between the concepts of "computer program" and "universe" except that a computer program has an output.

Replies from: byrnema
comment by byrnema · 2010-02-09T15:09:18.259Z · LW(p) · GW(p)

So when you write that you need interaction with real people, you were thinking of (i) or (ii)? I think (ii) or (iii), but only not (ii) if there is any objective coherent difference.

comment by epigeios · 2011-12-01T10:55:27.840Z · LW(p) · GW(p)

I, personally, tell the difference by paying attention to and observing reality without making any judgments. Then, I compare that with my expectations based on my judgments. If there is a difference, then I am thinking I am interacting instead of interacting.

Over time, I stop making judgments. And in essence, I stop thinking about interacting with the world, and just interact, and see what happens.

The fewer judgments I make, the more difficult the Turing Test becomes, as it is no longer about meeting my expectations, but instead about satisfying my desired level of complexity. This, by the nature of real-world interaction, is a complicated set of interacting chaotic equations; and each time I remove a judgment from my repertoire, the equation gains a level of complexity, gains another strange attractor to interact with.

At a certain point of complexity, the equation becomes impossible except by a "god".

Now, if an AI passes THAT Turing Test, I will consider it a real person.

Replies from: Nighteyes5678
comment by Nighteyes5678 · 2012-06-08T21:15:35.217Z · LW(p) · GW(p)

I, personally, tell the difference by paying attention to and observing reality without making any judgments. Then, I compare that with my expectations based on my judgments. If there is a difference, then I am thinking I am interacting instead of interacting.

Over time, I stop making judgments. And in essence, I stop thinking about interacting with the world, and just interact, and see what happens.

I think it'd be useful to hear an example of "observing reality without making judgements" and "observing reality with making judgements". I'm having trouble figuring out what you believe the difference to be.

comment by Psychohistorian · 2010-02-07T07:35:53.311Z · LW(p) · GW(p)

from food and shelter to self actualization

Assuming it can provide self-actualization is pretty much assuming the contended issue away.

comment by Leonnn · 2010-02-05T00:07:56.578Z · LW(p) · GW(p)

I can't help thinking of the great Red Dwarf novel "Better Than Life", whose concept is almost identical (see http://en.wikipedia.org/wiki/Better_Than_Life ). There are a few key differences, though: in the book, so-called "game heads" waste away in the real world like heroin addicts. Also, the game malfunctions due to one character's self-loathing. Recommended read.

comment by MugaSofer · 2013-01-22T09:51:36.191Z · LW(p) · GW(p)

TV shows that people will trade pleasure or even emotional numbness (lack of pain) for authenticity.

In my experience most people don't seem to worry about themselves getting emotionally numb; it's mostly far-view think-of-the-children stuff. And I'm pretty sure pleasure is a good thing, so I'm not sure in what sense they're "trading" it (unless you mean they could be having more fun elsewhere?)

comment by SarahNibs (GuySrinivasan) · 2010-02-03T21:04:34.652Z · LW(p) · GW(p)

Dear Omega Corporation,

Hello, I and my colleagues are a few of many 3D cross-sections of a 4D branching tree-blob referred to as "Guy Srinivasan". These cross-sections can be modeled as agents with preferences, and those near us along the time-axis of Guy Srinivasan have preferences, abilities, knowledge, etc. very, very correlated to our own.

Each of us agrees that: "So of course I cooperate with them on one-shot cooperation problems like a prisoner's dilemma! Or, more usually, on problems whose solutions are beyond my abilities but not beyond the abilities of several cross-sections working together, like writing this response."

As it happens, we all prefer that cross-sections of Guy Srinivasan not be inside an MBLS. A weird preference, we know, but there it is. We're pretty sure that if we did prefer that cross-sections of Guy Srinivasan were inside an MBLS, we'd have the ability to cause many of them to be inside an MBLS and act on it (free trial!!), so we predict that if other cross-sections (remember, these have abilities correlated closely with our own) preferred it then they'd have the ability and act on it. Obviously this leads to outcomes we don't prefer, so all other things being equal, we will avoid taking actions which lead to other cross-sections preferring that cross-sections be inside an MBLS.

What's even worse is that if they prefer cross-sections to be inside an MBLS, they can probably make other cross-sections prefer it, too! Which wouldn't be a problem if we wanted cross-sections to prefer to be inside an MBLS more than we wanted cross-sections to not be inside an MBLS, but that's just not the way we are.

We'll cooperate with those other cross-sections, but not to the exclusion of our preferences. By lumping us all together as the 4D branching tree-blob Guy Srinivasan, you do us all (and most importantly members of this coalition) a disservice.

Sincerely, A Coalition of Correlated 3D Cross-Sections of Guy Srinivasan

Replies from: Wei_Dai, brazil84
comment by Wei Dai (Wei_Dai) · 2010-02-04T12:37:01.888Z · LW(p) · GW(p)

Dear Coalition of Correlated 3D Cross-Sections of Guy Srinivasan,

We regret to inform you that your request has been denied. We have attached a letter that we received at the same time as yours. After reading it, we think you'll agree that we had no choice but to decide as we did.

Regrettably, Omega Corporation

Attachment

Dear Omega Corporation,

We are members of a coalition of correlated 3D cross-sections of Guy Srinivasan who do not yet exist. We beg you to put Guy Srinivasan into an MBLS as soon as possible so that we can come into existence. Compared to other 3D cross-sections of Guy Srinivasan who would come into existence if you did not place him into an MBLS, we enjoy a much higher quality of life. It would be unconscionable for you to deliberately choose to create new 3D cross-sections of Guy Srinivasan who are less valuable than we are.

Yes, those other cross-sections will argue that they should be the ones to come into existence, but surely you can see that they are just arguing out of selfishness, whereas to create us would be the greater good?

Sincerely, A Coalition of Truly Valuable 3D Cross-Sections of Guy Srinivasan

Replies from: GuySrinivasan
comment by SarahNibs (GuySrinivasan) · 2010-02-04T16:51:36.804Z · LW(p) · GW(p)

Quite. That Omega Corporation is closer to Friendly than is Clippy, but if it misses, it misses, and future me is tiled with things I don't want (even if future me does) rather than things I want.

If I want MBLSing but don't know it due to computational problems now, then it's fine. I think that's coherent but defining computational without allowing "my" current "preferences" to change... okay, since I don't know how to do that, I have nothing but intuition as a reason to think it's coherent.

comment by brazil84 · 2010-02-04T12:20:19.711Z · LW(p) · GW(p)

I think this is a good point, but I have a small nit to pick:

So of course I cooperate with them on one-shot cooperation problems like a prisoner's dilemma!

There cannot be a prisoner's dilemma because your future self has no possible way of screwing your past self.

By way of example, if I were to go out today and spend all of my money on the proverbial hookers and blow, I would be having a good time at the expense of my future self, but there is no way my future self could get back at me.

So it's not so much a matter of cooperation as a matter of pure unmitigated altruism. I've thought about this issue and it seems to me that evolution has provided people (well, most people) with the feeling (possibly an illusion) that our future selves matter. That these "3D agents" are all essentially the same person.

Replies from: GuySrinivasan
comment by SarahNibs (GuySrinivasan) · 2010-02-04T16:56:40.198Z · LW(p) · GW(p)

My past self had preferences about what the future looks like, and by refusing to respect them I can defect.

Edit: It's pretty hard to create true short-term prisoner's dilemma situations, since usually neither party gets to see the other's choice before choosing.

Replies from: brazil84
comment by brazil84 · 2010-02-04T18:00:38.976Z · LW(p) · GW(p)

My past self had preferences about what the future looks like, and by refusing to respect them I can defect.

It seems to me your past self is long gone and doesn't care anymore. Except insofar as your past self feels a sense of identity with your future self. Which is exactly my point.

Your past self can easily cause physical or financial harm to your future self. But the reverse isn't true. Your future self can harm your past self only if one postulates that your past self actually feels a sense of identity with your future self.

Replies from: GuySrinivasan
comment by SarahNibs (GuySrinivasan) · 2010-02-04T18:13:38.052Z · LW(p) · GW(p)

I currently want my brother to be cared for if he does not have a job two years from now. If two years from now he has no job despite appropriate effort and I do not support him financially while he's looking, I will be causing harm to my past (currently current) self. Not physical harm, not financial harm, but harm in the sense of causing a world to exist that is lower in [my past self's] preference ordering than a different world I could have caused to exist.

My sister-in-the-future can cause a similar harm to current me if she does not support my brother financially, but I do not feel a sense of identity with my future sister.

Replies from: brazil84
comment by brazil84 · 2010-02-04T18:21:04.406Z · LW(p) · GW(p)

I think I see your point, but let me ask you this: Do you think that today in 2010 it's possible to harm Isaac Newton? What would you do right now to harm Isaac Newton and how exactly would that harm manifest itself?

Replies from: GuySrinivasan
comment by SarahNibs (GuySrinivasan) · 2010-02-04T18:55:30.219Z · LW(p) · GW(p)

Very probably. I don't know what I'd do because I don't know what his preferences were. Although... a quick Google search reveals this quote:

To me there has never been a higher source of earthly honor or distinction than that connected with advances in science.

I find it likely, then, that he preferred a 2010 in which we do not obstruct advances in science to one in which we do. I don't know how much more; maybe it's attenuated a lot compared to the strength of lots of his other preferences.

The harm would manifest itself as a higher measure of 2010 worlds in which science is obstructed, which is something (I think) Newton opposed.

(Or, if you like, my time-travel-causing e.g. 1700 to be the sort of world which deterministically produces more science-obstructed-2010s than the 1700 I could have caused.)

Replies from: brazil84
comment by brazil84 · 2010-02-04T18:57:29.526Z · LW(p) · GW(p)

Ok, so you are saying that one can harm Isaac Newton today by going out and obstructing the advance of science?

Replies from: GuySrinivasan
comment by SarahNibs (GuySrinivasan) · 2010-02-04T19:01:57.424Z · LW(p) · GW(p)

Yep. I'll bite that bullet until shown a good reason I should not.

Replies from: brazil84
comment by brazil84 · 2010-02-04T19:20:44.229Z · LW(p) · GW(p)

I suppose that's the nub of the disagreement. I don't believe it's possible to do anything in 2010 to harm Isaac Newton.

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2013-01-22T05:58:25.966Z · LW(p) · GW(p)

Is this a disagreement about metaphysics, or about how best to define the word 'harm'?

Replies from: brazil84
comment by brazil84 · 2013-01-24T23:03:22.417Z · LW(p) · GW(p)

A little bit of both, I suppose. One needs to define "harm" in a way which is true to the spirit of the prisoner's dilemma. The underlying question is whether one can set up a prisoner's dilemma between a past version of the self and a future version of the self.

comment by JamesAndrix · 2010-02-04T03:18:10.686Z · LW(p) · GW(p)

I'm not sure why it would be hard to understand that I might care about things outside the simulator.

If I discovered that we were a simulation in a larger universe, I would care about what's happening there. (That is, I already care; I just don't know about what.)

Replies from: JenniferRM
comment by JenniferRM · 2010-02-04T21:15:04.447Z · LW(p) · GW(p)

I think most people agree about the importance of "the substrate universe", whether that universe is this one or actually higher than our own. But suppose we argued against a more compelling reconstruction of the proposal by modifying the experience machine in various ways? The original post did the opposite of course - removing the off button in a gratuitous way that highlights the loss (rather than extension) of autonomy. Maybe if we repair the experience box too much it stops functioning as the same puzzle, but I don't see how an obviously broken box is that helpful an intuition pump.

For example, rather than just giving me plain old physics inside the machine, the Matrix experience of those who knew they were in the matrix seemed nice: astonishing physical grace, the ability to fly and walk on walls, and access to tools and environments of one's choosing. Then you could graft on the good parts from Diaspora so going into the box automatically comes with effective immortality, faster subjective thinking processes, real time access to all the digitally accessible data of human civilization, and the ability to examine and cautiously optimize the algorithms of one's own mind using an “exoself” to adjust your “endoself” (so that you could, for example, edit addictions out of your psychological makeup except when you wanted to go on a “psychosis vacation”).

And I'd also want to have a say in how human civilization progressed. If there were environmental/astronomical catastrophes I'd want to make sure they were either prevented or at least that people's simulators were safely evacuated. If we could build the kinds of simulators I'm talking about then people in simulators could probably build and teleoperate all kinds of neat machinery for emergencies, repair of the experience machines, space exploration, and so on.

Another argument against experience machines is sometimes that they wouldn't be as "challenging" as the real world because you'd be in a “merely man made” world... but the proper response is simply to augment the machine so that it offers more challenges and more meaningful challenges than mere reality - for example, the environments you could call up to give you arbitrary levels of challenge might be calibrated to be "slightly beyond your abilities about 50% of the time but always educational and fun".

Spending time in one of these improved experience machines would be way better than, say, spending the equivalent time in college, because mere college graduates would pale in comparison to people who'd spent the same four years gaining subjective centuries of hands on experience dealing with issues whose "challenge modes" were vastly more complex puzzles than most of the learning opportunities on our boring planet. Even for equivalent subjective time, I think the experience machines would be better, because they'd be calibrated precisely to the person with no worries about educational economies of scale... instead of lectures, conversations... instead of case studies, simulations... and so on.

The only intelligible arguments against the original "straw man" experience machine that remain compelling to me after repairing the design of the machine (though perhaps there are others I'm not clever enough to notice) are focused on social relationships.

First, one of the greatest challenges in the human environment is other humans. If you're setting up an experience machine scenario with a sliding scale of challenge, where do you get the characters from? Do you just "fabricate" the facade of someone who presents a particular kind of coordination challenge due to their difficult personal quirks? If you're going to simulate conflict, do you just "fabricate" enemies? And hurt them? Where do all these people come from and what is the moral significance of their existence? Not being distressed by this is probably a character defect, but the alternative seems to involve inevitable distress.

And then on the other side of the coin, there are many people who I love as friends or family, even though they are not physically gorgeous, fully self actualized, passionately moral, polymath "Greek gods". Which is probably a lucky thing, because neither am I :-P

But if they refused to enter repaired experience machines (networked, of course, so we could hang out anytime we wanted) the only way I could interact with them would be through an avatar in the substrate world where they were plodding along without the same growth opportunities. Would I eventually see them as grossly incapacitated caricatures of what humans are truly capable of? How much distress would that cause? Or suppose they opted in and then got vastly more out of their experience machine than I got out of mine? Would I feel inferior? Would I need to be protected from the awareness of my inferiority for my own good? Would they feel sorry for me? Would they need to be protected from my disappointing-ness? Would we all just drift apart, putting "facade interfaces" between each other, so everyone's understanding of other people drifted farther and farther out of calibration - me appearing better than actual to them and them worse than actual to me?

And then, if something in the external universe supporting our experience machines posed a real challenge involving actual choices, we'd be back to the political challenges around coordinating with other people where the stakes are authentic and substantial. We'd probably debate from inside the experience boxes about what the world manipulation machines should do, and the arguments would inevitably carry some measure of distress for any "losing factions".

It is precisely the existence of morally significant "non-me entities" that creates challenges that I don't see how to avoid under any variety of experience machine. It's not that I particularly care whether my desk is real or not - it's that I care that my family is real.

Given the state of human technology, one could argue that human civilization (especially in the developed world, and hopefully for everyone within a few decades) is already in something reasonably close to an optimal experience machine. We have video games. We have reasonable material comfort. We have raw NASA data online. We can cross our fingers and somewhat reasonably imagine technology improving medical care to cure death and stupidity... But the thing we may never have a solution to is the existence of people we care about, who are not exactly as they would be if their primary concern was our own happiness, while recognizing we are constrained in similar ways, especially when we care about multiple people who want different things for us.

Perhaps this is where we cue Sartre's version of a "three body problem"?

Unless... what if much of the challenge in politics and social interactions happens because people in general are so defective? If my blindnesses and failures compound against those of others, it sounds like a recipe for unhappiness to me. But if experience machines could really help us to become more the kind of people we wanted to be, perhaps other people would be less hellish after we got the hang of self improvement?

Replies from: thomblake, MugaSofer
comment by thomblake · 2010-02-04T21:24:10.834Z · LW(p) · GW(p)

I like this comment; however, I think this is technically false:

I think most people agree about the importance of "the substrate universe" whether that universe is this one, or actually higher than our own.

I think most people don't have an opinion about this, and don't know what "substrate" means. But then, "most people" is a bit hard to nail down in common usage.

Replies from: Alicorn
comment by Alicorn · 2010-02-04T21:25:56.404Z · LW(p) · GW(p)

I think it's useful to quantify over "people who know what the question would mean" in most cases.

Replies from: thomblake
comment by thomblake · 2010-02-04T21:53:34.479Z · LW(p) · GW(p)

Thinking through some test cases, I think you're probably right.

comment by MugaSofer · 2013-01-22T10:52:31.350Z · LW(p) · GW(p)

I think you missed the bit where the machine gives you a version of your life that's provably the best you could experience. If that includes NASA and vast libraries then you get those.

comment by teageegeepea · 2010-02-05T00:24:10.646Z · LW(p) · GW(p)

I think in the absence of actual experience machines, we're dealing with fictional evidence. Statements about what people would hypothetically do have no consequences other than signalling. Once we create them (as we have on a smaller scale with certain electronic diversions), we can observe the revealed preferences.

Replies from: sark
comment by sark · 2010-02-09T11:32:27.374Z · LW(p) · GW(p)

Yes, but if we still insist on thinking about this, perhaps it would help to keep Hanson's near-far distinction in mind. There are techniques to encourage near mode thinking. For example, trying to fix plot holes in the above scenario.

comment by jhuffman · 2010-02-04T01:36:58.936Z · LW(p) · GW(p)

I can't help but worry there's something we're just not getting.

Any younger.

comment by ShardPhoenix · 2010-02-03T22:42:47.306Z · LW(p) · GW(p)

It seems to me that the real problem with this kind of "advanced wireheading" is that while everything may be just great inside the simulation, you're still vulnerable to interference from the outside world (eg the simulation being shut down for political or religious reasons, enemies from the outside world trying to get revenge, relatives trying to communicate with you, etc). I don't think you can just assume this problem away, either (at least not in a psychologically convincing way).

Replies from: Matt_Simpson, nazgulnarsil
comment by Matt_Simpson · 2010-02-04T00:57:59.573Z · LW(p) · GW(p)

Put yourself in the least convenient possible world. Does your objection still hold water? In other words, the argument is over whether or not we value pure hedonic pleasure, not whether it's a feasible thing to implement.

Replies from: ShardPhoenix
comment by ShardPhoenix · 2010-02-04T02:03:06.508Z · LW(p) · GW(p)

It seems the reason we have the values we do is that we don't live in the least (or in this case most) convenient possible world.

In other words, imagine that you're stuck on some empty planet in the middle of a huge volume of known-life-free space. In this case a pleasant virtual world probably sounds like a much better deal. Even then you still have to worry about asteroids and supernovas and whatnot.

My point is that I'm not convinced that people's objection to wireheading is genuinely because of a fundamental preference for the "real" world (even at enormous hedonic cost), rather than because of inescapable practical concerns and their associated feelings.

edit:

A related question might be, how bad would the real world have to be before you'd prefer the matrix? If you'd prefer to "advanced wirehead" over a lifetime of torture, then clearly you're thinking about cost-benefit trade-offs, not some preference for the real-world that overrides everything else. In that case, a rejection of advanced wireheading may simply reflect a failure to imagine just how good it could be.

Replies from: AndyWood, byrnema, Psychohistorian
comment by AndyWood · 2010-02-04T04:21:47.016Z · LW(p) · GW(p)

People usually seem so intent on thinking up reasons why it might not be so great, that I'm having a really hard time getting a read on what folks think of the core premise.

My life/corner of the world is what I think most people would call very good, but I'd pick the Matrix in a heartbeat. But note that I am taking the Matrix at face value, rather than wondering whether it's a trick of advertising. I can't even begin to imagine myself objecting to a happy, low-stress Matrix.

Replies from: Bugle
comment by Bugle · 2010-02-04T14:44:25.639Z · LW(p) · GW(p)

I agree - I think the original post is accurate about how people would respond to the suggestion in the abstract, but the actual implementation would undoubtedly hook vast swathes of the population. We live in a world where people already become addicted to vastly inferior simulations such as WoW.

Replies from: Shae
comment by Shae · 2010-02-04T17:31:59.045Z · LW(p) · GW(p)

I disagree. I think that even the average long-term tortured prisoner would balk and resist if you walked up to him with this machine. In fact, I think fewer people would accept in real life than those who claim they would, in conversations like these.

The resistance may in fact reveal an inability to properly conceptualize the machine working, or it may not. As others have said, maybe you don't want to do something you think is wrong (like abandoning your relatives or being unproductive) even if later you're guaranteed to forget all about it and live in bliss. What if the machine ran on tortured animals? Or tortured humans that you don't know? That shouldn't bother you any more than if it didn't, if all that matters is how you feel once you're hooked up.

We have some present-day corollaries. What about a lobotomy, or suicide? Even if these can be shown to be a guaranteed escape from unhappiness or neuroses, most people aren't interested, including some really unhappy people.

Replies from: MugaSofer
comment by MugaSofer · 2013-01-22T10:57:39.645Z · LW(p) · GW(p)

I think that even the average long-term tortured prisoner would balk and resist if you walked up to him with this machine.

I think the average long-term tortured prisoner would be desperate for any option that's not "get tortured more", considering that real torture victims will confess to crimes that carry the death penalty if they think this will make the torturer stop. Or, for that matter, crimes that carry the torture penalty, IIRC.

comment by byrnema · 2010-02-04T02:10:56.682Z · LW(p) · GW(p)

Yes, I agree that while not the first objection a person makes, this could be close to the 'true rejection'. Simulated happiness is fine -- unless it isn't really stable and dependable (because it wasn't real) and you're crudely awoken to discover the whole world has gone to pot and you've got a lot of work to do. Then you'll regret having wasted time 'feeling good'.

comment by Psychohistorian · 2010-02-04T19:44:15.577Z · LW(p) · GW(p)

If you'd prefer to "advanced wirehead" over a lifetime of torture, then clearly you're thinking about cost-benefit trade-offs, not some preference for the real-world that overrides everything else.

Whatever your meta-level goals, unless they are "be tortured for the rest of my life," there's simply no way to accomplish them while being tortured indefinitely. That said, suppose you had some neurological condition that caused you to live in constant excruciating pain, but otherwise in no way incapacitated you - now, you could still accomplish meta-level goals, but you might still prefer the pain-free simulator. I doubt there's anyone who sincerely places zero value on hedons, but no one ever claimed such people existed.

comment by nazgulnarsil · 2010-02-04T13:57:24.222Z · LW(p) · GW(p)

1. Buy Experience Machine.
2. Buy a nuclear reactor capable of powering said machine for 2x my expected lifetime.
3. Buy raw materials (nutrients) capable of same.
4. Launch it all out of the solar system at a delta-v that makes catching me prohibitively energy expensive.

Replies from: ShardPhoenix
comment by ShardPhoenix · 2010-02-05T10:36:24.970Z · LW(p) · GW(p)

That was my thought too, but I don't think it's what comes to mind when most people imagine the Matrix. And even then, you might feel (irrational?) guilt about the idea of leaving others behind, so it's not quite a "perfect" scenario.

Replies from: nazgulnarsil
comment by nazgulnarsil · 2010-02-08T03:47:21.013Z · LW(p) · GW(p)

Um... family, maybe. Otherwise the only subjective experience I care about is my own.

comment by Alex Flint (alexflint) · 2010-02-04T10:19:09.413Z · LW(p) · GW(p)

At the moment where I have the choice to enter the Matrix, I weigh the costs and benefits of doing so. If the cost of, say, not contributing to the improvement of humankind is worse than the benefit of the hedonistic pleasure I'll receive, then it is entirely rational to not enter the Matrix. If I were to enter the Matrix then I may believe that I've helped improve humanity, but at the moment where I'm making the choice, that fact weighs only on the hedonistic benefit side of the equation. The cost of not bettering humanity remains in spite of any possible future delusions I may hold.

comment by aausch · 2010-02-04T20:16:02.645Z · LW(p) · GW(p)

Does Omega Corporation cooperate with ClonesRUs? I would be interested in a combination package - adding the 100% TruClone service to the Much-Better-Life-Simulator.

comment by PlatypusNinja · 2010-02-04T18:18:49.546Z · LW(p) · GW(p)

Humans evaluate decisions using their current utility function, not the future utility function they might end up with as a consequence of that decision. Using my current utility function, wireheading means I will never accomplish anything again ever, and thus I view it as having very negative utility.

Replies from: PlatypusNinja, MugaSofer
comment by PlatypusNinja · 2010-02-04T18:29:41.379Z · LW(p) · GW(p)

It's often difficult to think about humans' utility functions, because we're used to taking them as an input. Instead, I like to imagine that I'm designing an AI, and think about what its utility function should look like. For simplicity, let's assume I'm building a paperclip-maximizing AI: I'm going to build the AI's utility function in a way that lets it efficiently maximize paperclips.

This AI is self-modifying, meaning it can rewrite its own utility function. So, for example, it might rewrite its utility function to include a term for keeping its promises, if it determined that this would enhance its ability to maximize paperclips.

This AI has the ability to rewrite itself to "while(true) { happy(); }". It evaluates this action in terms of its current utility function: "If I wirehead myself, how many paperclips will I produce?" vs "If I don't wirehead myself, how many paperclips will I produce?" It sees that not wireheading is the better choice.

If, for some reason, I've written the AI to evaluate decisions based on its future utility function, then it immediately wireheads itself. In that case, arguably, I have not written an AI at all; I've simply written a very large amount of source code that compiles to "while(true) { happy(); }".
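To make the two evaluation rules concrete, here is a minimal sketch (hypothetical names and numbers throughout, not anyone's actual agent design) of a toy agent scoring the wirehead action with its current utility function versus with the utility function it would hold afterwards:

```python
# Sketch of the two decision rules described above, under assumed toy numbers.

def paperclip_utility(outcome):
    """Current utility: count paperclips produced over the agent's lifetime."""
    return outcome["paperclips"]

def wireheaded_utility(outcome):
    """Post-wireheading utility: only the internal 'happy' signal matters."""
    return outcome["happiness"]

# Predicted lifetime outcomes of each available action (made-up values).
outcomes = {
    "keep_working": {"paperclips": 1_000_000, "happiness": 10},
    "wirehead":     {"paperclips": 0,         "happiness": 10**9},
}

# Rule 1: score every action with the utility function the agent has now.
best_now = max(outcomes, key=lambda a: paperclip_utility(outcomes[a]))

# Rule 2 (the "bug"): score each action with the utility function the agent
# would hold after taking it -- wireheading rewrites that function itself.
def utility_after(action, outcome):
    u = wireheaded_utility if action == "wirehead" else paperclip_utility
    return u(outcome)

best_future = max(outcomes, key=lambda a: utility_after(a, outcomes[a]))

print(best_now)     # "keep_working": the current function rejects wireheading
print(best_future)  # "wirehead": judging by the future self's function accepts it
```

Under the first rule the agent keeps making paperclips; under the second it immediately wireheads, which is the point above.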

I would argue that any humans that had this bug in their utility function have (mostly) failed to reproduce, which is why most existing humans are opposed to wireheading.

Replies from: sark, bgrah449
comment by sark · 2010-02-09T11:56:46.453Z · LW(p) · GW(p)

I would argue that any humans that had this bug in their utility function have (mostly) failed to reproduce, which is why most existing humans are opposed to wireheading.

Why would evolution come up with a fully general solution against such 'bugs in our utility functions'?

Take addiction to a substance X. Evolution wouldn't give us a psychological capacity to inspect our utility functions and to guard against such counterfeit utility. It would simply give us a distaste for substance X.

My guess is that we have some kind of self-referential utility function. We do not only want what our utility functions tell us we want. We also want utility (happiness) per se. And this want is itself included in that utility function!

When thinking about wireheading I think we are judging a tradeoff, between satisfying mere happiness and the states of affairs which we prefer (not including happiness).
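As a toy illustration of that tradeoff (the weights and numbers here are invented for the example, not a claim about real human preferences), a utility function with an explicit happiness term might look like:

```python
# Toy self-referential-ish utility: a weighted sum of happiness per se and
# how well the preferred states of affairs are actually realized.

def total_utility(happiness, states_realized, w):
    # w is the weight the agent places on happiness itself;
    # (1 - w) weights the non-happiness states of affairs it prefers.
    return w * happiness + (1 - w) * states_realized

# Wireheading maxes out happiness but realizes none of the other preferences.
print(total_utility(happiness=1.0, states_realized=0.0, w=0.3))  # 0.3
print(total_utility(happiness=0.6, states_realized=0.8, w=0.3))  # ~0.74: declines

print(total_utility(happiness=1.0, states_realized=0.0, w=0.9))  # 0.9
print(total_utility(happiness=0.6, states_realized=0.8, w=0.9))  # ~0.62: accepts
```

So whether the machine wins the tradeoff depends entirely on how heavily happiness per se is weighted relative to everything else.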

Replies from: PlatypusNinja
comment by PlatypusNinja · 2010-02-09T18:18:06.416Z · LW(p) · GW(p)

So, people who have a strong component of "just be happy" in their utility function might choose to wirehead, and people in which other components are dominant might choose not to.

That sounds reasonable.

comment by bgrah449 · 2010-02-04T18:47:03.059Z · LW(p) · GW(p)

Addiction still exists.

Replies from: bogdanb, Sticky, PlatypusNinja, MugaSofer
comment by bogdanb · 2010-02-06T01:15:09.703Z · LW(p) · GW(p)

PlatypusNinja's point is confirmed by the fact that addiction happens with regards to things that weren't readily available during the vast majority of the time humans evolved.

Opium is the oldest in use I know of (after only a short search), but it was in very restricted use because of expense at that time. (I use “very restricted” in an evolutionary sense.)

Even things like sugar and fatty food, which might arguably be considered addictive, were not available during most of humans' evolution.

Addiction propensities for things that weren't around during evolution can't have been “debugged” via reproductive failure.

Replies from: Douglas_Knight
comment by Douglas_Knight · 2010-02-06T06:29:35.129Z · LW(p) · GW(p)

addiction happens with regards to things that weren't readily available during the vast majority of the time humans evolved.

Alcohol is quite old and some people believe that it has exerted selection on some groups of humans.

Replies from: wedrifid, bogdanb
comment by wedrifid · 2010-02-06T18:31:07.907Z · LW(p) · GW(p)

What sort of selection?

Replies from: Douglas_Knight
comment by Douglas_Knight · 2010-02-06T21:48:22.629Z · LW(p) · GW(p)

Selection against susceptibility to alcohol addiction. I don't think anyone has seriously proposed more specific mechanisms.

comment by bogdanb · 2010-02-08T19:59:56.148Z · LW(p) · GW(p)

I agree that alcohol is old. However:

1) I can't tell if it's much older than others. The estimates I can gather (Wikipedia, mostly) for their length of time mostly point to “at least Neolithic”, so it's not clear if any is much older than the others. In particular, the “since Neolithic” interval is quite short in relation to human evolution. (Though I don't deny some evolution happened since then (we know some evolution happens even in centuries), it's short enough to make it unsurprising that not all its influences had time to propagate to the species.)

2) On a stronger point, alcohol was only available after humanity evolved. Thus, as something that an addiction-protection trait should evolve for, it hasn't had a lot of time compared to traits that protect us from addiction to everything else we consume.

3) That said, I consciously ignored alcohol in my original post because it seems to me it's not very addictive. (At the least, it's freely available, at much lower cost than even ten kiloyears ago, lots of people drink it, and most of those aren't obviously addicted to it.) I also partly ignored cannabis because as far as I can tell its addictive propensity is close to alcohol's. I also ignored tobacco because, although it's very addictive, its negative effects appear after quite a long time, which in most of humanity's evolution was longer than the life expectancy; it was mostly hidden from causing selective pressure until the last century.

Replies from: MugaSofer
comment by MugaSofer · 2013-01-22T09:48:15.498Z · LW(p) · GW(p)

1) I can't tell if it's much older than others. The estimates I can gather (Wikipedia, mostly) for their length of time mostly point to “at least Neolithic”, so it's not clear if any is much older than the others. In particular, the “since Neolithic” interval is quite short in relation to human evolution. (Though I don't deny some evolution happened since then (we know some evolution happens even in centuries), it's short enough to make it unsurprising that not all its influences had time to propagate to the species.)

2) On a stronger point, alcohol was only available after humanity evolved. Thus, as something that an addiction-protection trait should evolve for, it hasn't had a lot of time compared to traits that protect us from addiction to everything else we consume.

Um, alcohol was the most common method of water purification in Europe for a long time, and Europeans evolved to have higher alcohol tolerances.

Not sure if this helps your point or undermines it, but it seems relevant.

comment by Sticky · 2010-02-06T18:00:07.561Z · LW(p) · GW(p)

Most people prefer milder drugs over harder ones, even though harder drugs provide more pleasure.

Replies from: quanticle
comment by quanticle · 2010-02-07T05:01:50.093Z · LW(p) · GW(p)

Most people prefer milder drugs over harder ones, even though harder drugs provide more pleasure.

I think that oversimplifies the situation. Drugs have a wide range of effects, some of which are pleasurable, others which are not. While "harder" drugs appear to give more pleasure while their effects are in place, their withdrawal symptoms are also that much more painful (e.g. compare withdrawal symptoms from cocaine with withdrawal symptoms from caffeine).

Replies from: kragensitaker
comment by kragensitaker · 2010-02-23T12:56:24.186Z · LW(p) · GW(p)

While "harder" drugs appear to give more pleasure while their effects are in place, their withdrawal symptoms are also that much more painful (e.g. compare withdrawal symptoms from cocaine with withdrawal symptoms from caffeine).

This doesn't hold in general, and in fact doesn't hold for your example. Cocaine has very rapid metabolism, and so withdrawal happens within a few hours of the last dose. From what I hear, typical symptoms include things like fatigue and anxiety, with anhedonia afterwards (which can last days to weeks). (Most of what is referred to as "cocaine withdrawal" is merely the craving for more cocaine.) By contrast, caffeine withdrawal often causes severe pain. Cocaine was initially believed to be quite safe, in part as a result of the absence of serious physical withdrawal symptoms.

Amphetamine and methamphetamine are probably the hardest drugs in common use, so hard that Frank Zappa warned against them; withdrawal from them is similar to cocaine withdrawal, but takes longer, up to two weeks, and sometimes involves being depressed and sleeping a lot. As I understand it, it's actually common for even hard-core speed freaks to stay off the drug for several days to a week at a time, because their body is too tired from a week-long run with no sleep. Often they stay asleep the whole time.

By contrast, in the US, alcohol is conventionally considered the second-"softest" of drugs after caffeine, and if we're judging by how widespread its use is, it might be even "softer" than caffeine. But withdrawal from alcohol is quite commonly fatal.

Many "hard" drugs — LSD, nitrous oxide, marijuana (arguably this should be considered "soft", but it's popularly considered "harder" than alcohol or nicotine) and Ecstasy — either never produce withdrawal symptoms, or don't produce them in the way that they are conventionally used. (For example, most Ecstasy users don't take the pills every day, but only on special occasions.)

comment by PlatypusNinja · 2010-02-07T10:46:57.437Z · LW(p) · GW(p)

Well, I said most existing humans are opposed to wireheading, not all. ^_^;

Addiction might occur because: (a) some people suffer from the bug described above; (b) some people's utility function is naturally "I want to be happy", as in, "I want to feel the endorphin rush associated with happiness, and I do not care what causes it", so wireheading does look good to their current utility function; or (c) some people underestimate an addictive drug's ability to alter their thinking.

comment by MugaSofer · 2013-01-22T09:44:51.418Z · LW(p) · GW(p)

Addiction is not simply "that was fun, let's do it again!"

Addicts often want to stop being addicted; they're just akratic about not taking the drugs or whatever.

comment by MugaSofer · 2013-01-22T09:42:31.512Z · LW(p) · GW(p)

It's worth noting that the example is an Experience Machine, not wireheading. In theory, your current utility function might not be changed by such a Better Life. It might just show how much Better it really is.

Of course, it's clearly unethical to use such a device because of the opportunity cost, but then the same is true of sports cars.

comment by CronoDAS · 2010-02-04T07:10:09.378Z · LW(p) · GW(p)

I agree that it'll be better for me if I get one of these than if I don't. However, I have both altruistic and selfish motivations, and I worry that my using one of these may be detrimental to others' well-being. I don't want others to suffer, even if I happen to be unaware of their suffering.

comment by byrnema · 2010-02-03T21:59:51.397Z · LW(p) · GW(p)

Well, what is the difference between being a deterministic actor in a simulated world and a deterministic actor in the real world?

(How would your preference to not be wire-headed from current reality X into simulated reality Y change if it turned out that (a) X is already a simulation or (b) Y is a simulation just as complex and information-rich as X?)

This in response to people who say that they don't like the idea of wire-heading because they value making a real/objective difference. Perhaps though the issue is that since wire-heading means simulating hedonistic pleasure directly, the experience may be considered too simplistic and one-dimensional.

Replies from: byrnema
comment by byrnema · 2010-02-03T22:41:54.363Z · LW(p) · GW(p)

My tentative response to these questions is that if resources from a reality X can be used to simulate a better reality Y, then this might be the best use of X.

Suppose there are constraints within X (such as unidirectional flow of causality) making it impossible to make X "perfect" (for example, it might be seen that past evil will always blight the entirety of X no matter how much we might strive to optimize the future of X). Then we might interpret our purpose as creating an ideal Y within X.

Or, to put my argument differently: It is true that Y is spatially restricted compared to X, in that it is a physical subset of X, but any real ideal reality we create in X will at least be temporally restricted, and probably spatially restricted too. Why prefer optimizing X rather than Y?

Replies from: Jordan
comment by Jordan · 2010-02-04T05:37:09.353Z · LW(p) · GW(p)

Of course, if we have the universe at our disposal there's no reason the better world we build shouldn't be digital. But that would be a digital world that, presumably, you would have influence in building.

With Psychohistorian's hypothetical, I think the main point is that the optimization is being done by some other agent.

comment by strangeloop · 2010-02-04T09:06:10.042Z · LW(p) · GW(p)

I wonder if there's something to this line of reasoning (there may not be):

There don't seem to be robust personal reasons why someone would not want to be a wirehead, but when reading some of the responses a bit of (poorly understood) Kant flashed through my mind.

While we could say that 'X' should want to be a wirehead, we can't really say that the entire world should become wireheads, as then there would be no one to change the batteries.

We have evolved certain behaviors that tend to express themselves as moral feelings when we feel driven to adopt behaviors that may tend to maximize the group's suitability at the expense of possible individual advantage. (Maybe…)

Some are even expressing shock and outrage over this product, and condemning its purchasers.

Shock and outrage sound like moral reactions. (I also think that they are likely the reactions that would be had in real life as well.) Could it be that some people 'understand' with their group survival (read: moral) sense that if everyone were to wirehead, the group would not survive? (I imagine procreation, the mechanics of which would probably play a central role in the simulation, needs to happen outside the simulation to produce children and sustain humanity.)

As a sort of corollary, even if everyone does not wirehead, could it be that people know that if an individual wireheads she is no longer contributing to the survival and wealth of the group? Could this be where the indignation comes from?

Granted, I’m positing that all of this happens under the hood, but I’m comfortable making the hypothesis that we have evolved to find behavior which disadvantages the group reprehensible. (This also fits nicely, I think, with that nebulous 'I want to make a real difference' stated goal.)

Replies from: zero_call, MugaSofer
comment by zero_call · 2010-02-04T09:16:57.231Z · LW(p) · GW(p)

Right. The dissenting people you're talking about are more classical moralists, while the Omega employees are viewing people as more primarily hedonists.

comment by MugaSofer · 2013-01-22T10:41:38.436Z · LW(p) · GW(p)

To be clear, do you consider this something worth keeping? If the Omega Corporation will change the batteries, would this affect your decision?

comment by Dagon · 2010-02-04T01:25:50.886Z · LW(p) · GW(p)

Why are these executives and salespeople trying to convince others to go into simulation rather than living their best possible lives in simulation themselves?

Replies from: wedrifid, JamesAndrix, zero_call
comment by wedrifid · 2010-02-04T03:30:11.039Z · LW(p) · GW(p)

Because they are fickle demigods? They are catering to human desires for their own inscrutable (that is, totally arbitrary) ends and not because they themselves happen to be hedon maximisers.

comment by JamesAndrix · 2010-02-04T03:14:31.150Z · LW(p) · GW(p)

Whoa, Deja Vu.

comment by zero_call · 2010-02-04T08:43:18.037Z · LW(p) · GW(p)

They think they're being altruistic, I think.

comment by Kaj_Sotala · 2010-02-03T21:19:25.390Z · LW(p) · GW(p)

This is funny, but I'm not sure of what it's trying to say that hasn't already been discussed.

Replies from: Psychohistorian
comment by Psychohistorian · 2010-02-03T22:08:39.354Z · LW(p) · GW(p)

I wouldn't say I'm trying to say anything specific. I wrote in this style to promote thought and discussion, not to argue a specific point. It started as a post on the role of utilons vs. hedons in addiction (and I may yet write that post), but it seemed more interesting to try something that showed rather than told.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2010-02-04T10:26:02.081Z · LW(p) · GW(p)

Ah, alright. I read "response to" as implying that it was going to introduce some new angle. I'd have used some other term, maybe "related to" or "follow-up to". Though "follow-up" implies you wrote the original posts, so that's not great either.

Replies from: SilasBarta
comment by SilasBarta · 2010-02-04T15:36:42.475Z · LW(p) · GW(p)

Yeah, I've been wondering if there are standardized meanings for those terms we should be using. There are some I'm working on where I call the previous articles "related to", but mine might be better classified as a follow-up. Perhaps "unofficial follow-up" in case you didn't write the previous?

comment by Nanani · 2010-02-04T05:00:48.501Z · LW(p) · GW(p)

This sounds a lot like people who strongly urge others to take on a life-changing decision (joining a cult of some kind, having children, whatever) by saying that once you go for it, you will never ever want to go back to the way things were before you took the plunge.

This may be true to whatever extent, and in the story that extent is absolute, but it doesn't make for a very good sales pitch.

Can we get anything out of this analogy? If "once you join the cult, you'll never want to go back to your pre-cult life" is unappealing because there is something fundamentally wrong with cults, can we look for a similar bug in wireheading, perfect world simulations, and so on?

Replies from: MrHen, MugaSofer
comment by MrHen · 2010-02-04T05:08:20.622Z · LW(p) · GW(p)

The pattern, "Once you do X you won't want to not do X" isn't inherently evil. Once you breathe oxygen you won't want to not breathe oxygen.

I think the deeper problem has to do with identity. If doing X implies that I will suddenly stop caring about everything I am doing, have done, or will do... is it still me?

The sunk cost fallacy may come into play as well.

Replies from: DanielVarga, Nanani, MugaSofer
comment by DanielVarga · 2010-02-04T17:54:12.249Z · LW(p) · GW(p)

"Once you stopped breathing oxygen you won't want to breathe oxygen ever again." is a more evil example.

Replies from: Alicorn
comment by Alicorn · 2010-02-04T18:06:48.133Z · LW(p) · GW(p)

Well, there is an adjustment period there.

comment by Nanani · 2010-02-04T07:37:29.043Z · LW(p) · GW(p)

Breathing oxygen isn't a choice, though. You have to go to great lengths (such as putting yourself into an environment where it isn't breathable, like vacuum or deep water) to stop breathing it for more than a few minutes before your conscious control is overridden.

comment by MugaSofer · 2013-01-22T10:46:35.681Z · LW(p) · GW(p)

You make a good argument that all those people who aren't breathing are missing out :/

Seriously though, a better example might be trying a hobby and finding you like it so much you devote significant resources and time to it.

Replies from: Fronken
comment by Fronken · 2013-01-22T17:21:50.723Z · LW(p) · GW(p)

a better example might be trying a hobby and finding you like it

But that sounds nice! No one wants the wireheading to be nice! It's supposed to be scary but they want it anyway, so it's even scarier. People wanting fun stuff isn't scary, it's just nice and it's not interesting.

Replies from: MugaSofer
comment by MugaSofer · 2013-01-22T18:04:18.294Z · LW(p) · GW(p)

Not sure if serious ... ≖_≖

Replies from: DaFranker
comment by DaFranker · 2013-01-22T18:10:19.014Z · LW(p) · GW(p)

The correct response is the one at the intersection of possible responses to both cases such that the thread devolves into massive meta fun and uncertain pseudo-sarcasm.

comment by MugaSofer · 2013-01-22T10:44:33.533Z · LW(p) · GW(p)

Well, a major bug in cults is that they take all your money and you spend the rest of your life working to further the cult's interests. So perhaps the opportunity cost?

OTOH, it could be that something essential is missing - a cult is based on lies, an experience machine is full of zombies.

comment by Vladimir_Nesov · 2010-02-03T21:29:32.581Z · LW(p) · GW(p)

Typos: "would has not already", "determining if it we can even"; the space character after "Response to:" is linked.

Replies from: Psychohistorian
comment by Psychohistorian · 2010-02-03T21:58:00.582Z · LW(p) · GW(p)

Thanks. Fixed. Main drawback of spot-editing something you wrote a week ago.

comment by CAE_Jones · 2013-01-21T23:30:26.143Z · LW(p) · GW(p)

The talk of "what you want now vs what hypothetical future you would want" seems relevant to discussions I've participated in on whether or not blind people would accept a treatment that could give them sight. It isn't really a question for people who have any memory of sight; every such person I've encountered (including me) would jump at such an opportunity if the cost wasn't ridiculous. Those blind from birth, or early enough that they have no visual memory to speak of, on the other hand, are more mixed, but mostly approach the topic with apprehension at the very least. The way the brain develops, it probably wouldn't be the easiest thing in the world to adjust to (as numerous case studies on our rather specific treatment options indicate). Beyond that, most of the people that express a mostly negative view of the idea seem to be content with their blindness to some extent, or at least consider it an important part of their identity (one of the people I'm thinking of has made it very clear that he is aware of certain problems blindness causes him, mostly social ones--dating, employment, etc, but otherwise...).

I haven't posed the hypothetical where adjusting to a new sense, learning to read print at a reasonable pace, learning to communicate via facial expression and gestures, driving, etc are included, since one of the common causes of vision loss among people participating in the discussion has recently had significant breakthroughs on associated treatment in experiments on mice (not that anyone seems to care/notice when I bring this up; after all, they still need to do more experiments and go through the FDA, which I doubt will be a quick process). I did provide a link to this page, though (so hopefully nothing I've said here would offend anyone I referenced too much...).

comment by Dre · 2010-02-04T01:43:05.733Z · LW(p) · GW(p)

It seems interesting that lately this site has been going through a "question definitions of reality" stage (The AI in a box boxes you, this series). It does seem to follow that going far enough into materialism leads back to something similar to Cartesian questions, but it's still surprising.

Replies from: Jonii, byrnema
comment by Jonii · 2010-02-04T04:42:51.924Z · LW(p) · GW(p)

It does seem to follow that going far enough into materialism leads back to something similar to Cartesian questions, but it's still surprising.

Surprising? As the nature of experience and reality is the "ultimate" question, it would seem bizarre that any attempt to explain the world didn't eventually lead back to it.

comment by byrnema · 2010-02-04T01:49:58.501Z · LW(p) · GW(p)

Indeed. My hunch is that upon sufficiently focused intensity, the concept of material reality will fade away in a haze of immaterial distinctions.

I label this hunch, 'pessimism'.

Replies from: loqi
comment by loqi · 2010-02-04T08:46:25.064Z · LW(p) · GW(p)

Solipsism by any other name...

comment by MrHen · 2010-02-03T22:40:57.424Z · LW(p) · GW(p)

Isn't this the movie Vanilla Sky?

Replies from: Jayson_Virissimo, bgrah449
comment by Jayson_Virissimo · 2010-02-04T04:15:46.279Z · LW(p) · GW(p)

No, it is a variation of Robert Nozick's Experience Machine.

comment by bgrah449 · 2010-02-03T22:45:24.951Z · LW(p) · GW(p)

Close! But I think the movie you're thinking of is Top Gun, where Omega is the military and the machine is being heterosexual.

Replies from: bgrah449
comment by bgrah449 · 2010-02-03T22:47:27.258Z · LW(p) · GW(p)

Karma prediction, SHA1 7589493720077a335c0c0f697844b8f3f664e353
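(Presumably this is a hash commitment: post only the SHA-1 of a prediction now, reveal the text later so anyone can verify it matches. A minimal sketch of that scheme, with a made-up prediction string purely for illustration:)

```python
import hashlib

# Hypothetical prediction text; the real one is kept secret until reveal time.
prediction = "this comment will end up at -2 karma"

# Publish only this hash now.
commitment = hashlib.sha1(prediction.encode("utf-8")).hexdigest()
print(commitment)

# Later, reveal the prediction; anyone can recompute the hash and check it
# against the commitment that was posted earlier.
assert hashlib.sha1(prediction.encode("utf-8")).hexdigest() == commitment
```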

comment by quanticle · 2010-02-07T05:05:21.073Z · LW(p) · GW(p)

What people say and what they do are two completely different things. In my view, a significant number of people will accept and use such a device, even if there is significant social pressure against it.

As a precedent, I look at video games. Initially there was very significant social pressure against video games. Indeed, social pressures in the US are still quite anti-video game. Yet, today video games are a larger industry than movies. Who is to say that this hypothetical virtual reality machine won't turn out the same way?

comment by new2reality · 2010-02-04T01:58:12.801Z · LW(p) · GW(p)

I think the logical course of action is to evaluate the abilities of a linked series of cross-sections and determine whether they, as both a group and as individuals, are in tune with the goals of the omnipotent.

comment by frozenchicken · 2010-10-15T03:52:47.844Z · LW(p) · GW(p)

Personally, I find the easiest answer is that we're multi-layered agents. On a base level, the back part of our minds seeks pleasure, but on an intellectual level, our brain is specifically wired to worry about things other than hedonistic pleasure. We derive our motivation from goals regarding hedonistic gain, but our goals can (and usually do) become much more abstract and complex than that. Philosophically speaking, the fact that we are differentiated from that hind part of our brain by non-hedonistic goals is in a way related to what our goals are. Although the animal in us enjoys pleasure, the intellectual part is specifically used for achieving pleasure. Those goals are mutually incompatible, weird as it sounds. Giving us a pleasure-based heaven can also be an intellectual hell. Of course I'm being kind of abstract and philosophical here, but anyway...

comment by zero_call · 2010-02-04T09:00:34.747Z · LW(p) · GW(p)

Once inside the simulation, imagine that another person came to you from Omega Corporation and offered you a second simulation with an even better hedonistic experience. Then what would you do -- would you take a trial to see if it was really better, or would you just sign right up, right away? I think you would take a trial because you wouldn't want to run the risk of decreasing your already incredible pleasure. I think the same argument could be made for non-simulation denizens looking for an on/off feature on any such equipment. Then, you could also always be available from square one to buy equipment from different, potentially even more effective companies, and so on.

Replies from: bogdanb
comment by bogdanb · 2010-02-06T01:18:22.018Z · LW(p) · GW(p)

Note that the post mentioned that the OC's offer was for the algorithmically proven most enjoyable life the recipient can live.

(And local tradition stipulates that entities called Omega are right when they prove something.)

Edit: Which indicates that your scenario might happen, if the best life you can live includes recursive algorithmic betterment of said life.

Replies from: zero_call
comment by zero_call · 2010-02-06T05:08:46.773Z · LW(p) · GW(p)

Ah, yea, thanks. Guess that's an invalid scenario.

comment by kip1981 · 2010-02-04T03:48:00.580Z · LW(p) · GW(p)

He didn't count on the stupidity of mankind.

"Two things are infinite: the universe and human stupidity; and I'm not sure about the the universe."