Much-Better-Life Simulator™ - Sales Conversation
post by XiXiDu · 2011-06-19T12:44:16.628Z · LW · GW · Legacy · 49 comments
Related to: A Much Better Life?
Reply to: Why No Wireheading?
The Sales Conversation
Sales girl: Our Much-Better-Life Simulator™ is going to provide the most enjoyable life you could ever experience.
Customer: But it is a simulation, it is fake. I want the real thing, I want to live my real life.
Sales girl: We accounted for all possibilities and determined that the expected utility of your life outside of our Much-Better-Life Simulator™ is dramatically lower.
Customer: You don't know what I value and you can't make me value what I don't want. I told you that I value reality over fiction.
Sales girl: We accounted for that as well! Let me ask you: how much utility do you assign to one hour of ultimate well-being™, where 'ultimate' means the best possible satisfaction of all desirable bodily sensations a human body and brain are capable of experiencing?
Customer: Hmm, that's a tough question. I am not sure how to assign a certain amount of utility to it.
Sales girl: You say that you value reality more than what you call 'fiction'. But you nonetheless value fiction, right?
Customer: Yes of course, I love fiction. I read science fiction books and watch movies like most humans do.
Sales girl: Then how much more would you value one hour of ultimate well-being™ achieved by other means, compared to one hour of ultimate well-being™ produced by our Much-Better-Life Simulator™?
Customer: If you put it like that, I would exchange ten hours in your simulator for one hour of real satisfaction, something that results from an actual achievement rather than from your fakery.
Sales girl: Thank you. Would you agree if I said that, for you, one hour outside that is 10 times less satisfying roughly equals one hour in our simulator?
Customer: Yes, for sure.
Sales girl: Then you should buy our product. Not only is it very unlikely that you will experience even a tenth of the ultimate well-being™ we offer more than a few times per year, but our simulator also allows your brain to experience 20 times more perceptual data than you could experience outside of it, all at a constant rate and all while experiencing ultimate well-being™. And we offer free upgrades that are expected to deliver exponential speed-ups and qualitative improvements for the next few decades.
Customer: Thanks, but no thanks. I'd rather enjoy the real thing.
Sales girl: But I have shown you that the utility of our product easily outweighs the additional utility you expect to experience outside of our simulator.
Customer: You just tricked me into this utility thing. I don't want to buy your product. Please leave me alone now.
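A minimal sketch of the arithmetic the sales girl is implicitly running, using only the 10-to-1 exchange rate and the 20x speed-up stated in the dialogue; the value assigned to an ordinary hour outside is an illustrative assumption, not something the customer states.

```python
# Rough sketch of the sales girl's expected-utility argument, using the
# customer's own stated exchange rate. All numbers except the dialogue's
# 10:1 ratio and 20x speed-up are illustrative assumptions.

REAL_ULTIMATE_HOUR = 1.0                      # utility of one hour of "real" ultimate well-being
SIM_ULTIMATE_HOUR = REAL_ULTIMATE_HOUR / 10   # customer: 10 sim hours ~ 1 real hour
SIM_SPEEDUP = 20                              # simulator: 20x more perceptual data per clock hour

# Value delivered per clock hour spent in the simulator:
sim_value_per_hour = SIM_SPEEDUP * SIM_ULTIMATE_HOUR          # = 2.0

# Value of a typical hour outside (assumption: ordinary life rarely reaches
# even a tenth of ultimate well-being, per the sales girl's claim):
outside_value_per_hour = 0.1 * REAL_ULTIMATE_HOUR             # <= 0.1

print(sim_value_per_hour, outside_value_per_hour)  # 2.0 0.1
```

By the customer's own stated exchange rate the simulator comes out roughly twenty times ahead, which is exactly the "utility thing" the customer feels tricked by.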
49 comments
Comments sorted by top scores.
comment by MixedNuts · 2011-06-19T14:59:21.367Z · LW(p) · GW(p)
Taboo "simulation". Whatever is, is real. Also we're probably already in a simu... I mean, in a subset of the world systematically implementing surface laws different from deeper laws.
Is the problem that the Much-Better Life Simulator only simulates feelings and not their referents? Then Customer should say so. Is it that it involves chatbots rather than other complete people inside? Then Customer should say so. Is it that it loses complexity to cut simulating costs? Then Customer should say so. Is it that trees are allowed to be made of wood, which is allowed to be made of organic molecules, which are allowed to be made out of carbon, which is allowed to be made out of nuclei, which are allowed to be made out of quarks, which are not allowed to be made out of transistors? Then Customer hasn't thought it through.
Replies from: Giles, Richard_Kennaway, XiXiDu, XiXiDu
↑ comment by Giles · 2011-06-19T17:34:58.234Z · LW(p) · GW(p)
If you're allowed to assign utility to events which you cannot perceive but can understand and anticipate, then you can assign a big negative utility to the "going up a simulation level" event.
EDIT: What Pavitra said. I guess I was thinking of turtles all the way down
Replies from: Pavitra, MixedNuts
↑ comment by MixedNuts · 2011-06-19T17:44:05.420Z · LW(p) · GW(p)
Sure you can, but I can't see why you would. Reality is allowed to be made out of atoms, but not out of transistors? Why? (Well, control over the world outside the simulation matters too, but that's mostly solved. Though we do get Lamed Vav deciding to stay on Earth to help people afford the Simulator instead of ascending to Buddhahood themselves! ...hang on, I think I got my religions mixed up.)
Replies from: Giles
↑ comment by Giles · 2011-06-19T20:36:21.232Z · LW(p) · GW(p)
Well, control over the world outside the simulation matters too, but that's mostly solved
Really? If everyone hooked up to the sim at the same time then you might be right, but our current world seems pretty chaotic.
As to why I'd care about increasing my simulation-depth - essentially my mind runs on hardware which was designed to care about things in the outside world (specifically my reproductive fitness) more than it cares about its own state. I'm free to re-purpose that hardware how I like, but this kind of preference (for me at least) seems to be a kind of hangover from that.
↑ comment by Richard_Kennaway · 2011-06-19T16:09:22.759Z · LW(p) · GW(p)
Taboo "simulation". Whatever is, is real. Also we're probably already in a simu... I mean, in a subset of the world systematically implementing surface laws different from deeper laws.
We're already in a subset of the world systematically implementing surface laws different from deeper laws. All the surface laws we see around us are implemented by atoms. (And subatomic particles, and fields, but "it's all made of atoms" is Feynman's way of summing up the idea.)
Where this differs from "simulations" is that there are no sentient beings at the atomic level, telling the atoms how to move to simulate us. This, I think, is the issue with "simulations" -- at least, it's my issue. There is another world outside. If there is another world, I'd rather be out (however many levels of "out" it takes) than in. "In" is an ersatz, a fake experience under the absolute control of other people.
Replies from: MixedNuts, DanArmak
↑ comment by MixedNuts · 2011-06-19T16:22:12.451Z · LW(p) · GW(p)
Nah, our surface rules ain't systematic. I made a laser.
Agree direct puppet-style control is icky. Disagree that is what makes simulations simulatey, or that our own universe is a puppet-theatre-style simulation. If the Matrix masters were constantly deciding "let's add obsidian to this character's inventory", we would be "under the absolute control of other people", but instead they described physical laws and initial conditions and let the simulation unfold without intervention. I'm not particularly icked by that - the universe has to come from somewhere.
Replies from: jhuffman
↑ comment by DanArmak · 2011-07-17T17:17:34.537Z · LW(p) · GW(p)
Would you still prefer to be "out" if you expected your life outside to be much, much worse than your life inside? Would you accept a lifetime of suffering to gain an iota of control over the "real" outside world?
(This is the reversal of the MBLS. Also, apologies for coming late to the discussion.)
Replies from: Richard_Kennaway
↑ comment by Richard_Kennaway · 2011-07-17T19:52:39.585Z · LW(p) · GW(p)
Would you still prefer to be "out" if you expected your life outside to be much, much worse than your life inside?
I would prefer to act to make my life outside better.
Scaling the imaginary situation back to everyday matters, your question is like responding to the statement "I'm taking a holiday in Venice" with "but suppose you hate it when you get there?" Or responding to "I'm starting a new business" with "but suppose it fails?" Or responding to "I'm going out with someone new" with "but suppose she's a serial killer?"
All you have done is imagine the scenario ending in failure. Why?
Replies from: DanArmak
↑ comment by DanArmak · 2011-07-17T20:28:34.486Z · LW(p) · GW(p)
All you have done is imagine the scenario ending in failure. Why?
Because I'm building it to parallel the original question of whether you'd want to go into an MBLS. In both cases, your potential future life in the simulated or "inside" world is assumed to be much better than the one you might have in the simulating "outside" world. If you give different answers (inside vs. outside) in the two cases, why?
You said:
There is another world outside. If there is another world, I'd rather be out
As a reason for not entering the MBLS. Would that reason also make you want to escape from our current world to a much more dismal life in the simulating one? To me that would be a repugnant conclusion and is why I'd prefer a much better life in a simulated world, in both cases.
I would prefer to act to make my life outside better.
An individual's control over their life, in our current world, is far below what I consider acceptable. People are stuck with sick bodies and suffering minds and bad relationships and die in unexpected or painful ways or, ultimately, of unavoidable old age. I would happily trade this for the MBLS experience which would surely offer much greater control.
Do you attach intrinsic value to affecting (even if weakly) the true ultimate level of reality, or do you disagree with my preference for a different reason? If the former, how would you deal with not knowing if we're simulated, or infinite recursions of simulation, or scenarios where infinite numbers of worlds are simulated and simulate others? Would it mean you give high priority to discovering if we're in a simulation and, if so, breaking out - at the expense of efforts to optimize our life in this world?
Replies from: Richard_Kennaway
↑ comment by Richard_Kennaway · 2011-07-17T20:52:13.826Z · LW(p) · GW(p)
You said:
There is another world outside. If there is another world, I'd rather be out
As a reason for not entering the MBLS. Would that reason also make you want to escape from our current world to a much more dismal life in the simulating one? To me that would be a repugnant conclusion and is why I'd prefer a much better life in a simulated world, in both cases.
Both scenarios involve the scenario-setter putting their hand on one side of the scales and pushing hard enough to sway my preferences. You might as well ask if I would torture babies for a sufficiently high incentive. These questions are without significance. Ask me again when we actually have uploads and simulations. Meanwhile, strongly rigged scenarios can always beat strong hypothetical preferences, and vice versa. It just becomes a contest over who can name the biggest number.
how would you deal with not knowing if we're simulated, or infinite recursions of simulation, or scenarios where infinite numbers of worlds are simulated and simulate others?
I don't take such speculations seriously. I've read the arguments for why we're probably living in a simulation and am unimpressed; I am certainly not going to be mugged à la Pascal into spending any substantial effort considering the matter.
↑ comment by XiXiDu · 2011-06-19T15:39:30.989Z · LW(p) · GW(p)
I want to rephrase my last comment:
Utility maximization destroys complex values by choosing the value that yields the most utility, i.e. the best cost-value ratio. One unit of utility is not discriminable from another unit of utility. All a utility maximizer can do is maximize expected utility. If it turns out that one of its complex values can be effectively realized and optimized, it might come to outweigh all other values. This can only be countered by changing one's utility function and reassigning utility in such a way as to outweigh that effect, which will lead to inconsistency, or by discounting the value that threatens to outweigh all others, which will again lead to inconsistency.
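A minimal sketch of the failure mode being described here, under the strong simplifying assumptions that the agent aggregates its values linearly and that one value is far cheaper to optimize than the other; the setup and numbers are invented purely for illustration.

```python
# Toy linear utility maximizer with two values and a fixed resource budget.
# If one value is much cheaper to produce per unit, a linear aggregation
# spends the entire budget on it and the other value goes to zero.

def best_allocation(budget, price_a, price_b, weight_a=1.0, weight_b=1.0, steps=100):
    best = None
    for i in range(steps + 1):
        spend_a = budget * i / steps
        spend_b = budget - spend_a
        utility = weight_a * (spend_a / price_a) + weight_b * (spend_b / price_b)
        if best is None or utility > best[0]:
            best = (utility, spend_a, spend_b)
    return best

# Value A ("paperclips") is 100x cheaper per unit than value B ("jokes"):
print(best_allocation(budget=100.0, price_a=0.01, price_b=1.0))
# -> (10000.0, 100.0, 0.0): the whole budget goes to the cheap value,
#    and the other value is ignored entirely.
```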
Replies from: MixedNuts, wedrifid, Thomas
↑ comment by MixedNuts · 2011-06-19T16:28:11.490Z · LW(p) · GW(p)
Can't your utility function look like "number of paperclips times number of funny jokes" rather than a linear combination? Then situations where you accept very little humor in exchange for loads of paperclips are much rarer.
Relevant intuition: this trade-off makes me feel sad, so it can't be what I really want. And I hear it's proven that wanting can only work if it involves maximizing a function over the state of the universe.
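And a minimal sketch of MixedNuts's multiplicative alternative, under the same invented setup as the sketch above: with a product instead of a sum, driving either value to zero makes total utility zero, so the optimum keeps both.

```python
# Same toy setup, but utility is a product rather than a sum.
# Going all-in on the cheap value now yields zero total utility,
# so the maximizer keeps a mix of both values.

def best_allocation_product(budget, price_a, price_b, steps=100):
    best = None
    for i in range(steps + 1):
        spend_a = budget * i / steps
        spend_b = budget - spend_a
        utility = (spend_a / price_a) * (spend_b / price_b)
        if best is None or utility > best[0]:
            best = (utility, spend_a, spend_b)
    return best

print(best_allocation_product(budget=100.0, price_a=0.01, price_b=1.0))
# -> (250000.0, 50.0, 50.0): the budget is split evenly between the two values.
```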
↑ comment by wedrifid · 2011-06-19T18:52:53.744Z · LW(p) · GW(p)
Utility maximization destroys complex values
No, it doesn't. A utility function can be as complex as you want it to be. In fact, it can be more complex than it is possible to represent in the universe.
Replies from: CuSithBell
↑ comment by CuSithBell · 2011-06-20T14:58:52.593Z · LW(p) · GW(p)
For this reason, I almost wish LW would stop talking about utility functions entirely.
Replies from: wedrifid
↑ comment by wedrifid · 2011-06-20T17:59:21.789Z · LW(p) · GW(p)
For this reason, I almost wish LW would stop talking about utility functions entirely.
That it is theoretically possible for functions to be arbitrarily complex does not seem to be a good reason to reject using a specific kind of function. Most information representation formats can be arbitrarily complex. That's what they do.
(This is to say that while I respect your preference for not talking about utility functions your actual reasons are probably better than because utility functions can be arbitrarily complex.)
Replies from: CuSithBell
↑ comment by CuSithBell · 2011-06-20T18:31:51.699Z · LW(p) · GW(p)
Right, sorry. The reason I meant was something like "utility functions can be arbitrarily complex and in practice are extremely complex, but this is frequently ignored", what with talk about "what utility do you assign to a firm handshake" or the like.
Edit: And while they have useful mathematical features in the abstract, they seem to become prohibitively complex when modeling the preferences of things like humans.
Replies from: XiXiDu, wedrifid
↑ comment by XiXiDu · 2011-06-20T19:54:13.212Z · LW(p) · GW(p)
...what with talk about "what utility do you assign to a firm handshake" or the like.
World states are not uniform entities but compounds of different items, different features, each adding a certain amount of utility, a certain weight, to the overall value of the world state. If you only consider utility preferences between world states that do not account for all the items of your utility function, then isn't this a dramatic oversimplification? I don't see what is wrong with asking how you weigh firm handshakes. A world state that features firm handshakes must be different from one that doesn't feature firm handshakes, even if the difference is tiny. So if I ask how much utility you assign to firm handshakes, I am asking how you weigh firm handshakes, how the absence of firm handshakes would affect the value of a world state. I am asking about your utility preferences between possible world states that feature firm handshakes and those that don't.
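As a minimal sketch of the additive picture described here (which the replies below go on to criticize), with features and weights invented purely for illustration, the "utility of a firm handshake" can be read as the difference in value between two otherwise identical world states.

```python
# Illustrative sketch: a world state as a bag of features, and a utility
# function as a weighted sum over those features. "How much utility do you
# assign to firm handshakes?" then means: by how much does that feature's
# presence change the value of an otherwise identical world state?
# The features and weights below are invented for the example.

WEIGHTS = {"friends": 50.0, "good_music": 10.0, "firm_handshakes": 0.3}

def utility(world_state):
    return sum(WEIGHTS.get(feature, 0.0) for feature in world_state)

with_handshakes = {"friends", "good_music", "firm_handshakes"}
without_handshakes = {"friends", "good_music"}

print(round(utility(with_handshakes) - utility(without_handshakes), 3))  # 0.3
```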
Replies from: CuSithBell
↑ comment by CuSithBell · 2011-06-21T03:47:43.665Z · LW(p) · GW(p)
World states are not uniform entities but compounds of different items, different features, each adding a certain amount of utility, a certain weight, to the overall value of the world state. If you only consider utility preferences between world states that do not account for all the items of your utility function, then isn't this a dramatic oversimplification?
So far as I can tell, you have it backwards - those sorts of functions form a subset of the set of utility functions.
The problem is that utility functions that are easy to think about are ridiculously simple, and produce behavior like the above "maximize one value" or "tile the universe with 'like' buttons". They're characterized by "Handshake = (5*firmness_quotient) UTILS" or "Slice of Cheesecake = 32 UTILS" or what have you.
I'm sure it's possible to discuss utility functions without falling into these traps, but I don't think we do that, except in the vaguest cases.
↑ comment by wedrifid · 2011-06-20T18:44:14.500Z · LW(p) · GW(p)
Ick. Yes. That question makes (almost) no sense.
There are very few instances in which I would ask "what utility do you assign?" regarding a concrete, non-contrived good. I tend to consider utility preferences between possible world states that could arise depending on a specific decision or event and then only consider actual numbers if actually necessary for the purpose of multiplying.
I would certainly prefer to limit use of the term to those who actually understand what it means!
Replies from: CuSithBell, XiXiDu
↑ comment by CuSithBell · 2011-06-21T03:50:01.063Z · LW(p) · GW(p)
There are very few instances in which I would ask "what utility do you assign?" regarding a concrete, non-contrived good.
Exactly. Perhaps if we used a different model (or an explicitly spelled-out simplified subset of the utility functions) we could talk about such things.
I would certainly prefer to limit use of the term to those who actually understand what it means!
Inconceivable!
↑ comment by XiXiDu · 2011-06-20T19:39:57.570Z · LW(p) · GW(p)
But if you do not "assign utility" and only consider world states, how do you deal with novel discoveries? How does a hunter-gatherer integrate category theory into their utility function? I mean, you have to somehow weigh new items, right?
Replies from: Rain
↑ comment by Rain · 2011-06-21T14:45:51.780Z · LW(p) · GW(p)
I just go ahead and assign value directly to "novelty" and "variety".
Replies from: XiXiDu
↑ comment by XiXiDu · 2011-06-21T17:26:01.214Z · LW(p) · GW(p)
I just go ahead and assign value directly to "novelty" and "variety".
Isn't that too unspecific? The digit sequences of any number of transcendental numbers can be transcribed into musical scores, or you could use cellular automata to create endless amounts of novel music. But that is not what you mean. If I asked you for a concrete example, you could only tell me something that you already expect but are not sure of, which isn't really novel, or say that you will be able to point out novelty in retrospect. But even the latter answer has a fundamental problem: if you are able to recognize novelty in retrospect, then it is predictable what will excite you and make you label something n-o-v-e-l. In this respect what you call "novelty" is just like the creation of music from the digit sequences of transcendental numbers: uncertain, but ultimately computable. My point is that assigning value to "novelty" and "variety" cannot replace the assignment of utility to the discrete sequences that make interesting music. You have to weigh discrete items, because those that are sufficiently described by "novelty" and "variety" are just random noise.
Replies from: Rain
↑ comment by Rain · 2011-06-21T17:55:41.753Z · LW(p) · GW(p)
You have to weigh discrete items, because those that are sufficiently described by "novelty" and "variety" are just random noise.
Continuous random noise is quite monotonous to experience - the opposite of varied. I didn't say that variety and novelty were my only values, just that I assign value to them. I value good music, too, as well as food and other pleasant stimuli. The theory of diminishing returns comes into play, often caused by the facility of the human mind to attain boredom. I view this as a value continuum rather than a set value.
In my mind, I'm picturing one of those bar graphs that show up when music is playing, except instead of music, it's my mind and body moving throughout the day, and each bar represents my value of particular things in the world, with new bars added and old ones dying off, and... well, it's way more complex than, "assign value K to music notes XYZ and call it done." And several times I've been rebuked for using the phrase "assign value to something", as opposed to "discover value as already-implemented by my brain".
↑ comment by Thomas · 2011-06-19T16:01:11.253Z · LW(p) · GW(p)
Utility maximization destroys complex values by choosing the value that yields the most utility, i.e. the best cost-value ratio.
That does not necessarily follow. A large plethora of values can itself be what yields the greatest utility.
I don't say that it must always be so. But it can be constructed that way.
↑ comment by XiXiDu · 2011-06-19T15:14:59.767Z · LW(p) · GW(p)
(Note that I myself do not subscribe to wireheading, I am merely trying to fathom the possible thought processes of those that subscribe to it.)
You are right. But the basic point is that if you are human and subscribe to rational, consistent, unbounded utility maximization, then you assign at least non-negligible utility to unconditional bodily sensations. If you further accept uploading, and that emulations can experience more in a shorter period of time than fleshly humans can, then it is a serious possibility that the sheer quantity of simulated sensation outweighs the extra utility you assign to the referents of those rewards and to other differences, such as chatbots instead of real agents (a fact that you can choose to forget).
I believe the gist of the matter is that wireheading appears to its proponents to be the rational choice for a utility-maximizing agent that is itself a product of biological evolution within our universe. For what it's worth, this could be an explanation for the Fermi paradox.
comment by Giles · 2011-06-19T17:19:45.100Z · LW(p) · GW(p)
Customer: I'm pretty sure the marginal utility of fiction diminishes once a significant portion of my life is taken up by fiction.
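As an illustrative sketch of what diminishing marginal utility means here (the logarithmic form is an arbitrary choice for the example, not anything the comment specifies): each additional hour of fiction adds less value than the one before.

```python
# Illustrative sketch of diminishing marginal utility of fiction time.
# The logarithmic form is an arbitrary choice; the point is only that
# each additional hour adds less value than the previous one.

import math

def fiction_utility(hours):
    return math.log1p(hours)

for h in (1, 10, 100):
    marginal = fiction_utility(h) - fiction_utility(h - 1)
    print(h, round(marginal, 3))
# 1 0.693
# 10 0.095
# 100 0.01
```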
Replies from: XiXiDu
↑ comment by XiXiDu · 2011-06-19T18:09:39.314Z · LW(p) · GW(p)
Customer: I'm pretty sure the marginal utility of fiction diminishes once a significant portion of my life is taken up by fiction.
Then is that also the solution to infinite ethics: that we should be scope-insensitive to ever larger amounts of the same thing if we already devote a significant portion of our lives to it? And what do you mean by 'diminishes'? Are you saying that we should apply discounting?
Replies from: Giles, Pavitra
↑ comment by Giles · 2011-06-21T01:36:16.224Z · LW(p) · GW(p)
I don't know. The utility function measures outputs rather than inputs; the fiction case is confusing because the two are closely correlated (i.e. how much awesome fiction I consume is correlated with how much time I spend consuming awesome fiction).
For your solution to make sense, we'd need some definition of "time devoted to a particular cause" that we can then manage in our utility function. For example, if parts of your brain are contemplating some ethical problem while you're busy doing something else, does that count as time devoted?
It seems doable though. I don't think it's the solution to infinite ethics but it seems like you could conceive of an agent behaving that way while still being considered rational and altruistic.
↑ comment by Pavitra · 2011-06-19T18:29:35.314Z · LW(p) · GW(p)
If you can increase the intensity of the awesomeness of the fiction, without increasing the duration I spend there, I certainly have no objections. Similarly, if you can give an awesomizing overlay to my productive activity, without interfering with that productivity, then again I have no objections.
My objection to the simulator is that it takes away from my productive work. It's not that I stop caring about fiction, it's that I keep caring about reality.
Even if I accept that living in the simulator is genuinely good and worthwhile... what am I doing sitting around in the sim when I could be out there getting everyone else to sign up? Actually using the simulator creates only one person-hour of sim-time per hour; surely I can get better leverage than that through a little well-placed evangelism.
comment by jsteinhardt · 2011-06-19T17:32:45.636Z · LW(p) · GW(p)
You place utility on entire universe histories, not just isolated events. So I can place 0 utility on all universe histories where I end up living in a simulation, and will always reject the salesgirl's offer.
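A minimal sketch of this move, with a toy history representation invented for the example: utility is scored over whole histories, and any history containing the "enter the simulation" event gets zero, so no amount of simulated pleasure changes the decision.

```python
# Illustrative sketch: utility defined over whole universe histories rather
# than over isolated experiences. The history representation is invented
# for the example.

def history_utility(history):
    # Zero utility for any history in which I end up inside a simulation,
    # regardless of how pleasant that history is from the inside.
    if "enter_simulation" in history:
        return 0.0
    return float(history.count("pleasant_hour"))

real_life = ["pleasant_hour"] * 10
sim_life = ["enter_simulation"] + ["pleasant_hour"] * 1000

print(history_utility(real_life), history_utility(sim_life))  # 10.0 0.0
```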
Replies from: Jonathan_Graehl
↑ comment by Jonathan_Graehl · 2011-06-21T23:27:12.874Z · LW(p) · GW(p)
You place utility on entire universe histories
That does seem like the most I can imagine my preferences depending on :)
I generally agree.
comment by [deleted] · 2011-06-19T14:01:15.875Z · LW(p) · GW(p)
Man, the marketing department of Wireheading Inc. really has a tough job on their hands. Maybe they should just change their vocabulary, make a Facebook app instead and just wait for people to rationalize their choices and join anyway.
comment by benelliott · 2011-06-19T13:49:22.317Z · LW(p) · GW(p)
Hmm, I think I just increased my credence in the master-slave model. It explains the customer's reaction perfectly.
Replies from: Manfred
↑ comment by Manfred · 2011-06-19T18:30:54.862Z · LW(p) · GW(p)
On fictional evidence?
Replies from: benelliott
↑ comment by benelliott · 2011-06-19T20:20:12.564Z · LW(p) · GW(p)
I was wondering if somebody would catch that.
To be more precise, I updated on the fact that my own reactions were perfectly aligned with the customer's.
comment by Lightwave · 2011-06-19T18:20:04.683Z · LW(p) · GW(p)
To be honest, if a simulation is as rich and complex (in terms of experiences/environment/universe) as the "real world" and (maybe?) also has some added benefit (e.g. more subjective years inside, is "cheaper"), then I can imagine myself jumping in and staying there forever (or for as long as possible).
What's the difference between "real" reality and a simulated one anyway, if all my experiences are going to be identical? I think our intuitions regarding not wanting to be in a simulated world are based on some evolutionary optimization, which no longer applies in a world of uploads and should be done away with.
Replies from: DanielLC
↑ comment by DanielLC · 2011-06-20T01:30:46.310Z · LW(p) · GW(p)
If all you value is your own experiences, then this would be just as good. You may value other things. For example, I value other people's experiences, and I wouldn't care about happy-looking NPCs. I'd be happier in that simulator, but I'd choose against it, because other things are important.
Replies from: Lightwave
↑ comment by Lightwave · 2011-06-20T10:16:30.719Z · LW(p) · GW(p)
Other people could join in the simulation as well. Also, new people could be created; what's the difference between being born in the "real world" and in the simulated one? So they would be real people. It's not fair to call them just "NPCs".
Replies from: DanielLC
↑ comment by DanielLC · 2011-06-20T20:47:13.709Z · LW(p) · GW(p)
Also, new people could be created, what's the difference between being born in the "real world" and the simulated one?
If the simulation is sufficiently accurate to generate qualia, they're real people. If it's only sufficiently accurate to convince me that they're real, they're not. I agree that you can make a simulation that actually has people in it, but the point is that you can also make a simulation that makes me think my desires are fulfilled without actually fulfilling them. I have no desire to be so fooled.
comment by byrnema · 2011-06-21T01:11:26.399Z · LW(p) · GW(p)
A suggestion: I feel like the story focuses too much on 'feelings' (e.g., "all desirable bodily sensations a human body and brain are capable of experiencing"), which people discount a lot and have trained themselves not to optimize for, in favor of things that are more satisfying. (Taking a bath and eating cake would yield more immediate physical, pleasurable sensations than writing this comment, but I know I'll find this more satisfying... I'll slice some cake in a minute.) Ah -- this was better said in LukeProg's recent post Not For the Sake Of Pleasure Alone.
It would be more convincing to appeal to the stronger, concrete desires of people...
Sales girl: Don't you want to know how the world works? Your simulated brain can read and process 100 books a day and invent the equivalent of a PhD thesis on any subject just by directing your attention. When you leave the simulation, you'll need to leave your knowledge behind, but you can return to it at anytime.
I wonder about the last sentence I felt compelled to add. Why can't we come and go from the simulator? Then wouldn't it be a no-brainer to choose to spend something like 10 minutes of every hour there? (It would make pleasant experiences more efficient, yielding more time for work.)
Someone else's turn: what else can be done in the simulator that would be most irresistible?
comment by Gedusa · 2011-06-19T14:07:00.322Z · LW(p) · GW(p)
The obvious extra question is:
"If you think it's so great, how come you're not using it?" Unless the sales girl's enjoyable life includes selling the machine she's in to disinterested customers.
Replies from: jimrandomh, gwern, kpreid
↑ comment by jimrandomh · 2011-06-19T14:22:41.841Z · LW(p) · GW(p)
The obvious extra question is:
"If you think it's so great, how come you're not using it?" Unless the sales girl's enjoyable life includes selling the machine she's in to disinterested customers.
In the least convenient world, the answer is: "I can't afford it until I make enough money by working in sales." Or alternatively, "I have a rare genetic defect which makes the machine not work for me."
Replies from: Jonathan_Graehl
↑ comment by Jonathan_Graehl · 2011-06-21T23:30:09.284Z · LW(p) · GW(p)
Well done. But parent comment is still clever and amusing, if useless.
comment by Dorikka · 2011-06-20T04:41:42.068Z · LW(p) · GW(p)
Sales girl: We accounted for that as well! Let me ask you: how much utility do you assign to one hour of ultimate well-being™, where 'ultimate' means the best possible satisfaction of all desirable bodily sensations a human body and brain are capable of experiencing?
My entire life in your simulator might be of less utility than my life outside of it, because if, say, I were roughly utilitarian, the moderate positive effect that my efforts have on the preference functions of a whole lot of people would be worth more than the huge increase the simulator would produce in my own preference function.
In all honesty, however, I would be really tempted. I'm also pretty sure that I wouldn't have akrasia problems after turning the offer down. Curious how a counterfactual can have such an effect on your outlook, no? Perhaps there's a way to take advantage of that.
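A minimal sketch of the rough utilitarian comparison in the first paragraph above; all of the numbers are invented, and the point is only that a small per-person effect multiplied over many people can dominate a large personal gain.

```python
# Illustrative numbers for the utilitarian trade-off described above:
# a small positive effect on very many people's preference functions can
# outweigh a huge boost to one's own. All figures are invented.

people_helped = 1_000_000
effect_per_person = 0.01          # moderate effect of staying outside and working
own_gain_in_simulator = 1_000.0   # huge boost to my own preference function

outside_total = people_helped * effect_per_person   # 10,000
print(outside_total > own_gain_in_simulator)         # True
```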