Truth vs Utility
post by Qwake · 2014-08-13T05:45:55.426Z · LW · GW · Legacy · 30 comments
According to Eliezer, there are two types of rationality. There is epistemic rationality, the process of updating your beliefs based on evidence so that they correspond to the truth (or reality) as closely as possible. And there is instrumental rationality, the process of making choices in order to maximize your future utility. These two slightly conflicting definitions work together most of the time, as obtaining the truth is the rationalists' ultimate goal and thus yields the maximum utility. But are there ever times when the truth is not in a rationalist's best interest? Are there scenarios in which a rationalist should actively try to avoid the truth to maximize their possible utility? I have been mentally struggling with these questions for a while. Let me propose a scenario to illustrate the conundrum.
Suppose Omega, a supercomputer, comes down to Earth to offer you a choice. Option 1 is to live in a simulated world where you have infinite utility (in this world there is no pain, suffering, or death; it's basically a perfect world) and you are unaware you are living in a simulation. Option 2 is that Omega will answer one question truthfully, on absolutely any subject pertaining to our universe, with no strings attached. You can ask about the laws governing the universe, the meaning of life, the origin of time and space, whatever, and Omega will give you an absolutely truthful, knowledgeable answer. Now, assuming all of these hypotheticals are true, which option would you pick? Which option should a perfect rationalist pick? Does the potential of asking a question whose answer could greatly improve humanity's knowledge of our universe outweigh the benefits of living in a perfect simulated world with unlimited utility?

There are probably a lot of people who would object outright to living in a simulation because it's not reality or the truth. Well, let's consider the simulation in my hypothetical conundrum for a second. It's a perfect reality with unlimited utility potential, and you are completely unaware you are in a simulation in this world. Aside from the unlimited utility part, that sounds a lot like our reality. There are no signs of our reality being a simulation, and all (most) of humanity is convinced that our reality is not a simulation. Therefore, the only difference that really matters between the simulation in Option 1 and our reality is the unlimited utility potential that Option 1 offers. If there is no evidence that a simulation is not reality, then the simulation is reality for the people inside the simulation. That is what I believe, and that is why I would choose Option 1. The infinite utility of living in a perfect reality outweighs almost any increase in utility I could contribute to humanity.
I am very interested in which option the Less Wrong community would choose (I know Option 2 is kind of arbitrary; I just needed an option for people who wouldn't want to live in a simulation). As this is my first post, any feedback or criticism is appreciated. Also, any further information on the topic of truth vs utility would be very helpful. Feel free to downvote me to oblivion if this post was stupid, didn't make sense, etc. It was simply an idea that I found interesting that I wanted to put into writing. Thank you for reading.
30 comments
Comments sorted by top scores.
comment by Viliam_Bur · 2014-08-13T06:27:10.679Z · LW(p) · GW(p)
obtaining the truth is the rationalists' ultimate goal
Nope. It's an instrumental goal. We just believe it to be very useful, because in nontrivial situations it is difficult to find a strategy to achieve X without having true beliefs about X.
Are there scenarios in which a rationalist should actively try to avoid the truth to maximize their possible utility?
Omega tells you: "Unless you start believing in horoscopes, I will torture all humans to death." (Or, if making oneself believe something false is too difficult, then something like: "There is one false statement in your math textbook, and if you even find out which one it is, I will torture all humans to death." In which case I would avoid looking at the textbook ever again.)
Option 2 is that Omega will answer one question truthfully, on absolutely any subject pertaining to our universe, with no strings attached. You can ask about the laws governing the universe, the meaning of life, the origin of time and space, whatever, and Omega will give you an absolutely truthful, knowledgeable answer.
I guess it would depend on how much I would trust myself to ask a question that could bring me even more benefit than Option 1. For example: "What is the most likely way that I could become Omega-powerful without losing my values? (Most likely = relative to my current situation and abilities.)" Because a lucky answer to this one could be even better than the first option. -- So it comes down to an estimate of whether such a lucky answer exists, the probability that I would follow the strategy successfully if I got the answer, and the probability that I would ask the question correctly. Which I admit I don't know.
Replies from: TheAncientGeek
↑ comment by TheAncientGeek · 2014-08-13T16:54:41.924Z · LW(p) · GW(p)
Where truth is a terminal goal, it is a terminal goal. The fact that it is often useful as a means to some other goal does not contradict that. Cf: valuing money for itself, or for what you can do with it.
Replies from: Vulture
comment by Shmi (shminux) · 2014-08-13T17:06:00.343Z · LW(p) · GW(p)
As remarked many times on this site and elsewhere, if you are given evidence that Omega is capable of simulating an environment as rich as our observed Universe, you should apply the Copernican principle and assign high probability that our world is not special and is already a simulation. The Matrix-like dualism (real/simulated) is a very low-probability alternative, which only seems likely because we are used to anthropocentrically thinking of our world as "real".
Once you realize that, Option 1 becomes "pick a different simulation" and Option 2 "improve current simulation".
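For what it's worth, the counting argument behind this Copernican move can be made concrete. Here is a minimal sketch, with assumptions that are mine rather than shminux's: one base-level world plus N indistinguishable simulations, with observers spread evenly across them.

```python
# A minimal sketch of the Copernican-style counting argument.
# Assumption (not from the thread): one base-level world plus n indistinguishable
# simulations, with observers spread evenly across all of them.

def p_base_world(n_simulations: int) -> float:
    """Probability of being in the base-level world under the assumptions above."""
    return 1.0 / (1 + n_simulations)

for n in (0, 1, 10, 1_000_000):
    print(f"{n:>9} simulations -> P(base world) = {p_base_world(n):.6f}")
```

With even a modest number of simulations, the prior on "we are the base world" collapses quickly, which is the force of the Copernican point.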
Replies from: Gunnar_Zarncke, Vladimir_Nesov
↑ comment by Gunnar_Zarncke · 2014-08-13T17:50:06.829Z · LW(p) · GW(p)
Option 1 becomes "pick a different simulation" and Option 2 "improve current simulation".
This is a very succinct and clear phrasing. In this form it seems clear to me that the choice depends on individual preferences and character.
A proponent might argue: 'The current simulation is a hopeless case, why stay?' And an opponent might counter: 'You're running away from your responsibilities.'
Replies from: shminux
↑ comment by Shmi (shminux) · 2014-08-13T18:26:51.481Z · LW(p) · GW(p)
A proponent might argue: 'The current simulation is a hopeless case, why stay?' And an opponent might counter: 'You're running away from your responsibilities.'
Note that this is nearly isomorphic to the standard moral question of emigration, once you drop the no-longer useful qualifier "simulation". Is it immoral and unpatriotic to leave your home country and try your luck elsewhere? (Provided you cannot influence your former reality once you leave.)
Replies from: Qwake
↑ comment by Qwake · 2014-08-14T04:22:25.509Z · LW(p) · GW(p)
That's not quite the question I am trying to convey with my conundrum. What I wanted Option 1 and Option 2 to represent is a hypothetical conflict in which you must choose between maximizing your utility potential at the cost of living in a simulation, or maximizing your knowledge of the truth in this reality. My point in sharing this scenario did not have anything to do with the probability of such a scenario occurring. Now, everybody is free to interpret my scenario any way they like, but I just wanted to explain what I had in mind. Thank you for your criticism and ideas, by the way.
↑ comment by Vladimir_Nesov · 2014-08-13T18:15:10.173Z · LW(p) · GW(p)
Which simulations (or "real worlds") matter (and how much) depends on one's preference. A hypothetical world that's not even being simulated may theoretically matter more than any real or simulated world, in the sense that an idealized agent with that preference would make decisions that are primarily concerned with optimizing properties of that hypothetical world (and won't care what happens in other real or simulated worlds). Such an agent would need to estimate consequences of decisions in the hypothetical world, but this estimate doesn't need to be particularly detailed, just as thinking with human brain doesn't constitute simulation of the real world. (Also the agent itself doesn't need to exist in a "real" or simulated world for the point about its preference being concerned primarily with hypotheticals to hold.)
Replies from: shminux
↑ comment by Shmi (shminux) · 2014-08-13T18:22:08.629Z · LW(p) · GW(p)
I've read this twice and failed to parse... Do you mind rephrasing in a clearer way, maybe with examples or something?
Replies from: Vladimir_Nesov, Tyrrell_McAllister
↑ comment by Vladimir_Nesov · 2014-08-17T21:42:56.582Z · LW(p) · GW(p)
A consequentialist agent makes decisions based on the effect they have, as depicted in its map. Different agents may use different maps that describe different worlds, or rather more abstract considerations about worlds that don't pin down any particular world. Which worlds appear on an agent's map determines which worlds matter to it, so it seems natural to consider the relevance of such worlds an aspect of agent's preference.
The role played by these worlds in an idealized agent's decision-making doesn't require them to be "real", simulated in a "real" world, or even logically consistent. Anything would do for an agent with the appropriate preference, properties of an impossible world may well matter more than what happens in the real world.
You called attention to the idea that a choice apparently between an effect on the real world, and an effect on a simulated world, may instead be a choice between effects in two simulated worlds. Why is it relevant whether a certain world is "real" or simulated? In many situations that come up in thought experiments, simulated worlds matter less, because they have less measure, in the same way as an outcome predicated on a thousand coins all falling the same way matters less than what happens in all the other cases combined. Following the reasoning similar to expected utility considerations, you would be primarily concerned with the outcomes other than the thousand-tails one; and for the choice between influence in a world that might be simulated as a result of an unlikely collection of events, and influence in the real world, you would be primarily concerned with influence in the real world. So finding out that the choice is instead between two simulated worlds may matter a great deal, shifting focus of attention from the real world (now unavailable, not influenced by your decisions) to both of the simulated worlds, a priori expected to be similarly valuable.
My point was that the next step in this direction is to note that being simulated in an unlikely manner, as opposed to not even being simulated, is not obviously an important distinction. At some point the estimate of moral relevance may fail to remain completely determined by how a world (as a theoretical construct giving semantics to agent's map, or agent's preference) relates to some "real" world. At that point, discussing contrived mechanisms that give rise to the simulation may become useless as an argument about which worlds have how much moral relevance, even if we grant that the worlds closer to the real world in their origin are much more important in human preference.
↑ comment by Tyrrell_McAllister · 2014-08-17T19:20:36.209Z · LW(p) · GW(p)
Here is my attempt to rephrase Vladimir's comment:
Consider a possible world W that someone could simulate, but which, in fact, no one ever will simulate. An agent A can still care about what happens in W. The agent could even try to influence what happens in W acausally.
A natural rejoinder is, How is A going to influence W unless A itself simulates W? How else can A play out the acausal consequences of its choices?
The reply is, A can have some idea about what happens in W without reasoning about W in so fine-grained a way as to deserve the word "simulation". Coarse-grained reasoning could still suffice for A to influence W.
For example, recall Vladimir's counterfactual mugging:
Imagine that one day, Omega comes to you and says that it has just tossed a fair coin, and given that the coin came up tails, it decided to ask you to give it $100. Whatever you do in this situation, nothing else will happen differently in reality as a result. Naturally you don't want to give up your $100. But see, Omega tells you that if the coin came up heads instead of tails, it'd give you $10000, but only if you'd agree to give it $100 if the coin came up tails.
Now consider a variant in which, in the counterfactual heads world, instead of giving you $10,000, Omega would have given you an all-expenses-paid month-long vacation to the destination of your choice.
You don't need to simulate all the details of how that vacation would have played out. You don't even need to simulate where you would have chosen to go. (And let us assume that Omega also never simulates any of these things.) Even if no such simulations ever run, you might still find the prospect of counterfactual-you getting that vacation so enticing that you give Omega the $100 in the actual tails world.
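For the original dollar version of the mugging, the expected-value arithmetic looks like this when evaluated from before the coin flip (a minimal sketch, assuming a fair coin, a truthful Omega, and utility linear in dollars; the vacation variant works the same way with your subjective value of the trip in place of the $10,000):

```python
# Expected value of a policy in the counterfactual mugging, assessed before the coin flip.
# Assumptions: fair coin, truthful Omega, utility linear in dollars.

P_HEADS = 0.5
PAYOUT_IF_HEADS = 10_000   # what Omega would have paid in the heads world
COST_IF_TAILS = 100        # what you hand over in the tails world

ev_always_pay = P_HEADS * PAYOUT_IF_HEADS + (1 - P_HEADS) * (-COST_IF_TAILS)
ev_never_pay = 0.0

print(f"EV of the 'always pay' policy: {ev_always_pay:+.0f}")  # +4950
print(f"EV of the 'never pay' policy:  {ev_never_pay:+.0f}")   # +0
```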
comment by LizzardWizzard · 2014-08-13T09:34:19.474Z · LW(p) · GW(p)
As you can't know whether the current reality is a simulation or not, you could propose a 50-50 probability, as you have no anchor to operate on.
a) Current reality is a simulation run by superintelligent computers. In this case you shouldn't hesitate even for a second - you are obviously going for Option 1, where you will live in a better world.
b) You are living in reality. If the given reality is not a simulation, your utility function in the simulation will differ from your real needs and wants, so you should ask a question that will maximize your real utility, which could be how to bring peace to Earth, bring down inequality, lift the happiness level, and maybe even kill death.
But actually we have some evidence about the world we live in, because a supercomputer came to you and gave you two choices, which doesn't fit what we expect to see in our daily routine. It means that supercomputers exist and the probability that you live in a simulation is close to 100%. But in fact it is likely that Omega doesn't run the simulation, because you can speak to it and see it; it is more likely that there is another, more powerful supercomputer out there that runs a simulation on this simulation. You should choose Option 1.
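The update gestured at here can be written as a one-line Bayes calculation. The 50-50 prior is from the comment; the likelihoods below are made-up illustrative numbers, not anything claimed in the thread:

```python
# A minimal Bayesian sketch of the update above.
# prior_sim comes from the comment's "no anchor" 50-50; the likelihoods are assumptions.

prior_sim = 0.5
p_omega_given_sim = 0.5       # assumed: an Omega-style visit is unsurprising inside a simulation
p_omega_given_real = 0.001    # assumed: such a visit is very surprising in an unsimulated world

posterior_sim = (p_omega_given_sim * prior_sim) / (
    p_omega_given_sim * prior_sim + p_omega_given_real * (1 - prior_sim)
)
print(f"P(simulation | Omega shows up) ≈ {posterior_sim:.3f}")  # ≈ 0.998
```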
comment by Gunnar_Zarncke · 2014-08-13T08:33:43.697Z · LW(p) · GW(p)
This cries for a poll. To make this a more balanced question, I changed the "simulation" variant into something more 'real':
Suppose Omega, a supercomputer, comes down to Earth to offer you a choice:
[pollid:750]
Replies from: Creutzer
↑ comment by Creutzer · 2014-08-13T10:45:45.330Z · LW(p) · GW(p)
Granting the question's premise that we have a utility function, you have just defined option 1 as the rational choice.
Replies from: Slider, Nornagest, Gunnar_Zarncke
↑ comment by Nornagest · 2014-08-20T19:37:41.399Z · LW(p) · GW(p)
Yeah, granted that premise and given that maximizing utility may very well involve telling you stuff, option 2 seems to imply one of the following:
- you don't trust Omega
- you don't trust your utility function
- you have objections (other than trust) to accepting direct help from an alien supercomputer
The second of these possibilities seems the most compelling; we aren't Friendly in a strong sense. Depending on Omega's idea of your utility function, you can make an argument that maximizing it would be a disaster from a more general perspective, either because you think your utility function is hopelessly parochial and likely to need modification once we better understand metaethics and fun theory, or because you don't think you're really all that ethical at whatever level Omega's going to be looking at. The latter is almost certainly true, and the former at least seems plausible.
↑ comment by Gunnar_Zarncke · 2014-08-13T13:24:15.606Z · LW(p) · GW(p)
Judging from the vote, that doesn't seem to be the case. I guess the options are still not phrased precisely enough. Probably utility needs to be made clearer.
comment by [deleted] · 2014-08-13T18:53:07.738Z · LW(p) · GW(p)
Making the assumption that since #2 comes with 'no strings attached' it implies safety measures such as 'the answer does not involve the delivery of a star-sized supercomputer that kills you with its gravity well' (since that feels like a string), while #1 does not have such safety measures (one interpretation being that you have infinite utility because you have been turned into a paperclipper in simulated paperclippium), I find myself trying to ponder ways of getting the idealized results of #1 with the safety measures of #2, such as:
"If you were willing to answer an unlimited number of questions, and I asked you all the questions I could think of, What are all question answer pairs where I would consider any set of those question answer pairs a net gain in utility, answered in order from highest net gain of utility to smallest net gain of utility?"
Keeping in mind that questions such as the one below would be part of the hilariously meta question above:
"Exactly, in full detail without compression and to the full extent of time, what would all of my current and potentially new senses experience like if I took the simulation in Option 1?"
It was simply an idea that I found interesting that I wanted to put into writing. Thank you for reading.
This was an interesting idea to read! (Even if I don't think my interpretation was what you had in mind.) Thank you for writing!
Replies from: Jiro, Qwake
↑ comment by Jiro · 2014-08-15T21:43:31.771Z · LW(p) · GW(p)
What are all question answer pairs where I would consider any set of those question answer pairs a net gain in utility, answered in order from highest net gain of utility to smallest net gain of utility?"
Having the answers to some questions can change the utility of the answers to the other ones, so "in order from highest net gain of utility" may not make sense. You'd have to ask something like "in an order which maximizes front-loaded utility compared to other orders", comparing the orders by the cumulative utility obtained after the Nth question rather than by the stand-alone utility of each question.
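A toy illustration of this point, with made-up numbers: once answers interact, ranking questions by their stand-alone utility can differ from the order that front-loads cumulative utility.

```python
# Made-up example: answer B is largely redundant once A is known, so the
# "highest stand-alone utility first" order is not the best front-loaded order.

from itertools import permutations

def utility(answered: frozenset) -> float:
    """Toy set utility over answered questions; B is mostly redundant given A."""
    u = 0.0
    if "A" in answered: u += 10
    if "B" in answered: u += 8 if "A" not in answered else 1
    if "C" in answered: u += 7
    return u

def prefix_utilities(order):
    """Cumulative utility after each question when asked in the given order."""
    answered = set()
    out = []
    for q in order:
        answered.add(q)
        out.append(utility(frozenset(answered)))
    return out

# Stand-alone values say A (10) > B (8) > C (7), yet the order A, C, B
# front-loads more cumulative utility than A, B, C.
for order in permutations("ABC"):
    print("".join(order), prefix_utilities(order))
```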
↑ comment by Qwake · 2014-08-14T04:39:38.113Z · LW(p) · GW(p)
Interesting interpretation of my scenario. I don't know about other people, but I personally wouldn't mind being a paperclip in paperclippium if it meant realizing infinite utility potential (assuming paperclips are conscious and have sensory experience, of course).
Keeping in mind that questions such as the one below would be part of the hilariously meta question above:
"Exactly, in full detail without compression and to the full extent of time, what would all of my current and potentially new senses experience like if I took the simulation in Option 1?"
As for this question, that is pretty ingenious, but it avoids the conflict of my scenario entirely! No need to undermine my thought experiment unnecessarily! :) Anyway, thanks for the nice comment.
comment by buybuydandavis · 2014-08-13T08:15:50.713Z · LW(p) · GW(p)
Option 1 is to live in a simulated world where you have infinite utility
No. First, if you want to learn about us, you don't get to define our utility functions. Beings with utility functions you make up are made up beings.
Second, there is a distinction that I believe EY makes which seems to me a good one, which I think you are mistaking. Utility functions may be functions on the world, not only on the state of our feelings or experience. Telling me how wonderful I will feel in a simulation does not ping my preferences over reality beyond my feelings.
This is a fundamental error Sam Harris makes, thinking we only care about conscious experience. Conscious experience may be how we experience caring, but it is not necessarily the only object of our caring.
comment by blacktrance · 2014-08-13T08:59:32.934Z · LW(p) · GW(p)
I accept Option 1. Anything less would be a failure of instrumental rationality.
comment by Richard_Kennaway · 2014-08-13T07:07:35.737Z · LW(p) · GW(p)
Let me propose a scenario to illustrate the conundrum. Suppose Omega
I don't think I shall. But for what it's worth, I would reject Option 1 even if the alternative was just the status quo.
Replies from: Capla
↑ comment by Capla · 2014-08-15T00:24:51.037Z · LW(p) · GW(p)
Uh...Why?
Replies from: Richard_Kennaway
↑ comment by Richard_Kennaway · 2014-08-15T09:08:37.763Z · LW(p) · GW(p)
Why decline another Omega thought experiment? Because they usually amount to no more than putting a thumb on one side of the scales and saying, "Look, this pan goes down!"
Why decline the offer of a wonderful dream? Because it's a dream, not reality.
Replies from: Qwake
↑ comment by Qwake · 2014-08-16T01:53:32.473Z · LW(p) · GW(p)
Yes, but as stated above, if there is a superintelligent being capable of making perfect simulations of reality, then the Copernican Principle says that the probability of our "reality" not being a simulation is extremely low. If that's the case, it would be obvious to choose Option 1, it being the simulation that yields you the most utility.
Replies from: Richard_Kennaway
↑ comment by Richard_Kennaway · 2014-08-16T06:47:34.296Z · LW(p) · GW(p)
If that's the case it would be obvious (to me) to choose Option 2 and ask a question with a view to determining if this is a simulation and if so how to get out of it.
But I think you're just putting a hand on the scales here. In the OP you wrote that a perfect simulation is "reality for" the people living in it. There is no such thing as "reality for", only "reality". Their simulation is still a simulation. They just do not know it. If I believe the Earth is flat, is a flat Earth "my reality"? No, it is my error, whether I ever discover it or not.
Replies from: bogdanb
↑ comment by bogdanb · 2014-09-05T23:57:56.784Z · LW(p) · GW(p)
I sort of get your point, but I’m curious: can you imagine learning (with thought-experiment certainty) that there is actually no reality at all, in the sense that no matter where you live, it’s simulated by some “parent reality” (which in turn is simulated, etc., ad infinitum)? Would that change your preference?
Replies from: Richard_Kennaway↑ comment by Richard_Kennaway · 2014-09-06T05:56:51.204Z · LW(p) · GW(p)
I can imagine many things, including that one, but I am unconcerned with how I might react to them.
How would I explain the event of my left arm being replaced by a blue tentacle? The answer is that I wouldn't. It isn't going to happen.
Eliezer Yudkowsky, "A Technical Explanation of Technical Explanation"