Problem of Optimal False Information

post by Endovior · 2012-10-15T21:42:33.679Z · LW · GW · Legacy · 113 comments

This is a thought that occurred to me on my way to classes today; sharing it for feedback.

Omega appears before you, and after presenting an arbitrary proof that it is, in fact, a completely trustworthy superintelligence of the caliber needed to play these kinds of games, presents you with a choice between two boxes.  These boxes do not contain money; they contain information.  One box is white and contains a true fact that you do not currently know; the other is black and contains false information that you do not currently believe.  Omega advises you that the true fact is not misleading in any way (i.e., not a fact that will cause you to make incorrect assumptions and lower the accuracy of your probability estimates), and is fully supported with enough evidence both to prove to you that it is true and to enable you to independently verify its truth for yourself within a month.  The false information is demonstrably false, and is something that you would disbelieve if presented outright, but if you open the box to discover it, a machine inside the box will reprogram your mind such that you will believe it completely, thus leading you to believe other related falsehoods as you rationalize away discrepancies.

Omega further advises that, within those constraints, the true fact is one that has been optimized to inflict upon you the maximum amount of long-term disutility for a fact in its class, should you now become aware of it, and the false information has been optimized to provide you with the maximum amount of long-term utility for a belief in its class, should you now begin to believe it over the truth.  You are required to choose one of the boxes; if you refuse to do so, Omega will kill you outright and try again on another Everett branch.  Which box do you choose, and why?

 

(This example is obviously hypothetical, but for a simple and practical case, consider the use of amnesia-inducing drugs to selectively eliminate traumatic memories; it would be more accurate to still have those memories, taking the time and effort to come to terms with the trauma... but it presents much greater utility to be without them, and thus without the trauma altogether.  This is obviously related to the valley of bad rationality, but since there clearly exist most optimal lies and least optimal truths, it would be useful to know which categories of facts are generally hazardous, and whether or not there are categories of lies which are generally helpful.)

113 comments

Comments sorted by top scores.

comment by Kindly · 2012-10-15T22:59:57.536Z · LW(p) · GW(p)

Least optimal truths are probably really scary and to be avoided at all costs. At the risk of helping everyone here generalize from fictional evidence, I will point out the similarity to the Cthaeh in The Wise Man's Fear.

On the other hand, a reasonably okay falsehood to end up believing is something like "35682114754753135567 is prime", which I don't expect to affect my life at all if I suddenly start believing it. The optimal falsehood can't possibly be worse than that. Furthermore, if you value not being deceived about important things then the optimality of the optimal falsehood should take that into account, making it more likely that the falsehood won't be about anything important.
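(As an aside, whether that particular number really is composite is straightforward to check by machine. A minimal sketch, assuming Python with the sympy library available; the number is just the one quoted above:)

```python
# Check the primality claim directly. sympy.isprime runs deterministic
# tests for small inputs and a strong probabilistic test (with no known
# failures) for numbers of this size.
from sympy import isprime

n = 35682114754753135567
print(n, "is prime" if isprime(n) else "is composite")
```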

Edit: Would the following be a valid falsehood? "The following program is a really cool video game: <code for a Friendly AI>"

Replies from: AlexMennen, EricHerboso, Endovior
comment by AlexMennen · 2012-10-15T23:54:54.947Z · LW(p) · GW(p)

Would the following be a valid falsehood? "The following program is a really cool video game: <code for a Friendly AI>"

I think we have a good contender for the optimal false information here.

Replies from: Endovior
comment by Endovior · 2012-10-16T00:21:47.307Z · LW(p) · GW(p)

The problem specifies that something will be revealed to you, which will program you to believe it, even though false. It doesn't explicitly limit what can be injected into the information stream. So, assuming you would value the existence of a Friendly AI, yes, that's entirely valid as optimal false information. Cost: you are temporarily wrong about something, and realize your error soon enough.

comment by EricHerboso · 2012-10-16T03:02:44.659Z · LW(p) · GW(p)

Except after executing the code, you'd know it was FAI and not a video game, which goes against the OP's rule that you honestly believe in the falsehood continually.

I guess it works if you replace "FAI" in your example with "FAI who masquerades as a really cool video game to you and everyone you will one day contact" or something similar, though.

Replies from: Endovior
comment by Endovior · 2012-10-16T12:25:57.380Z · LW(p) · GW(p)

The original problem didn't specify how long you'd continue to believe the falsehood. You do, in fact, believe it, so stopping believing it would be at least as hard as changing your mind in ordinary circumstances (not easy, nor impossible). The code for FAI probably doesn't run on your home computer, so there's that... you go off looking for someone who can help you with your video game code, someone else figures out what it is you've come across and gets the hardware to implement it, and suddenly the world gets taken over. Depending on how attentive you were to the process, you might not connect the two immediately, but if you were there when the people were running things, then that's pretty good evidence that something more serious than a video game happened.

comment by Endovior · 2012-10-15T23:55:41.011Z · LW(p) · GW(p)

Yes, least optimal truths are really terrible, and the analogy is apt. You are not a perfect rationalist. You cannot perfectly simulate even one future, much less infinite possible ones. The truth can hurt you, or possibly kill you, and you have just been warned about it. This problem is a demonstration of that fact.

That said, if your terminal value is not truth, a most optimal falsehood (not merely a reasonably okay one) would be a really good thing. Since you are (again) not a perfect rationalist, there's bound to be something that you could be falsely believing that would lead you to better consequences than your current beliefs.

comment by bogus · 2012-10-15T22:59:46.328Z · LW(p) · GW(p)

You should choose the false belief, because Omega has optimized it for instrumental utility whereas the true belief has been optimized for disutility, and you may be vulnerable to such effects if only because you're not a perfectly rational agent.

If you were sure that no hazardous true information (or advantageous false information) could possibly exist, you should still be indifferent between the two choices: either of these would yield a neutral belief, leaving you with very nearly the same utility as before.

Replies from: Endovior
comment by Endovior · 2012-10-15T23:58:12.826Z · LW(p) · GW(p)

That is my point entirely, yes. This is a conflict between epistemic and instrumental rationality; if you value anything higher than truth, you will get more of it by choosing the falsehood. That's how the problem is defined.

comment by Armok_GoB · 2012-10-16T14:37:53.153Z · LW(p) · GW(p)

Ok, after actually thinking about this for 5 minutes, it's ludicrously obvious that the falsehood is the correct choice, and it's downright scary how long it took me to realize this and how many in the comments seem to still not realize it.

Some tools falsehoods have for not being so bad:

  • sheer chaos-theoretic outcome pumping, aka "you bash your face into the keyboard randomly believing a pie will appear and it programs a friendly AI", or the lottery example mentioned in other comments.

  • any damage is bounded by the falsehood being able to be obviously insane enough that it won't spread, or even to cause you to commit suicide so that you don't believe it very long, if you really think believing any falsehood is THAT bad.

  • even if you ONLY value the truth, it could give you "all the statements in this list are true:" followed by a list of 99 true statements you are currently wrong about, and one inconsequential false one.

Some tools truths have for being bad:

  • sheer chaos-theoretic outcome pumping, aka "you run to the computer to type in the code for the FAI you just learnt, but fall down some stairs and die, and the wind currents cause a tornado that kills Eliezer and then runs through a scrapyard assembling a maximally malevolent UFAI"

  • perceptual basilisks, aka "what Cthulhu face looks like"

  • Roko-type basilisks

  • borderline lies-by-omission/not mentioning side effects. "The free energy device does work, but also happens to make the sun blow up"

Note that both these lists are long enough, and the items unconnected enough, that there are almost certainly many more points at least as good as these that I haven't thought of, and that both the lie and the truth are likely to be using ALL these tools at the same time, in much more powerful ways than we can think of.

comment by Kyre · 2012-10-16T05:41:02.167Z · LW(p) · GW(p)

Thanks, this one made me think.

a machine inside the box will reprogram your mind such that you will believe it completely

It seems that Omega is making a judgement about making some "minimal" change that leaves you the "same person" afterwards, otherwise he can always just replace anyone with a pleasure (or torture) machine that believes one fact.

If you really believe that directly editing thoughts and memories is equivalent to murder, and Omega respects that, then Omega doesn't have much scope for the black box. If Omega doesn't respect those beliefs about personal identity, then he's either a torturer or a murderer, and the dilemma is less interesting.

But ... least convenient possible world ...

Omega could be more subtle than this. Instead of facts and/or mind reprogrammers in boxes, Omega sets up your future so that you run into (apparent) evidence for the fact that you correctly (or mistakenly) interpret, and you have no way of distinguishing this inserted evidence from the normal flow of your life ... then you're back to the dilemma.

comment by maia · 2012-10-16T16:09:34.781Z · LW(p) · GW(p)

The answer to this problem is only obvious because it's framed in terms of utility. Utility is, by definition, the thing you want. Strictly speaking, this should include any utility you get from your satisfaction at knowing the truth rather than a lie.

So for someone who valued knowing the truth highly enough, this problem actually should be impossible for Omega to construct.

Replies from: Endovior, Kindly, Jay_Schweikert
comment by Endovior · 2012-10-16T16:46:50.166Z · LW(p) · GW(p)

Okay, so you are a mutant, and you inexplicably value nothing but truth. Fine.

The falsehood can still be a list of true things, tagged with 'everything on this list is true', but with an inconsequential falsehood mixed in, and it will still have net long-term utility for the truth-desiring utility function, particularly since you will soon be able to identify the falsehood, and with your mutant mind, quickly locate and eliminate the discrepancy.

The truth has been defined as something that cannot lower the accuracy of your beliefs, yet it still has maximum possible long-term disutility, and your utility function is defined exclusively in terms of the accuracy of your beliefs. Fine. Mutant that you are, the truth of maximum disutility is one which will lead you directly to a very interesting problem that will distract you for an extended period of time, but which you will ultimately be unable to solve. This wastes a great deal of your time, but leaves you with no greater utility than you had before, constituting disutility in terms of the opportunity cost of that time which you could've spent learning other things. Maximum disutility could mean that this is a problem that will occupy you for the rest of your life, stagnating your attempts to learn much of anything else.

comment by Kindly · 2012-10-17T00:02:59.962Z · LW(p) · GW(p)

So for someone who valued knowing the truth highly enough, this problem actually should be impossible for Omega to construct.

Not necessarily: the problem only stipulates that of all truths you are told the worst truth, and of all falsehoods the best falsehood. If all you value is truth and you can't be hacked, then it's possible that the worst truth still has positive utility, and the best falsehood still has negative utility.

comment by Jay_Schweikert · 2012-10-16T17:28:13.606Z · LW(p) · GW(p)

Can we solve this problem by slightly modifying the hypothetical to say that Omega is computing your utility function perfectly in every respect except for whatever extent you care about truth for its own sake? Depending on exactly how we define Omega's capabilities and the concept of utility, there probably is a sense in which the answer really is determined by definition (or in which the example is impossible to construct). But I took the spirit of the question to be "you are effectively guaranteed to get a massively huge dose of utility/disutility in basically every respect, but it's the product of believing a false/true statement -- what say you?"

comment by DanielLC · 2012-10-16T05:08:44.562Z · LW(p) · GW(p)

You should make the choice that brings highest utility. While truths in general are more helpful than falsehoods, this is not necessarily true, even in the case of a truly rational agent. The best falsehood will, in all probability, be better than the worst truth. Even if you exclusively value truth, there will most likely be a lie that results in you having a more accurate model, and the worst possible truth that's not misleading will have negligible effect. As such, you should choose the black box.

I don't see why this would be puzzling.

comment by Jay_Schweikert · 2012-10-16T13:54:31.182Z · LW(p) · GW(p)

I would pick the black box, but it's a hard choice. Given all the usual suppositions about Omega as a sufficiently trustworthy superintelligence, I would assume that the utilities really were as it said and take the false information. But it would be painful, both because I want to be the kind of person who pursues and acts upon the truth, and also because I would be desperately curious to know what sort of true and non-misleading belief could cause that much disutility -- was Lovecraft right after all? I'd probably try to bargain with Omega to let me know the true belief for just a minute before erasing it from my memory -- but still, in the Least Convenient Possible World where my curiosity was never satisfied, I'd hold my nose and pick the black box.

Having answered the hypothetical, I'll go on and say that I'm not sure there's much to take from it. Clearly, I don't value Truth for its own sake over and beyond all other considerations, let the heavens fall -- but I never thought I did, and I doubt many here do. The point is that in the real world, where we don't yet have trustworthy superintelligences, the general rule that your plans will go better when you use an accurate map doesn't seem to admit of exceptions (and little though I understand Friendly AI, I'd be willing to bet that this rule holds post-singularity). Yes, there are times where you might be better off with a false belief, but you can't predictably know in advance when that is, black swan blow-ups, etc.

To be more concrete, I don't think there's any real-world analogue to the hypothetical. If a consortium of the world's top psychiatrists announced that, no really, believing in God makes people happier, more productive, more successful, etc., and that this conclusion holds even for firm atheists who work for years to argue themselves into knots of self-deception, and that this conclusion has the strongest sort of experimental support that you could expect in this field, I'd probably just shrug and say "I defy the data". When it comes to purposeful self-deception, it really would take Omega to get me on board.

Replies from: Endovior, ChristianKl, army1987
comment by Endovior · 2012-10-16T14:04:48.059Z · LW(p) · GW(p)

That's exactly why the problem invokes Omega, yes. You need an awful lot of information to know which false beliefs actually are superior to the truth (and which facts might be harmful), and by the time you have it, it's generally too late.

That said, the best real-world analogy that exists remains amnesia drugs. If you did have a traumatic experience, serious enough that you felt unable to cope with it, and you were experiencing PTSD or depression related to the trauma that impeded you from continuing with your life... but a magic pill could make it all go away, with no side effects, and with enough precision that you'd forget only the traumatic event... would you take the pill?

Replies from: Jay_Schweikert
comment by Jay_Schweikert · 2012-10-16T14:41:45.882Z · LW(p) · GW(p)

Okay, I suppose that probably is a more relevant question. The best answer I can give is that I would be extremely hesitant to do this. I've never experienced anything like this, so I'm open to the idea that there's a pain here I simply can't understand. But I would certainly want to work very hard to find a way to deal with the situation without erasing my memory, and I would expect to do better in the long-term because of it. Having any substantial part of my memory erased is a terrifying thought to me, as it's really about the closest thing I can imagine to "experiencing" death.

But I also see a distinction between limiting your access to the truth for narrow, strategic reasons, and outright self-deception. There are all kinds of reasons one might want the truth withheld, especially when the withholding is merely a delay (think spoilers, the Bayesian Conspiracy, surprise parties for everyone except Alicorn, etc.). In those situations, I would still want to know that the truth was being kept from me, understand why it was being done, and most importantly, know under what circumstances it would be optimal to discover it.

So maybe amnesia drugs fit into that model. If all other solutions failed, I'd probably take them to make the nightmares stop, especially if I still had access to the memory and the potential to face it again when I was stronger. But I would still want to know there was something I blocked out and was unable to bear. What if the memory was lost forever and I could never even know that fact? That really does seem like part of me is dying, so choosing it would require the sort of pain that would make me wish for (limited) death -- which is obviously pretty extreme, and probably more than I can imagine for a traumatic memory.

Replies from: None
comment by [deleted] · 2015-07-28T23:23:10.753Z · LW(p) · GW(p)

For some genotypes, more trauma is associated with lower levels of depression

Yet someone experiencing trauma, and believing that they are better off continuing to suffer it, would hypothetically be led into learned helplessness and worse depression. The belief is true, yet the false belief would be more productive.

That said, genetic epidemiology is weird, and I haven't looked at, and don't understand, the literature beyond this book. I was prompted to investigate it based on some counterintuitive outcomes regarding treatment for psychological trauma and depressive symptomatology, established counterintuitive results about mindfulness and depressive symptoms in Parkinson's and schizophrenia, and some disclosed SNP sequences from a known individual.

comment by ChristianKl · 2012-10-17T14:39:07.439Z · LW(p) · GW(p)

Nobody makes plans based on totally accurate maps. Good maps contain simplifications of reality to allow you to make better decisions. You start to teach children how atoms work by putting the image of atoms as spheres into their heads. You don't start by teaching them a model that's up to date with the current scientific knowledge of how atoms work. The current model is more accurate but less useful for the children.

You calculate how airplanes fly with Newton's equations instead of using Einstein's.

In social situations it can also often help to avoid getting certain information. You don't have a job. You ask a friend to get you one. The job pays well. He assures you that the work you are doing helps the greater good of the world.

He however also tells you that some of the people you will work with do things in their private lifes that you don't like.

Would you want him to tell you that your new boss secretly burns little puppies at night? The boss also doesn't take it kindly if people criticize him for it.

Replies from: Jay_Schweikert
comment by Jay_Schweikert · 2012-10-17T16:57:26.504Z · LW(p) · GW(p)

Would you want him to tell you that your new boss secretly burns little puppies at night? The boss also doesn't take it kindly if people criticize him for it.

Well, yes, I would. Of course, it's not like he could actually say to me "your boss secretly burns puppies -- do you want to know this or not?" But if he said something like "your boss has a dark and disturbing secret which might concern you; we won't get in trouble just for talking about it, but he won't take kindly to criticism -- do you want me to tell you?", then yeah, I would definitely want to know. The boss is already burning puppies, so it's not like the first-level harm is any worse just because I know about it. Maybe I decide I can't work for someone like that, maybe not, but I'm glad that I know not to leave him alone with my puppies.

Now of course, this doesn't mean it's of prime importance to go around hunting for people's dark secrets. It's rarely necessary to know these things about someone to make good decisions on a day-to-day basis, the investigation is rarely worth the cost (both in terms of the effort required and the potential blow-ups from getting caught snooping around in the wrong places), and I care independently about not violating people's privacy. But if you stipulate a situation where I could somehow learn something in a way that skips over these concerns, then sure, give me the dark secret!

Replies from: ChristianKl
comment by ChristianKl · 2012-10-17T17:31:06.312Z · LW(p) · GW(p)

The boss is already burning puppies, so it's not like the first-level harm is any worse just because I know about it.

Knowing the dark secret will produce resentment for your boss. That resentment is likely to make it harder for you to get work done. If you see him with a big smile in the morning you won't think "He seems like a nice guy because he's smiling" but "Is he so happy because he burned puppies yesterday?"

Replies from: Jay_Schweikert
comment by Jay_Schweikert · 2012-10-17T17:51:24.875Z · LW(p) · GW(p)

Well, maybe. I'm actually skeptical that it would have much effect on my productivity. But to reverse the question, suppose you actually did know this about your boss. If you could snap your fingers and erase the knowledge from your brain, would you do it? Would you go on deleting all information that causes you to resent someone, so long as that information wasn't visibly relevant to some other pending decision?

Replies from: ChristianKl
comment by ChristianKl · 2012-10-17T20:06:50.426Z · LW(p) · GW(p)

Deleting information doesn't make emotions go away. Being afraid and not knowing the reason for being afraid is much worse than just being afraid. You start to rationalize the emotions with bogus stories to make the emotions make sense.

comment by A1987dM (army1987) · 2012-10-17T11:21:19.208Z · LW(p) · GW(p)

The point is that in the real world ... the general rule that your plans will go better when you use an accurate map doesn't seem to admit of exceptions

Azatoth built you in such a way that having certain beliefs can screw you over, even when they're true. (Well, I think it's the aliefs that actually matter, but deliberately keeping aliefs and beliefs separate is an Advanced Technique.)

comment by Shmi (shminux) · 2012-10-15T23:01:39.324Z · LW(p) · GW(p)

First, your invocation of Everett branches adds nothing to the problem, as every instance of "you" may well decide not to choose. So, "choose or die" ought to be good enough, provided that you have a fairly strong dislike of dying.

Second, the traumatic memories example is great, but a few more examples would be useful. For example, the "truth" might be "discover LW, undergo religious deconversion, be ostracized by your family, get run over by a car while wandering around in a distraught state" whereas the "lie" might be "get hypnotized into believing Scientology, join the church, meet and hook up with a celebrity you dreamed about, live happily ever after".

Even more drastic: a bomb vs a wireheading device. The "truth" results in you being maimed, the "lie" results in you being happy, if out of touch with the world around you.

At this point it should be clear that there is no single "right" solution to the problem, and even the same person would choose differently depending on the situation, so the problem is not well defined as stated.

Replies from: Endovior, Luke_A_Somers
comment by Endovior · 2012-10-15T23:37:34.250Z · LW(p) · GW(p)

I didn't have any other good examples on tap when I originally conceived of the idea, but come to think of it...

Truth: A scientific formula, seemingly trivial at first, but whose consequences, when investigated, lead to some terrible disaster, like the sun going nova. Oops.

Lies involving 'good' consequences are heavily dependent upon your utility function. If you define utility in such a way that allows your cult membership to be net-positive, then sure, you might get a happily-ever-after cult future. Whether or not this indicates a flaw in your utility function is a matter of personal choice; rationality cannot tell you what to protect.

That said, we are dealing with Omega, who is serious about those optimals. This really is a falsehood with optimal net long-term utility for you. It might be something like a false belief about lottery odds, which leads to you spending the next couple years wasting large sums of money on lottery tickets... only to win a huge jackpot, hundreds of millions of dollars, and retire young, able to donate huge sums to the charities you consider important. You don't know, but it is, by definition, the best thing that could possibly happen to you as the result of believing a lie, as you define 'best thing'.

Replies from: Zaine, Kindly
comment by Zaine · 2012-10-16T06:33:49.461Z · LW(p) · GW(p)

You don't know, but it is, by definition, the best thing that could possibly happen to you as the result of believing a lie, as you define 'best thing'.

If that's what you meant, then the choice is really "best thing in life" or "worst thing in life"; whatever belief leads you there is of little consequence. Say the truth option leads to an erudite you eradicating all present, past, and future sentient life, and the falsehood option leads to an ignorant you stumbling upon the nirvana-space that grants all infinite super-intelligent bliss and Dr. Manhattan-like superpowers (ironically enough):

What you believed is of little consequence to the resulting state of the verse(s).

comment by Kindly · 2012-10-16T13:17:15.397Z · LW(p) · GW(p)

I'd say that this is too optimistic. Omega checks the future and if, in fact, you would eventually win the lottery if you started playing, then deluding you about lotteries might be a good strategy. But for most people that Omega talks to, this wouldn't work.

It's possible that the number of falsehoods that have one-in-a-million odds of helping you exceeds a million by far, and then it's very likely that Omega (being omniscient) can choose one that turns out to be helpful. But it's more interesting to see if there are falsehoods that have at least a reasonably large probability of helping you.

Replies from: Endovior
comment by Endovior · 2012-10-16T13:34:44.355Z · LW(p) · GW(p)

True; being deluded about lotteries is unlikely to have positive consequences normally, so unless something weird is going to go on in the future (eg: the lottery machine's random number function is going to predictably malfunction at some expected time, producing a predictable set of numbers; which Omega then imposes on your consciousness as being 'lucky'), that's not a belief with positive long-term consequences. That's not an impossible set of circumstances, but it is an easy-to-specify set, so in terms of discussing 'a false belief which would be long-term beneficial', it leaps readily to mind.

comment by Luke_A_Somers · 2012-10-16T13:04:21.251Z · LW(p) · GW(p)

every instance of "you" may well decide not to choose

Very unlikely, I'd say. (P(a) = 0 for all a in 'you chose yes or no') is an extremely strong criterion.

So, "choose or die" ought to be good enough, provided that you have a fairly strong dislike of dying.

True.

comment by DaFranker · 2012-10-17T15:32:00.207Z · LW(p) · GW(p)

Edge case:

The truth and falsehood themselves are irrelevant to the actual outcomes, since another superintelligence (or maybe even Omega itself) is directly conditioning on your learning of these "facts" in order to directly alter the universe into its worst and best possible configurations, respectively.

These seem to be absolute optimums as far as I can tell.

If we posit that Omega has actual influential power over the universe and is dynamically attempting to create those optimal information boxes, then this seems like the only possible resulting scenario. If Omega is sufficiently superintelligent and the laws of entropy hold, then this also seems like the only possible resulting scenario for most conceivable minds, even if Omega's only means of affecting the universe is the information contained in the boxes.

Replies from: wedrifid
comment by wedrifid · 2012-10-17T15:51:24.589Z · LW(p) · GW(p)

Edge case:

The truth and falsehood themselves are irrelevant to the actual outcomes, since another superintelligence (or maybe even Omega itself) is directly conditioning on your learning of these "facts" in order to directly alter the universe into its worst and best possible configurations, respectively.

Good edge case.

These seem to be absolute optimums as far as I can tell.

Close to it. The only obvious deviations from the optimums are centered around the possible inherent disutility of having a universe in which you made the decision to have false beliefs and then in fact had false beliefs for some time, and the possible reduced utility assigned to universes in which you are granted a favourable universe rather than creating it yourself.

If we posit that Omega has actual influential power over the universe and is dynamically attempting to create those optimal information boxes, then this seems like the only possible resulting scenario. If Omega is sufficiently superintelligent and the laws of entropy hold, then this also seems like the only possible resulting scenario for most conceivable minds, even if Omega's only means of affecting the universe is the information contained in the boxes.

This seems right, and the minds that are an exception here that are most easy to conceive are ones where the problem is centered around specific high emphasis within their utility function on events immediately surrounding the decision itself (ie. the "other edge" case).

Replies from: DaFranker
comment by DaFranker · 2012-10-17T16:34:45.784Z · LW(p) · GW(p)

These seem to be absolute optimums as far as I can tell.

Close to it. The only obvious deviations from the optimums are centered around the possible inherent disutility of having a universe in which you made the decision to have false beliefs and then in fact had false beliefs for some time, and the possible reduced utility assigned to universes in which you are granted a favourable universe rather than creating it yourself.

Well, when I said "alter the universe into its worst and best possible configurations", I had in mind a literal rewrite of the absolute total state of the universe, such that for that then-universe its computable past was also the best/worst possible past (or something similarly inconceivable to us that a superintelligence could come up with in order to have absolute best/worst possible universes), such as modifying the then-universe's past such that you had taken the other box and that that box had the same effect as the one you did pick.

However, upon further thought, that feels incredibly like cheating and arguing by definition.

Also, for the "opposite/other edge", I had considered minds with utility functions centered on the decision itself with conditionals against reality-alteration and spacetime-rewrites and so on, but those seem to be all basically just "Break the premises and Omega's predictions by begging the question!", similar to above, so they're fun to think about but useless in other respects.

comment by wedrifid · 2012-10-17T07:29:51.242Z · LW(p) · GW(p)

Which box do you choose, and why?

I take the lie. The class of true beliefs has on average a significantly higher utility-for-believing than the class of false beliefs but there is an overlap. The worst in the "true" is worse than the best in "false".

I'd actually be surprised if Omega couldn't program me with a true belief that caused me to drive my entire species to extinction, and probably worse than that. Because superintelligent optimisers are badass and wedrifids are Turing-complete.

comment by [deleted] · 2012-10-16T20:11:02.881Z · LW(p) · GW(p)

Would the following be a True Fact that is supported by evidence?

You open the white box, and are hit by a poison dart, which causes you to drop into an irreversible, excruciatingly painful, minimally aware coma, where by all outward appearances you look fine, and you find out the world goes downhill, while you get made to live forever, while still having had enough evidence that Yes, the dart DID in fact contain a poison that drops you into an:

irreversible (Evidence supporting this: you never come out of the coma),

excruciatingly painful (Evidence supporting this: your nerves are still working inside your head, and you can feel this excruciating pain),

minimally aware (Evidence supporting this: while you are in the coma you are still vaguely aware, so you can confirm all of this and hear about bad news that makes you feel worse on a level in addition to physical pain, such as being given the old poison dart because someone thinks it's a treasured memento instead of a constant reminder that you are an idiot),

coma (Evidence supporting this: you can't actually act upon the outer world as if you were conscious),

where by all outward appearances you look fine (Evidence supporting this: no one appears to be aware that you are in utter agony, to the point where you would gladly accept a mercy kill),

and you find out the world goes downhill (Evidence supporting this: while in a minimally aware state, you hear about the world going downhill, UFAI, brutal torture, nuclear bombs, whatever bad things you don't want to hear about),

while you get made to live forever (Evidence supporting this: you never, ever die).

I mean, the disutility would probably be worse than that, but... surely you never purposely pick a CERTAINTY of such an optimized maximum disutility, regardless of what random knowledge it might come with. It would be one thing if the knowledge was such that it was going to be helpful, but since it comes as part and parcel of an optimized maximum disutility, the knowledge is quite likely to be something useless or worse, like "Yes, this dart really did contain a poison to hit you with optimized maximum disutility, and you are now quite sure that is true." (You would probably have been sure of that well before now even if it wasn't explicitly given to you as a true fact by Omega!)

And Omega didn't mislead you, the dart REALLY was going to be that bad in the class of facts about darts!

Since that (or worse) seems likely to be the White Box, I'll probably, as carefully as possible, select the Black box, while trying to be extremely sure that I didn't accidentally have a brain fart and flip the colors of the boxes by mistake in sheer panic. Anyone who would pick the White box intentionally doesn't seem to be giving enough credence to just how bad Omega can make a certainty of optimized maximum disutility, and how useless Omega can select the true fact to be.

Replies from: mwengler, Endovior
comment by mwengler · 2012-10-17T14:47:23.853Z · LW(p) · GW(p)

It does seem to me that the question of which box comes down to whether your utility associated with knowing truth is able to overcome your disutility associated with fear of the unknown. If you are afraid enough, I don't have to torture you to break you; I only have to show you my dentist tools and talk to you about what might be in the white box.

comment by Endovior · 2012-10-17T05:36:28.185Z · LW(p) · GW(p)

As stated, the only trap the white box contains is information... which is quite enough, really. A prediction can be considered a true statement if it is a self-fulfilling prophecy, after all. More seriously, if such a thing as a basilisk is possible, the white box will contain a basilisk. Accordingly, it's feasible that the fact could be something like "Shortly after you finish reading this, you will drop into an irreversible, excruciatingly painful, minimally aware coma, where by all outward appearances you look fine, yet you find out the world goes downhill while you get made to live forever", and there's some kind of sneaky pattern encoded in the pattern of the text and the border of the page or whatever that causes your brain to lock up and start firing pain receptors, such that the pattern is self-sustaining. Everything else about the world and living forever and such would have to have been something that would have happened anyway, lacking your action to prevent it, but if Omega knows UFAI will happen near enough in the future, and knows that such a UFAI would catch you in your coma and stick you with immortality nanites without caring about your torture-coma state... then yeah, just such a statement is entirely possible.

Replies from: DaFranker
comment by DaFranker · 2012-10-17T15:22:02.573Z · LW(p) · GW(p)

But the information in either box is clearly an influence on the universe - you can't just create information. I'm operating under the assumption that Omega's boxes don't violate the entropy principles here, and it just seems virtually impossible to construct a mind such that Omega could not possibly, with sufficient data on the universe, construct a truth and a falsehood which, when learned by you, would result in causal disruption of the world in the worst-possible-by-your-utility-function and best-possible-by-your-utility-function manners respectively.

As such, Omega is telling the truth, and Omega has fully optimized these two boxes among a potentially-infinite space of facts correlating to a potentially-infinite (unverified) space of causal influences on the world, depending on your mind. To me, it seems >99% likely that opening the white box will result in the worst possible universe for the vast majority of mindspace, and the black box in the best possible universe for the vast majority of mindspace.

I can conceive of minds that would circumvent this, but these are not even remotely close to anything I would consider capable of discussing with Omega (e.g. a mind that consists entirely of "+1 utilon on picking Omega's White Box, -9999 utilon on any other choice" and nothing else), and I infer all of those minds to be irrelevant to the discussion at hand since all such minds I can imagine currently are.

comment by mwengler · 2012-10-16T15:20:57.092Z · LW(p) · GW(p)

As stated, the question comes down to acting on an opinion you have on an unknown, but (within the principles of this problem) potentially knowable, conclusion about your own utility function. And that is: which is larger, 1) the amount of positive utility you gain from knowing the most disutile truth that exists for you, OR 2) the amount of utility you gain from believing the most utile falsehood that exists for you?

ALMOST by definition of the word utility, you would choose the truth (white box) if and only if 1) is larger and you would choose the falsehood (black box) only if 2) is larger. I say almost by definition because all answers of the form "I would choose the truth even if it was worse for me" are really statements that the utility you place on the truth is higher than Omega has assumed, which violates the assumption that Omega knows your utility function and speaks truthfully about it.

I say ALMOST by definition because we have to consider the other piece of the puzzle: when I open box 2) there is a machine that "will reprogram your mind." Does this change anything? Well, it depends on which utility function Omega is using to make her calculations of my long-term utility. Is Omega using my utility function BEFORE the machine reprograms my mind, or after? Is the me after the reprogramming really still me? I think within the spirit of the problem we must assume that 1) the utility happens to be maximized for both me before the reprogram and me after the reprogram (perhaps my utility function does not change at all in the reprogramming), and 2) Omega has correctly included the amount of disutility I would attach to the particular programming change, and this is factored into her calculations, so that the proposed falsehood and mind reprogramming do in fact, on net, give the maximum utility I can get from knowing the falsehood PLUS being reprogrammed.

Within these constraints, we find that the "ALMOST" above can be removed if we include the (dis)utility I have for the reprogramming in the calculation. So:

Which is larger: 1) the amount of positive utility you gain from knowing the most disutile truth that exists for you, OR 2) the amount of utility you gain from believing the most utile falsehood that exists for you AND being reprogrammed to believe it?

So ultimately, the question of which box we would choose is the question above. I think to say anything else is to say "my utility is not my utility," i.e. to contradict yourself.

In my case, I would choose the white box. On reflection, considering the long run, I doubt that there is a falsehood PLUS a reprogramming that I would accept as a combination as more utile than the worst true fact (with no reprogramming to consider) that I would ever expect to get. Certainly, this is the Occam's razor answer, the ceteris paribus answer. GENERALLY, we believe that knowing more is better for us than being wrong. Generally we believe that someone else meddling with our minds has a high disutility to us.


For completeness I think these are straightforward conclusions from "playing fair" in this question, from accepting an Omega as postulated.

1) If Omega assures you the utility for 2) (including the disutility of the reprogramming as experienced by your pre-reprogrammed self) is 1% higher than the utility of 1), then you want to choose 2), to choose the falsehood and the reprogramming. To give any other answer is to presume that Omega is wrong about your utility, which violates the assumptions of the question.

2) If Omega assures you the utility for 2) and 1) are equal, it doesn't matter which one you choose. As much as you might think "all other things being equal I'll choose the truth" you must accept that the value you place on the truth has already been factored in, and the blip-up from choosing the truth will be balanced by some other disutility in a non-truth area. Since you can be pretty sure that the utility you place on the truth is very much unrelated to pain and pleasure and joy and love and so on, you are virtually guaranteeing you will FEEL worse choosing the truth, but that this worse feeling will just barely be almost worth it.


Finally, I tried to play nice within the question. But it is entirely possible, and I would say likely, that there can never be an Omega who could know ahead of time, with the kind of detail required, what your future utility would be, at least not in our Universe. Consider just the quantum uncertainties (or future Everett universe splits). It seems most likely that your future net utility covers a broad range of outcomes in different Everett branches. In that case, it seems very likely that there is no one truth that minimizes your utility in all your possible futures, and no one falsehood that maximizes it in all your possible futures. In this case we would have a distribution of utility outcomes from 1) and 2), and it is not clear that we know how to choose between two different distributions. Possibly utility is defined in such a way that it would be the expectation value that "truly" mattered to us, but that puts, I think, a very serious constraint on utility functions and how we interact with them that I am not sure could be supported.

Replies from: Endovior, fezziwig
comment by Endovior · 2012-10-17T06:44:46.689Z · LW(p) · GW(p)

Quite a detailed analysis, and correct within its assumptions. It is important to know where Omega is getting its information on your utility function. That said, since Omega implicitly knows everything you know (since it needs to know that in order to also know everything you don't know, and thus to be able to provide the problem at all), it implicitly knows your utility function already. Obviously, accepting a falsehood that perverts your utility function into something counter to your existing utility function, just to maximize an easier target, would be something of disutility to you as you are at present, and not something that you would accept if you were aware of it. Accordingly, it is a safe assumption that Omega has based its calculations off your utility before accepting the information, and for the purposes of this problem, that is exactly the case. This is your case (2); if a falsehood intrinsically conflicts with your utility function in whatever way, it generates disutility (and thus, is probably suboptimal). If your utility function is inherently hostile to such changes, this presents a limitation on the factset Omega can impose upon you.

That said, your personal answer seems to place rather conservative bounds on the nature of what Omega can do to you. Omega has not presented bounds on its utilities; instead, it has advised you that they are maximized within fairly broad terms. Similarly, it has not assured you anything about the relative values of those utilities, but the structure of the problem as Omega presents it (which you know is correct, because Omega has already arbitrarily demonstrated its power and trustworthiness) means you are dealing with an outcome pump attached directly to your utility function. Since the structure of the problem gives it a great deal of room in which to operate, the only real limitation is the nature of your own utility function. Sure, it's entirely possible that your utility function could be laid out in such a way as to strongly emphasize the disutility of misinformation... but that just limits the nice things Omega can do for you; it does nothing to save you from the bad things it can do to you. It remains valid to show you a picture and say 'the picture you are looking at is a basilisk; it causes any human that sees it to die within 48 hours'. Even without assuming basilisks, you're still dealing with a hostile outcome pump. There's bound to be some truth that you haven't considered that will lead you to a bad end. And if you want to examine it in terms of Everett branches, Omega is arbitrarily powerful. It has the power to compute all possible universes and give you the information which has maximally bad consequences for your utility function in aggregate across all possible universes (this implies, of course, that Omega is outside the Matrix, but pretty much any problem invoking Omega does that).

Even so, Omega doesn't assure you of anything regarding the specific weights of the two pieces of information. Utility functions differ, and since there's nothing Omega could say that would be valid for all utility functions, there's nothing it will say at all. It's left to you to decide which you'd prefer.

That said, I do find it interesting to note under which lines of reasoning people will choose something labelled 'maximum disutility'. I had thought it to be a more obvious problem than that.

Replies from: mwengler
comment by mwengler · 2012-10-17T14:42:43.946Z · LW(p) · GW(p)

Wire-heading, drug addiction, lobotomy, black box: all seem morally similar to me. Heck, my own personal black box would need nothing more than to have me believe that the universe is just a little more absurd than I already believe, that the laws of physics and the progress of humanity are a fever-dream, a hallucination. From there I would lower my resistance to wire-heading and drug addiction. Even if I still craved the "truth" (my utility function was largely unchanged), these new facts would lead me to believe there was less of a possibility of utility from pursuing that, and so the rather obvious utility of drug- or electronically-induced pleasure would win my not-quite-factual day.

The white box, and a Nazi colonel-dentist with his tools laid out, talking to me about what he was going to do to me until I chose the black box, are morally similar. I do not know why the Nazis/Omega want me to choose the black box. I do not know the extent of the disutility the colonel-dentist will actually inflict upon me. I do know my fear is at minimum nearly overwhelming, and may indeed overwhelm me before the day is done.

Being broken, in the sense sought by those who torture you for a result, and choosing the black box are morally equivalent to me. Abandoning a long-term principle of commitment to the truth in favor of the short-term but very high utility of giving up, the short-term utility of totally abandoning myself into the control of an evil god to avoid his torture, is what I am being asked to do in choosing the black box.

It's ALWAYS at least a little scary to choose reality over self-deception, over the euphoria of drugs and painkillers. The utility one derives from making this choice is much colder than the utility one derives from succumbing: it comes more, it seems, from the neocortex and less from the limbic system or lizard brain of fast fear responses.

My utility AFTER I choose the white box may well be less than if I chose the black box. The scary thing in the white box might be that bad. But my life up to now has rewarded me vastly for resisting drug addiction, for resisting gorping my own brain in the pursuit of non-reality-based pleasure. Indeed, it has rewarded me for resisting fear.

So before I have made my choice, I do not want to choose the lie in order to get the dopamine, or the epinephrine or whatever it is that the wire gives me. That is LOW utility to me before I make the choice. Resisting choosing that out of fear has high utility to me.

Will I regret my choice afterwards? Maybe, since I might be a broken, destroyed shell of a human, subject to brain patterns for which I had no evolutionary preparation.

Would I admire someone who chose the black box? No. Would I admire someone who had chosen the white box? Yes. Doing things that I would admire in others is a strong source of utility in me (and in many others of course).


Do you think your Omega problem contains elements that go beyond the question: would you abandon your principled commitment to truth, and choose believing a lie and wire-heading, under the threat of an unknown future torture inflicted upon you by a powerful entity you cannot and do not understand?

comment by fezziwig · 2012-10-16T20:50:42.258Z · LW(p) · GW(p)

Curious to know what you think of Michaelos' construction of the white-box.

Replies from: mwengler
comment by mwengler · 2012-10-17T14:45:16.970Z · LW(p) · GW(p)

Thank you for that link, reading it helped me clarify my answer.

comment by moridinamael · 2012-10-15T23:54:35.621Z · LW(p) · GW(p)

The Bohr model of atomic structure is a falsehood which would have been of tremendous utility to a natural philosopher living a few hundred years ago.

That said, I feel like I'm fighting the hypothetical with that answer - the real question is, should we be willing to self-modify to make our map less accurate in exchange for utility? I don't think there's actually a clean decision-theoretic answer for this, that's what makes it compelling.

Replies from: Luke_A_Somers, Endovior
comment by Luke_A_Somers · 2012-10-16T13:07:53.544Z · LW(p) · GW(p)

If you're going that far, you might as well go as far as all modern physical theories, since we know they're incomplete - with it being wrong in that it's left out the bits that demonstrate that they're incomplete.

comment by Endovior · 2012-10-16T00:34:06.366Z · LW(p) · GW(p)

That is the real question, yes. That kind of self-modification is already cropping up, in certain fringe cases as mentioned; it will get more prevalent over time. You need a lot of information and resources in order to be able to generally self-modify like that, but once you can... should you? It's similar to the idea of wireheading, but deeper... instead of generalized pleasure, it can be 'whatever you want'... provided that there's anything you want more than truth.

comment by faul_sname · 2012-10-15T23:53:31.254Z · LW(p) · GW(p)

For me, the falsehood is the obvious choice. I don't particularly value truth as an end (or rather, I do, but there are ends I value several orders of magnitude more). The main reason to seek to have true beliefs, if truth is not the end goal, is to ensure that you have accurate information regarding how well you're achieving your goal. By ensuring that the falsehood is high-utility, that problem is fairly well addressed.

My beliefs are nowhere near true anyway. One more falsehood is unlikely to make a big difference, while there is a large class of psychologically harmful truths that can make a large (negative) difference.

comment by philh · 2012-10-17T23:46:07.304Z · LW(p) · GW(p)

Not really relevant, but

Omega appears before you, and after presenting an arbitrary proof that it is, in fact, a completely trustworthy superintelligence of the caliber needed to play these kinds of games

I idly wonder what such a proof would look like. E.g. is it actually possible to prove this to someone without presenting them an algorithm for superintelligence, sufficiently commented that the presentee can recognise it as such? (Perhaps I test it repeatedly until I am satisfied?) Can Omega ever prove its own trustworthiness to me if I don't already trust it? (This feels like a solid Gödelian "no".)

Replies from: Endovior
comment by Endovior · 2012-10-18T03:59:41.159Z · LW(p) · GW(p)

I don't have a valid proof for you. Omega is typically defined like that (arbitrarily powerful and completely trustworthy), but a number of the problems I've seen of this type tend to just say 'Omega appears' and assume that you know Omega is the defined entity simply because it self-identifies as Omega, so I felt the need to specify that in this instance, Omega has just proved itself.

Theoretically, you could verify the trustworthiness of a superintelligence by examining its code... but even if we ignore the fact that you're probably not equipped to comprehend the code of a superintelligence (really, you'd probably need another completely trustworthy superintelligence to interpret the code for you, which rather defeats the point), there's still the problem that an untrustworthy superintelligence could provide you with a completely convincing forgery, which could potentially be designed in such a way that it would perform every action in the same way as the real one would (in that way being evaluated as 'trustworthy' under simulation)... except the one on which the untrustworthy superintelligence is choosing to deceive you. Accordingly, I think that even a superintelligence probably can't be sure about the trustworthiness of another superintelligence, regardless of evidence.

comment by SilasBarta · 2012-10-19T00:29:52.780Z · LW(p) · GW(p)

This doesn't sound that hypothetical to me: it sounds like the problem of which organizations to join. Rational-leaning organizations will give you true information you don't currently know, while anti-rational organizations will warp your mind to rationalize false things. The former, while not certain to be on net bad, will lead you to unpleasant truths, while people in anti-rational groups are often duped into a kind of happiness.

Replies from: Endovior
comment by Endovior · 2012-10-19T05:20:55.960Z · LW(p) · GW(p)

Sure, that's a valid way of looking at things. If you value happiness over truth, you might consider not expending a great deal of effort in digging into those unpleasant truths, and retain your pleasant illusions. Of course, the nature of the choice is such that you probably won't realize that it is such a choice until you've already made it.

comment by Desrtopa · 2012-10-16T14:14:47.859Z · LW(p) · GW(p)

I'm not sure this scenario even makes sense as a hypothetical. At least for me personally, I find it doubtful that my utility could be improved according to my current function by being made to accept a false belief that I would normally reject outright.

If such a thing is possible, then I'd pick the false belief: utility is necessarily better than disutility, I'm in no position to second-guess Omega's assurance about which option will bring more, and there's no meta-utility on the basis of which I can be persuaded to choose things that go against my current utility function. But even granting the existence of Omega as a hypothetical, I'd bet against this scenario being able to happen to me.

Edit: this comment has made me realize that I was working under the implicit assumption that the false belief could not be something that would deliver its utility while being proven wrong. If I include such possibilities, there are definitely many ways that my utility could be improved by being convinced of a falsehood, but I would only be temporarily convinced, whereas I parsed the dilemma as one where my utility is increased as long as I continue to believe the falsehood.

Replies from: thomblake
comment by thomblake · 2012-10-16T16:02:05.826Z · LW(p) · GW(p)

I find it doubtful that my utility could be improved according to my current function by being made to accept a false belief that I would normally reject outright.

Vaguely realistic example: You believe that the lottery is a good bet, and as a result win the lottery.

Hollywood example: You believe that the train will leave at 11:10 instead of 10:50, and so miss the train, setting off an improbable-seeming sequence of life-changing events such as meeting your soulmate, getting the job of your dreams, and finding a cure for aging.

Omega example: You believe that "hepaticocholangiocholecystenterostomies" refers to surgeries linking the gall bladder to the kidney. This subtly changes the connections in your brain such that over time you experience a great deal more joy in life, as well as curing your potential for Alzheimer's.

Replies from: Desrtopa
comment by Desrtopa · 2012-10-16T17:26:55.050Z · LW(p) · GW(p)

The first example sounds like something that Omega might actually be able to forecast, so I may have to revise my position on those grounds, but on the other hand that specific example would pretty much have to alter my entire epistemic landscape, so it's hard to measure the utility difference between the me who believes the lottery is a bad deal and the altered person who wins it. The second falls into the category I mentioned previously of things that increase my utility only as I find out they're wrong; when I arrive, I will find out that the train has already left.

As for the third, I suspect that there isn't a neurological basis for such a thing to happen. If I believed differently, I would have a different position on the dilemma in the first place.

Replies from: thomblake, thomblake
comment by thomblake · 2012-10-16T17:38:46.719Z · LW(p) · GW(p)

Regardless of whether the third one is plausible, I suspect Omega would know of some hack that is equally weird and unable to be anticipated.

Replies from: Endovior
comment by Endovior · 2012-10-17T06:50:20.647Z · LW(p) · GW(p)

A sensible thing to consider. You are effectively dealing with an outcome pump, after all; the problem leaves plenty of solution space available, and outcome pumps usually don't produce an answer you'd expect; they instead produce something that matches the criteria even better than anything you were aware of.

comment by thomblake · 2012-10-16T17:42:16.023Z · LW(p) · GW(p)

The second falls into the category I mentioned previously of things that increase my utility only as I find out they're wrong; when I arrive, I will find out that the train has already left.

You can subtly change that example to eliminate that problem. Instead of actually missing the train, you just leave later and so run into someone who gives you a ride, and then you never go back and check when the train actually left.

Replies from: Desrtopa
comment by Desrtopa · 2012-10-16T17:46:20.954Z · LW(p) · GW(p)

The example fails the "that you would normally reject outright" criterion though, unless I already have well established knowledge of the actual train scheduling times.

comment by asparisi · 2012-10-16T03:13:46.067Z · LW(p) · GW(p)

Hm. If there is a strong causal relationship between knowing truths and utility, then it is conceivable that this is a trick: the truth, while optimized for disutility, might still present me with a net gain over the falsehood and its utility. But honestly, I am not sure I buy that: you can get utility from a false belief, if that belief happens to steer you in such a way that it adds utility. You can't normally count on that, but this is Omega we are talking about.

The 'other related falsehoods and rationalizing' part has me worried. The falsehood might net me utility in and of itself, but I poke and prod ideas. If my brain becomes set up so that I will rationalize this thought, altering my other thought-patterns to match as I investigate... that could be very bad for me in the long run. Really bad.

And I've picked inconvenient truth over beneficial falsehood before on those grounds, so I probably would pick the inconvenient truth. Maybe, if Omega guaranteed that the process of rationalizing the falsehood would still leave me with more utility - over the entire span of my life - than I would earn normally after you factor in the disutility of the inconvenient truth, then I would take the false-box. But on the problem as it stands, I'd take the truth-box.

Replies from: Endovior
comment by Endovior · 2012-10-16T12:50:00.327Z · LW(p) · GW(p)

That's why the problem specified 'long-term' utility. Omega is essentially saying 'I have here a lie that will improve your life as much as any lie possibly can, and a truth that will ruin your life as badly as any truth can; which would you prefer to believe?'

Yes, believing a lie does imply that your map has gotten worse, and rationalizing your belief in the lie (which we're all prone to do with things we believe) will make it worse still. Omega has specified that this lie has optimal utility among all lies that you, personally, might believe; being Omega, it is as correct in saying this as it is possible to be.

On the other hand, the box containing the least optimal truth is a very scary box. Presume first that you are particularly strong emotionally and psychologically; there is no fact that will directly drive you to suicide. Even so, there are probably facts out there that will, if comprehended and internalized, corrupt your utility function, leading you to work directly against all you currently believe in. There's probably something even worse than that out there in the space of all possible facts, but the test is relative to your utility function as it was when Omega first encountered you, so 'you change your ethical beliefs, and proceed to spend your life working to spread disutility, as you formerly defined it' is on the list of possibilities.

Replies from: asparisi
comment by asparisi · 2012-10-16T15:37:39.401Z · LW(p) · GW(p)

Interesting idea. That would imply that there is a fact out there that, once known, would change my ethical beliefs, which I take to be a large part of my utility function, AND would do so in such a way that afterward, I would assent to acting on the new utility function.

But one of the things that Me(now) values is updating my beliefs based on information. If there is a fact that shows that my utility function is misconstrued, I want to know it. I don't expect such a fact to surface, but I don't have a problem imagining such a fact existing. I've actually lost things that Me(past) valued highly on the basis of this, so I have some evidence that I would rather update my knowledge than maintain my current utility function. Even if that knowledge causes me to update my utility function so as not to prefer knowledge over keeping my utility function.

So I think I might still pick the truth. A more precise account for how much utility is lost or gained in each scenario might convince me otherwise, but I am still not sure that I am better off letting my map get corrupted as opposed to letting my values get corrupted, and I tend to pick truth over utility. (Which, in this scenario, might be suboptimal, but I am not sure it is.)

comment by EricHerboso · 2012-10-15T23:12:30.944Z · LW(p) · GW(p)

How one responds to this dilemma depends on how one values truth. I get the impression that while you value belief in truth, you can imagine that the maximum amount of long-term utility for belief in a falsehood is greater than the minimum amount of long-term utility for belief in a true fact. I would not be surprised to see that many others here feel the same way. After all, there's nothing inherently wrong with thinking this is so.

However, my value system is such that the value of knowing the truth greatly outweighs any possible gains you might have from honestly believing a falsehood. I would reject being hooked up to Nozick's experience machine on utilitarian grounds: I honestly value the disutility of believing in a falsehood to be that bad*.

(I am wary of putting the word "any" in the above paragraph, as maybe I'm not correctly valuing very large numbers of utilons. I'm not really sure how to evaluate differences in utility when it comes to things I really value, like belief in true facts. The value is so high in these cases that it's hard to see how anything could possibly exceed it, but maybe this is just because I have no understanding of how to properly value high value things.)

Replies from: AlexMennen, Endovior
comment by AlexMennen · 2012-10-16T00:00:52.042Z · LW(p) · GW(p)

I am skeptical. Do you spend literally all of your time and resources on increasing the accuracy of your beliefs, or do you also spend some on some other form of enjoyment?

Replies from: EricHerboso
comment by EricHerboso · 2012-10-16T02:50:50.174Z · LW(p) · GW(p)

Point taken.

Yet I would maintain that belief in true facts, when paired with other things I value, is what I place high value on. If I pair those other things I value with belief in falsehoods, their overall value is much, much less. In this way, I maintain a very high value on belief in true facts while not committing myself to maximizing accuracy the way a paperclip maximizer maximizes paperclips.

(Note that I'm confabulating here; the above paragraph is my attempt to salvage my intuitive beliefs, and is not indicative of how I originally formulated them. Nevertheless, I'm warily submitting them as my updated beliefs after reading your comment.)

comment by Endovior · 2012-10-15T23:46:37.427Z · LW(p) · GW(p)

Okay, so if your utilities are configured that way, the false belief might be a belief you will encounter, struggle with, and get over in a few years, and be stronger for the experience.

For that matter, the truth might be 'your world is, in fact, a simulation of your own design, for which you have (through carelessness) forgotten the control codes; you are thus trapped and will die here, accomplishing nothing in the real world'. Obviously an extreme example; but if it is true, you probably do not want to know it.

comment by Shmi (shminux) · 2012-10-19T15:20:09.352Z · LW(p) · GW(p)

SMBC comics has a relevant strip: would you take a pill to ease your suffering when that suffering no longer serves any purpose? (The strip goes for the all-or-nothing approach, but anything milder than that can be gamed by a Murder-Gandhi slippery slope.)

comment by ChristianKl · 2012-10-17T14:41:34.493Z · LW(p) · GW(p)

This example is obviously hypothetical, but for a simple and practical case, consider the use of amnesia-inducing drugs to selectively eliminate traumatic memories; it would be more accurate to still have those memories, taking the time and effort to come to terms with the trauma... but present much greater utility to be without them, and thus without the trauma altogether.

Deleting all proper memories of an event from the mind doesn't mean that you delete all of its traces.

An example from a physiology lecture I took at university: if you nearly get hit by a car on the street, that's an event you will consciously forget very soon. Your amygdala, however, doesn't, and will still send stress signals when you come to the same physical location in the future.

Children at the age of one can't form proper memories that they can remember as adults. That doesn't mean that trauma they experience during that time doesn't have an effect on their lives as adults.

In the hypnosis community people who have the tools to delete memories generally consider it to be a bad idea to remove specific traumatic memories. It can mess people up because they still have the emotional effects of the trauma but don't know the reason.

comment by A1987dM (army1987) · 2012-10-17T08:22:39.376Z · LW(p) · GW(p)

I'll take the black box.

comment by wedrifid · 2012-10-17T07:17:50.267Z · LW(p) · GW(p)

You are required to choose one of the boxes; if you refuse to do so, Omega will kill you outright and try again on another Everett branch.

Everett branches don't (necessarily) work like that. If 'you' are a person who systematically refuses to play such games then you just don't, no matter the branch. Sure, the Omega in a different branch may find a human-looking creature also called "Endovior" that plays such games, but if that creature has a fundamentally different decision algorithm then, for the purpose of analyzing your decision algorithm, it isn't "you". (There are also branches in which an actual "you" plays the game, but only as a freak anomaly of an event, like the way an actual 'me' in a freakishly small Everett branch walked through a brick wall that one time. Still not exactly something to identify with as 'you' doing it.)

Replies from: Endovior
comment by Endovior · 2012-10-17T11:20:03.419Z · LW(p) · GW(p)

Eh, that point probably was a bit weak. I probably could've just gotten away with saying 'you are required to choose a box'. Or, come to think of it, 'failure to open the white box and investigate its contents results in the automatic opening and deployment of the black box after X time'.

Replies from: wedrifid
comment by wedrifid · 2012-10-17T13:39:47.975Z · LW(p) · GW(p)

Eh, that point probably was a bit weak. I probably could've just gotten away with saying 'you are required to choose a box'. Or, come to think of it, 'failure to open the white box and investigate its contents results in the automatic opening and deployment of the black box after X time'.

Or, for that matter, just left it at "Omega will kill you outright". For flavor and some gratuitous additional disutility you could specify the means of execution as being beaten to death by adorable live puppies.

Replies from: mwengler
comment by mwengler · 2012-10-17T16:32:32.140Z · LW(p) · GW(p)

I observe that there have been many humans whose utility functions are such that they would prefer to be killed rather than make the choices offered to them that would keep them alive. So if the intention in the problem is to get you to choose one of the boxes, offering the third choice of being killed doesn't make sense.

comment by CronoDAS · 2012-10-17T07:05:09.408Z · LW(p) · GW(p)

It's a trivial example, but "the surprise ending to that movie you're about to see" is a truth that's generally considered to have disutility. ;)

Replies from: wedrifid
comment by wedrifid · 2012-10-17T07:40:50.747Z · LW(p) · GW(p)

It's a trivial example, but "the surprise ending to that movie you're about to see" is a truth that's generally considered to have disutility. ;)

And a trivial falsehood that is likely to have positive utility: The price of shares in CompanyX will increase by exactly 350% in the next week. (When they will actually increase by 450%). Or lottery numbers that are 1 digit off.

comment by Larks · 2012-10-16T20:12:22.146Z · LW(p) · GW(p)

Optimised for utility X sounds like something that would win in pretty much any circumstance. Optimised for disutility Y sounds like something that would lose in pretty much any circumstance. In combination, the answer is especially clear.

Replies from: mwengler
comment by mwengler · 2012-10-17T16:34:58.066Z · LW(p) · GW(p)

Given a choice between the largest utility less than 0 and the smallest utility greater than 1, would you still pick the largest?

I think this is a trivial counterexample to your "in pretty much any circumstance" that turns it back into a live question for you.
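
A minimal numerical sketch of that point (the figures below are invented purely for illustration; nothing in the problem or the thread supplies them):

    # Toy Python sketch with made-up numbers: even the highest-utility falsehood
    # could be net-negative, while the lowest-utility truth could be net-positive.
    best_falsehood_utility = -5   # hypothetical: best falsehood on offer, still below 0
    worst_truth_utility = 2       # hypothetical: worst truth on offer, still above 1

    # Blindly taking "whatever was optimised for maximum utility" points at the falsehood...
    optimised_pick = "falsehood"

    # ...but comparing the actual values reverses the answer.
    better_pick = "falsehood" if best_falsehood_utility > worst_truth_utility else "truth"
    print(optimised_pick, better_pick)  # prints: falsehood truth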

Replies from: Larks
comment by Larks · 2012-10-17T21:51:10.205Z · LW(p) · GW(p)

It's not a counterexample: it's the reason I didn't say "in any circumstance." Nor does it turn it back into a live issue; it's equally obvious in the opposite direction.

comment by roystgnr · 2012-10-16T15:26:19.716Z · LW(p) · GW(p)

By what sort of mechanism does a truth which will not be misleading in any way or cause me to lower the accuracy of any probability estimates nevertheless lead to a reduction in my utility? Is the external world unchanged, but my utility is lowered merely by knowing this brain-melting truth? Is the external world changed for the worse by differing actions of mine, and if so then why did I cause my actions to differ, given that my probability estimate for the false-and-I-already-disbelieved-it statement "these new actions will be more utility-optimal" did not become less accurate?

Replies from: Endovior
comment by Endovior · 2012-10-16T16:37:55.460Z · LW(p) · GW(p)

The problem is that truth and utility are not necessarily correlated. Knowing about a thing, and being able to more accurately assess reality because of it, may not lead you to the results you desire. Even if we ignore entirely the possibility of basilisks, which are not ruled out by the format of the question (eg: there exists an entity named Hastur, who goes to great lengths to torment all humans that know his name), there is also knowledge you/mankind are not ready for (plans for a free-energy device that works as advertised, but which, when distributed and reverse-engineered, leads to an extinction-causing physics disaster).

Even if you yourself are not personally misled, you are dealing with an outcome pump that has taken your utility function into account. Among all possible universes, among all possible facts that fit the pattern, there has to be at least one truth that will have negative consequences for whatever you value, for you are not perfectly rational. The most benign possibilities are those that merely cause you to reevaluate your utility function, and act in ways that no longer maximize what you once valued; and among all possibilities, there could be knowledge which will do worse. You are not perfectly rational; you cannot perfectly foresee all outcomes. A being which has just proved to you that it is perfectly rational, and can perfectly foresee all outcomes, has advised you that the consequence of your knowing this information will be the maximum possible long-term disutility. On what grounds do you disbelieve it?

Replies from: roystgnr
comment by roystgnr · 2012-10-17T05:25:42.731Z · LW(p) · GW(p)

"You are not perfectly rational" is certainly an understatement, and it does seem to be an excellent catch-all for ways in which a non-brain-melting truth might be dangerous to me... but by that token, a utility-improving falsehood might be quite dangerous to me too, no? It's unlikely that my current preferences can accurately be represented by a self-consistent utility function, and since my volition hasn't been professionally extrapolated yet, it's easy to imagine false utopias that might be an improvement by the metric of my current "utility function" but turn out to be dystopian upon actual experience.

Suppose someone's been brainwashed to the point that their utility function is "I want to obey The Leader as best as I can" - do you think that after reflection they'd be better off with a utility-maximizing falsehood or with a current-utility-minimizing truth?

Replies from: Endovior
comment by Endovior · 2012-10-17T05:54:30.998Z · LW(p) · GW(p)

The problem does not concern itself with merely 'better off', since a metric like 'better off' instead of 'utility' implies 'better off' as defined by someone else. Since Omega knows everything you know and don't know (by the definition of the problem, since it's presenting (dis)optimal information based on its knowledge of your knowledge), it is in a position to extrapolate your utility function. Accordingly, it maximizes/minimizes for your current utility function, not its own, and certainly not some arbitrary utility function deemed optimal for humans by whomever. If your utility function is such that you hold the well-being of another above yourself (maybe you're a cultist of some kind, true... but maybe you're just a radically altruistic utilitarian), then the results of optimizing your utility will not necessarily leave you any better off. If you bind your utility function to the aggregate utility of all humanity, then maximizing that is good for all humanity. If you bind it to one specific non-you person, then that person gets maximized utility. Omega does not discriminate between the cases... but if it is trying to minimize your long-term utility, a handy way to do so is to get you to act against your current utility function.

Accordingly, yes; a current-utility-minimizing truth could possibly be 'better' by most definitions for a cultist than a current-utility-maximizing falsehood. Beware, though; reversed stupidity is not intelligence. Being convinced to ruin Great Leader's life or even murder him outright might be better for you than blindly serving him and making him dictator of everything, but that hardly means there's nothing better you could be doing. The fact that there exists a class of perverse utility functions which have negative consequences for those adopting them (and which can thus be positively reversed) does not imply that it's a good idea to try inverting your utility function in general.

comment by Armok_GoB · 2012-10-16T13:37:50.807Z · LW(p) · GW(p)

Can I inject myself with a poison that will kill me within a few minutes and THEN chose the falsehood?

Replies from: Endovior
comment by Endovior · 2012-10-16T13:43:46.709Z · LW(p) · GW(p)

Suicide is always an option. In fact, Omega already presented you with it as an option: it's the consequence of not choosing. If you would in general carry around such a poison with you, and inject it specifically in response to just such a problem, then Omega would already know about that, and the information it offers would take that into account. Omega is not going to give you the opportunity to go home and fetch your poison before choosing a box, though.

EDIT: That said, I find it puzzling that you'd feel the need to poison yourself before choosing the falsehood, which has already been demonstrated to have positive consequences for you. Personally, I find it far easier to visualize a truth so terrible that it leaves suicide the preferable option.

Replies from: Armok_GoB
comment by Armok_GoB · 2012-10-16T15:31:38.013Z · LW(p) · GW(p)

I never said I would do it, just curious.

comment by Lapsed_Lurker · 2012-10-16T11:34:47.838Z · LW(p) · GW(p)

I am worried about "a belief/fact in its class"; the class chosen could have an extreme effect on the outcome.

Replies from: Endovior
comment by Endovior · 2012-10-16T12:34:07.455Z · LW(p) · GW(p)

As presented, the 'class' involved is 'the class of facts which fits the stated criteria'. So, the only true facts which Omega is entitled to present to you are those which are demonstrably true, which are not misleading as specified, which Omega can find evidence to prove to you, and which you could verify yourself with a month's work. The only falsehoods Omega can inflict upon you are those which are demonstrably false (a simple test would show they are false), which you do not currently believe, and which you would disbelieve if presented openly.

Those are fairly weak classes, so Omega has a lot of room to work with.

Replies from: Lapsed_Lurker
comment by Lapsed_Lurker · 2012-10-16T14:49:03.608Z · LW(p) · GW(p)

So, a choice between the worst possible thing a superintelligence can do to you by teaching you an easily-verifiable truth and the most wonderful possible thing by having you believe an untruth. That ought to be an easy choice, except maybe when there's no Omega and people are tempted to signal about how attached to the truth they are, or something.

comment by Richard_Kennaway · 2012-10-16T08:58:34.928Z · LW(p) · GW(p)

I choose the truth.

Omega's assurances imply that I will not be in the valley of bad rationality mentioned later.

Out of curiosity, I also ask Omega to show me the falsehood, without the brain alteration, so I can see what I might have ended up believing.

Replies from: ArisKatsaris, wedrifid, Jay_Schweikert
comment by ArisKatsaris · 2012-10-17T20:01:50.054Z · LW(p) · GW(p)

I wonder if the mere use of Omega is tripping you up regarding this, or if perhaps it's the abstraction of "truth" vs "lie" rather than any concrete example.

So here's an example, straight from a spy-sf thriller of your choice. You're a secret agent, conscripted against your will by a tyrannical dystopian government. Your agency frequently mind-scans you to see if you have revealed your true occupation to anyone, and then kills anyone you have told, to protect the secrecy of your work. They also kill anyone to whom you say that you can't reveal your true occupation, lest they become suspicious - the only allowed course of action is to lie plausibly.

Your dear old mom asks "What kind of job did they assign you to, Richard?" Now, motivated purely by concern for her benefit, do you: a) tell her the truth, condemning her to die, or b) tell her a plausible lie, ensuring her continued survival?

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2012-10-17T21:32:39.161Z · LW(p) · GW(p)

I just don't find these extreme thought experiments useful. Parfit's Hitchhiker is a practical problem. Newcomb's Problem is an interesting conundrum. Omega experiments beyond that amount to saying "Suppose I push harder on this pan of the scales that you can push on the other. Which way will they go?" The question is trite and the answers nugatory. People talk about avoiding the hypothesis, but in these cases the hypothesis should be avoided. To accept it is already to have gone wrong.

Replies from: drethelin, ArisKatsaris
comment by drethelin · 2012-10-18T12:02:15.679Z · LW(p) · GW(p)

If you're trying to find the important decision point in real situations, it can often be helpful to go to extremes in order to establish that things are possible at all. I.e., if the best lie is preferred to the worst truth, that implies that some truths are worse than some lies, and you can start talking about how to figure out which. If you just start with the actual question, you get people who say "No, the truth is most important."

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2012-10-18T12:51:32.164Z · LW(p) · GW(p)

Considering the extreme is only useful if the extreme is a realistic one -- if it is the least convenient possible world. (The meaning of "possible" in this sentence does not include "probability 1/3^^^3".) With extreme Omega-scenarios, the argument is nothing more than an outcome pump: you nail the conclusion you want to the ceiling of p=1 and confabulate whatever scenario is produced by conditioning on that hypothesis. The underlying structure is "Suppose X was the right thing to do -- would X be the right thing to do?", and the elaborate story is just a conjurer's misdirection.

That's one problem. A second problem with hypothetical scenarios, even realistic ones, is that they're a standard dark arts tool. A would-be burner of Dawkins' oeuvre presents a hypothetical scenario where suppressing a work would be the right thing to do, and triumphantly crows after you agree to it, "So, you do believe in censorship, we're just quibbling over which books to burn!" In real life, there's a good reason to be wary of hypotheticals: if you take them at face value, you're letting your opponent write the script, and you will never be the hero in it.

comment by ArisKatsaris · 2012-10-18T11:03:13.073Z · LW(p) · GW(p)

People talk about avoiding the hypothesis, but in these cases the hypothesis should be avoided.

Harmful truths are not extreme hypotheticals, they're a commonly recognized part of everyday existence.

  • You don't show photographs of your poop to a person who is eating - it would be harmful to the eater's appetite.
  • You don't repeat to your children every little grievance you ever had with their other parent - it might be harmful to them.
  • You don't need to tell your child that they're NOT your favourite child either.

Knowledge tends to be useful, but there's no law in the universe that forces it to be always beneficial to you. You've not indicated any reason that it is so obliged to be in every single scenario.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2012-10-18T11:05:51.160Z · LW(p) · GW(p)

Knowledge tends to be useful, but there's no law in the universe that forces it to be always beneficial to you. You've not indicated any reason that it is so obliged to be in every single scenario.

Since I have not claimed it to be so, it is completely appropriate that I have given no reasons for it to be so.

comment by wedrifid · 2012-10-17T09:46:42.795Z · LW(p) · GW(p)

I choose the truth.

Then you lose. You also maintain your ideological purity with respect to epistemic rationality. Well, perhaps not. You will likely end up with an overall worse map of the territory (given that the drastic loss of instrumental resources probably kills you outright rather than leaving you able to seek out other "truth" indefinitely). We can instead say that at least you refrained from making a deontological violation against an epistemic-rationality-based moral system.

comment by Jay_Schweikert · 2012-10-16T17:44:56.487Z · LW(p) · GW(p)

Even if it's a basilisk? Omega says: "Surprise! You're in a simulation run by what you might as well consider evil demons, and anyone who learns of their existence will be tortured horrifically for 3^^^3 subjective years. Oh, and by the way, the falsehood was that the simulation is run by a dude named Kevin who will offer 3^^^3 years of eutopian bliss to anyone who believes he exists. I would have used outside-of-the-Matrix magic to make you believe that was true. The demons were presented with elaborate thought experiments when they studied philosophy in college, so they think it's funny to inflict these dilemmas on simulated creatures. Well, enjoy!"

If you want to say this is ridiculously silly and has no bearing on applied rationality, well, I agree. But that response pretty clearly meets the conditions of the original hypothetical, which is why I would trust Omega. If I somehow learned that knowing the truth could cause so much disutility, I would significantly revise my estimate that we live in a Lovecraftian horror-verse with basilisks floating around everywhere.

Replies from: wedrifid, MixedNuts, Richard_Kennaway
comment by wedrifid · 2012-10-17T09:47:23.319Z · LW(p) · GW(p)

, the falsehood was that the simulation is run by a dude named Kevin who will offer 3^^^3 of eutopian bliss to anyone who believes he exists

3^^^3 units of simulated Kratom?

Replies from: Jay_Schweikert
comment by Jay_Schweikert · 2012-10-17T14:43:21.797Z · LW(p) · GW(p)

Oops, meant to say "years." Fixed now. Thanks!

Replies from: wedrifid
comment by wedrifid · 2012-10-17T15:22:06.875Z · LW(p) · GW(p)

I honestly didn't notice the missing word. I seem to have just read in "units" as a default. My reference was to the long-time user by that name who does, in fact, deal in bliss of a certain kind.

comment by MixedNuts · 2012-10-16T18:14:32.961Z · LW(p) · GW(p)

Omega could create the demons when you open the box, or if that's too truth-twisting, before asking you.

comment by Richard_Kennaway · 2012-10-16T20:40:52.376Z · LW(p) · GW(p)

If you want to say this is ridiculously silly and has no bearing on applied rationality, well, I agree.

That's the problem. The question is the rationalist equivalent of asking "Suppose God said he wanted you to kidnap children and torture them?" I'm telling Omega to just piss off.

Replies from: Endovior
comment by Endovior · 2012-10-17T07:02:21.524Z · LW(p) · GW(p)

The bearing this has on applied rationality is that this problem serves as a least convenient possible world for strict attachment to a model of epistemic rationality. Where the two conflict, you should probably prefer to do what is instrumentally rational over what is epistemically rational, because it's rational to win, not to complain that you're being punished for making the "right" choice. As with Newcomb's Problem, if you can predict in advance that the choice you've labelled "right" has less utility than a "wrong" choice, that implies that you have made an error in assessing the relative utilities of the two choices. Sure, Omega's being a jerk. It does that. But that doesn't change the situation, which is that you are being asked to choose between two outcomes of differing utility, and are being trapped into the option of lesser utility (indeed, vastly lesser utility) by nothing but your own "rationality". This implies a flaw in your system of rationality.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2012-10-17T08:42:51.091Z · LW(p) · GW(p)

The bearing this has on applied rationality is that this problem serves as a least convenient possible world

When the least convenient possible world is also the most impossible possible world, I find the exercise less than useful. It's like Pascal's Mugging. Sure, there can be things you're better off not knowing, but the thing to do is to level up your ability to handle it. The fact that however powerful you imagine yourself, you can imagine a more powerful Omega is like asking whether God can make a rock so heavy he can't lift it.

Replies from: wedrifid
comment by wedrifid · 2012-10-17T09:25:16.855Z · LW(p) · GW(p)

When the least convenient possible world is also the most impossible possible world, I find the exercise less than useful. It's like Pascal's Mugging. Sure, there can be things you're better off not knowing, but the thing to do is to level up your ability to handle it.

Leveling up is great, but I'm still not going to try to beat up an entire street-gang just to steal their bling. I don't have that level of combat prowess right now, even though it is entirely possible to level up enough for that kind of activity to be possible and safe. It so happens that neither I nor any non-fictional human is at that level or likely to be soon. In the same way, there is a huge space of possible agents that would be able to calculate true information that it would be detrimental for me to have. For most humans just another particularly manipulative human would be enough, and for all the rest any old superintelligence would do.

The fact that however powerful you imagine yourself, you can imagine a more powerful Omega is like asking whether God can make a rock so heavy he can't lift it.

No, this is a cop-out. Humans do encounter situations where they face agents more powerful than themselves, including agents that are more intelligent and able to exploit human weaknesses. Just imagining yourself to be more powerful and more able to "handle the truth" isn't especially useful, and trying to dismiss all such scenarios as God combating his own omnipotence would be irresponsible.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2012-10-17T09:35:35.810Z · LW(p) · GW(p)

I don't have that level of combat prowess right now

Omega isn't showing up right now.

It so happens that neither I nor any non-fictional human is at that level or likely to be soon.

No non-fictional Omega is at that level either.

Replies from: wedrifid
comment by wedrifid · 2012-10-17T09:46:06.772Z · LW(p) · GW(p)

Then it would seem you need to delegate your decision theoretic considerations to those better suited to abstract analysis.

comment by RolfAndreassen · 2012-10-15T21:56:08.928Z · LW(p) · GW(p)

Does the utility calculation from the false belief include utility from the other beliefs I will have to overwrite? For example, suppose the false belief is "I can fly". At some point, clearly, I will have to rationalise away the pain of my broken legs from jumping off a cliff. Short of reprogramming my mind to really not feel the pain anymore - and then we're basically talking about wireheading - it seems hard to come up with any fact, true or false, that will provide enough utility to overcome that sort of thing.

I additionally note that the maximum disutility has no lower bound, in the problem as given; for all I know it's the equivalent of three cents. Likewise the maximum utility has no lower bound, in the problem as written. Perhaps Omega ought to provide some comparisons; for example he might say that the disutility of knowing the true fact is at least equal to breaking an arm, or some such calibration.

As the problem is written, I'd take the true fact.

Replies from: AlexMennen, Endovior
comment by AlexMennen · 2012-10-16T00:05:54.093Z · LW(p) · GW(p)

"I can fly" doesn't sound like a particularly high-utility false belief. It sounds like you are attacking a straw man. I'd assume that if the false information is a package of pieces of false information, then the entire package is optimized for being high-utility.

Replies from: RolfAndreassen
comment by RolfAndreassen · 2012-10-16T04:56:48.471Z · LW(p) · GW(p)

"I can fly" doesn't sound like a particularly high-utility false belief.

True, but that's part of my point: The problem does not specify that the false belief has high utility, only that it has the highest possible utility. No lower bound.

Additionally, any false belief will bring you into conflict with reality eventually. "I can fly" just illustrates this dramatically.

Replies from: AlexMennen
comment by AlexMennen · 2012-10-16T06:11:56.033Z · LW(p) · GW(p)

Of course there will be negative-utility results of most false beliefs. This does not prove that all false beliefs will be net negative utility. The vastness of the space of possible beliefs should suggest that there are likely to be many approximately harmless false ones, and some very beneficial ones, despite the tendency for false beliefs to be negative utility. In fact, Kindly gives an example of each here.

In the example of believing some sufficiently hard-to-factor composite to be prime, you would not naturally be able to cause a conflict anyway, since it is too hard to show that it is not prime. In the FAI example, it might have to keep you in the dark for a while and then fool you into thinking that someone else had created an FAI separately, so you wouldn't have to know that your game was actually an FAI. The negative utility from this conflict resolution would be negligible compared to the benefits. The negative utility arising from belief conflict resolution in your example of "I can fly" does not even come close to generalizing to all possible false beliefs.

comment by Endovior · 2012-10-16T00:16:35.475Z · LW(p) · GW(p)

As written, the utility calculation explicitly specifies 'long-term' utility; it is not a narrow calculation. This is Omega we're dealing with, it's entirely possible that it mapped your utility function from scanning your brain, and checked all possible universes forward in time from the addition of all possible facts to your mind, and took the worst and best true/false combination.

Accordingly, a false belief that will lead you to your death or maiming is almost certainly non-optimal. No, this is the one false thing that has the best long-term consequences for you, as you value such things, out of all the false things you could possibly believe.

True, the maximum utility/disutility has no lower bound. This is intentional. If you really believe that your position is such that no true information can hurt you, and/or no false information can benefit you, then you could take the truth. This is explicitly the truth with the worst possible long-term consequences for whatever it is you value.

Yes, it's pretty much defined as a sucker bet, implying that Omega is attempting to punish people for believing that there is no harmful true information and no advantageous false information. If you did, in fact, believe that you couldn't possibly gain by believing a falsehood, or suffer from learning a truth, this is the least convenient possible world.

comment by RichardHughes · 2012-10-17T19:41:23.953Z · LW(p) · GW(p)

The parallels with Newcomb's Paradox are obvious, and the moral is the same. If you aren't prepared to sacrifice a convenient axiom for greater utility, you're not really rational. In the case of Newcomb's Paradox, that axiom is Dominance. In this case, that axiom is True Knowledge Is Better Than False Knowledge.

In this instance, go for falsehood.