Posts

Problem of Optimal False Information 2012-10-15T21:42:33.679Z

Comments

Comment by Endovior on Rationality Quotes November 2014 · 2014-11-12T22:18:42.108Z · LW · GW

It seems to me that this is related to the idea of roles. If you don't see yourself as being responsible for handling emergencies, you probably won't do anything about them, hoping someone else will. But if you do see yourself as being the person responsible for handling a crisis situation, then you're a lot more likely to do something about it, because you've taken that responsibility upon yourself.

It's a particularly nuanced response to take that kind of responsibility for a situation and then, after carefully evaluating the options, decide that the best course is to do nothing, since doing nothing conflicts with that cultivated need to respond. That said, it could easily be a better choice than the alternative of making a probably-bad decision on the spur of the moment with incomplete information. Used properly, it's a level above the position of decisive but unplanned action... though on the surface, it can be hard to distinguish from the default bystander position of passing off responsibility.

Comment by Endovior on Welcome to Less Wrong! (July 2012) · 2013-03-21T10:18:45.195Z · LW · GW

I think I understand. There is something of what you describe here that resonates with my own past experience.

I myself was always much smarter than my peers; this isolated me, as I grew contemptuous of the weakness I found in others, an emotion I often found difficult to hide. At the same time, though, I was not perfect; the ease with which I was able to do many things led me to insufficient conscientiousness, and to the usual failures arising from it. These failures would lead to bitter cycles of guilt and self-loathing, as I found the weakness I so hated in others exposed within myself.

Like you, I've found myself becoming more functional over time, as my time in university gives me a chance to repair my own flaws. Even so, it's hard, and not entirely something I've been able to do on my own... I wouldn't have been able to come this far without having sought, and received, help. If you're anything like me, you don't want to seek help directly; that would be admitting weakness, and at the times when you hurt the worst, you'd rather do anything, rather hurt yourself, rather die than admit to your weakness, to allow others to see how flawed you are.

But ignoring your problems doesn't make them go away. You need to do something about them. There are people out there who are willing to help you, but they can't do so unless you make the first move. You need to take the initiative in seeking help; and though it will seem like the hardest thing you could do... it's worth it.

Comment by Endovior on Rationality Quotes March 2013 · 2013-03-06T20:54:58.609Z · LW · GW

Not necessarily. Cosmic rays are just radiation at particularly high energies. So if it interprets everything along those lines, it's seeing everything purely in terms of radiation energy... in other words, 'normal, uninteresting background, free of cosmic rays'. So things that don't register with enough energy to be cosmic rays, like itself, parse as meaningless random fluctuations... presumably, if it were 'intelligent', it would think that it existed for no reason, as a matter of random chance, like any other bit of background radiation below the cosmic-ray threshold, without losing any ability to perceive or understand cosmic rays.

Comment by Endovior on How An Algorithm Feels From Inside · 2013-02-13T07:09:26.819Z · LW · GW

As a former Objectivist, I understand the point being made.

That said, I no longer agree... I now believe that Ayn Rand made an axiom-level mistake. Existence is not Identity. To assume that Existence is Identity is to assume that all things have concrete properties, which exist and can therefore be discovered. This is demonstrably false; at the fundamental level of reality, there is Uncertainty. Quantum-level effects inherent in existence preclude the possibility of absolute knowledge of all things; there are parts of reality which are actually unknowable.

Moreover, we as humans do not have absolute knowledge of things. Our knowledge is limited, as is the information we're able to gather about reality. We don't have the ability to gather all the relevant information needed to be certain of anything, nor the luxury of postponing decision-making while we gather it. We need to make decisions sooner than that, and we need to make them knowing that our knowledge will always be imperfect.

Accordingly, I find that a better axiom would be "Existence is Probability". I'm not a good enough philosopher to fully extrapolate the consequences of that... but I do think that if Ayn Rand had started with a root-level acknowledgement of fallibility, it would've helped her avoid a lot of the problems she wound up falling into later on.

Also, welcome, new person!

Comment by Endovior on Rationality Quotes January 2013 · 2013-01-14T17:25:03.614Z · LW · GW

Yeah, that happens too. Best argument I've gotten in support of the position is that they feel that they are able to reasonably interpret the will of God through scripture, and thus instructions 'from God' that run counter to that must be false. So it's not quite the same as their own moral intuition vs a divine command, but their own scriptural learning used as a factor to judge the authenticity of a divine command.

Comment by Endovior on Rationality Quotes January 2013 · 2013-01-14T12:14:11.959Z · LW · GW

This argument really isn't very good. It works on precisely none of the religious people I know, because:

A: They don't believe that God would tell them to do anything wrong.

B: They believe in Satan, who they are quite certain would tell them to do something wrong.

C: They also believe that Satan can lie to them and convincingly pretend to be God.

Accordingly, any voice claiming to be God and also telling them to do something they feel is evil must be Satan trying to trick them, and is disregarded. They actually think like that, and can quote relevant scripture to back their position, often from memory. This is probably better than a belief framework that would let them go out and start killing people if the right impulse struck them, but it's also not a worldview that can be moved by this sort of argument.

Comment by Endovior on Rationality Quotes January 2013 · 2013-01-13T01:26:31.167Z · LW · GW

Well, that gets right to the heart of the Friendliness problem, now doesn't it? Mother Brain is the machine that can program, and she reprogrammed all the machines that 'do evil'. It is likely, then, that the first machine that Mother Brain reprogrammed was herself. If a machine is given the ability to reprogram itself, and uses that ability to make itself decide to do things that are 'evil', is the machine itself evil? Or does the fault lie with the programmer, for failing to take into account the possibility that the machine might change its utility function? It's easy to blame Mother Brain; she's a major antagonist in her timeline. It's less easy to think back to some nameless programmer behind the scenes, considering the problem of coding an intelligent machine, and deciding how much freedom to give it in making its own decisions.

In my view, Lucca is taking personal responsibility with that line. 'Machines aren't capable of evil', (they can't choose to do anything outside their programming). 'Humans make them that way', (so the programmer has the responsibility of ensuring their actions are moral). There are other interpretations, but I'd be wary of any view that shifts moral responsibility to the machine. If you, as a programmer, give up any of your moral responsibility to your program, then you're basically trying to absolve yourself of the consequences if anything goes wrong. "I gave my creation the capacity to choose. Is it my fault if it chose evil?" Yes, yes it is.

Comment by Endovior on Rationality Quotes January 2013 · 2013-01-13T00:52:40.898Z · LW · GW

My point in posting it was that UFAI isn't 'evil', it's badly programmed. If an AI proves itself unfriendly and does something bad, the fault lies with the programmer.

Comment by Endovior on Rationality Quotes January 2013 · 2013-01-11T21:39:26.320Z · LW · GW

Machines aren't capable of evil. Humans make them that way.

-Lucca, Chrono Trigger

Comment by Endovior on Rationality Quotes January 2013 · 2013-01-10T13:31:31.962Z · LW · GW

This. Took a while to build that foundation, and a lot of contemplation deciding what needed to be there... but once built, it's solid, and not given to reorganization on a whim. That's not because I'm closed-minded or anything; it's because something like the belief that the evidence provided by your own senses is valid really is fundamental to believing anything else at all. Not believing that implies not believing a whole host of other things, and develops into some really strange philosophies. As a philosophical position, this is called 'empiricism', and it's actually more fundamental than belief in only the physical world (i.e. disbelief in spiritual phenomena, 'materialism'), because you need a thing that says what evidence counts as valid before you can have a thing that says 'and based on this evidence, I conclude'.

Comment by Endovior on Rationality Quotes January 2013 · 2013-01-03T18:07:47.919Z · LW · GW

If your ends don’t justify the means, you’re working on the wrong project.

-Jobe Wilkins (Whateley Academy)

Comment by Endovior on New Sequences and General Update From Castify · 2012-12-13T02:15:11.971Z · LW · GW

Listening through the sequences available now, a couple issues:

1: The podcasts available, when loaded in iTunes, aren't in the proper order; the individual posts come through in a haphazard, random sort of order, which means that listening to them in sequence requires consulting the correct order in another window, which is awkward. I am inclined to believe that this has something to do with the order in which they are recorded on your end, though I don't actually know enough about the mechanics of podcasting to be certain of this.

2: It is insufficiently obvious that The Simple Truth is bundled with Map and Territory. After finishing Mysterious Answers to Mysterious Questions, and looking for something else to listen to, I decided to grab The Simple Truth quickly... and, about 45 minutes later, having finished it, I moved on to Map and Territory... and then promptly smacked my forehead, seeing a little disclaimer that mentioned that The Simple Truth was included (I bought Map and Territory anyways). It wasn't until after I investigated further that I noticed a similar disclaimer on the purchase page for The Simple Truth; I'd missed that the first time around. This should probably be mentioned on the channels page directly.

Comment by Endovior on LessWrong podcasts · 2012-12-12T06:43:17.859Z · LW · GW

Awesome, was waiting for that to be fixed before sending you monies. Subscribed now.

Comment by Endovior on LessWrong podcasts · 2012-12-06T00:32:43.359Z · LW · GW

Thanks for the quick response. I figured that you probably wouldn't be trying to do that (it'd be awful for business, for one thing), but from what was written on the site as it stood, I couldn't find any reading of it that said anything else.

Comment by Endovior on LessWrong podcasts · 2012-12-05T22:03:33.836Z · LW · GW

Speaking personally, I'm really put off by the payment model. You're presenting this as "$5 for a one-year subscription". Now, if this were "$5 for a one-year subscription to all our Less Wrong content, released regularly on the following schedule", then that would seem fair value for money. On the other hand, if it were "$5 to buy this sequence, and you can buy other sequences once we have them ready", then that would be okay, too. As is, though, it's coming across as "$5 to subscribe to this sequence for one year, plus more money to subscribe to any other sequences we put out when we have them, and if something happens to your files at some point in the future, then too bad; we'll charge you another $5 to get it again, despite the fact that we have your account information on file and know you paid us". And that is... not good. It strikes me as overly greedy, and to no real purpose, since you're not locking the files or anything to stop me from playing them after the subscription expires (incidentally, also a business model I would not support).

To summarize: I'd willingly 'subscribe' to content that is coming out on a regular basis, or 'buy' content that is complete as is. I will not 'subscribe' to content that is complete as is, since the implication is that my right to the content is temporary and revocable.

Comment by Endovior on Newcomb's Problem and Regret of Rationality · 2012-11-04T04:34:14.284Z · LW · GW

Not how Omega looks at it. By definition, Omega looks ahead, sees a branch in which you would go for Box A, and puts nothing in Box B. There's no cheating Omega... just as you can't think "I'm going to one-box, but then open Box A after I've pocketed the million", there's no "I'm going to open Box B first, and decide whether or not to open Box A afterward". Unless Omega is quite sure that you have precommitted to never opening Box A at all, Box B contains nothing; the strategy of leaving Box A as a possibility in case Box B doesn't pan out is a two-box strategy, and Omega doesn't allow it.
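To make the "Omega doesn't allow it" point concrete, here is a toy expected-value comparison. It's a minimal sketch using the $1,000 / $1,000,000 figures from this thread; the predictor-accuracy parameter is my own illustrative assumption, not something from the problem as stated.

```python
# Toy sketch: expected payoffs in Newcomb's problem under a predictor with
# accuracy p. Box A always holds $1,000; Box B holds $1,000,000 only if Omega
# predicted you would leave Box A alone.

def expected_payoff(one_box: bool, p: float) -> float:
    if one_box:
        # Box B is full only when Omega correctly predicted one-boxing.
        return p * 1_000_000
    # Two-boxing: you always get Box A's $1,000; Box B is full only if Omega erred.
    return 1_000 + (1 - p) * 1_000_000

for p in (0.5, 0.9, 0.99, 1.0):
    print(f"p={p}: one-box={expected_payoff(True, p):,.0f}, "
          f"two-box={expected_payoff(False, p):,.0f}")
```

For any accuracy above roughly 50.05%, one-boxing comes out ahead, which is why a "keep Box A as a fallback" strategy is, from Omega's point of view, just two-boxing.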

Comment by Endovior on Newcomb's Problem and Regret of Rationality · 2012-11-03T16:56:43.624Z · LW · GW

Okay... so since you already know, in advance of getting the boxes, that that's what you'd do, Omega can deduce that. So you open Box B, find it empty, and then take Box A. Enjoy your $1000. Omega doesn't need to infinite-loop that one; he knows that you're the kind of person who'd try for Box A too.

Comment by Endovior on Problem of Optimal False Information · 2012-10-19T05:20:55.960Z · LW · GW

Sure, that's a valid way of looking at things. If you value happiness over truth, you might consider not expending a great deal of effort in digging into those unpleasant truths, and retain your pleasant illusions. Of course, the nature of the choice is such that you probably won't realize that it is such a choice until you've already made it.

Comment by Endovior on Problem of Optimal False Information · 2012-10-18T03:59:41.159Z · LW · GW

I don't have a valid proof for you. Omega is typically defined like that (arbitrarily powerful and completely trustworthy), but a number of the problems I've seen of this type tend to just say 'Omega appears' and assume that you know Omega is the defined entity simply because it self-identifies as Omega, so I felt the need to specify that in this instance, Omega has just proved itself.

Theoretically, you could verify the trustworthiness of a superintelligence by examining its code... but even if we ignore the fact that you're probably not equipped to comprehend the code of a superintelligence (really, you'd probably need another completely trustworthy superintelligence to interpret the code for you, which rather defeats the point), there's still the problem that an untrustworthy superintelligence could provide you with a completely convincing forgery, which could potentially be designed in such a way that it would perform every action in the same way as the real one (thereby being evaluated as 'trustworthy' under simulation)... except for the one action on which the untrustworthy superintelligence is choosing to deceive you. Accordingly, I think that even a superintelligence probably can't be sure about the trustworthiness of another superintelligence, regardless of evidence.

Comment by Endovior on Problem of Optimal False Information · 2012-10-17T11:20:03.419Z · LW · GW

Eh, that point probably was a bit weak. I probably could've just gotten away with saying 'you are required to choose a box'. Or, come to think of it, 'failure to open the white box and investigate its contents results in the automatic opening and deployment of the black box after X time'.

Comment by Endovior on Problem of Optimal False Information · 2012-10-17T07:02:21.524Z · LW · GW

The bearing this has on applied rationality is that this problem serves as a least convenient possible world for strict attachment to a model of epistemic rationality. Where the two conflict, you should probably prefer to do what is instrumentally rational over what is epistemically rational, because it's rational to win, not to complain that you're being punished for making the "right" choice. As with Newcomb's Problem, if you can predict in advance that the choice you've labelled "right" has less utility than a "wrong" choice, that implies you have made an error in assessing the relative utilities of the two choices. Sure, Omega's being a jerk. It does that. But that doesn't change the situation, which is that you are being asked to choose between two options of differing utility, and are being trapped into the option of lesser utility (indeed, vastly lesser utility) by nothing but your own "rationality". This implies a flaw in your system of rationality.

Comment by Endovior on Problem of Optimal False Information · 2012-10-17T06:50:20.647Z · LW · GW

A sensible thing to consider. You are effectively dealing with an outcome pump, after all; the problem leaves plenty of solution space available, and outcome pumps usually don't produce an answer you'd expect; they instead produce something that matches the criteria even better than anything you were aware of.

Comment by Endovior on Problem of Optimal False Information · 2012-10-17T06:44:46.689Z · LW · GW

Quite a detailed analysis, and correct within its assumptions. It is important to know where Omega is getting its information on your utility function. That said, since Omega implicitly knows everything you know (since it needs to know that in order to also know everything you don't know, and thus to be able to pose the problem at all), it implicitly knows your utility function already. Obviously, accepting a falsehood that perverts your utility function into something counter to your existing utility function, just to maximize an easier target, would be a disutility to you as you are at present, and not something you would accept if you were aware of it. Accordingly, it is a safe assumption that Omega has based its calculations on your utility before accepting the information, and for the purposes of this problem, that is exactly the case. This is your case (2); if a falsehood intrinsically conflicts with your utility function in whatever way, it generates disutility (and thus is probably suboptimal). If your utility function is inherently hostile to such changes, this presents a limitation on the set of facts Omega can impose upon you.

That said, your personal answer seems to place rather conservative bounds on the nature of what Omega can do to you. Omega has not presented bounds on its utilities; instead, it has advised you that they are maximized within fairly broad terms. Similarly, it has not assured you of anything about the relative values of those utilities, but the structure of the problem as Omega presents it (which you know is correct, because Omega has already arbitrarily demonstrated its power and trustworthiness) means you are dealing with an outcome pump attached directly to your utility function. Since the structure of the problem gives it a great deal of room in which to operate, the only real limitation is the nature of your own utility function. Sure, it's entirely possible that your utility function could be laid out in such a way as to strongly emphasize the disutility of misinformation... but that just limits the nice things Omega can do for you; it does nothing to save you from the bad things it can do to you. It remains valid to show you a picture and say 'the picture you are looking at is a basilisk; it causes any human who sees it to die within 48 hours'. Even without assuming basilisks, you're still dealing with a hostile outcome pump. There's bound to be some truth that you haven't considered that will lead you to a bad end. And if you want to examine it in terms of Everett branches, Omega is arbitrarily powerful. It has the power to compute all possible universes and give you the information which has maximally bad consequences for your utility function in aggregate across all possible universes (this implies, of course, that Omega is outside the Matrix, but pretty much any problem invoking Omega does that).

Even so, Omega doesn't assure you of anything regarding the specific weights of the two pieces of information. Utility functions differ, and since there's nothing Omega could say that would be valid for all utility functions, there's nothing it will say at all. It's left to you to decide which you'd prefer.

That said, I do find it interesting to note under which lines of reasoning people will choose something labelled 'maximum disutility'. I had thought it to be a more obvious problem than that.

Comment by Endovior on Problem of Optimal False Information · 2012-10-17T05:54:30.998Z · LW · GW

The problem does not concern itself with merely 'better off', since a metric like 'better off' instead of 'utility' implies 'better off' as defined by someone else. Since Omega knows everything you know and don't know (by the definition of the problem, since it's presenting (dis)optimal information based on its knowledge of your knowledge), it is in a position to extrapolate your utility function. Accordingly, it maximizes/minimizes for your current utility function, not its own, and certainly not some arbitrary utility function deemed optimal for humans by whomever. If your utility function is such that you hold the well-being of another above your own (maybe you're a cultist of some kind, true... but maybe you're just a radically altruistic utilitarian), then the results of optimizing your utility will not necessarily leave you any better off. If you bind your utility function to the aggregate utility of all humanity, then maximizing that is good for all humanity. If you bind it to one specific non-you person, then that person gets maximized utility. Omega does not discriminate between the cases... but if it is trying to minimize your long-term utility, a handy way to do so is to get you to act against your current utility function.

Accordingly, yes; a current-utility-minimizing truth could possibly be 'better' by most definitions for a cultist than a current-utility-maximizing falsehood. Beware, though; reversed stupidity is not intelligence. Being convinced to ruin Great Leader's life or even murder him outright might be better for you than blindly serving him and making him dictator of everything, but that hardly means there's nothing better you could be doing. The fact that there exists a class of perverse utility functions which have negative consequences for those adopting them (and which can thus be positively reversed) does not imply that it's a good idea to try inverting your utility function in general.

Comment by Endovior on Problem of Optimal False Information · 2012-10-17T05:36:28.185Z · LW · GW

As stated, the only trap the white box contains is information... which is quite enough, really. A prediction can be considered a true statement if it is a self-fulfilling prophecy, after all. More seriously, if such a thing as a basilisk is possible, the white box will contain a basilisk. Accordingly, it's feasible that the fact could be something like "Shortly after you finish reading this, you will drop into an irreversible, excruciatingly painful, minimally aware coma, in which by all outward appearances you look fine, yet the world goes downhill around you while you get made to live forever", with some kind of sneaky pattern encoded in the text and the border of the page or whatever that causes your brain to lock up and start firing pain receptors, such that the pattern is self-sustaining. Everything else about the world and living forever and such would have to have been something that would have happened anyway, absent any action from you to prevent it, but if Omega knows UFAI will happen near enough in the future, and knows that such a UFAI would catch you in your coma and stick you with immortality nanites without caring about your torture-coma state... then yeah, just such a statement is entirely possible.

Comment by Endovior on Problem of Optimal False Information · 2012-10-16T16:46:50.166Z · LW · GW

Okay, so you are a mutant, and you inexplicably value nothing but truth. Fine.

The falsehood can still be a list of true things, tagged with 'everything on this list is true', but with an inconsequential falsehood mixed in, and it will still have net long-term utility for the truth-desiring utility function, particularly since you will soon be able to identify the falsehood, and with your mutant mind, quickly locate and eliminate the discrepancy.

The truth has been defined as something that cannot lower the accuracy of your beliefs, yet it still has maximum possible long-term disutility, and your utility function is defined exclusively in terms of the accuracy of your beliefs. Fine. Mutant that you are, the truth of maximum disutility is one which will lead you directly to a very interesting problem that will distract you for an extended period of time, but which you will ultimately be unable to solve. This wastes a great deal of your time, but leaves you with no greater utility than you had before, constituting disutility in terms of the opportunity cost of that time which you could've spent learning other things. Maximum disutility could mean that this is a problem that will occupy you for the rest of your life, stagnating your attempts to learn much of anything else.

Comment by Endovior on Problem of Optimal False Information · 2012-10-16T16:37:55.460Z · LW · GW

The problem is that truth and utility are not necessarily correlated. Knowing about a thing, and being able to more accurately assess reality because of it, may not lead you to the results you desire. Even if we ignore entirely the possibility of basilisks, which are not ruled out by the format of the question (eg: there exists an entity named Hastur, who goes to great lengths to torment all humans who know his name), there is also knowledge you/mankind are not ready for (a plan for a free-energy device that works as advertised but, when distributed and reverse-engineered, leads to an extinction-causing physics disaster). Even if you yourself are not personally misled, you are dealing with an outcome pump that has taken your utility function into account. Among all possible universes, among all possible facts that fit the pattern, there has to be at least one truth that will have negative consequences for whatever you value, for you are not perfectly rational. The most benign possibilities are those that merely cause you to reevaluate your utility function, and act in ways that no longer maximize what you once valued; and among all possibilities, there could be knowledge which will do worse. You are not perfectly rational; you cannot perfectly foresee all outcomes. A being which has just proved to you that it is perfectly rational, and can perfectly foresee all outcomes, has advised you that the consequences of you knowing this information will be the maximum possible long-term disutility. On what grounds do you disbelieve it?

Comment by Endovior on Problem of Optimal False Information · 2012-10-16T14:04:48.059Z · LW · GW

That's exactly why the problem invokes Omega, yes. You need an awful lot of information to know which false beliefs actually are superior to the truth (and which facts might be harmful), and by the time you have it, it's generally too late.

That said, the best real-world analogy that exists remains amnesia drugs. If you did have a traumatic experience, serious enough that you felt unable to cope with it, and you were experiencing PTSD or depression related to the trauma that impeded you from continuing with your life... but a magic pill could make it all go away, with no side effects, and with enough precision that you'd forget only the traumatic event... would you take the pill?

Comment by Endovior on Problem of Optimal False Information · 2012-10-16T13:43:46.709Z · LW · GW

Suicide is always an option. In fact, Omega has already presented it to you as an option: it's the consequence of not choosing. If you would in general carry around such a poison with you, and inject it specifically in response to just such a problem, then Omega would already know about that, and the information it offers would take that into account. Omega is not going to give you the opportunity to go home and fetch your poison before choosing a box, though.

EDIT: That said, I find it puzzling that you'd feel the need to poison yourself before choosing the falsehood, which has already been demonstrated to have positive consequences for you. Personally, I find it far easier to visualize a truth so terrible that it leaves suicide the preferable option.

Comment by Endovior on Problem of Optimal False Information · 2012-10-16T13:34:44.355Z · LW · GW

True; being deluded about lotteries is unlikely to have positive consequences normally, so unless something weird is going to go on in the future (eg: the lottery machine's random number function is going to predictably malfunction at some expected time, producing a predictable set of numbers, which Omega then imposes on your consciousness as being 'lucky'), that's not a belief with positive long-term consequences. That's not an impossible set of circumstances, but it is an easy-to-specify set, so in terms of discussing 'a false belief which would be long-term beneficial', it leaps readily to mind.

Comment by Endovior on How To Have Things Correctly · 2012-10-16T13:21:44.947Z · LW · GW

Wow. This is particularly interesting to me, because I already felt this way without knowing why, not having consciously examined the feeling. I know that I already felt uncomfortable around gift-giving holidays, and this provides context for that; I don't particularly enjoy receiving incorrect things, and indeed, I have several boxes full of incorrect things following me around that I can't get rid of (even if I can't think of any reason to have or use a thing, it feels like losing hit points to dispose of it). For the same reason, I feel uncomfortable giving incorrect things, but the conventions of the relevant holidays make it unacceptable to go looking for the kind of information I need to give correct things (having noticed this for a while, I switched over to exclusively giving gift certificates a couple of years back; it seems to help).

Now that I've consciously identified the feeling, however, I can more proactively approach the problem, by communicating these feelings to those with whom I'd normally exchange gifts.

Comment by Endovior on Problem of Optimal False Information · 2012-10-16T12:50:00.327Z · LW · GW

That's why the problem specified 'long-term' utility. Omega is essentially saying 'I have here a lie that will improve your life as much as any lie possibly can, and a truth that will ruin your life as badly as any truth can; which would you prefer to believe?'

Yes, believing a lie does imply that your map has gotten worse, and rationalizing your belief in the lie (which we're all prone to do to things we believe) will make it worse. Omega has specified that this lie has optimal utility among all lies that you, personally, might believe; being Omega, it is as correct in saying this as it is possible to be.

On the other hand, the box containing the least optimal truth is a very scary box. Presume first that you are particularly strong emotionally and psychologically; there is no fact that will directly drive you to suicide. Even so, there are probably facts out there that will, if comprehended and internalized, corrupt your utility function, leading you to work directly against all you currently believe in. There's probably something even worse than that out there in the space of all possible facts, but the disutility is measured against your utility function as it was when Omega first encountered you, so 'you change your ethical beliefs, and proceed to spend your life working to spread disutility, as you formerly defined it' is on the list of possibilities.

Comment by Endovior on Problem of Optimal False Information · 2012-10-16T12:34:07.455Z · LW · GW

As presented, the 'class' involved is 'the class of facts which fits the stated criteria'. So, the only true facts which Omega is entitled to present to you are those which are demonstrably true, which are not misleading as specified, which Omega can find evidence to prove to you, and which you could verify yourself with a month's work. The only falsehoods Omega can inflict upon you are those which are demonstrably false (a simple test would show they are false), which you do not currently believe, and which you would disbelieve if presented openly.

Those are fairly weak classes, so Omega has a lot of room to work with.

Comment by Endovior on Problem of Optimal False Information · 2012-10-16T12:25:57.380Z · LW · GW

The original problem didn't specify how long you'd continue to believe the falsehood. You do, in fact, believe it, so ceasing to believe it would be at least as hard as changing your mind in ordinary circumstances (not easy, nor impossible). The code for FAI probably doesn't run on your home computer, so there's that... you go off looking for someone who can help you with your video game code, someone else figures out what it is you've come across and gets the hardware to implement it, and suddenly the world gets taken over. Depending on how attentive you were to the process, you might not connect the two immediately, but if you were there when people were running things, then that's pretty good evidence that something more serious than a video game happened.

Comment by Endovior on Problem of Optimal False Information · 2012-10-16T00:34:06.366Z · LW · GW

That is the real question, yes. That kind of self-modification is already cropping up, in certain fringe cases as mentioned; it will get more prevalent over time. You need a lot of information and resources in order to be able to generally self-modify like that, but once you can... should you? It's similar to the idea of wireheading, but deeper... instead of generalized pleasure, it can be 'whatever you want'... provided that there's anything you want more than truth.

Comment by Endovior on Problem of Optimal False Information · 2012-10-16T00:21:47.307Z · LW · GW

The problem specifies that something will be revealed to you, which will program you to believe it, even though it's false. It doesn't explicitly limit what can be injected into the information stream. So yes, assuming you would value the existence of a Friendly AI, that's entirely valid as optimal false information. Cost: you are temporarily wrong about something, and realize your error soon enough.

Comment by Endovior on Problem of Optimal False Information · 2012-10-16T00:16:35.475Z · LW · GW

As written, the utility calculation explicitly specifies 'long-term' utility; it is not a narrow calculation. This is Omega we're dealing with, it's entirely possible that it mapped your utility function from scanning your brain, and checked all possible universes forward in time from the addition of all possible facts to your mind, and took the worst and best true/false combination.
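Read literally, the procedure being ascribed to Omega here is just a brute-force search over candidate facts, scored by simulated long-term utility. A minimal toy sketch of that idea follows; every name and the scoring function are hypothetical stand-ins, not anything from the original post.

```python
# Toy sketch of the selection procedure described above: score every candidate
# fact by its simulated long-term utility to the agent, then keep the truth
# with the worst score and the falsehood with the best score. The scoring
# function stands in for Omega's "check all possible universes" step.

from typing import Callable, Iterable, Tuple

def choose_box_contents(
    candidate_facts: Iterable[Tuple[str, bool]],    # (statement, is_true) pairs
    long_term_utility: Callable[[str], float],      # simulated utility of believing it
) -> Tuple[str, str]:
    truths = [s for s, is_true in candidate_facts if is_true]
    falsehoods = [s for s, is_true in candidate_facts if not is_true]
    worst_truth = min(truths, key=long_term_utility)
    best_falsehood = max(falsehoods, key=long_term_utility)
    return worst_truth, best_falsehood
```

The only real constraint on such a search is the agent's own utility function; the larger the space of candidate facts, the more extreme the best and worst entries it can find.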

Accordingly, a false belief that will lead you to your death or maiming is almost certainly non-optimal. No, this is the one false thing that has the best long-term consequences for you, as you value such things, out of all the false things you could possibly believe.

True, the maximum utility/disutility has no lower bound. This is intentional. If you really believe that your position is such that no true information can hurt you, and/or no false information can benefit you, then you could take the truth. This is explicitly the truth with the worst possible long-term consequences for whatever it is you value.

Yes, it's pretty much defined as a sucker bet, implying that Omega is attempting to punish people for believing that there is no harmful true information and no advantageous false information. If you did, in fact, believe that you couldn't possibly gain by believing a falsehood, or suffer from learning a truth, this is the least convenient possible world.

Comment by Endovior on Problem of Optimal False Information · 2012-10-15T23:58:12.826Z · LW · GW

That is my point entirely, yes. This is a conflict between epistemic and instrumental rationality; if you value anything higher than truth, you will get more of it by choosing the falsehood. That's how the problem is defined.

Comment by Endovior on Problem of Optimal False Information · 2012-10-15T23:55:41.011Z · LW · GW

Yes, least optimal truths are really terrible, and the analogy is apt. You are not a perfect rationalist. You cannot perfectly simulate even one future, much less infinite possible ones. The truth can hurt you, or possibly kill you, and you have just been warned about it. This problem is a demonstration of that fact.

That said, if your terminal value is not truth, a most optimal falsehood (not merely a reasonably okay one) would be a really good thing. Since you are (again) not a perfect rationalist, there's bound to be something that you could be falsely believing that would lead you to better consequences than your current beliefs.

Comment by Endovior on Problem of Optimal False Information · 2012-10-15T23:46:37.427Z · LW · GW

Okay, so if your utilities are configured that way, the false belief might be a belief you will encounter, struggle with, and get over in a few years, and be stronger for the experience.

For that matter, the truth might be 'your world is, in fact, a simulation of your own design, to which you have (through carelessness) forgotten the control codes; you are thus trapped and will die here, accomplishing nothing in the real world'. Obviously an extreme example; but if it is true, you probably do not want to know it.

Comment by Endovior on Problem of Optimal False Information · 2012-10-15T23:37:34.250Z · LW · GW

I didn't have any other good examples on tap when I originally conceived of the idea, but come to think of it...

Truth: A scientific formula, seemingly trivial at first, but whose consequences, when investigated, lead to some terrible disaster, like the sun going nova. Oops.

Lies involving 'good' consequences are heavily dependent upon your utility function. If you define utility in such a way that allows your cult membership to be net-positive, then sure, you might get a happily-ever-after cult future. Whether or not this indicates a flaw in your utility function is a matter of personal choice; rationality cannot tell you what to protect.

That said, we are dealing with Omega, who is serious about those optimals. This really is a falsehood with optimal net long-term utility for you. It might be something like a false belief about lottery odds, which leads to you spending the next couple years wasting large sums of money on lottery tickets... only to win a huge jackpot, hundreds of millions of dollars, and retire young, able to donate huge sums to the charities you consider important. You don't know, but it is, by definition, the best thing that could possibly happen to you as the result of believing a lie, as you define 'best thing'.

Comment by Endovior on New study on choice blindness in moral positions · 2012-09-25T13:37:50.950Z · LW · GW

Agreed. Emotional motivations make just as good a target as intellectual ones. If someone already feels lonely and isolated, then they have a generally exploitable motivation, making them a prime candidate for any sort of cult recruitment. That kind of isolation is just what cults look for in a recruit, and most try to create it intentionally, using whatever they can to cut their cultists off from any anti-cult influences in their lives.

Comment by Endovior on New study on choice blindness in moral positions · 2012-09-23T17:50:28.522Z · LW · GW

The trick: you need to spin it as something they'd like to do anyway... you can't just present it as a way to be cool and different; you need to tie it into an existing motivation. Making money is an easy one, because then you can come in with an MLM structure and get your cultists to go recruiting for you. You don't even need to do much in the way of developing cultic materials; there's plenty of material designed to indoctrinate people in anti-rational, pro-cult philosophies like "the law of attraction" that is written so as to appear as guides for salespeople, so your prospective cultists will pay for and perform their own indoctrination voluntarily.

I was in such a cult myself; it's tremendously effective.

Comment by Endovior on Less Wrong Polls in Comments · 2012-09-22T16:02:39.550Z · LW · GW

Yeah, it looks like there's something seriously broken about this poll code. I'm seeing 159 total votes, and only 13 visible votes.

Comment by Endovior on Backward Reasoning Over Decision Trees · 2012-06-30T18:55:15.315Z · LW · GW

What was repealed seems to have been the ability to veto individual letters (creating new words). This was a laughably incomplete solution: instead of vetoing individual letters to create whatever wording the governor liked (as it was before), he's now limited to vetoing lots and lots of words until he finds the exact wording he wants. Hence the example looks like lots and lots of words crossed out, instead of specific letters crossed out. The power involved is quite similar, but it's somewhat trickier to use if you're restricted to whole words.

Comment by Endovior on Rationality Quotes June 2012 · 2012-06-03T16:57:56.864Z · LW · GW

Really? Are you sure you're not just making yourself believe you feel something you do not?

Comment by Endovior on Rationality Quotes May 2012 · 2012-05-30T07:37:52.827Z · LW · GW

That's obviously true, yeah. But if it's cool enough that you'd consider doing it, and you actually, as the quote implies, cannot understand why nobody has attempted it despite having done initial research, then you may be better off preparing to try it yourself rather than doing more research to try and find someone else who didn't quite do it before. Not all avenues of research are fruitful, and it might actually be better to go ahead and try than to expend a bunch of effort trying to dig up someone else's failure.

Comment by Endovior on Rationality Quotes May 2012 · 2012-05-27T23:11:57.765Z · LW · GW

Both sound quite appropriate; it seems likely that in the process of attempting to do some crazy awesome thing, you will run into the exact reasons why nobody has done it before; either you'll find out why it wasn't actually a good idea, or you'll do something awesome.

Comment by Endovior on Moral Complexities · 2012-05-15T05:32:28.990Z · LW · GW

No worries; it's just that here, in particular, you caught the tail end of my clumsy attempts to integrate my old Objectivist metaethics with what I'd read thus far in the Sequences. I have since reevaluated my philosophical positions... after all, tidy as the explanation may superficially seem, I no longer believe that the human conception of morality can be entirely based on selfishness.

Comment by Endovior on Moral Complexities · 2012-05-14T04:40:55.715Z · LW · GW

Uh... did you just go through my old comments and upvote a bunch of them? If so, thanks, but... that really wasn't necessary.

It's almost embarrassing in the case of the above; it, like much of the other stuff I wrote a year or more ago, reads like an extended crazy rant.