Is Omega Impossible? Can we even ask?
post by mwengler · 2012-10-24T14:47:26.743Z · LW · GW · Legacy · 52 comments
EDIT: I see by the karma bombing we can't even ask. Why even call this part of the site "discussion?"
Some of the classic questions about an omnipotent god include:
- Can god make a square circle?
- Can god create an immovable object? And then move it?

What if we were set some problem where we are told to assume that:
- 2+2 = 5
- 1+1 = 2
- 1+1+1+1+1 = 5

Or that:
- Omega is an infallible intelligence that does not lie
- Omega tells you 2+2 = 5

Or that:
- Omega is an infallible intelligence
- Omega has predicted correctly whether we will one box or two box.
comment by Stuart_Armstrong · 2012-10-24T16:04:42.750Z · LW(p) · GW(p)
I see by the karma bombing we can't even ask.
It's more that the post isn't well written. It mentions omnipotence (for God), some thoughts that past philosophers had on them, and then rambles about things being difficult to conceive (without any definitions or decomposition of the problem), and then brings in Omega, with an example equivalent to "1) Assume Omega never lies, 2) Omega lies".
Then when we get to the actual point, it's simply "maybe the Newcomb problem is impossible". With no real argument to back that up (and do bear in mind that if copying of intelligence is possible, then the Newcomb problem is certainly possible; and I've personally got a (slightly) better-than-chance record at predicting if people 1-box or 2-box on Newcomb-like problems, so a limited Omega certainly is possible).
Replies from: mwengler↑ comment by mwengler · 2012-10-25T15:13:25.051Z · LW(p) · GW(p)
It's more that the post isn't well written. ...
Then when we get to the actual point, it's simply "maybe the Newcomb problem is impossible".
Well written, well read, definitely one or the other. Of course in my mind it is the impossibility of Omega that is central, and I support that with the title of my post. In my mind, Newcomb's problem is a good example. And from the discussion, it may turn out to be a good example. I have learned that 1) With the numbers stated, Omega doesn't need to have mysterious powers; he only needs to be right a little more than 1/2 the time. 2) Other commenters then go on to realize that understanding of HOW Omega is right will affect whether one should one-box or two-box.
So even IF the "meat" was Newcomb's problem, this post is an intellectual success for me (and, I feel confident, for some of those who have pointed out the ways Newcomb's problem becomes more interesting with a finite Omega).
As to a full support for my ideas, it seems to me that posts must be limited in length and content to be read and responded to. Note that ONE form of "crackpot" is the person who shows up with 1000 pages, or even 25 pages, of post to support his unusual point. Stylistically, I think the discussion on this post justifies the way I wrote it. The net karma bombing was largely halted by my "whiney" edit. The length and substance of my post were considered in such a way as to be quite useful to my understanding (and naming this section "Discussion" suggests at least some positive value in that).
So in the internet age, a post which puts hooks for concepts in place without hanging 10s of pages of pre-emptive verbiage on each one is superior to its wordy alternative. And lesswrong's collective emergent action is to karma bomb such posts. Is this a problem? More for me than for you, that is for sure.
Replies from: Stuart_Armstrong↑ comment by Stuart_Armstrong · 2012-10-25T15:19:23.901Z · LW(p) · GW(p)
My objection, more succinctly: too long an introduction, not enough argument in the main part. Rewriting it with a paragraph (or three) cut from the intro and an extra paragraph of arguments about the impossibility of Omega in Newcomb would have made it much better, in my opinion.
But glad the discussion was useful!
comment by Emile · 2012-10-24T15:07:15.891Z · LW(p) · GW(p)
Doesn't Newcomb's problem remain pretty much the same if Omega is "only" able to predict your answer with 99% accuracy?
In that case, a one-boxer would get a million 99% of the time and nothing 1% of the time, and a two-boxer would get a thousand 99% of the time and a thousand plus a million 1% of the time ... unless you have a really weirdly shaped utility function, one-boxing still seems much better.
(I see the "omnipotence" bit as a bit of a spherical-cow assumption that allows us to sidestep some irrelevant issues and get to the meat of the problem, but it does become important when you're dealing with bits of code simulating each other.)
Replies from: thomblake, mwengler↑ comment by thomblake · 2012-10-24T15:18:27.645Z · LW(p) · GW(p)
If Omega is only able to predict your answer with 75% accuracy, then the expected payoff for two-boxing is:
.25 * 1001000 + .75 * 1000 = 251000
and the expected payoff for one-boxing is:
.25 * 0 + .75 * 1000000 = 750000.
So even if Omega is just a pretty good predictor, one-boxing is the way to go (unless you really need a thousand dollars, or the usual concerns about money vs. utility apply).
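A minimal sketch of this arithmetic in Python (the $1,000 / $1,000,000 payoffs follow the standard statement of the problem; the helper names and the accuracy parameter p are illustrative only):

```python
# Illustrative sketch: expected payoffs in Newcomb's problem when Omega
# predicts your choice with accuracy p. Box A always holds $1,000; box B
# holds $1,000,000 if and only if Omega predicted one-boxing.

A, B = 1_000, 1_000_000

def ev_one_box(p):
    # Correct prediction (prob. p): box B is full, you win B. Wrong: B is empty.
    return p * B

def ev_two_box(p):
    # Correct prediction (prob. p): box B is empty, you win only A.
    # Wrong prediction (prob. 1 - p): box B is full, you win A + B.
    return p * A + (1 - p) * (A + B)

print(ev_two_box(0.75))   # 251000.0
print(ev_one_box(0.75))   # 750000.0

# Break-even accuracy: p*B == p*A + (1-p)*(A+B)  =>  p = (A + B) / (2 * B)
print((A + B) / (2 * B))  # 0.5005, so one-boxing wins above 50.05% accuracy
```

The break-even value is where the 50.05% figure in the reply below comes from.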
Replies from: thomblake, vi21maobk9vp, mwengler↑ comment by thomblake · 2012-10-24T15:45:53.943Z · LW(p) · GW(p)
For the curious, you should be indifferent to one- or two-boxing when Omega predicts your response 50.05% of the time. If Omega is just perceptibly better than chance, one-boxing is still the way to go.
Now I wonder how good humans are at playing Omega.
Replies from: benelliott, KPier↑ comment by benelliott · 2012-10-24T16:01:52.610Z · LW(p) · GW(p)
Better than 50.5% accuracy actually doesn't sound that implausible, but I will note that if Omega is probabilistic then the way in which it is probabilistic affects the answer. E.g., if Omega works by asking people what they will do and then believing them, this may well get better than chance results with humans, at least some of whom are honest. However, the correct response in this version of the problem is to two-box and lie.
Replies from: thomblake↑ comment by thomblake · 2012-10-24T16:10:03.292Z · LW(p) · GW(p)
Better than 50.5% accuracy actually doesn't sound that implausible, but I will note that if Omega is probabilistic then the way in which it is probabilistic affects the answer.
Sure, I was reading the 50.05% in terms of probability, not frequency, though I stated it the other way. If you have information about where his predictions are coming from, that will change your probability for his prediction.
Replies from: benelliott↑ comment by benelliott · 2012-10-24T16:28:43.141Z · LW(p) · GW(p)
Fair point, you're right.
↑ comment by KPier · 2012-10-24T23:29:48.131Z · LW(p) · GW(p)
... and if your utility scales linearly with money up to $1,001,000, right?
Replies from: thomblake, prase↑ comment by prase · 2012-10-25T05:12:10.489Z · LW(p) · GW(p)
Or if the payoffs are reduced to fall within the (approximately) linear region.
Replies from: army1987↑ comment by A1987dM (army1987) · 2012-10-25T08:51:48.035Z · LW(p) · GW(p)
But if they are too low (say, $1.00 and $0.01) I might do things other than what gets me more money Just For The Hell Of It.
Replies from: faul_sname↑ comment by faul_sname · 2012-10-25T17:31:19.039Z · LW(p) · GW(p)
And thus was the first zero-boxer born.
Replies from: None↑ comment by [deleted] · 2012-10-25T18:10:33.333Z · LW(p) · GW(p)
Zero-boxer: "Fuck you, Omega. I won't be your puppet!"
Omega: "Keikaku doori..."
↑ comment by vi21maobk9vp · 2012-10-24T17:27:44.365Z · LW(p) · GW(p)
This seems an overly simplistic view. You need to specify your source of knowledge about the correlation between the quality of Omega's predictions and the decision theory the prediction target uses.
And even then, you need to be sure that your using an exotic DT will not throw Omega too much off the trail (note that erring in your case will not ruin the nice track record).
I don't say it is impossible to specify, just that your description could be improved.
Replies from: thomblake↑ comment by mwengler · 2012-10-24T15:42:53.661Z · LW(p) · GW(p)
Interesting and valuable point, brings the issue back to decision theory and away from impossible physics.
As I have said in the past, I would one-box because I think Omega is a con man. When magicians do this trick, the box SEEMS to be sealed ahead of time, but in fact there is a mechanism for the magician to slip something inside it. In the case of finding a signed card in a sealed envelope, the envelope had a razor slit through which the magician could surreptitiously push the card. Ultimately, Siegfried and Roy were doing the same trick with tigers in cages. If regular (but talented) humans like Siegfried and Roy could trick thousands of people a day, then Omega can get the million out of the box if I two-box, or get it in there if I one-box.
Yes, I would want to build an AI clever enough to figure out a probable scam and then clever enough to figure out whether it can profit from that scam by going along with it. No, I wouldn't want that AI to think it had proof that there was a being that could seemingly violate the causal arrow of time merely because it seemed to have done so a number of times on the same order as Siegfried and Roy had managed.
Ultimately, my fear is if you can believe in Omega at face value, you can believe in god, and an FAI that winds up believing something is god when it is actually just a conman is no friend of mine.
If I see Omega getting the answer right 75% of the time, I think "the clever conman makes himself look real by appearing to be constrained by real limits." Does this make me smarter or dumber than we want a powerful AI to be?
Replies from: thomblake↑ comment by thomblake · 2012-10-24T15:48:16.277Z · LW(p) · GW(p)
Nobody is proposing building an AI that can't recognize a con-man. Even if in all practical cases putative Omegas will be con-men, this is still an edge case for the decision theory, and an algorithm that might be determining the future of the entire universe should not break down on edge cases.
Replies from: mwengler↑ comment by mwengler · 2012-10-25T15:18:15.961Z · LW(p) · GW(p)
I have seen numerous statements of Newcomb's problem where it is stated "Omega got the answer right 100 out of 100 times before." That is PATHETIC evidence to support Omega not being a con man, and it is not a prior, it is a posterior. So if there is a valuable edge case here (and I'm not sure there is), it has been left implicit until now.
comment by gwern · 2012-10-24T16:31:44.385Z · LW(p) · GW(p)
Downvoted for missing the obvious and often pointed out part about a fallible Omega still making Newcomb go through, and then whining about your own fault.
Replies from: mwengler↑ comment by mwengler · 2012-10-25T14:34:44.099Z · LW(p) · GW(p)
I was already downvoted 6 before whining about my own fault. If Omega need not be infallible, then it is certainly gratuitously confusing to me to put such an Omega in the argument. I am a very smart potential fellow traveler here, it would seem a defect of the site that its collective behavior is to judge queries such as mine unreasonable and undesirable to be seen.
If Omega has previously been described as quite fallible while still posing a Newcomb's problem, I have not noticed it and I would love to see a link. Meanwhile, I'd still like to know how a Newcomb's problem stated with a garden-variety human conman making the prediction is inferior to one which gratuitously calls upon a being with special powers unknown in the universe. Why attribute to Omega that which can be adequately explained by Penn and Teller?
Replies from: gwern↑ comment by gwern · 2012-10-25T14:58:47.846Z · LW(p) · GW(p)
I was already downvoted 6 before whining about my own fault.
So?
If Omega need not be infallible, then it is certainly gratuitously confusing to me to put such an Omega in the argument.
No, it's necessary to prevent other people from ignoring the hypothetical and going 'I trick Omega! ha ha ha I are so clever!' This is about as interesting as saying, in response to the trolley dilemma, 'I always carry a grenade with me, so instead of choosing between the 5 people and the fat man, I just toss the grenade down and destroy the track! ha ha ha I are so clever!'
I am a very smart potential fellow traveler here, it would seem a defect of the site that its collective behavior is to judge queries such as mine unreasonable and undesirable to be seen.
'I am important! You should treat me nicer!'
If omega has previously been cited to be quite fallible and still have a newcomb's problem, I have not noticed it and I would love to see a link.
Multiple people have already pointed it out here, which should tell you something about how widespread that simple observation is - why on earth do you need a link? (Also, if you are "very smart", it should have been easy to construct the obvious Google query.)
Replies from: mwengler↑ comment by mwengler · 2012-10-25T15:44:27.211Z · LW(p) · GW(p)
No, it's necessary to prevent other people from ignoring the hypothetical and going 'I trick Omega! ha ha ha I are so clever!' This is about as interesting as saying, in response to the trolley dilemma, 'I always carry a grenade with me, so instead of choosing between the 5 people and the fat man, I just toss the grenade down and destroy the track! ha ha ha I are so clever!'
There have been numerous threads about what lesswrong/SIAI can do to attract more interest and support. This argues towards a certain recapitulation of ground already covered. When I tell you things like I have been reading this site and overcomingbias for years, that I am neither particularly ignorant, particularly doctrinaire, nor particularly thick, it may well be that I would like better treatment. But it is also information about the emergent behavior of this community towards those who are probably its "hot market."
If you look through the comments below, I don't think you can miss that there are many commenters to whom the non-requirement of near-magical powers in Omega is news as well. Is site-emergent behavior of "We don't want to see posts like this" really desirable on posts that bring this out?
I recognize there is a line somewhere beyond which you don't dilute the site message to pick up an increasingly small number of people. Especially if your model of these people on the margins is as crackpots who are unlikely to be fixed. My opinion and suggestion is that this site in an emergent fashion (I don't think it is the plan) draws that line too close to the orthodoxy.
Replies from: TimS, wedrifid↑ comment by TimS · 2012-10-25T16:00:52.468Z · LW(p) · GW(p)
Apparently I really need to write the companion piece to Please Don't Fight the Hypothetical titled When and How to Fight the Hypothetical.
↑ comment by wedrifid · 2012-10-26T02:22:15.316Z · LW(p) · GW(p)
When I tell you things like I have been reading this site and overcomingbias for years, that I am neither particularly ignorant, particularly doctrinaire, nor particularly thick, it may well be that I would like better treatment.
You may be none of those things, but this post received the appropriate treatment (albeit with more resulting commentary than is desirable).
But it is also information about the emergent behavior of this community towards those who are probably its "hot market."
The people who are the 'hot market' would be turned off by the site if it contained many posts like this. Partly because of the low standard of reasoning, but mostly because of the petulant whining. We don't want that. We downvote.
comment by Shmi (shminux) · 2012-10-24T16:16:08.954Z · LW(p) · GW(p)
Personally, I can think of LOTS of reasons to doubt that Newcomb's problem is even theoretically possible to set.
If you allow arbitrarily high but not 100%-accurate predictions (as EY is fond of repeating, 100% is not a probability), the original Newcomb's problem is defined as the limit as prediction accuracy goes to 100%. As noted in other comments, the "winning" answer to the problem is not sensitive to the prediction level once it is just above 50% accuracy ((1,000 + 1,000,000)/(2 × 1,000,000) = 50.05%, to be precise), so the limiting case must have the same answer.
Replies from: mwengler
comment by Richard_Kennaway · 2012-10-24T16:11:42.095Z · LW(p) · GW(p)
I think you're correct in raising the general issue of what hypothetical problems it makes sense to consider, but your application to Newcomb's does not go very far.
Personally, I can think of LOTS of reasons to doubt that Newcomb's problem is even theoretically possible to set.
You didn't give any, though, and Newcomb's problem does not require an infallible Omega, only a fairly reliable one. The empirical barrier to believing in Omega is assumed away by another hypothesis: that you are sure that Omega is honest and reliable.
Personally, I think I can reliably predict that Eliezer would one-box against Omega, based on his public writings. I'm not sure if that implies that he would one-box against me, even if he agrees that he would one-box against Omega and that my prediction is based on good evidence that he would.
Replies from: faul_sname, Kindly, mwengler↑ comment by faul_sname · 2012-10-24T17:09:30.376Z · LW(p) · GW(p)
I'm pretty sure Eliezer would one-box against Omega any time box B contained more money than box A. Against you or me, I'm pretty sure he would one-box with the original 1000000:1000 problem (that's kind of the obvious answer), but I'm not sure he would if it were a 1200:1000 problem.
Replies from: ArisKatsaris, mwengler↑ comment by ArisKatsaris · 2012-10-25T11:54:59.919Z · LW(p) · GW(p)
A further thing to note: If Eliezer models other people as either significantly overestimating or significantly underestimating the probability he'll one-box against them, both possibilities increase the probability he'll actually two-box against them.
So it all depends on Eliezer's model of other people's model of Eliezer's model of their model. Insert The Princess Bride reference. :-)
Replies from: faul_sname↑ comment by faul_sname · 2012-10-25T17:27:44.790Z · LW(p) · GW(p)
Or at least your model of Eliezer models other people as modeling his model of them. He may go one level deeper and model other people's model of his model of other people's model of his model of them, or (more likely) not bother and just use general heuristics. Because modeling breaks down around one or two layers of recursion most of the time.
↑ comment by mwengler · 2012-10-25T14:59:08.674Z · LW(p) · GW(p)
Now we are getting somewhere good! Certainty rarely shows up in predictions, especially about the future. Your decision theory may be timeless, but don't confuse the map with the territory, the universe may not be timeless.
Unless you are assigning a numerical, non-zero, non-unity probability to Omega's accuracy, you do not know when to one-box and when to two-box with arbitrary amounts of money in the boxes. And unless your FAI is a chump, it is considering LOTS of details in estimating Omega's accuracy, no doubt including considerations of how much the FAI's own finiteness of knowledge and computation fails to constrain the possibility that Omega is tricking it.
A NASA engineer had been telling Feynman that the liquid rocket motor had a zero probability of exploding on takeoff. Feynman convinced him that this was not an engineering answer. The NASA engineer then smiled and told Feynman the probability of the liquid rocket motor exploding on takeoff was "epsilon." Feynman replied (and I paraphrase from memory): "Good! Now we are getting somewhere! Now all you have to tell me is what your estimate for the value of epsilon is, and how you arrived at that number."
Any calculation of your estimate of Omega's reliability which does not include gigantic terms for the evaluation of the probability that Omega is tricking you in a way you haven't figured out yet is likely to fail. I base that on the prevalence and importance of con games in the best natural experiment on intelligence we have: humans.
↑ comment by Kindly · 2012-10-24T21:20:01.690Z · LW(p) · GW(p)
If Eliezer knows that your prediction is based on good evidence that he would one-box, then that screens off the dependence between your prediction and his decision, so he should two-box.
Replies from: Richard_Kennaway, faul_sname↑ comment by Richard_Kennaway · 2012-10-24T21:32:02.127Z · LW(p) · GW(p)
Surely the same applies to Omega. By hypothesis, Eliezer knows that Omega is reliable, and since Eliezer does not believe in magic, he deduces that Omega's prediction is based on good evidence, even if Omega doesn't say anything about the evidence.
My only reason for being unsure that Eliezer would one-box against me is that there may be some reflexivity issue I haven't thought of, but I don't think this one works.
One issue is that I'm not going around making these offers to everyone, but the only role that that plays in the original problem is to establish Omega's reliability without Newcomb having to explain how Omega does it. But I don't think it matters where the confidence in Omega's reliability comes from, as long as it is there.
Replies from: Kindly↑ comment by Kindly · 2012-10-24T23:33:47.199Z · LW(p) · GW(p)
If you know that Omega came to a conclusion about you based on things you wrote on the Internet, and you know that the things you wrote imply you will one-box, then you are free to two-box.
Edit: basically the thing you have to ask is, if you know where Omega's model of you comes from, is that model like you to a sufficient extent that whatever you decide to do, the model will also do?
Replies from: mwengler↑ comment by mwengler · 2012-10-25T14:50:26.366Z · LW(p) · GW(p)
Ah, but the thing you DON'T know is that Omega isn't cheating. Cheating LOOKS like magic but isn't. Implicit in my point, certainly part of my thinking, is that unless you understand deeply and for sure HOW the trick is done, you can expect the trick will be done on you. So unless you can think of a million dollar upside to not getting the million dollars, you should let yourself be the mark of the conman Omega since your role seems to include getting a million dollars for whatever reasons Omega has to do that.
You should only two box if you understand Omega's trick so well that you are sure you can break it, i.e. that you will get the million dollars anyway. And the value of breaking Omega's trick is that the world doesn't need more successful con men.
Considering the likelihood of being confronted by a fake Omega rather than a real one, it would seem a great lack of foresight not to want to address this problem when coding your FAI.
↑ comment by faul_sname · 2012-10-25T17:24:55.654Z · LW(p) · GW(p)
Unless he figures you're not an idiot and you already know that, in which case it's better for him to have a rule that says "always one-box on Newcomb-like problems whenever the payoff for doing so exceeds n times the payoff for failed two-boxing" where n is a number (probably between 1.1 and 100) that represents the payment differences. Obviously, if he's playing against something with no ability to predict his actions (e.g. a brick) he's going to two-box no matter what. But a human with theory of mind is definitely not a brick and can predict his action with far better than random accuracy.
↑ comment by mwengler · 2012-10-25T14:39:37.825Z · LW(p) · GW(p)
Personally, I think I can reliably predict that Eliezer would one-box against Omega, based on his public writings. I'm not sure if that implies that he would one-box against me,
And since any FAI Eliezer codes is (nearly) infinitely more likely to be presented with Newcomb's boxes by one such as you, or Penn and Teller, or Madoff than by Omega or his ilk, this would seem to be a more important question than the Newcomb's problem with Omega.
Really the main point of my post is "Omega is (nearly) impossible, therefore problems presuming Omega are (nearly) useless." But the discussion has mostly come to focus on my Newcomb's example, making explicit its lack of dependence on an Omega. Here in this comment, though, you do point out that the "magical" aspect of Omega MAY influence the coding choice made. I think this supports my claim that even Newcomb's problem, which COULD be stated without an Omega, may have a different answer than when stated with an Omega. It is important when coding an FAI to consider just how much evidence it should require that it is dealing with an Omega before it concludes that it is. In the long run, my concern is that an FAI coded to accept an Omega will be susceptible to accepting people deliberately faking Omega, who in our universe are (nearly) infinitely more common than true Omegas.
Replies from: Richard_Kennaway↑ comment by Richard_Kennaway · 2012-10-25T16:12:49.275Z · LW(p) · GW(p)
Omega problems are not posed for the purpose of being prepared to deal with Omega should you, or an FAI, ever meet him. They are idealised test problems, thought experiments, for probing the strengths and weaknesses of formalised decision theories, especially regarding issues of self-reference and agents modelling themselves and each other. Some of these problems may turn out to be ill-posed, but you have to look at each such problem to decide whether it makes sense or not.
comment by faul_sname · 2012-10-24T15:34:02.490Z · LW(p) · GW(p)
What if we were set some problem where we are told to assume that
2+2 = 5
1+1 = 2
1+1+1+1+1 = 5
3rd one looks fine to me. :)
Edit: Whoosh. Yes, that was the sound of this going right over my head.
Replies from: mwengler
comment by ArisKatsaris · 2012-10-25T09:53:25.253Z · LW(p) · GW(p)
"Why even call this part of the site discussion? "
We are free to discuss; we are also free to downvote or upvote others depending on the quality of said discussion. In this case, you seem not to be addressing any of the responses you've gotten regarding the flaws in your argument, but just chose to complain about downvotes.
Omega-as-infallible-entity isn't required for Newcomb-style problems. If you're going to argue that you don't believe that predicting people's behaviour at even slightly above random chance is theoretically possible, then try to make that argument - but you'll fail. Perfect predictive accuracy may be physically impossible, given quantum uncertainty, but thankfully it's not required.
Replies from: mwengler↑ comment by mwengler · 2012-10-25T14:29:16.555Z · LW(p) · GW(p)
At the time I added the edit, I had two comments and 6 net downvotes. I had replied to the two comments. It is around 25 hours later now. For me, 25-hour gaps in my responses to lesswrong will be typical; I'm not sure a community which can't work with that is even close to optimal. So here I am, commenting on comments.
Of course you're free to downvote and I'm free to edit. Of course we are both free, as is everyone else, to speculate whether the results are what we would want, or not. Free modulo determinism, that is.
As far as I know, this is the first thread in which it has ever been pointed out that Omega doesn't need to be infallible, or even close to infallible, in order for the problem to work. A Newcomb's problem set with a gratuitous infallible predictor is inferior to a Newcomb's problem set with a currently-implementable but imperfect prediction algorithm. Wouldn't you agree? When I say inferior, I mean both as a guide to the humans such as myself trying to make intellectual progress here, and as a guide to the coders of FAI.
As far as I am concerned, a real and non-trivial improvement has been proposed to the statement of Newcomb's problem as a result of my so-called "discussion" posting. An analogous improvement in another argument would be: 1) "Murder is wrong because my omniscient, omnipotent, and all-good god says it is." 2) "I don't think an omniscient, omnipotent, all-good god is possible in our universe." 1) "Well, obviously you don't need such a god to see that murder is wrong."
Whether my analogy seems self-aggrandizing or not, I hope the value to the discussion of taking extraneous antecedents out of the problems we discuss will be generally understood.
Replies from: ArisKatsaris, benelliott↑ comment by ArisKatsaris · 2012-10-25T16:43:53.367Z · LW(p) · GW(p)
As far as I know, this is the first thread in which it has ever been pointed out that Omega doesn't need to be infallible or even close to infallible in order for the problem to work. [...] As far as I am concerned, a real and non-trivial improvement has been proposed to the statement of Newcomb's problem as a result of my so-called "discussion" posting.
I'll note here that the lesswrong wiki page on Newcomb's problem has a section which says the following:
Irrelevance of Omega's physical impossibility
Sometimes people dismiss Newcomb's problem because a being such as Omega is physically impossible. Actually, the possibility or impossibility of Omega is irrelevant. Consider a skilled human psychologist that can predict other humans' actions with, say, 65% accuracy. Now imagine they start running Newcomb trials with themselves as Omega.
Also this section wasn't recently added, it has been there since November 2010.
In short you're not the first person to introduce to us the idea of Omega being impossible.
↑ comment by benelliott · 2012-10-25T14:41:18.672Z · LW(p) · GW(p)
A newcomb's problem set with a gratuitous infallible predictor is inferior to a newcomb's problem set with a currently-implementable but imperfect prediction algorithm. Wouldn't you agree?
No, in maths you want to pick the simplest possible thing that embodies the principle you want to study, needless complications are distracting. Throwing in a probabilistic element to something that works fine as a deterministic problem is needless.
Replies from: Oscar_Cunningham↑ comment by Oscar_Cunningham · 2012-10-25T15:24:52.050Z · LW(p) · GW(p)
Throwing in a probabilistic element to something that works fine as a deterministic problem is needed.
Typo?
Replies from: benelliott↑ comment by benelliott · 2012-10-25T15:32:10.613Z · LW(p) · GW(p)
Yes, thanks
comment by James_Blair · 2012-10-26T00:41:14.615Z · LW(p) · GW(p)
Is Omega Impossible?
No, Omega is possible. I have implemented Newcomb's Game as a demonstration. This is not a probabilistic simulation; this Omega is never wrong.
It's really very obvious if you think about it like a game designer. To the obvious objection: Would a more sophisticated Omega be any different in practice?
For my next trick, I shall have an omnipotent being create an immovable object and then move it.
Edit: sorry about the bugs. It's rather embarrassing; I have not used these libraries in ages.
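A minimal sketch of the game-designer trick being described (hypothetical code, not the linked demonstration): this "Omega" decides the contents of box B only after seeing the player's choice, so its "prediction" is correct by construction.

```python
# Hypothetical sketch of a "never wrong" game-designer Omega (not the actual
# linked demo): the box is filled only after the player has chosen, so the
# "prediction" cannot fail.

A, B = 1_000, 1_000_000

def play(choice):
    """choice is 'one' (take only box B) or 'two' (take both boxes)."""
    box_b = B if choice == 'one' else 0   # Omega "predicted" the actual choice
    return box_b if choice == 'one' else A + box_b

print(play('one'))  # 1000000 -- Omega "predicted" one-boxing
print(play('two'))  # 1000    -- Omega "predicted" two-boxing
```

Whether this counts as a real Omega or as the kind of after-the-fact trick objected to in the reply below is exactly the point under dispute.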
Replies from: mwengler↑ comment by mwengler · 2012-10-26T12:45:39.002Z · LW(p) · GW(p)
It's really very obvious if you think about it like a game designer.
Your Omega simulation actually loads the box after you have chosen, not before, while claiming to do otherwise. If this is a simulation of Omega, thank you for making my point.
Replies from: James_Blair