Comments

Comment by dv82matt on Open thread, 11-17 March 2014 · 2014-03-21T05:04:02.466Z · LW · GW

Articles:

http://phys.org/news/2013-04-emergence-complex-behaviors-causal-entropic.html

http://www.newyorker.com/online/blogs/elements/2013/05/a-grand-unified-theory-of-everything.html

http://www.bbc.com/news/science-environment-22261742

Paper:

http://www.alexwg.org/publications/PhysRevLett_110-168702.pdf

Comment by dv82matt on 2013 Less Wrong Census/Survey · 2013-11-22T05:42:31.113Z · LW · GW

Did the survey.

Comment by dv82matt on The dangers of zero and one · 2013-11-17T05:54:50.701Z · LW · GW

But can you be 99.99% confident that 1159 is a prime?

This doesn't affect the thrust of the post, but 1159 is not prime; its prime factors are 19 and 61.
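(As an illustrative aside, not part of the original comment: a minimal trial-division check confirms the factorization.)

```python
# Minimal trial-division check that 1159 is composite (illustrative sketch, not from the original comment).
def smallest_factor(n):
    """Return the smallest factor of n greater than 1 (n itself if n is prime)."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n

f = smallest_factor(1159)
print(f, 1159 // f)  # prints: 19 61, i.e. 1159 = 19 * 61, so 1159 is not prime
```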

Comment by dv82matt on Yes, Virginia, You Can Be 99.99% (Or More!) Certain That 53 Is Prime · 2013-11-08T22:42:36.413Z · LW · GW

I agree that you can be 99.99% (or more) certain that 53 is prime, but I don't think you can be that confident based only on the argument you gave.

If a number is composite, it must have a prime factor no greater than its square root. Because 53 is less than 64, sqrt(53) is less than 8. So, to find out if 53 is prime or not, we only need to check if it can be divided by primes less than 8 (i.e. 2, 3, 5, and 7). 53's last digit is odd, so it's not divisible by 2. 53's last digit is neither 0 nor 5, so it's not divisible by 5. The nearest multiples of 3 are 51 (=17x3) and 54, so 53 is not divisible by 3. The nearest multiples of 7 are 49 (=7^2) and 56, so 53 is not divisible by 7. Therefore, 53 is prime.

There are just too many potential errors that could occur in this chain of reasoning. For example, how sure are you that you correctly listed the primes less than 8? Even a mere typo at this stage of the argument could result in an erroneous conclusion.

Anyway, just to be clear, I do think your high confidence that 53 is prime is justified, but the argument you gave for it is insufficient in isolation.
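(Illustrative sketch, not part of the original exchange: the quoted argument can also be checked mechanically, which removes most of the risk of a typo in the hand-worked chain of reasoning, such as mislisting the primes below 8.)

```python
# Mechanically verify the quoted argument that 53 is prime by trial division
# against every prime below sqrt(53), i.e. the primes less than 8.
def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

candidates = [p for p in range(2, 8) if is_prime(p)]  # primes less than 8: [2, 3, 5, 7]
print(candidates)
print(all(53 % p != 0 for p in candidates))  # True -> no prime factor <= sqrt(53), so 53 is prime
```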

Comment by dv82matt on Why do theists, undergrads, and Less Wrongers favor one-boxing on Newcomb? · 2013-06-19T03:21:35.551Z · LW · GW

Are the various people actually being presented with the same problem? It makes a difference if the predictor is described as a skilled human rather than as a near-omniscient entity.

The method of making the prediction is important. It is unlikely that a mere human without computational assistance could simulate someone in sufficient detail to reliably make one-boxing the best option. But since the human predictor knows that the people he is asking to choose also realize this, he might still maintain high accuracy by always predicting two-boxing.

edit: grammar

Comment by dv82matt on 2012 Less Wrong Census/Survey · 2012-11-05T03:45:54.181Z · LW · GW

Did it.

Comment by dv82matt on Newcomb's Problem: A problem for Causal Decision Theories · 2010-08-22T16:36:26.034Z · LW · GW

This is interesting. I suspect this is a selection effect, but if it is true that there is a heavy bias in favor of one-boxing among a more representative sample in the actual Newcomb's problem, then a predictor that always predicts one-boxing could be surprisingly accurate.

Comment by dv82matt on Open Thread, August 2010-- part 2 · 2010-08-21T09:05:33.186Z · LW · GW

It is intended to illustrate that, for a given level of certainty, one-boxing has greater expected utility with an infallible agent than it does with a fallible agent.

As for different behaviors, I suppose one might suspect the fallible agent of using statistical methods and lumping you into a reference class to make its prediction. One could be much more certain that the infallible agent’s prediction is based on what you specifically would choose.

Comment by dv82matt on Open Thread, August 2010-- part 2 · 2010-08-21T07:04:06.927Z · LW · GW

You may have misunderstood what is meant by "smart predictor".

The wiki entry does not say how Omega makes the prediction. Omega may be intelligent enough to be a smart predictor, but Omega is also intelligent enough to be a dumb predictor. What matters is the method that Omega uses to generate the prediction, and whether that method causally connects Omega’s prediction back to the initial conditions that causally determine your choice.

Furthermore a significant part of the essay explains in detail why many of the assumptions associated with Omega are problematic.

Edited to add that on rereading I can see how the bit where I say, "It doesn’t state whether Omega is sufficiently smart." is a bit misleading. It should be read as a statement about the method of making the prediction, not about Omega's intelligence.

Comment by dv82matt on Open Thread, August 2010-- part 2 · 2010-08-21T04:43:08.786Z · LW · GW

I have written a critique of the position that one-boxing wins on Newcomb's problem but have had difficulty posting it here on Less Wrong. I have temporarily posted it here.

Comment by dv82matt on Newcomb's Problem: A problem for Causal Decision Theories · 2010-08-16T23:56:42.473Z · LW · GW

Newcomb’s problem is a poor vehicle for illustrating points about rationality. It is a minefield of misconceptions and unstated assumptions. In general, the one-boxers are as wrong as the two-boxers. When Omega is not infallible, the winning strategy depends on how Omega arrives at the prediction. If that information is not assumed or somehow deducible, then the winning strategy is impossible to determine.

Your point about causal decision theory being flawed in some circumstances may be correct, but using Newcomb’s problem to illustrate it detracts from the argument.

Consider a condensed analogy. Someone will roll a standard six-sided die. You can bet on six or not-six to come up. Both bets double your money if you win. Assume betting on six wins. Since six wins, any decision theory that has you betting not-six is flawed.
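(An illustrative expected-value check of the analogy, not part of the original comment; it assumes a one-unit stake, which the comment does not specify.)

```python
# Expected value of each bet on a fair six-sided die, with a one-unit stake
# that is doubled on a win and lost otherwise.
stake = 1.0
ev_bet_six = (1 / 6) * 2 * stake      # ~0.33 units returned on average
ev_bet_not_six = (5 / 6) * 2 * stake  # ~1.67 units returned on average
print(ev_bet_six, ev_bet_not_six)
# Betting not-six is the better decision ex ante, even on the particular roll
# where six happens to come up and the not-six bettor loses.
```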

Comment by dv82matt on My Fundamental Question About Omega · 2010-02-12T22:01:45.842Z · LW · GW

I’m finding "correct" to be a loaded term here. It is correct in the sense that your conclusions follow from your premises, but in my view it bears only a superficial resemblance to Newcomb’s problem. Omega is not defined the way you defined it in Newcomb-like problems and the resulting difference is not trivial.

To really get at the core dilemma of Newcomb’s problem in detail, one needs to attempt to work out the equilibrium accuracy (that is, the level of accuracy required to make one-boxing and two-boxing have equal expected utility), not just arbitrarily set the accuracy to the upper limit, where it is easy to work out that one-boxing wins.
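(A rough sketch of that equilibrium calculation, not part of the original comment; it assumes the standard Newcomb payoffs of $1,000 in the always-filled box and $1,000,000 in the predicted box, which the comment does not restate.)

```python
# Equilibrium accuracy: the prediction accuracy p at which one-boxing and
# two-boxing have equal expected utility, under the standard payoffs.
A = 1_000        # amount in the box that is always filled
B = 1_000_000    # amount placed in the opaque box iff one-boxing is predicted

def ev_one_box(p):   # p = probability the prediction is correct
    return p * B

def ev_two_box(p):
    return p * A + (1 - p) * (A + B)

p_eq = (A + B) / (2 * B)                    # from setting ev_one_box(p) = ev_two_box(p)
print(p_eq)                                 # 0.5005
print(ev_one_box(p_eq), ev_two_box(p_eq))   # both 500500.0
```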

Comment by dv82matt on My Fundamental Question About Omega · 2010-02-12T20:48:55.633Z · LW · GW

First, thanks for explaining your down vote and thereby giving me an opportunity to respond.

We say that Omega is a perfect predictor not because it's so very reasonable for him to be a perfect predictor, but so that people won't get distracted in those directions.

The problem is that it is not a fair simplification; it disrupts the dilemma in such a way as to render it trivial. If you set the accuracy of the prediction to 100%, many of the other specific details of the problem become largely irrelevant. For example, you could then put $999,999.99 into box A and it would still be better to one-box.

It’s effectively the same thing as lowering the amount in box A to zero or raising the amount in box B to infinity. And one could break the problem in the other direction by lowering the accuracy of the prediction to 50% or equalizing the amount in both boxes.

We must disagree about what is the heart of the dilemma. How can it be all about whether Omega is wrong with some fractional probability?

It’s because the probability of a correct prediction must be between 50% and 100%, or it breaks the structure of the problem in the sense that it makes the answer trivial to work out.

Rather it's about whether logic (2-boxing seems logical) and winning are at odds.

I suppose it is true that some people have intuitions that persist in leading them astray even when the probability is set to 100%. In that sense it may still have some value if it helps to isolate and illuminate these biases.

Or perhaps whether determinism and choice is at odds, if you are operating outside a deterministic world-view. Or perhaps a third thing, but nothing --in this problem -- about what kinds of Omega powers are reasonable or possible. Omega is just a device being used to set up the dilemma.

My objection here doesn’t have to do with whether it is reasonable for Omega to possess such powers but with the over-simplification of the dilemma to the point where it is trivial.

Comment by dv82matt on My Fundamental Question About Omega · 2010-02-12T08:28:51.867Z · LW · GW

The basic concept behind Omega is that it is (a) a perfect predictor

I disagree. Omega can have various properties as needed to simplify various thought experiments, but for the purpose of Newcomb-like problems Omega is a very good predictor, and may even have a perfect record, but is not a perfect predictor in the sense of being perfect in principle or infallible.

If Omega were a perfect predictor then the whole dilemma inherent in Newcomb-like problems would cease to exist, and that would short-circuit the entire point of posing those types of problems.

Comment by dv82matt on Open Thread: September 2009 · 2009-09-12T01:58:38.391Z · LW · GW

I don’t think Newcomb’s Problem can easily be stated as a real (as opposed to a simply logical) problem. Any instance of Newcomb’s problem that you can feasibly construct in the real world is not a strict one-shot problem. I would suggest that in optimizing a rational agent for the strictly logical one-shot problem, one is optimizing for a reality that we don’t exist in.

Even if I am wrong about Newcomb’s problem effectively being an iterated type of problem, treating it as if it is seems to solve the dilemma.

Consider this line of reasoning. Omega wants to make the correct prediction. I want Omega to put the million dollars in the box. If I one-box I will either reward Omega for putting the money in the box or punish Omega for not putting the money in the box. Since Omega has a very high success rate, I can deduce that Omega puts a high value on making the correct prediction, and I will therefore put a correspondingly high instrumental value on spending the thousand dollars to influence Omega’s decision. But here’s the thing: this reasoning occurs before Omega even presents you with the problem. It is worked out by Omega running your decision algorithm based on Omega’s scan of your brain. It is effectively the first iteration.

You are then presented with the choice for what is effectively the second time, and you deduce that any real Omega (as opposed to some platonic ideal of Omega) does something like the sequence described above in order to generate its prediction.

In Charlie’s case you may reason that Charlie either doesn’t care or isn’t able to produce a very accurate prediction, and so is probably not running your decision algorithm, so spending the thousand dollars to try to influence Charlie’s decision has very low instrumental value.

In effect you are not just betting on the probability that the prediction is accurate; you are also betting on whether your decision algorithm is affecting the outcome.

I’m not sure how to calculate this but to take a stab at it:

Edit: Removed a misguided attempt at a calculation.

Comment by dv82matt on Open Thread: September 2009 · 2009-09-10T23:04:39.815Z · LW · GW

Concerning Newcomb’s Problem I understand that the dominant position among the regular posters of this site is that you should one-box. This is a position I question.

Suppose Charlie takes on the role of Omega and presents you with Newcomb’s Problem. So far as it is pertinent to the problem, Charlie is identical to Omega, with the notable exception that his prediction is only 55% likely to be accurate. Should you one-box or two-box in this case?

If you one-box then the expected utility is (.55 × $1,000,000) = $550,000, and if you two-box then it is (.45 × $1,001,000 + .55 × $1,000) = $451,000, so it seems you should still one-box even when the prediction is not particularly accurate. Thoughts?
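(A quick check of those numbers, not part of the original comment; it assumes the standard $1,000 / $1,000,000 payoffs.)

```python
# Expected utility of one-boxing vs two-boxing with a 55%-accurate predictor,
# under the standard Newcomb payoffs.
p = 0.55
one_box = p * 1_000_000                       # 550000.0
two_box = (1 - p) * 1_001_000 + p * 1_000     # 450450 + 550 = 451000.0
print(one_box, two_box)
# One-boxing still comes out ahead at 55% accuracy; in fact it does so for any
# accuracy above the ~0.5005 equilibrium point worked out earlier.
```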