Comments

Comment by Unknown on Recognizing Intelligence · 2008-11-08T04:26:40.000Z · LW · GW

Eliezer is making a disguised argument that the universe is caused by intelligent design: the fact that the laws of nature stay the same over time, instead of changing randomly, shows that the Intelligence has goals that remain stable over time, even if we don't know what those goals are.

Comment by Unknown on Mundane Magic · 2008-11-01T10:30:27.000Z · LW · GW

Eliezer, exactly how many decibels of evidence would it require to persuade you that there is magic in the universe?

For example, see this claim of magic: http://www.clairval.com/lettres/en/2006/12/08/2061206.htm

How many times would a coin have to come up heads (if there were some way for a coin to test this) before there would be a chance you wouldn't defy the data in a case like this? If you saw 20 heads in a row, would you expect more of them? Or 40?

Comment by Unknown on Horrible LHC Inconsistency · 2008-09-22T15:45:21.000Z · LW · GW

Basically, everyone knows that the probability of the LHC destroying the earth is greater than one in a million, but no one would do anything to stop the thing from running, for the same reason that no one would pay Pascal's Mugger. (My interests evidently haven't changed much!)

Comment by Unknown on Sorting Pebbles Into Correct Heaps · 2008-08-10T05:37:13.000Z · LW · GW

In fact, a superintelligent AI would easily see that the Pebble people are talking about prime numbers even if they didn't see that themselves, so as long as they programmed the AI to make "correct" heaps, it certainly would not make heaps of 8, 9, or 1957 pebbles. So if anything, this supports my position: if you program an AI that can actually communicate with human beings, you will naturally program it with a similar morality, without even trying.

Apart from that, this post seems to support TGGP's position. Even if there is some computation (i.e. primeness) which is actually determining the Pebble people's judgments, there is no particular reason to use that computation instead of some other. So if a random AI were programmed that purposely made non-prime heaps, there would be no objective problem with this. So Allan Crossman's claim that "it's positively dangerous to believe in an objective account of morality" is a completely subjective statement. It's dangerous in comparison to your subjective idea of which heaps are correct, yes, but objectively there is nothing dangerous about non-prime heaps. So there's nothing objectively wrong with programming an AI without regard for Friendliness. If there's something that matters, it will find it, and if nothing matters, well then nothing matters, not even being made into paperclips.

Comment by Unknown on Anthropomorphic Optimism · 2008-08-06T03:33:49.000Z · LW · GW

Roko: it's good to see that there is at least one other human being here.

Carl, thanks for that answer, that makes sense. But actually I suspect that normal humans have bounded utility functions that do not increase indefinitely with, for example, cheesecakes. Instead, their functions have an absolute maximum which is actually reachable, and nothing done beyond that point will increase it further.

Michael Vassar: Actually in real life I do some EXTREMELY counterintuitive things. Also, I would be happy to know the actual consequences of my beliefs. I'm not afraid that I would have to act in any particular way, because I am quite aware that I am a human being and do not have to act according to the consequences of my beliefs unless I want to. I often hold beliefs without acting on them, in fact.

If there is a 90% chance that utility maximization is correct, and a 10% chance that Roko is correct (my approximate estimates), how should one act? You cannot simply "use the math", as you suggest, because conditional on the 10% chance, you shouldn't be using the math at all.

Comment by Unknown on Anthropomorphic Optimism · 2008-08-05T18:20:48.000Z · LW · GW

Nick, can you explain how that happens with bounded utility functions? I was thinking basically something like this: if your maximum utility is 1000, then something that has a probability of one in a million can't have a high expected value or disvalue, because the probability can't be multiplied by more than 1000, and so the expected value can't be more than 0.001.

This seems to me the way humans naturally think, and the reason that sufficiently low-probability events are simply ignored.
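
To put rough numbers on this, here is a minimal sketch of the arithmetic; the cap of 1000, the one-in-a-million probability, and the extravagant payoff are all just made-up figures for illustration:

```python
# Illustrative arithmetic only: made-up utility cap and probabilities,
# not a model of any actual agent.

UTILITY_CAP = 1000.0        # hypothetical bound on the utility function
p_event = 1e-6              # probability of the low-probability outcome

# With a bounded utility function, the expected-value contribution of the
# event can never exceed p * cap, no matter how the outcome is described.
bounded_contribution = p_event * UTILITY_CAP
print(bounded_contribution)               # 0.001

# With an unbounded utility function, a sufficiently extravagant payoff
# swamps the calculation even at the same tiny probability.
claimed_payoff = 3 ** 100                 # a Pascal's-Mugging-style promise
unbounded_contribution = p_event * claimed_payoff
print(unbounded_contribution > bounded_contribution)   # True, by an enormous margin
```

The bounded agent's contribution is capped at 0.001 no matter what is promised, which is why sufficiently low-probability events can simply be ignored; the unbounded agent's calculation is hostage to whoever names the largest payoff.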

Comment by Unknown on Anthropomorphic Optimism · 2008-08-05T16:02:45.000Z · LW · GW

From Nick Bostrom's paper on infinite ethics:

"If there is an act such that one believed that, conditional on one’s performing it, the world had a 0.00000000000001% greater probability of containing infinite good than it would otherwise have (and the act has no offsetting effect on the probability of an infinite bad), then according to EDR one ought to do it even if it had the certain side‐effect of laying to waste a million human species in a galactic‐scale calamity. This stupendous sacrifice would be judged morally right even though it was practically certain to achieve no good. We are confronted here with what we may term the fanaticism problem."

Later:

"Aggregative consequentialism is often criticized for being too “coldly numerical” or too revisionist of common morality even in the more familiar finite context. Suppose that I know that a certain course of action, though much less desirable in every other respect than an available alternative, offers a one‐in‐a‐million chance of avoiding catastrophe involving x people, where x is finite. Whatever else is at stake, this possibility will overwhelm my calculations so long as x is large enough. Even in the finite case, therefore, we might fear that speculations about low‐probability‐high‐stakes scenarios will come to dominate our moral decision making if we follow aggregative consequentialism."

Exactly. Utility maximization together with an unbounded utility function necessarily leads to what Nick calls fanaticism. This is the usual use of the term: people call other people fanatics when their utility functions seem to be unbounded.

As Eliezer has pointed out, it is a dangerous sign when many people agree that something is wrong without agreeing why; we see this happening in the case of Pascal's Wager and Pascal's Mugging. In reality, a utility maximizer with an unbounded utility function would accept both. The readers of this blog, being human, are not utility maximizers. But they are unwilling to admit it, because certain criteria of rationality seem to require being one.

Comment by Unknown on Anthropomorphic Optimism · 2008-08-05T12:00:34.000Z · LW · GW

The "mistake" Michael is talking about is the belief that utility maximization can lead to counterintuitive actions, in particular actions that humanly speaking are bound to be useless, such as accepting a Wager or a Mugging.

This is in fact not a mistake at all, but a simple fact (as Carl Shulman and Nick Tarleton suspect.) The belief that it does not is simply a result of Anthropomorphic Optimism as Eliezer describes it; i.e. "This particular optimization process, especially because it satisfies certain criteria of rationality, must come to the same conclusions I do." Have you ever considered the possibility that your conclusions do not satisfy those criteria of rationality?

Comment by Unknown on The Meaning of Right · 2008-07-29T14:45:02.000Z · LW · GW

After thinking more about it, I might be wrong: actually the calculation might end up giving the same result for every human being.

Caledonian: what kind of motivations do you have?

Comment by Unknown on The Meaning of Right · 2008-07-29T13:37:17.000Z · LW · GW

As I've stated before, we are all morally obliged to prevent Eliezer from programming an AI. For according to this system, he is morally obliged to make his AI instantiate his personal morality. But it is quite impossible that the complicated calculation in Eliezer's brain should be exactly the same as the one in any of us: and so by our standards, Eliezer's morality is immoral. And this opinion is subjectively objective, i.e. his morality is immoral and would be even if all of us disagreed. So we are all morally obliged to prevent him from inflicting his immoral AI on us.

Comment by Unknown on Setting Up Metaethics · 2008-07-28T18:27:27.000Z · LW · GW

I vote in favor of banning Caledonian. He isn't just dissenting, which many commenters do often enough. He isn't even trying to be right, he's just trying to say Eliezer is wrong.

Comment by Unknown on When (Not) To Use Probabilities · 2008-07-24T12:01:12.000Z · LW · GW

Eliezer, the money pump results from circular preferences, which should exist according to your description of the inconsistency. Suppose we have a million statements, each of which you believe to be true with equal confidence, one of which is "The LHC will not destroy the earth."

Suppose I am about to pick a random statement from the list of a million, and I will destroy the earth if I happen to pick a false statement. By your own admission, you estimate that there is more than one false statement in the list. You will therefore prefer that I play a lottery with odds of 1 in a million, destroying the earth only if I win.

It makes no difference if I pick a number randomly between one and a million, and then play the lottery mentioned (ignoring the number picked.)

But now suppose instead that I pick a number randomly between one and a million; if I didn't pick the number 500,000, I play the lottery mentioned, while if I did pick the number 500,000, I destroy the earth only if the LHC would destroy the earth. You would prefer this state of affairs to the previous one, since you prefer "destroy the earth if the LHC would destroy the earth" to "destroy the earth with odds of one in a million."

But now I can also substitute some other statement that you hold with equal confidence for the number 499,999, so that if I pick 499,999, instead of playing the lottery, I destroy the earth if this statement is false. You will also prefer this state of affairs, for the same reason: you hold this statement with confidence equal to "The LHC will not destroy the earth."

And so on. It follows that you prefer to go back to the original state of affairs, which constitutes circular preferences and implies a money pump.
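
A toy rendering of the arithmetic behind these swaps, with made-up numbers (the coding of the gambles is only illustrative, not the framing of the original post):

```python
# The point is only that the two stated attitudes cannot both be coherent
# probability assignments, which is what opens the door to a cycle.
from fractions import Fraction

N = 1_000_000
LOTTERY_ODDS = Fraction(1, N)   # "destroy the earth with odds of one in a million"

# Attitude 1: each statement is held false with probability at most 1/N, so
# swapping any single lottery slot for "destroy the earth iff statement i is
# false" never looks worse than keeping the pure lottery.
p = [LOTTERY_ODDS] * N          # hypothetical equal confidence in each statement
each_swap_acceptable = all(pi <= LOTTERY_ODDS for pi in p)

# Attitude 2: the list is expected to contain more than one false statement,
# so picking a random statement and destroying the earth iff it is false is
# judged strictly worse than the pure lottery.
expected_false = N * LOTTERY_ODDS   # equals sum(p), since every entry is equal
random_statement_worse = expected_false > 1

print(each_swap_acceptable)     # True
print(expected_false)           # exactly 1 -- not "more than one"
print(random_statement_worse)   # False: the two attitudes cannot both hold
```

With every statement held at exactly lottery-level confidence, the expected number of falsehoods is exactly one, not more than one; pushing any statement's probability of falsehood above one in a million breaks the corresponding swap. Acting on both attitudes anyway is what produces the circular preferences.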

Comment by Unknown on When (Not) To Use Probabilities · 2008-07-24T05:18:06.000Z · LW · GW

Eliezer, you are thinking of Utilitarian (also begins with U, which may explain the confusion.) See http://utilitarian-essays.com/pascal.html

I'll get back to the other things later (including the money pump.) Unfortunately I will be busy for a while.

Comment by Unknown on When (Not) To Use Probabilities · 2008-07-24T03:17:55.000Z · LW · GW

Can't give details, there would be a risk of revealing my identity.

I have come up with a hypothesis to explain the inconsistency. Eliezer's verbal estimate of how many similar claims he could make while being wrong on average only once is actually his best estimate of his subjective uncertainty. How he would act in relation to the lottery reflects that estimate as influenced by overconfidence bias. This is an interesting hypothesis because it would provide a measurement of his overconfidence. For example, which would he stop: the "Destroy the earth if God exists" lottery, or "Destroy the earth at odds of one in a trillion"? How about a quadrillion? A quintillion? A googolplex? One in Graham's number? At some point Eliezer will have to prefer to turn off the God lottery, and comparing this to something like one in a billion, his verbal estimate, would tell us exactly how overconfident he is.

Since the inconsistency would allow Eliezer to become a money-pump, Eliezer has to admit that some irrationality must be responsible for it. I assign at least a 1% chance to the possibility that the above hypothesis is true. Given even such a chance, and given Eliezer's work, he should come up with methods to test the hypothesis, and if it is confirmed, he should change his way of acting in order to conform with his actual best estimate of reality, rather than his overconfident estimate of reality.

Unfortunately, if the hypothesis is true, by that very fact, Eliezer is unlikely to take these steps. Determining why can be left as an exercise to the reader.

Comment by Unknown on When (Not) To Use Probabilities · 2008-07-23T15:06:01.000Z · LW · GW

Recently I did some probability calculations, starting with "made-up" numbers, and updating using Bayes' Rule, and the result was that something would likely happen which my gut said most firmly would absolutely not, never, ever, happen.

I told myself that my probability assignments must have been way off, or I must have made an error somewhere. After all, my gut couldn't possibly be so mistaken.

The thing happened, by the way.

This is one reason why I agree with RI, and disagree with Eliezer.
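
For what it's worth, the general phenomenon is easy to reproduce with invented numbers (these are not the figures from the calculation described above): a modest prior plus a few strong likelihood ratios yields a posterior that the gut flatly refuses to accept.

```python
# A generic illustration with invented numbers (not the calculation
# described above): Bayes' Rule in odds form.

def update_odds(prior_odds, likelihood_ratio):
    """One application of Bayes' Rule in odds form."""
    return prior_odds * likelihood_ratio

prior_prob = 0.05                  # a made-up "gut-acceptable" prior
odds = prior_prob / (1 - prior_prob)

for lr in (10, 10, 10):            # three pieces of evidence, each 10:1 in favor
    odds = update_odds(odds, lr)

posterior = odds / (1 + odds)
print(round(posterior, 4))         # ~0.9814 -- far past what the gut will allow
```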

Comment by Unknown on Touching the Old · 2008-07-20T18:22:49.000Z · LW · GW

I've touched things a few thousand years old. But I think I get more psychological effect from just looking at a bird, for example, and thinking of its ancestors flying around in the time of the dinosaurs.

Comment by Unknown on Could Anything Be Right? · 2008-07-19T14:45:17.000Z · LW · GW

I've mentioned in the past that human brains evaluate moral propositions as "true" and "false" in the same way as other propositions.

It's true that there are possible minds that do not do this. But the first AI will be programmed by human beings who are imitating their own minds. So it is very likely that this AI will evaluate moral propositions in the same way that human minds do, namely as true or false. Otherwise it would be very difficult for human beings to engage this AI in conversation, and one of the goals of the programmers would be to ensure that it could converse.

This is why, as I've said before, programming an AI does not require an understanding of morality; it just requires enough knowledge to program general intelligence. And this is what is going to actually happen, in all probability; the odds that Eliezer's AI will be the very first AI are probably less than 1 in 1000, given the number of people trying.

Comment by Unknown on Could Anything Be Right? · 2008-07-18T17:24:00.000Z · LW · GW

Poke, in the two sentences:

"You should open the door before attempting to walk through it."

"You should not murder."

The word "should" means EXACTLY the same thing. And since you can understand the first claim, you can understand the second as well.

Comment by Unknown on The Gift We Give To Tomorrow · 2008-07-17T13:06:51.000Z · LW · GW

Mike Blume: "Intelligence is a product of structure, and structure comes from an ordering of lower levels."

I agree with that (at least for the kind of intelligence we know about), but the structure rests on universal laws of physics: how did those laws get to be universal?

Comment by Unknown on The Gift We Give To Tomorrow · 2008-07-17T07:14:56.000Z · LW · GW

We might be living in a simulation. If we are, then as Eliezer pointed out himself, we have no idea what kind of physics exist in the "real world." In fact, there is no reason to assume any likeness at all between our world and the real world. For example, the fundamental entities in the real world could be intelligent beings, instead of quarks. If so, then there could be some "shadowy figure" after all. This might be passing the buck, but at least it would be passing it back to somewhere where we can't say anything about it anymore.

Comment by Unknown on Fundamental Doubts · 2008-07-12T18:19:01.000Z · LW · GW

Poke, if anyone could give credence to the idea that "science is just another religion", it would be you.

Comment by Unknown on Fundamental Doubts · 2008-07-12T14:37:13.000Z · LW · GW

"Very Bad Sign #243: Many people all agree that a statement is definitely true but they can't agree on why and/or what the statement means."

Do you mean very bad for the person rejecting the statement? Such a consensus would seem to suggest that there are many reasons for affirming its truth, and many different ways it can be interpreted which are still true.

Comment by Unknown on My Kind of Reflection · 2008-07-11T18:22:10.000Z · LW · GW

Michael Vassar: instead of arguing about the meaning of "honest" or "dishonest", do you think it is possible for a person to know by introspection whether or not he has "really been trying" to get at the truth about something?

If it is, then people still shouldn't disagree: the one who knows that he hasn't been trying to get at the truth should just admit it, and accept the position of the other guy as more reasonable.

If it isn't, then your account does not supply an argument against Robin Hanson (which I take it you thought that it does.)

Comment by Unknown on Where Recursive Justification Hits Bottom · 2008-07-09T04:58:53.000Z · LW · GW

Peter Turney: yes, I define Occam's Razor in such a way that all orderings of the hypotheses are Occamian.

The razor still cuts, because in real life, a person must choose some particular ordering of the hypotheses. And once he has done this, the true hypothesis must fall relatively early in the series, namely after a finite number of other hypotheses, and before an infinite number of other hypotheses. The razor cuts away this infinite number of hypotheses and leaves a finite number.

Comment by Unknown on Where Recursive Justification Hits Bottom · 2008-07-09T03:12:30.000Z · LW · GW

In fact, an anti-Occam prior is impossible. As I've mentioned before, as long as you're talking about anything that has any remote resemblance to something we might call simplicity, things can decrease in simplicity indefinitely, but there is a limit to increase. In other words, you can only get so simple, but you can always get more complicated. So if you assign a one-to-one correspondence between the natural numbers and potential claims, it follows of necessity that as the natural numbers go to infinity, the complexity of the corresponding claims goes to infinity as well. And if you assign a probability to each claim, while making your probabilities sum to 1, then the probability of the more and more complex claims will go to 0 in the limit.
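
In symbols, the argument looks roughly like this (a sketch: the enumeration is arbitrary, and the only assumption about the complexity measure K is that each complexity bound is exceeded by all but finitely many claims):

```latex
% Claims enumerated c_1, c_2, \ldots with prior p(c_n) and complexity K(c_n).
\begin{align*}
\sum_{n=1}^{\infty} p(c_n) = 1
  \;&\Longrightarrow\; p(c_n) \to 0 \ \text{as } n \to \infty, \\
\#\{\, n : K(c_n) \le B \,\} < \infty \ \text{for every bound } B
  \;&\Longrightarrow\; K(c_n) \to \infty \ \text{as } n \to \infty.
\end{align*}
```

Put together: however the claims are ordered, they eventually grow arbitrarily complex while their prior probabilities shrink toward zero.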

In other words, Occam's Razor is a logical necessity.

Comment by Unknown on Where Recursive Justification Hits Bottom · 2008-07-08T17:22:36.000Z · LW · GW

I don't see why we shouldn't admit that the more times the sun rises, the more likely it is not to rise the next time... In general life insurance agencies assume that the more days you have lived, the more likely you are to die the next day.

Comment by Unknown on Will As Thou Wilt · 2008-07-07T13:31:57.000Z · LW · GW

First interpretation is true; third interpretation is partially true. Second interpretation false, fourth interpretation mainly false (because people can execute a process which will change their desires in some unforeseeable manner.)

Comment by Unknown on Is Morality Given? · 2008-07-06T18:27:38.000Z · LW · GW

I've thought about Space Cannibals and the like before (i.e. creatures that kill one of the sexes during sexual reproduction). My suspicion is that even if such creatures evolved and survived, by the time they had a civilization, many would be saying to one another, "There really should be a better way..."

Evidence for this is the fact that even now, there are many human beings claiming it is wrong to kill other animals, despite the fact that humans evolved to kill and eat other animals. Likewise, in the ancestral environment, various tribes usually did kill each other rather than cooperate. But this didn't stop them from beginning to cooperate. So I suspect that Space Cannibals would do something similar. And in any case, I would fully admit that murder couldn't in fact be wrong for the Space Cannibals in the same way it is for us, even if there is an external moral truth.

In answer to Robin's question, assuming that morality exists, it probably has a number of purposes. And if one of the purposes is to preserve things in existence (i.e. moral truths correspond roughly with what is necessary to preserve things), then of course there will be a selection pressure to perceive moral truth. The disclaimer should not be needed, but this is not in any way a claim that it is moral to maximize inclusive genetic fitness.

Comment by Unknown on The Bedrock of Fairness · 2008-07-03T18:43:08.000Z · LW · GW

Eliezer: as you are aware yourself, we don't know how to compute it, nor how to run a computation that computes how to compute it. If we leave it up to the superintelligence to decide how to interpret "helping" and "hurting," it will be in a position no worse than our own, and possibly better, seeing that we are not superintelligent.

Comment by Unknown on Created Already In Motion · 2008-07-01T18:09:43.000Z · LW · GW

"Fuzzle" = "Morally right."

Only, in terms of how this actually gets into a human mind, there is a dynamic first: before anyone has any idea of fuzzleness, things are already being sent to the action system. Then we say, "Oh, these things are fuzzle!", i.e. these are the type of things that get sent to the action system. Then someone else tells us that something else is fuzzle, and right away it gets sent to the action system too.

Comment by Unknown on The Moral Void · 2008-07-01T03:03:55.000Z · LW · GW

There's no particular need to renew the torture and dust specks debate, so I'll just point out that GBM, Nominull, Ian C., and Manon de Gaillande have all made similar points: if you say, "if there is an external objective morality that says you should kill babies, why should you listen?" the question is the same as "if you should kill babies, why should you do it?"

Yes, and if 2 and 2 make 5, why should I admit it?

It isn't in fact true that I should kill babies, just as 2 and 2 don't make 5. But if I found out that 2 and 2 do make 5, of course I should admit it, and if I found out that I should kill babies, of course I should do it. As Nominull says, Eliezer's objection to this is an objection to reason itself: if an argument establishes conclusively something you happen not to like, you should reject the conclusion.

Comment by Unknown on The Moral Void · 2008-06-30T09:05:39.000Z · LW · GW

"I mean... if an external objective morality tells you to kill babies, why should you even listen?"

This is an incredibly dangerous argument. Consider this: "I mean... if some moral argument, whatever the source, tells me to prefer 50 years of torture to any number of dust specks, why should I even listen?"

And we have seen many who literally made this argument.

Comment by Unknown on What Would You Do Without Morality? · 2008-06-29T14:28:14.000Z · LW · GW

I just had another idea: maybe I would begin to design an Unfriendly AI. After all, being an evil genius would at least be fun, and besides, it would be a way to get revenge on Eliezer for proving that morality doesn't exist.

Comment by Unknown on What Would You Do Without Morality? · 2008-06-29T14:07:45.000Z · LW · GW

It seems people are interpreting the question in two different ways: one, that we don't have any desires anymore, and therefore take no actions; the other, more natural, way being that "moral philosophy" and "moral claims" have no meaning or are all false. The first way of interpreting the question is useless, and I guess Eliezer intended the second.

Most commenters are saying that it would make no difference to them. My suspicion is that this is true, but mainly because they already believe that moral claims are meaningless or false.

Possibly (I am not sure of this) Eliezer hopes that everyone will answer in this way, so that he can say that morality is unnecessary.

Personally, I agree with Dynamically Linked. I would start out by stealing wallets and purses, and it would just go downhill from there. In other words, if I didn't believe that such things were wrong, the bad feeling that results from doing them, and the idea that it hurts people, wouldn't be strong enough to stop me, and once I got started, the feeling would go away too-- this much I know from the experience of doing wrong. And once I had changed the way I feel about these things, the way I feel about other things (too horrible to mention at the moment) would begin to change too. So I can't really tell where it would end, but it would be bad (according to my present judgment).

There are others who would follow or have followed the same course. TGGP says that over time his life did change after he ceased to believe in morality, and at one point he said that he would torture a stranger to avoid stubbing his toe, which presumably he would not have done when he believed in morality.

So if it is the case that Eliezer hoped that morality would prove unnecessary to prevent such things, his hope is in vain.

Comment by Unknown on [deleted post] 2008-06-28T12:20:18.000Z

If the Neanderthals or some similar species had survived until the present day, presumably there would be a similar case. Probably Neanderthals had brainware quite different from ours (although quite possibly not as different as the difference between women and men, since this difference goes back a lot longer.)

Comment by Unknown on No Universally Compelling Arguments · 2008-06-27T03:29:45.000Z · LW · GW

TGGP, the evidence is that Eliezer suggested the reason to avoid this error is to avoid converting to Christianity. Presumably the real reason to avoid the error (if it is one, which he hasn't shown convincingly yet) is to avoid turning the universe into paperclips.

Comment by Unknown on The Design Space of Minds-In-General · 2008-06-26T16:24:04.000Z · LW · GW

In regard to AIXI: One should consider more carefully the fact that any self-modifying AI can be exactly modeled by a non-self modifying AI.

One should also consider the fact that no intelligent being can predict its own actions-- this is one of those extremely rare universals. But this doesn't mean that it can't recognize itself in a mirror, despite its inability to predict its actions.

Comment by Unknown on No Universally Compelling Arguments · 2008-06-26T14:53:11.000Z · LW · GW

Just to be clear, as far as I can remember after reading every post on OB, no one else has posted specifically under the title "Unknown." So there's only one of me.

Comment by Unknown on No Universally Compelling Arguments · 2008-06-26T14:17:52.000Z · LW · GW

Prase, I think I would agree with that. But what Eliezer isn't quite seeing is that even if mind-space in general is completely arbitrary, people programming an AI aren't going to program something completely arbitrary. They're going to program it to use assumptions and ways of argument that they find acceptable, and so it will also draw conclusions that they find acceptable, even if it does this better than they do themselves.

Also, Eliezer's conclusion, "And then Wright converted to Christianity - yes, seriously. So you really don't want to fall into this trap!" seems to suggest that a world where the AI converts everyone to Christianity is worse than a world that the AI fills with paperclips, by suggesting that converting to Christianity is the worst thing that can happen to you. I wonder if Eliezer really believes this, and would rather be made into paperclips than into a Christian?

Comment by Unknown on No Universally Compelling Arguments · 2008-06-26T13:28:46.000Z · LW · GW

Excuse me, that's 42, not 49.

Comment by Unknown on No Universally Compelling Arguments · 2008-06-26T13:06:57.000Z · LW · GW

Roko is basically right. In a human being, the code that is executing when we try to decide what is right or what is wrong is the same type of code that executes when we try to decide how much 6 times 7 is. The brain has a general pattern signifying "correctness," whatever that may be, and it uses this identical pattern to evaluate "6 times 7 is 49" and "murder is wrong."

Of course you can ask why the human brain matches "murder is wrong" to the "correctness" pattern, and you might say that it is arbitrary (or you might not.) Either way, if we can program an AGI at all, it will be able to reason about ethical issues using the same code that it uses when it reasons about matters of fact. It is true that it is not necessary for a mind to do this. But our mind does it, and doubtless the first mind-programmers will imitate our minds, and so their AI will do it as well.

So it is simply untrue that we have to give the AGI some special ethical programming. If we can give it understanding, packaged into this is also understanding of ethics.

Naturally, as Roko says, this does not imply the existence of any ghost, anymore than the fact that Deep Blue makes moves unintelligible to its programmers implies a ghost in Deep Blue.

This also gives some reason for thinking that Robin's outside view of the singularity may be correct.

Comment by Unknown on The Design Space of Minds-In-General · 2008-06-25T18:07:08.000Z · LW · GW

Phil Goetz was not saying that all languages have the word "the." He said that the word "the" is something every ENGLISH document has in common. His criticism is that this does not mean that Hamlet is more similar to an English restaurant menu than an English novel is to a Russian novel. Likewise, Eliezer's argument does not show that we are more like petunias than like an AI.

Comment by Unknown on Surface Analogies and Deep Causes · 2008-06-23T12:28:07.000Z · LW · GW

Caledonian, I didn't say that the Razor leads to the conclusion that "it is more probable that two things which share a property are identical than not." The Razor leads to the conclusion that "the two things are identical" is more likely than some other specific hypothesis that they are not identical in some specific way.

There are of course an infinite number of ways in which two things can fail to be identical, so in order to compare the probability that the two are identical with the probability that they are not, we have to sum the probabilities for all the ways they could fail to be identical; and thus the conclusion will be that they are more likely not identical than identical, as you correctly stated.
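
Stated in symbols, with D the set of specific, mutually exclusive ways of differing:

```latex
\[
  P(X \equiv Y) > P(d) \ \text{for every } d \in D
  \qquad\text{is consistent with}\qquad
  P(X \equiv Y) < \sum_{d \in D} P(d) = P(X \not\equiv Y).
\]
```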

If you look back, though, you will see that I never said anything opposed to this anyway.

Comment by Unknown on Optimization and the Singularity · 2008-06-23T07:43:24.000Z · LW · GW

Eliezer, if AGI (or something else) ends up being designed without Friendliness, and Robin turns out to be right that this has no major harmful consequences, and about the doubling time, would you admit that his argument might be better, or would you say that he had been lucky?

Comment by Unknown on Surface Analogies and Deep Causes · 2008-06-23T05:47:22.000Z · LW · GW

Correction: in my last comment it should have been "if more complex claims, on average, are more probable than simpler claims," not "if more probable claims, on average, are more probable than simpler claims".

Comment by Unknown on Surface Analogies and Deep Causes · 2008-06-23T02:35:02.000Z · LW · GW

Cyan: "Minimum description length" works for English and probably most other languages as well, including abstract logical languages. Increase the number of properties enough, and it will definitely work for any language.

Caledonian: the Razor isn't intended to prove anything; it is intended to give an ordering of the probability of various accounts. Suppose we have 100 properties, numbered from one to a hundred. X has properties #1 through #100. Y has property #1. Which is more likely: that Y has properties #1 through #100 as well, or that Y has property #1, all prime-numbered properties except #17, and property #85? I think it is easy enough to see which of these is simpler and more likely to be true.

Peter Turney: the argument for the Razor is that on average, more complicated claims must be assigned a lower prior probability than simpler claims. If you assign prior probabilities at all, this is necessary on average, no matter how you define simplicity. The reason is that according to any definition of simplicity that corresponds even vaguely with the way we use the word, you can't get indefinitely simpler, but you can get indefinitely more complicated. So if all your probabilities are equal, or if more probable claims, on average, are more probable than simpler claims, your prior probabilities will not add to 1, but to infinity.
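
The uniform case can be put in one line (again, only a sketch):

```latex
\[
  p(c_n) \ge \varepsilon > 0 \ \text{for all } n
  \;\Longrightarrow\;
  \sum_{n=1}^{\infty} p(c_n) \ \ge\ \sum_{n=1}^{\infty} \varepsilon \ =\ \infty .
\]
```

So any prior that actually sums to 1 must assign ever-smaller probabilities somewhere along the enumeration, and hence, on average, to the more complex claims.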

Comment by Unknown on Surface Analogies and Deep Causes · 2008-06-22T19:27:23.000Z · LW · GW

I doubt it particularly matters which precise measure of simplicity I use, probably any reasonable measure will do. Consider the same with one hundred properties: X has properties 1 through 100. If Y has properties 12, 14, 15, 27, 28, 29, 43, 49, 62, 68, 96, and 100, but no others, then it will take more bits to say which properties X and Y have, than the number of bits it will take to specify that X and Y share all the same properties.
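
To put rough numbers on this comparison (using the twelve-property subset listed above; the coding scheme is only a crude illustration, not a canonical minimum-description-length code):

```python
# Back-of-the-envelope description lengths for the 100-property example.
from math import comb, log2

N_PROPERTIES = 100
Y_PROPERTIES = {12, 14, 15, 27, 28, 29, 43, 49, 62, 68, 96, 100}

# The description of X (all 100 properties) is the same under either
# hypothesis, so it drops out of the comparison.

# Hypothesis 1: "Y has exactly the same properties as X."
# Once X is described, this needs only a short fixed flag -- call it 1 bit.
bits_same = 1

# Hypothesis 2: "Y has this particular scattered subset of the properties."
# Specifying an arbitrary 12-element subset of 100 properties costs about
# log2(C(100, 12)) bits, plus a few more to state the subset's size.
bits_subset = log2(comb(N_PROPERTIES, len(Y_PROPERTIES)))

print(round(bits_subset, 1))    # ~49.9 bits
print(bits_subset > bits_same)  # True: the "same properties" story is shorter
```

Under any remotely similar accounting, "Y is just like X" is the cheaper description, which is the sense in which it is the simpler hypothesis.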

Of course, this seems to support Guest's argument; and yes, once we see that X and Y share a property, the simplest hypothesis is that they are the same. Of course this can be excluded by additional evidence.

Comment by Unknown on Surface Analogies and Deep Causes · 2008-06-22T18:59:27.000Z · LW · GW

"But there is just no law which says that if X has property A and Y has property A then X and Y must share any other property."

"X & Y both have properties A & B" is logically simpler than "X & Y have property A, X has B, and Y does not have B"

So if X and Y share property A, and X has B, this is evidence, by Ockham's razor, that Y has property B.

Comment by Unknown on The Outside View's Domain · 2008-06-21T16:05:25.000Z · LW · GW

Peter de Blanc: see http://www.overcomingbias.com/2007/07/beware-the-insi.html, posted by Robin Hanson. In particular : "Most, perhaps all, ways to overcome bias seem like this. In the language of Kahneman and Lovallo's classic '93 paper, we allow an outside view to overrule an inside view... If overcoming bias comes down to having an outside view overrule an inside view, then our questions become: what are valid outside views, and what will motivate us to apply them?"

What do you think this means, if not that overcoming bias means taking outside views?

Comment by Unknown on The Outside View's Domain · 2008-06-21T14:52:28.000Z · LW · GW

The implied disagreement here between the "inside view" of "outside views" (i.e. a limited domain) and the "outside view" of "outside views" (i.e. something that applies in general) is the same as Eliezer's disagreement with Robin about the meaning of Aumann.

If Robin is right, then Eliezer is against overcoming bias in principle, since this would be taking an outside view (according to Robin's understanding). Of course, if Eliezer is right, it just means that Robin is biased against inside views. Each of these consequences is very strange; if Robin is right, Eliezer is in favor of bias despite posting on a blog on overcoming bias, while if Eliezer is right, Robin is biased against his own positions, among other things.