Comments

Comment by Ian_Maxwell on Normal Ending: Last Tears (6/8) · 2009-02-04T19:02:23.000Z · LW · GW

Has anyone else noticed that in this particular 'compromise', the superhappies don't seem to be actually sacrificing anything?

I mean, their highest values are being ultra super happy and having sex all the time, and they still get to do that. It's not as if they had wanted to avoid creating literature or eating hundreds of pseudochildren. Whereas humans will no longer get to feel frustrated or exhausted, and babyeaters will no longer get to eat real children.

I don't think the superhappies are quite as fair-minded as Akon thought. They agreed to take on traits of humanity and babyeating in an attempt to placate everyone, not because it was a fair trade.

Comment by Ian_Maxwell on Worse Than Random · 2008-11-12T04:02:05.000Z · LW · GW

@Mike Plotz: It's true that you can't do better than random in predicting (theoretical nonphysical) coin tosses, but you also can't do worse than random. As Eliezer pointed out, the claim isn't "it is always possible to do better than random", but "any algorithm which can be improved by adding randomness, can be improved even more without adding randomness."
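
A throwaway sketch of the fair-coin case (my own construction, nothing from the post): against genuinely unpredictable fair flips, a fixed guess and a randomized guess both converge to fifty percent hits, so randomizing buys you nothing and costs you nothing.

```python
import random

random.seed(0)
flips = [random.choice("HT") for _ in range(100_000)]  # fair, unpredictable coin

# A deterministic "algorithm": always guess heads.
deterministic_hits = sum(flip == "H" for flip in flips)

# A randomized "algorithm": guess by flipping your own coin.
randomized_hits = sum(flip == random.choice("HT") for flip in flips)

print(deterministic_hits / len(flips))  # ~0.5
print(randomized_hits / len(flips))     # ~0.5 -- no better, no worse
```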

Comment by Ian_Maxwell on Psychic Powers · 2008-09-15T02:30:59.000Z · LW · GW

@Ken: I am interested in your claim. You can understand that your personal testimony is not really enough to convince anyone, but I will assume that you are posting in good faith and are serious about (dis)proving your psychic abilities to your own satisfaction.

You may wish to attempt the following modification of the rock-paper-scissors experiment: Your wife (or another party) will roll a six-sided die. On a 1-2, she will throw rock; 3-4, she will throw paper; 5-6, she will throw scissors. In this way, her throw will be entirely random (and so not predictable through ordinary mental reasoning), and yet she will know in advance what she plans to throw (and so it will be predictable given sufficient access to her inner mental state). If over a large number of trials you are able to guess her throws substantially more often than the one-in-three rate expected by chance, you are probably onto something.
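
For concreteness, here is one way such an experiment might be scored (the trial count and the code are entirely my own invention): simulate the die-driven throws, count correct guesses, and ask how improbable that count would be for someone guessing at the chance rate of one in three.

```python
import random
from math import comb

MOVES = {1: "rock", 2: "rock", 3: "paper", 4: "paper", 5: "scissors", 6: "scissors"}

def run_trials(n, guesser):
    """Simulate n rounds of the die-based protocol; return the number of correct guesses."""
    hits = 0
    for _ in range(n):
        throw = MOVES[random.randint(1, 6)]  # the thrower's move, fixed by the die
        hits += (guesser() == throw)
    return hits

def tail_probability(n, k, p=1/3):
    """P(at least k hits in n rounds) for a guesser performing exactly at chance."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

random.seed(1)
n = 300
hits = run_trials(n, guesser=lambda: random.choice(["rock", "paper", "scissors"]))
print(hits, "hits out of", n)     # roughly 100 expected by pure chance
print(tail_probability(n, hits))  # large => nothing surprising happened
print(tail_probability(n, 130))   # well under 0.001: 130/300 would be worth taking seriously
```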

Comment by Ian_Maxwell on The Comedy of Behaviorism · 2008-08-04T13:59:57.000Z · LW · GW

Eliezer, to steal one of your phrases: You know, you're right.

That said, I was already quite willing to call Watson mistaken. He was mistaken about other things---in particular, he latched onto classical conditioning and treated it as the One Simple Principle That Can Explain All Behavior---so it's not terrifically surprising. One gets the impression that he was primarily interested in making a name for himself.

Amusingly, Skinner gets most of the flak for the sort of ridiculosity that Watson espoused, even though he explicitly stated in his monographs that internal mental life exists (in particular, he stated that it is a type of behavior, not an explanation for behavior).

Comment by Ian_Maxwell on The Comedy of Behaviorism · 2008-08-03T15:57:23.000Z · LW · GW

I agree that this post's introduction to behaviorism is no more than a common mischaracterization. It is the sort of mischaracterization that has spread farther than the original idea, to the point that psychology textbooks (which are more often than not terribly inaccurate) repeat the error and psychology graduates write Wikipedia articles saying that "Behaviorists believe consciousness does not exist".

Behaviorism is a methodology, not a hypothesis. It is the methodology that attempts to explain behavior without recourse to internal mental states. The basis for this approach is that internal mental states can only be inferred from behavior in the first place, so that they offer no additional predictive power. That said, it may turn out that a certain class of behaviors tend to lump together, and there would be no problem in labelling these "angry behaviors" or "vengeful behaviors" and describing an organism as "angry" when it exhibits angry behaviors. A behaviorist will not hypothesize that there is an internal angry feeling corresponding to this angry state. He will not hypothesize that there is not an internal angry feeling corresponding to this angry state. He will not hypothesize about internal feelings at all, because he has no way of testing his hypothesis if he does.

It may be that modern neuroscience makes certain "internal explanations" testable after all. This does not make behaviorism a bad methodology! It works quite well if you don't happen to have an MRI scanner on hand. It works a lot better than ascribing a subject's lashing out to "rage" and then, when asked how you know he's enraged, saying, "Because he's lashing out."

Comment by Ian_Maxwell on Fundamental Doubts · 2008-07-12T15:24:37.000Z · LW · GW

I had thought of that particular solution to the plot hole. In fact, however, most violations of thermodynamics and other physical laws seem to occur within the Matrix, not outside. That is, the rules of the Matrix do not add up to normality.

There actually is a cover in the movie, though: the human energy source is "combined with a source of fusion". This is, as one review put it, like elaborately explaining how a 747 is powered by rubber bands and then mentioning that this is combined with four jet engines.

Comment by Ian_Maxwell on Timeless Physics · 2008-07-03T15:36:00.000Z · LW · GW

If I understand this model correctly, it has the consequence that from a typical point in the configuration space there are not only many futures (i.e. paths starting at this point, along which entropy is strictly increasing), but many pasts (i.e. paths starting at this point, along which entropy is strictly decreasing). Does this sound correct?

Comment by Ian_Maxwell on Artificial Addition · 2008-06-25T15:52:40.000Z · LW · GW

Bog: You are correct. That is, you do not understand this article at all. Pay attention to the first word, "Suppose..."

We are not talking about how calculators are designed in reality. We are discussing how they are designed in a hypothetical world where the mechanism of arithmetic is not well-understood.

Comment by Ian_Maxwell on Artificial Addition · 2008-06-12T20:38:54.000Z · LW · GW

This old post led me to an interesting question: will AI find itself in the position of our fictional philosophers of addition? The four basic functions of arithmetic are so fundamental to the operation of the digital computer that an intelligence built on digital circuitry might well have no idea how it adds numbers together (unless told by a computer scientist, of course).
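
For illustration, here is the sort of mechanism that sits below introspection: a textbook ripple-carry adder, sketched in Python. (This is standard digital-logic material, not anything from the post.)

```python
def full_adder(a, b, carry_in):
    """One-bit full adder built from the usual gate identities."""
    s = a ^ b ^ carry_in
    carry_out = (a & b) | (carry_in & (a ^ b))
    return s, carry_out

def ripple_add(x, y, width=8):
    """Add two small nonnegative ints by rippling a carry through `width` one-bit adders."""
    result, carry = 0, 0
    for i in range(width):
        bit, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= bit << i
    return result

print(ripple_add(37, 18))  # 55
```

An intelligence running on hardware like this could use it constantly without ever having inspected it, much as we add small numbers without consulting our neurons.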

Comment by Ian_Maxwell on Timeless Identity · 2008-06-03T13:52:08.000Z · LW · GW

This argument makes no sense to me:

If you've been cryocrastinating, putting off signing up for cryonics "until later", don't think that you've "gotten away with it so far". Many worlds, remember? There are branched versions of you that are dying of cancer, and not signed up for cryonics, and it's too late for them to get life insurance.

This is only happening in the scenarios where I didn't sign up for cryonics. In the ones where I did sign up, I'm safe and cozy in my very cold bed. These universes don't exist contingent on my behavior in this one; what possible impact could my choice here to sign up for cryonics have on my alternate-universe Doppelgängers?

Comment by Ian_Maxwell on Three Dialogues on Identity · 2008-04-22T13:02:25.000Z · LW · GW

It seems to me that there is an important distinction between these scenarios. Of course, it could be that I'm just not enlightened enough to see the total similarity.

In the first scenario, 'you' are at least attempting to explain yourself to the shaman. In fact, you have answered, both literally with "yes" and to the shaman's intent by explaining. That he does not believe your explanation is a separate matter.

In the second scenario, I imagine your literal answer to John would be "no"---because there is no such thing as "same stuff" anyway. Why, then, didn't you at any point tell him "no" or "there is no such thing as 'same stuff' anyway"? If John refused to believe your explanation, this would of course be similar to the first case.

In the third scenario, 'Eliezer' has refined his question to this point: "I want to know if the lower levels of organization underlying the banana have a substantially different structure than before, and whether the causal relation between that structure and my subjective experience has changed in style." What in the world is ill-defined in this question? What word do we have to taboo? (Perhaps 'structure', perhaps 'subjective experience'?) It seems deserving of a straight answer to me.

(One possibility is that you are suggesting future advances in understanding, so that you really don't know what could be ill-defined about such a question---you are just saying in general that seemingly commonsense ideas may not be as solid as they appear. In that case, it's hard to object, but it would be nice if I could imagine knowledge that would make me believe 'Eliezer' and John weren't asking real questions.)

Comment by Ian_Maxwell on Distinct Configurations · 2008-04-12T16:44:17.000Z · LW · GW

This is the first clear explanation of the phenomenon of quantum entanglement that I have ever read (though I gather it's still a simplification since we're assuming the mirrors aren't actually made out of particles like everything else). I have never really understood this phenomenon of "observation", but suddenly it's obvious why it should make a difference. Thank you.

Comment by Ian_Maxwell on Hand vs. Fingers · 2008-03-30T13:48:09.000Z · LW · GW

I agree with some others that Eliezer is here arguing against a fairly naïve form of anti-reductionism, and indeed is explaining rather than refuting it. However, I assume, Eliezer, that the point of your entry is (in keeping with the theme of the blog) to illustrate a certain sort of bias through its effects, rather than to prove to everyone that reductionism is really truly true. So explanation over refutation is entirely appropriate here.

Comment by Ian_Maxwell on The "Intuitions" Behind "Utilitarianism" · 2008-01-29T01:08:01.000Z · LW · GW

If harm aggregates less-than-linearly in general, then the difference between the harm caused by 6271 murders and that caused by 6270 is less than the difference between the harm caused by one murder and that caused by zero. That is, it is worse to put a dust mote in someone's eye if no one else has one, than it is if lots of other people have one.
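
A toy calculation (my own, with an arbitrary concave curve standing in for "less-than-linear" aggregation) shows how large the gap becomes:

```python
import math

def aggregate_harm(n):
    """A stand-in sublinear (concave) aggregation rule; the exact shape is arbitrary."""
    return math.sqrt(n)

print(aggregate_harm(1) - aggregate_harm(0))        # 1.0
print(aggregate_harm(6271) - aggregate_harm(6270))  # ~0.0063 -- the marginal murder barely registers
```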

If relative utility is as nonlocal as that, it's entirely incalculable anyway. No one has any idea of how many beings are in the universe. It may be that murdering a few thousand people barely registers as harm, because eight trillion zarquons are murdered every second in Galaxy NQL-1193. However, Coca-Cola is relatively rare in the universe, so a marginal gain of one Coca-Cola is liable to be a far more weighty issue than a marginal loss of a few thousand individuals.

(This example is deliberately ridiculous.)

Comment by Ian_Maxwell on The Allais Paradox · 2008-01-20T05:34:48.000Z · LW · GW

"Nainodelac and Tarleton Nick": This is not about risk aversion. I agree that if it is vital to gain at least $20,000, 1A is a superior choice to 1B. However, in that case, 2A is also a superior choice to 2B. The error is not in preferring 1A, but in simultaneously preferring 1A and 2B.

Comment by Ian_Maxwell on One Life Against the World · 2007-12-01T03:56:33.000Z · LW · GW

I don't see the relevancy of Mr. Burrows' statement (correct, of course) that "Very wealthy people give less, as a percentage of their wealth and income, than people of much more limited means. For wealthy philanthropists, the value from giving may be in status from the publicity of large gifts."

This is certainly of concern if our goal is to maximize the virtue of rich people. If it is to maximize general welfare, it is of no concern at all. The recipients of charity don't need a percentage's worth of food, but a certain absolute amount.