Comments

Comment by Mike_Blume on Markets are Anti-Inductive · 2009-02-26T20:50:06.000Z · LW · GW

Jeremy: we're drifting from the topic, but I don't believe the Final Fantasy games are produced, distributed, or sold by Sony. Thus the decision to release FF for multiple platforms was not a decision made by Sony, simply one which affected Sony.

Comment by Mike_Blume on Markets are Anti-Inductive · 2009-02-26T06:01:18.000Z · LW · GW

John: I don't think Eliezer's saying that a stock that has recently risen is now more likely to fall. Quite the opposite in fact. Any given stock should be about as likely to fall as to rise, at least if we weight by the amount of the rise. That is, if I hold a share of XYZ, which costs $100, and I anticipate a 99% chance that the stock will rise to $101 tomorrow, then I should also expect a 1% chance that the stock will drop to $1 tomorrow. Were that not true, the share would be worth nearly $101 right now, not tomorrow.

See also: Conservation of Expected Evidence
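
A minimal sketch of the arithmetic, run with the same illustrative numbers as the example above:

```python
# If today's $100 price equals tomorrow's expected price, a 99% chance of
# rising to $101 forces the remaining 1% branch down to about $1.
price_today = 100.0
p_up, value_up = 0.99, 101.0
p_down = 1.0 - p_up

value_down = (price_today - p_up * value_up) / p_down
print(value_down)  # ~1.0: the low-probability branch is a drop to about $1
```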

Comment by Mike_Blume on Three Worlds Decide (5/8) · 2009-02-03T23:19:00.000Z · LW · GW

Peter - I am, sadly, not an astrophysicist, but it seems reasonable that such an act would substantially decrease the negentropy available from that matter, which is important if you're a species of immortals thinking of the long haul.

Comment by Mike_Blume on Three Worlds Decide (5/8) · 2009-02-03T10:31:10.000Z · LW · GW

Svein: No, you've got to suggest someone else to stun, I'm pretty sure.

I doubt Eliezer's grand challenge to us would be to contribute less than four bits to his story.

Comment by Mike_Blume on Three Worlds Decide (5/8) · 2009-02-03T10:18:41.000Z · LW · GW

Carl - I'm pretty sure either way we get three more chapters.

Comment by Mike_Blume on The Baby-Eating Aliens (1/8) · 2009-02-02T00:36:00.000Z · LW · GW

Eliezer, if I understand you correctly, you would prefer a universe tiled with paperclips to one containing both a human civilization and a babyeating one. Let us say the babyeating captain shares your preference, and you and he have common knowledge of both these preferences.

Would you now press a button exterminating humanity?

Comment by Mike_Blume on The Super Happy People (3/8) · 2009-02-01T09:27:29.000Z · LW · GW

Apparently the Super Happy race has adopted Knuth arrow notation more broadly than we have.

Comment by Mike_Blume on OB Status Update · 2009-01-27T21:50:54.000Z · LW · GW

Ian: A public-password anonymous account is good, but that account must not be able to delete or edit its own posts, or you can have chaos. On Reddit once, a guy started a thread for interesting confessions and created an account for the occasion, whose password he made public. Some good stories went up, and then were deleted by a random vandal a few hours later.

Comment by Mike_Blume on Interpersonal Entanglement · 2009-01-21T06:14:54.000Z · LW · GW

Extending Aurini's point, I think it is worth asking to what extent we have already integrated catpeople into our culture today. I think many of us would agree that the women featured in pornographic films are catwomen of a kind. What about pop stars, boy bands, etc.? What about mainstream fiction? On Firefly, Kaylee is beautiful, has an above-female-average sex drive, and falls in love with the introverted, socially awkward intellectual character - isn't she exactly the sort of catgirl most male sci-fi fans would want?

It seems like the problems you've identified here don't suddenly begin at the moment you switch on a fully convincing interactive simulation of a human being - there is a continuum, and as our technology progresses, we will naturally tend to move down it. Where shall those of us who look ahead and wish for a eudaemonic future dig our trenches and hold our ground?

(posting from a different homepage today - it seemed appropriate, given the topic)

Comment by Mike_Blume on Sympathetic Minds · 2009-01-19T16:32:47.000Z · LW · GW

"To a paperclip maximizer, the humans are just machines with pressable buttons. No need to feel what the other feels - if that were even possible across such a tremendous gap of internal architecture. How could an expected paperclip maximizer "feel happy" when it saw a human smile? "Happiness" is an idiom of policy reinforcement learning, not expected utility maximization. A paperclip maximizer doesn't feel happy when it makes paperclips, it just chooses whichever action leads to the greatest number of expected paperclips. Though a paperclip maximizer might find it convenient to display a smile when it made paperclips - so as to help manipulate any humans that had designated it a friend."

Correct me if I'm wrong, but haven't you just pretty accurately described a human sociopath?

Comment by Mike_Blume on Building Weirdtopia · 2009-01-13T03:03:48.000Z · LW · GW

"I'm not moving. You move. Bastard."

Fine, we'll both move to different Everett branches.

Weirdtopia: A deeper understanding of anthropics leads us to consider quantum immortality valid, as long as the death is instantaneous. We prepare an electron in a spin-up state and measure its angular momentum along the x axis. Left, your faction terminates; right, mine.
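
A minimal sketch of why that measurement works as a fair coin, assuming the standard spin-1/2 formalism (the code and its variable names are illustrative, not from the comment):

```python
import numpy as np

# Spin prepared "up" along z, then measured along x: the Born rule gives each
# x outcome probability 1/2, which is what makes this usable as a coin flip.
spin_up_z = np.array([1.0, 0.0])
x_left = np.array([1.0, -1.0]) / np.sqrt(2)
x_right = np.array([1.0, 1.0]) / np.sqrt(2)

p_left = abs(np.dot(x_left, spin_up_z)) ** 2
p_right = abs(np.dot(x_right, spin_up_z)) ** 2
print(p_left, p_right)  # -> 0.5 0.5
```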

Comment by Mike_Blume on Eutopia is Scary · 2009-01-12T18:38:02.000Z · LW · GW

Zubon - Forgive my rudeness, there was totally supposed to be a "but thanks very much for the rec." in there somewhere.

Comment by Mike_Blume on Eutopia is Scary · 2009-01-12T18:19:07.000Z · LW · GW

Zubon - This may be drifting off topic, but I'm sure I don't have to tell you that some girls (and probably some guys too) use poly as nothing more than a cheap, easy escape route from a relationship with which they've grown displeased. I had this done to me last summer, and it was quite simply the most miserable experience of my life. The depression and feelings of worthlessness with which it left me have only just begun to abate, and it will probably be some time before I can deal with the level of trust necessary for a vanilla relationship, let alone seriously consider choosing poly.

Comment by Mike_Blume on Eutopia is Scary · 2009-01-12T11:45:34.000Z · LW · GW

I've been thinking about roughly this question a lot the past few weeks. My best guess is the end of sexual fidelity and/or self-modification to remove sexual jealousy. Were I to be frozen and then thawed, and find that poly was now the norm, I would honestly be disgusted and afraid. The kind of love that I had hoped and dreamed of would be effectively dead. Nonetheless, I know that even with our current mind design there are people today who are poly and seem very happy with it. It seems at least plausible that without the complication of jealousy, romantic love could be that much more Fun.

I don't want this to happen, not at all, but if forced to make a guess, that would be mine.

Comment by Mike_Blume on Harmful Options · 2008-12-26T00:51:49.000Z · LW · GW

Is it possible that Eliezer has indirectly answered Robin's question about gifting from a few days ago? That is, is it possible that I gain more benefit from a copy of Tropic Thunder given to me by my brothers than from one I purchase myself? By giving it to me as a gift, they have removed from me the necessity of comparing it to other films I could purchase, as well as the thought that I could have spent the money in a more "responsible" fashion.

Comment by Mike_Blume on You Only Live Twice · 2008-12-13T05:01:04.000Z · LW · GW

"Chances are, it would look like most of what they found good and righteous in the world is gone. Would you inflict that on someone?"

How about you let him quickly experience the last 200 years for himself? As quickly or as slowly as necessary, maybe even actually living through each subjective day, or maybe doing the whole thing in five years. Allow his mind to reconfigure itself to our newer (improved) understanding of morality by the same process by which ours did.

Comment by Mike_Blume on You Only Live Twice · 2008-12-13T01:27:32.000Z · LW · GW

I'd like to be a little more clear on this, I've heard a few different things.

Are there arrangements I can make which will ensure that a week after my death, my head will be full of cryopreserving fluid and my heart will be beating in someone else's chest?

Comment by Mike_Blume on Excluding the Supernatural · 2008-09-12T08:13:52.000Z · LW · GW

Somebody let me know if I'm pushing my allowed post rate?

Tim: I'm not sure about that definition. Are we saying unexplainable by natural law as understood by humans at the time - i.e., quantum tunneling was supernatural 100 years ago, but is no longer?

Or would that mean unexplainable by the natural laws that exist? I just don't like this one because then we've simply defined the supernatural out of existence. The set of supernatural things and the set of real things would be non-overlapping by definition.

Comment by Mike_Blume on Excluding the Supernatural · 2008-09-12T07:41:01.000Z · LW · GW

Z. M. Davis: But if you think about the things that the homunculus tends to do, I think you would find yourself needing to move to levels below the homunculus to predict them. To give it a coherent set of actions it is likely to take, and not to take, at any given time, you would have to populate it with wants, with likes, with beliefs, with structures for reasoning about beliefs.

I think eventually you would come to an algorithm of which the homunculus would have to be an instantiation, and you would have to assume that that algorithm was represented somewhere.

I just don't see how you can make sensible predictions about ontologically basic complicated things. And I know people will go on about how you can't make predictions about a person with free will, but that's a crock. You expect me to try to coherently answer your post. I expect a cop to arrest me if I drive too fast. More to the point, we don't expect neurologically intact humans to spend three years walking backwards, or talk to puddles, or remove their clothing and sing "I'm a little teapot" in Times Square.

And the same goes for gods, incidentally. Religious folk will say that their gods' ways are ineffable, that they can't be predicted. But they still expect their gods to answer prayers, and forgive sins, and torture people like me for millennia, and they don't expect them to transform Mount Everest into a roast beef sandwich, or thunder forth nursery rhymes from the heavens.

They have coherent expectations, and for those expectations to make sense you have to open the black box and put things in there. You have to postulate structure, and relationships between parts, and soon you haven't got something ontologically basic anymore.

Comment by Mike_Blume on The Truly Iterated Prisoner's Dilemma · 2008-09-06T08:19:00.000Z · LW · GW

Marshall, I think that's a bit of a cop-out. People's lives are at stake here and you have to do something. If nothing else, you can simply choose to play defect; worst case, the PM does the same, and you save a billion lives (in the first scenario). Are you going to phone up a billion mothers and tell them you let their children die so as not to deal with a character you found unsavory? The problem's phrased the way it is to take that option entirely off the table.

Yes, it will do evil things, if you want to put it that way. Your car will do evil things without a moment's hesitation. Put a brick on the accelerator and walk away, and it'll run over and kill a little girl. Your car is an evil potential murderer. You still voluntarily interact with it. (Unless you are car-free, in which case congrats, so am I, but that's entirely irrelevant to my metaphor.)

Besides that, what do you mean calling the PM chaotic? It's quite a simple agent that maximizes paperclips. You're the chaotic agent: you want to maximize happiness and fairness, love and lust, aesthetic beauty and intellectual challenge. Make up your mind already and decide what you want to maximize!

Comment by Mike_Blume on The Truly Iterated Prisoner's Dilemma · 2008-09-05T06:26:18.000Z · LW · GW

I'm almost seeing shades of Self-PA here, except it's Self-PA that co-operates.

If I assume that the other agent is perfectly rational, and if I further assume that whatever I ultimately choose to do will be perfectly rational (hence Self-PA), then I know that my choice will match that of the paperclip maximizer. Thus, I am now choosing between (D,D) and (C,C), and I of course choose to co-operate.
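
A toy sketch of that symmetry argument, with made-up payoffs (the comment gives none); only the ordering of the diagonal outcomes matters:

```python
# Hypothetical payoffs (my numbers, higher is better): (row player, column player).
payoffs = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

# If both agents are perfectly rational and face the same problem, their choices
# coincide, so only the diagonal outcomes (C,C) and (D,D) are reachable.
reachable = {move: payoffs[(move, move)][0] for move in ("C", "D")}
print(max(reachable, key=reachable.get))  # -> "C": cooperate
```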

Comment by Mike_Blume on Harder Choices Matter Less · 2008-08-31T21:45:35.000Z · LW · GW

Douglas Knight:

"It's more amusing if you get the outside input from other people. (but it's biased)"

Not at all - just internally number the choices, and ask a friend to choose 1, 2, or 3. Then, again, react to the result emotionally and act on your reaction. My girlfriend and I do this all the time.

Comment by Mike_Blume on Moral Error and Moral Disagreement · 2008-08-11T04:25:08.000Z · LW · GW

I do not eat steak, because I am uncertain of what my own morality outputs with respect to steak-eating. It seems reasonable to me to imagine that cows are capable of experiencing pain, of fearing death. Of being, and ceasing to be. If you are like the majority of human beings, you do eat steak. The propositions I have suggested do not seem reasonable to you.

Do you imagine that there are facts about the brains of cattle which we could both learn - facts drawn from fMRI scans, or from behavioral science experiments, perhaps - which would bring us into agreement on the issue?

Comment by Mike_Blume on Sorting Pebbles Into Correct Heaps · 2008-08-10T03:29:16.000Z · LW · GW

TGGP: Well, any idiot can see that the fish only don't disagree because they're not accomplishing anything to disagree about. They don't build any heaps at all, the stupid layabouts. Thus, theirs is a wholly trivial and worthless sort of agreement. The point of life is to have large, correct heaps. To say we should build no heaps is as good as suicide.

Comment by Mike_Blume on Morality as Fixed Computation · 2008-08-08T03:30:52.000Z · LW · GW

Hmmm... it seems to me that the actions we choose to take consist in derivatives of our utility function with respect to information about the world. So if we have utility(programmer desires X, quantity 20 of X exists) = 20, then isn't it just a question of ensuring that the derivative is taken only with respect to the latter variable, keeping the first fixed?
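
A minimal numerical sketch of that "hold the first argument fixed" idea, with a toy utility function standing in for the real thing:

```python
# Toy utility: how good the world looks given what the programmer wants and how
# much X actually exists. The point is which argument the derivative sees.
def utility(programmer_desires_x: bool, quantity_of_x: float) -> float:
    return quantity_of_x if programmer_desires_x else 0.0

def partial_wrt_quantity(desires: bool, quantity: float, eps: float = 1e-6) -> float:
    # Finite-difference derivative in the second argument only; the first
    # (the encoded goal) is treated as a frozen constant, never as a lever.
    return (utility(desires, quantity + eps) - utility(desires, quantity)) / eps

print(partial_wrt_quantity(True, 20.0))  # ~1.0: more X is better, given the fixed goal
```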

Comment by Mike_Blume on The Meaning of Right · 2008-07-29T15:49:51.000Z · LW · GW

I'm still wrestling with this here -

Do you claim that the CEV of a pygmy father would assert that his daughter's clitoris should not be sliced off? Or that the CEV of a petty thief would assert that he should not possess my iPod?

Comment by Mike_Blume on The Meaning of Right · 2008-07-29T03:27:07.000Z · LW · GW

"There needs to be a separate word for that subset of our values that is interpersonal, prosocial, to some extent expected to be agreed-upon, which subset does not always win out in the weighing; this subset is often also called "morality" but that would be confusing."

Are you maybe referring to manners/etiquette/propriety?

Comment by Mike_Blume on Leave a Line of Retreat · 2008-07-27T21:00:32.000Z · LW · GW

This reminds me of an item from a list of "horrible job interview questions" we once devised for SIAI:

Would you kill babies if it was intrinsically the right thing to do? Yes/No

If you circled "no", explain under what circumstances you would not do the right thing to do: I assume by "intrinsically right thing to do", you do not intend something straightforward like "here are five babies carrying a virus which, if left unchecked, will wipe out half the population of the planet. There is no means by which they can be quarantined; the virus can cross even the cold reaches of space. The only way to save us is to kill them". I assume, rather, that you, Eliezer Yudkowsky, hand me a booklet, possibly hundreds of pages long. On page 0 are listed my most cherished moral truths, and on page N is written: "thus, it is right and decent to kill as many babies as possible, whenever the opportunity arises. Any man who walks past a mother pushing a stroller, and does not immediately throttle the infant where it lies, is nothing more than a moral coward." For all n between 1 and N inclusive, the statements on page n seem to me to follow naturally and self-evidently from my acceptance of the statements on page n-1. As I look up, astonishment etched on my face, I see you standing before me, grinning broadly. You hand me a long, curved blade, and tell me the staff of the SIAI are taking the afternoon off to raid the local nursery, and would I like to join?

Under these circumstances I would assign high probability to the idea that you are morally ill and wish to murder infants for your own enjoyment, and that somewhere in the proof you have given me is a logical error - the moral equivalent of dividing by zero. I would imagine, not that morality led me astray, but that my incomplete knowledge of morality led me not to spot this error. I would show the proof to as many moral philosophers as I could, ones whose intelligence and expertise in the field I respected and held to be above my own, and who were initially as unenthusiastic as I am at the prospect of infanticide. I would ask them if they could point me to an error in the proof, and explain to me clearly and fully why this step, which had seemed so simple to me, is not a legal move in the dance at that point. If they could not explain this to me to my satisfaction, I would devote much of my time from then on to the study of morality so that I could better understand it, and until I could, would distrust any moral conclusions I came to on my own. If none of them could find an error, I would still assign high probability to the notion that somewhere in the proof is an error which we humans have not advanced sufficiently in the study of metamorality to discover. I would consider it one of the most important outstanding problems in the field, and would, again, distrust any major moral decisions which did not clearly add up to normality until it was solved.

Just as the mathematical "proof" that 2=1 would, if accepted, destroy the foundations of mathematics itself, and must therefore be doubted until we can discover its error, so your proof that killing babies is good would, if accepted, destroy the foundations of my morality, and so I must doubt it until I can find an error.

I am well aware that a fundamentalist could take my previous paragraph, replace "killing babies" with "oral sex" and thus make his prudery unassailable by argument. So much the worse for him, I say. If he considers the prohibition of a mutually beneficial and joyful act to be at the foundation of his morality, then he is a miserable creature and all my rationality will not save him from himself.

I have tried indirectly to answer your question. To answer it directly I will have to resort to what seems a paradox. I would not do "the right thing to do" if I knew, at bottom, that it simply was not the right thing to do.

If you circled "yes", how right would it have to be, for how many babies? N/A

So, would I get the job?

Comment by Mike_Blume on The Gift We Give To Tomorrow · 2008-07-17T08:27:01.000Z · LW · GW

I tried, Unknown, I really did. I wanted badly to be a theist for a long time, and I really tried to think along the path you're suggesting. But we've learned so much about the myriad ways that intelligence isn't fundamental - can't be fundamental. It's too complex, has too many degrees of freedom. You want to postulate a perfect essence of intelligence? Fine - whose? What will it want, and not want? What strategies of rationality will it execute? Intelligence is a product of structure, and structure comes from an ordering of lower levels. As fundamental as it seems from the inside, I don't think there's any way to put back the clock and view intelligence as an irreducible entity the way you seem to want to.

Comment by Mike_Blume on Lawrence Watt-Evans's Fiction · 2008-07-15T08:39:05.000Z · LW · GW

You mentioned rationalist fiction, and my mind immediately jumped to this - are you familiar with the graphic short story "Fleep"? The main character passes out and comes to in a phone booth encased in concrete, with a phone book full of gibberish, a letter in his pocket he can't read, a few coins, and various sundries. From inside the booth he experiments and calculates, and manages to work out where he is, who he is, what's happened, and what to do next.

Comment by Mike_Blume on Fundamental Doubts · 2008-07-12T23:43:31.000Z · LW · GW

I've always taken cogito as "Here are some thoughts bouncing about. They must be causally related to some set of existent phenomena," which I think is pretty safe.

Comment by Mike_Blume on Living in Many Worlds · 2008-06-05T04:57:54.000Z · LW · GW

The only place where I see it not summing to normality is quantum immortality - any thoughts?

Comment by Mike_Blume on A Failed Just-So Story · 2008-06-04T05:27:16.000Z · LW · GW

Perhaps we have not evolved to be susceptible to religion as such, but modern religions function as superstimuli for some need which we had already evolved.

Comment by Mike_Blume on Where Experience Confuses Physicists · 2008-04-26T08:53:49.000Z · LW · GW

Rot13:

Anzrf:
Cb'zv: Zvgpuryy Cbegre
Aunetynar: Qba'g xabj lrg - vf vg nalbar?
Qr'qn: Qnavry Qraarg
Lh'ry: Gung'f lbh - Ryvrmre Lhqxbjfxl
Un'eb: Ebova Unafba
Ob'zn: Znk Obea
Ri'uh: Uhtu Rirergg

Unir V tbg vg?