Comment by Z._M._Davis on Markets are Anti-Inductive · 2009-02-26T04:50:36.000Z · LW · GW

Doug, I meant ceteris paribus.

Comment by Z._M._Davis on Markets are Anti-Inductive · 2009-02-26T02:32:57.000Z · LW · GW

Psy and John, I think the idea is this: if you want to buy a hundred shares of OB at ten dollars each, because you think it's going to go way up, you have to buy them from someone else who's willing to sell at that price. But clearly that person does not likewise think that the price of OB is going to go way up, because if she did, why would she sell it to you now, at the current price? So in an efficient market, situations where everyone agrees on the future movement of prices simply don't occur. If everyone thought the price of OB was going to go to thirty dollars a share, then said shares would already be trading at thirty (modulo expectations about interest rates, inflation, &c.).
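The no-trade logic can be sketched in a few lines of Python (a toy model: `trade_possible` and the $30/$10 expectations are purely my own illustration, not anything from the post):

```python
def trade_possible(buyer_expectations, seller_expectations):
    """A voluntary trade needs a buyer who values the share above the
    lowest price any current holder is willing to accept."""
    best_bid = max(buyer_expectations)   # the most any buyer will pay
    best_ask = min(seller_expectations)  # the least any holder will accept
    return best_bid > best_ask

# If everyone -- buyers and holders alike -- expects $30, no holder sells
# below 30 and no buyer bids above 30: the "sure thing" trade never happens.
assert not trade_possible([30.0] * 50, [30.0] * 50)

# Trade requires disagreement: a holder who expects only $10 will happily
# sell to a buyer who expects $30.
assert trade_possible([30.0] * 50, [10.0] + [30.0] * 49)
```

In other words, unanimous expectations are already baked into the price, so there's no one left to take the other side of the bet.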

Comment by Z._M._Davis on On Not Having an Advance Abyssal Plan · 2009-02-24T05:33:02.000Z · LW · GW

Kellen: "I am looking for some introductory books on rationality. [...] If any of you [...] have any suggestions [...]"

Cf. "Recommended Rationalist Reading."

Comment by Z._M._Davis on Wise Pretensions v.0 · 2009-02-21T02:32:49.000Z · LW · GW

"To care about the public image of any virtue [...] is to limit your performance of that virtue to what all others already understand of it. Therefore those who would walk the Way of any virtue must first relinquish all desire to appear virtuous before others, for this will limit and constrain their understanding of the virtue. "

Is it possible to quote this without being guilty of trying to foster the public image of not caring about public image? That's a serious question; I had briefly updated the "Favorite Quotes" section of my Facebook before deciding the potential irony was too great. And does my feeling compelled to ask this have anything to do with the fact that I still don't understand Löb's theorem?

Comment by Z._M._Davis on Cynicism in Ev-Psych (and Econ?) · 2009-02-15T04:50:11.000Z · LW · GW

"I assume that underlying this is that you love your own minds and despise your own bodies, or are at best indifferent to them."

Well, duh.

Comment by Z._M._Davis on Beware of Stephen J. Gould · 2009-02-13T04:59:00.000Z · LW · GW

Isn't the byline usually given as "Stephen Jay Gould"?

Comment by Z._M._Davis on (Moral) Truth in Fiction? · 2009-02-09T20:42:22.000Z · LW · GW

Tom: "Hmmm.. Maybe we should put together a play version of 3WC [...]"

That reminds me! Did anyone ever get a copy of the script to Yudkowski Returns? We could put on a benefit performance for SIAI!

Comment by Z._M._Davis on The Thing That I Protect · 2009-02-08T05:49:03.000Z · LW · GW

Nick: "Where was it suggested otherwise?"

Oh, no one's explicitly proposed a "wipe culturally-defined values" step; I'm just saying that we shouldn't assume that extrapolated human values converge. Cf. the thread following "Moral Error and Moral Disagreement."

Comment by Z._M._Davis on The Thing That I Protect · 2009-02-08T04:30:30.000Z · LW · GW

Nick Hay: "[N]either group is changing human values as it is referred to here: everyone is still human, no one is suggesting neurosurgery to change how brains compute value."

Once again I fail to see how culturally-derived values can be brushed away as irrelevant under CEV. When you convince someone with a political argument, you are changing how their brain computes value. Just because the effect is many orders of magnitude subtler than major neurosurgery doesn't mean it's trivial.

Comment by Z._M._Davis on Epilogue: Atonement (8/8) · 2009-02-08T02:06:00.000Z · LW · GW

I don't think I see how moral-philosophy fiction is problematic at all. When you have a beautiful moral sentiment that you need to offer to the world, of course you bind it up in a glorious work of high art, and let the work stand as your offering. That makes sense. When you have some info you want to share with the world about some dull ordinary thing that actually exists, that's when you write a journal article. When you've got something to protect, something you need to say, some set of notions that you really are entitled to, then you write a novel.

Just as it is dishonest to fail to be objective in matters of fact, so it is dishonest to feign objectivity where there simply is no fact. Why pretend to make arguments when what you really want to write is a hymn?

Comment by Z._M._Davis on The Thing That I Protect · 2009-02-07T23:46:44.000Z · LW · GW

"I'm curious if anyone knows of any of EY's other writings that address the phenomenon of rationality as not requiring consciousness."

Cf. Eliezer-sub-2002 on evolution and rationality.

Comment by Z._M._Davis on Three Worlds Decide (5/8) · 2009-02-03T21:55:00.000Z · LW · GW

Geoff: "They also don't withhold information from each other. This could allow a specially-crafted memory to disrupt or destroy the entire race."

This is not Star Trek, my Lord.

Comment by Z._M._Davis on The Super Happy People (3/8) · 2009-02-02T01:46:18.000Z · LW · GW

"All right. Open a channel, transmitting my voice only." [...] Out of sight of the visual frame, Akon gestured [...] [emphasis added]

Comment by Z._M._Davis on Value is Fragile · 2009-01-29T18:15:30.000Z · LW · GW

I suspect it gets worse. Eliezer seems to lean heavily on the psychological unity of humankind, but there's a lot of room for variance within that human dot. My morality is a human morality, but that doesn't mean I'd agree with a weighted sum across all possible extrapolated human moralities. So even if you preserve human morals and metamorals, you could still end up with a future we'd find horrifying (albeit better than a paperclip galaxy). It might be said that that's only a Weirdtopia, that you're horrified at first, but then you see that it's actually for the best after all. But if "the utility function [really] isn't up for grabs," then I'll be horrified for as long as I damn well please.

Comment by Z._M._Davis on OB Status Update · 2009-01-27T23:26:30.000Z · LW · GW

Eliezer: " But if we don't get good posts from the readership, we (Robin/Eliezer/Nick) may split off OB again."

I'm worried that this will happen. If we're not getting main post submissions from non-Robin-and-Eliezer people now, how will the community format really change things? For myself, I like to comment on other people's posts, but the community format doesn't appeal to me: to regularly write good main posts, I'd have to commit the time to become a Serious Blogger, and if I wanted to do that, I'd start my own venue, rather than posting to a community site.

Comment by Z._M._Davis on Higher Purpose · 2009-01-24T03:45:27.000Z · LW · GW

"There would never be another Gandhi, another Mandela, another Aung San Suu Kyi—and yes, that was a kind of loss, but would any great leader have sentenced humanity to eternal misery, for the sake of providing a suitable backdrop for eternal heroism? Well, some of them would have. But the down-trodden themselves had better things to do." —from "Border Guards"

Comment by Z._M._Davis on Failed Utopia #4-2 · 2009-01-21T17:08:23.000Z · LW · GW

I take it the name is a coincidence.

nazgulnarsil: "What is bad about this scenario? the genie himself [sic] said it will only be a few decades before women and men can be reunited if they choose. what's a few decades?"

That's the most horrifying part of all, though--they won't so choose! By the time the women and men reïnvent enough technology to build interplanetary spacecraft, they'll be so happy that they won't want to get back together again. It's tempting to think that the humans can just choose to be unhappy until they build the requisite technology for reünification--but you probably can't sulk for twenty years straight, even if you want to, even if everything you currently care about depends on it. We might wish that some of our values are so deeply held that no circumstances could possibly make us change them, but in the face of an environment superintelligently optimized to change our values, it probably just isn't so. The space of possible environments is so large compared to the narrow set of outcomes that we would genuinely call a win that even the people on the freak planets (see de Blanc's comment above) will probably be made happy in some way that their pre-Singularity selves would find horrifying. Scary, scary, scary. I'm donating twenty dollars to SIAI right now.

Comment by Z._M._Davis on Eutopia is Scary · 2009-01-14T16:42:47.000Z · LW · GW

On second thought, correction: relativity restoring far away lands, yes, preserving intuitions, no.

Comment by Z._M._Davis on Eutopia is Scary · 2009-01-14T16:39:38.000Z · LW · GW

"preserve/restore human intuitions and emotions relating to distance (far away lands and so on)"

Arguably Special Relativity already does this for us. Although I freely admit that a space opera is kind of the antithesis of a Weirdtopia.

Comment by Z._M._Davis on Continuous Improvement · 2009-01-11T02:48:42.000Z · LW · GW

"[...] which begs the question [sic] of how we can experience these invisible hedons [...]"

Wh--wh--you said you were sympathetic!

Comment by Z._M._Davis on Changing Emotions · 2009-01-06T17:10:36.000Z · LW · GW

Abigail, I don't think we actually disagree. I certainly wouldn't defend the strong Bailey/Blanchard thesis that transwomen can be neatly sorted into autogynephiles and gay men. However, I am confident that autogynephilia is a real phenomenon in at least some people, and that's all I was trying to refer to in my earlier comment--sorry I wasn't clearer.

Comment by Z._M._Davis on Changing Emotions · 2009-01-05T05:45:04.000Z · LW · GW

Eliezer: "[E]very time I can recall hearing someone say 'I want to know what it's like to be the opposite sex', the speaker has been male. I don't know if that's a genuine gender difference in wishes [...]"

sighs There's a name for it.

Eliezer: "Strong enough to disrupt personal identity, if taken in one shot?"

Is it cheating if you deliberately define your personal identity such that the answer is No?

Frelkins: "I mean, if anyone wants to check it out, just try Second Life."

Not exactly what we're looking for, unfortunately ...

Frelkins: "[T]hey flunk the shoe chatter and reveal themselves quickly."

Surely you're not literally claiming that there are no women who aren't good at shoe chatter. Maybe in Second Life there are enough men using female avatars such that P(male-in-RL | female-avatar-bad-at-shoe-chatter) really is greater than P(female-in-RL | female-avatar-bad-at-shoe-chatter). But I should hope that being a woman or man is not conflated with behaving in gender-typical ways, for to do so is to deliberately ignore the nontrivial amount of variation in actually existing women and men.
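To make that probability comparison concrete, here is a sketch with entirely made-up numbers (the 50%/90%/30% figures and the `posterior_male` helper are hypothetical illustrations, not data about Second Life):

```python
def posterior_male(p_male, p_bad_given_male, p_bad_given_female):
    """P(male-in-RL | female-avatar-bad-at-shoe-chatter), via Bayes' theorem."""
    # Total probability of flunking the shoe chatter, over both groups:
    p_bad = p_male * p_bad_given_male + (1.0 - p_male) * p_bad_given_female
    return p_male * p_bad_given_male / p_bad

# Hypothetical: half of female avatars are run by men; men flunk the chatter
# 90% of the time -- but so do 30% of actual women.
p = posterior_male(0.5, 0.9, 0.3)
assert abs(p - 0.75) < 1e-9  # evidence shifts the odds, yet 1 in 4 is still a woman
```

Even with evidence this lopsided, the inference "bad at shoe chatter, therefore male" misclassifies a quarter of the flunkers.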

Frelkins, in the other thread, you said you were saddened by Tino Sehgal's Edge answer about the end of masculinity as we know it, and you asked, "Why do even men hate men nowadays?" Well, please take my word for it that Sehgal and friends don't literally hate men. Rather, we just find it kind of obnoxious that far too often, being male is systematically conflated with talking about porn or football or whatever it is that "guys' guys" talk about (I wouldn't know--or I wish that I didn't). I hope I am not misunderstood--of course there is nothing wrong with being typically feminine or masculine. It's just that there should be other options.

adept42: "Therefore, since we can only observe gendered behaviors through social interaction the presumption should be each behavior has a social origin; biology carries the burden of proof to prove otherwise on a case-by-case basis."

I really don't think that follows. These empirical questions aren't like a court trial, where "nature" is the prosecution and "nurture" is innocent until proven guilty (cf. Eliezer's "The Scales of Justice, the Notebook of Rationality"). Rather, for each question, we must search for evidence and seek out the most accurate belief possible, being prepared to update as new evidence comes in. Sometimes this is very painful, when there's something you desperately want to be true, and you're afraid of the evidence. But we must be brave together, else we be utterly deceived. And what would we do then?

Comment by Z._M._Davis on Free to Optimize · 2009-01-04T20:27:17.000Z · LW · GW

Should "Fun" then be consistently capitalized as a term of art? Currently I think we have "Friendly AI theory" (captial-F, lowercase-t) and "Friendliness," but "Fun Theory" (capital-F capital-T) but "fun."

Comment by Z._M._Davis on Dunbar's Function · 2008-12-31T03:48:59.000Z · LW · GW

"[...] naturally specializing further as more knowledge is discovered and we become able to conceptualize more complex areas of study [...]"

So, how does this spiral of specialization square with living by one's own strength?

Could there be a niche for generalists?

Comment by Z._M._Davis on Devil's Offers · 2008-12-25T18:13:03.000Z · LW · GW

"A singleton might be justified in prohibiting standardized textbooks in certain fields, so that people have to do their own science [...]"

No textbooks?! CEV had better overrule you on this one, or my future selves across the many worlds are all going to scream bloody murder. It may be said that I'm missing the point: that ex hypothesi the Friendly AI knows better than me.

But I'm still going to cry.

Comment by Z._M._Davis on Living By Your Own Strength · 2008-12-22T05:13:25.000Z · LW · GW

"But if you deleted the Pythagorean Theorem from my mind entirely, would I have enough math skills left to grow it back the next time I needed it?"

It's easy if you're allowed to keep the law of cosines ...
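Spelled out, the recovery alluded to here: set the angle opposite c to a right angle, and the correction term of the law of cosines vanishes.

```latex
c^2 = a^2 + b^2 - 2ab\cos\gamma
\quad\xrightarrow{\ \gamma = 90^\circ,\ \cos\gamma = 0\ }\quad
c^2 = a^2 + b^2
```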

Comment by Z._M._Davis on High Challenge · 2008-12-21T16:56:07.000Z · LW · GW

"I sometimes think that futuristic ideals phrased in terms of 'getting rid of work' would be better reformulated as 'removing low-quality work to make way for high-quality work'."

Alternatively, you could taboo work and play entirely, speaking instead of various forms of activity, and their various costs and benefits.

Comment by Z._M._Davis on For The People Who Are Still Alive · 2008-12-15T17:41:32.000Z · LW · GW

I'm finding Eliezer's view attractive, but it does have a few counterintuitive consequences of its own. If we somehow encountered shocking new evidence that MWI, &c. is false and that we live in a small world, would weird people suddenly become much more important? Did Eliezer think (or should he have thought) that weird people are more important before coming to believe in a big world?

Comment by Z._M._Davis on The Mechanics of Disagreement · 2008-12-10T19:36:11.000Z · LW · GW

"How about if it were an issue that you were not too heavily invested in [...]"

Hal, the sort of thing you suggest has already been tried a few times over at Black Belt Bayesian; check it out.

Comment by Z._M._Davis on True Sources of Disagreement · 2008-12-09T08:02:06.000Z · LW · GW

Tiiba, you're really overstating Eliezer and SIAI's current abilities. CEV is a sketch, not a theory, and there's a big difference between "being concerned about Friendliness" and "actually knowing how to build a working superintelligence right now, but holding back due to Friendliness concerns."

Comment by Z._M._Davis on True Sources of Disagreement · 2008-12-09T06:53:49.000Z · LW · GW

"The stakes are very high for this 'guess'. The ethical implications of getting it wrong are huge."

True.

"The designers of the simulation or emulation fully intend to pass the Turing test; that is, it is the explicit purpose of the designers of the software to fool the interviewer."

To clarify, I'm talking about something like a Moravec transfer, not a chatbot. Maybe a really sophisticated chatbot could pass the Turing test, but if we know that a given program was designed simply to game the Turing test, then we won't be impressed by its passing the test. The designers aren't trying to fool the interviewer; they're trying to build a brain (or something that does the same kind of thing). We know that brains exist.

"I don't see why the burden of proof should be on me."

The reason is that the human brain is not magic. It's doing something, and whatever that something is, it would be incredibly surprising if it's the only structure in the vastness of the space of all possible things that could do it. Yes, consciousness is a mystery unto me, and I'm waving my hands. I don't know how to build a person. But the burden of proof is still on you.

Comment by Z._M._Davis on True Sources of Disagreement · 2008-12-09T05:22:33.000Z · LW · GW

Michael Tobis, suppose a whole brain emulation of someone is created. You have a long, involved, normal-seeming conversation with the upload, and she claims to have qualia. Even if it is conceded that there's no definitive objective test for consciousness, doesn't it still seem like a pretty good guess that the upload is conscious? Like, a really really good guess?

Comment by Z._M._Davis on Chaotic Inversion · 2008-12-01T08:06:40.000Z · LW · GW

Catherine: "If you think of yourself as a system whose operations you cannot OR can predict [...]"

But isn't this actually true? I mean, law of the excluded middle, right?

Or am I just trying too hard to be clever?

Comment by Z._M._Davis on Singletons Rule OK · 2008-12-01T08:03:02.000Z · LW · GW

Eliezer asks why one might be emotionally opposed to the idea of a singleton. One reason might be that Friendly AI is impossible. Life on the rapacious hardscrapple frontier may be bleak, but it sure beats being a paperclip.

Comment by Z._M._Davis on Chaotic Inversion · 2008-11-29T13:51:13.000Z · LW · GW

I'm sure you've already heard this, but have you tried reading relevant papers rather than random websites?

Personally, I'm kind of giving up on "discipline" as such, in favor of looking for things worth doing and then doing them because they are worth doing. Why torture myself trying to regulate and control every minute, when that doesn't even work? Of course every minute is precious, but just because I'm not following a schedule doesn't mean nothing valuable is getting done. Whatever happened to the power of play? The first virtue is curiosity, isn't it?

Results are mixed so far, but with a certain history, even "mixed" counts as a win.

Comment by Z._M._Davis on Thanksgiving Prayer · 2008-11-29T00:17:14.000Z · LW · GW

"Nobody actually feels gratitude towards things like 'economies of scale' and 'comparative advantage' [...]"

Maybe not, but they really ought to.

Comment by Z._M._Davis on Surprised by Brains · 2008-11-23T16:39:21.000Z · LW · GW

"Are you familiar with Ricardo's [...]"

It was cute in "The Simple Truth," but in the book, you might want to consider cutting down on the anachronisms. Intelligences who've never seen an intelligence fall under standard suspension-of-disbelief, but explicit mention of Ricardo or the Wright brothers is a bit grating.

Comment by Z._M._Davis on Whither OB? · 2008-11-17T22:54:47.000Z · LW · GW

Roland: "Yes, we need a community forum where everyone can post."

At the risk of preëmpting Nick Tarleton, we have one.

My two cents: I like the blog/comments-sections format, and I don't like the community format.

I had a couple of ideas for posts (which I never got around to writing up, natch); and one of the reasons I don't have my own blog is because being a Serious Blogger would be too much of a time commitment. But this idea of seven weekly bloggers intrigues me--do I have enough good OB-type ideas to be part of such an endeavor?--maybe? I'll have to give this further thought.

Comment by Z._M._Davis on Lawful Creativity · 2008-11-09T19:02:04.000Z · LW · GW

Eliezer: "To all defending Modern Art: Please point to at least one item available online which exemplifies that which you think I'm ignoring or missing."

This Al Held piece. Upon first glance, it's just a white canvas with a black triangle at the top and the bottom. This is not True Art, you say--but then you read the title, and it all makes sense! Clever! Shocking!

Art! (Hat tip Scott McCloud.)

Comment by Z._M._Davis on Back Up and Ask Whether, Not Why · 2008-11-09T02:19:56.000Z · LW · GW

"I have written a blog post on the issue"

I'd love to read it, but the link here is broken, and I don't see any new posts on the Transhuman Goodness homepage. I hope you saved a local copy!

Comment by Z._M._Davis on Back Up and Ask Whether, Not Why · 2008-11-07T00:19:55.000Z · LW · GW

Julian: "You have to un-assume the decision before you stand any chance of clear thought."

Of course the decision-theoretic logic of this is unassailable, but I continue to worry that the real-world application to humans is nontrivial.

Here, I have a parable. Suppose Jones is a member of a cult, and holds it as a moral principle that it is good and right and virtuous to obey the Great Leader. So she tries to obey, but feels terrible about failing to obey perfectly, and she ends up having a nervous breakdown and removing herself from the cult in shame. Then, afterwards, as a defensive emotional reaction, she adopts an individualist philosophy and comes up with all sorts of clever arguments to the effect that the Great Leader isn't special at all and really no one has any duty to obey or even listen to her!

So then Jones reads "The Bottom Line," and realizes her adoption of individualism wasn't rational, and was simply a reaction to having been hurt so badly. If she had only been better at obeying the Great Leader, then, as a matter of (subjunctive) fact, she never would have come up with all those clever arguments, and wouldn't have found them convincing if someone else had told them to her. At this point, as rationalists, we advise Jones to clear her mind, unassume her decision to leave the cult, and reëvaluate the matter cleanly. But this advice might be underspecified: if she is supposed to reëvaluate her choice using her current morality, the decision is obvious: stay free. If she is supposed to reëvaluate using her original cult-morality, the decision is obvious: crawl back to the Great Leader, begging for forgiveness.

You can't reset yourself to a state of perfect emptiness; you have to reset yourself to something. Nor can you reset yourself to a state of perfect emptiness in order to decide what to reset yourself to; that's just an infinite regress.

I guess the best answer I can give to someone in such a dilemma (the answer I give myself) is to say, "Rational agents act to preserve their current goal system; my past self was confused, I cannot be bound by the terms of her confusion; I can only act from who I am, now, what I want, now."

But then that sounds like saying that the principle of the bottom line doesn't apply across morality changes. Which is a little suspicious.

Comment by Z._M._Davis on Back Up and Ask Whether, Not Why · 2008-11-06T20:58:28.000Z · LW · GW

I worry that blanking your mind is a tad underspecified. If you delete all your current justifications and metajustifications, you become a rock, not a perfect ghost of decision-theoretic emptiness. What we want is to delete just our current notion of a good means, while preserving our ultimate ends, but the human brain just might not be typed that strongly. When asking what is truly valuable, it could sometimes seem that the answer really is "whatever you want it to be"--except that we don't want it to be whatever we want it to be; we want an answer (even knowing that that answer has to be a fact about ourselves, rather than about some universal essence of goodness). Ack!

Comment by Z._M._Davis on Hanging Out My Speaker's Shingle · 2008-11-06T05:40:47.000Z · LW · GW

Alex: "Most of the time this blog seems like it could've been written on some distant planet in the year 5050, totally sealed off from the rest of today's humanity."

Don't you prefer it that way?

Comment by Z._M._Davis on Complexity and Intelligence · 2008-11-04T22:54:48.000Z · LW · GW

POSTSCRIPT-- Strike that stray does in the penultimate sentence. And re compatibilism, the short version is that no, you don't have free will, but you shouldn't feel bad about it.

Comment by Z._M._Davis on Complexity and Intelligence · 2008-11-04T22:48:02.000Z · LW · GW

Henry: "Both of those concepts seem completely apt for describing perfectly deterministic systems. But, in describing the "complexity" of the universe even in something as simple as the 'pattern of stars that exists' one would still have to take into account potential non-deterministic factors such as human behavior. [...] [A]re you saying that you are a strict determinist?"

I'll take this one. Yes, we're presuming determinism here, although the determinism of the Many Worlds Interpretation is a little different from the single-world determinism we're used to thinking about. Also, I notice that in an earlier comment, you spoke of free will and determinism as if the concepts were opposed, but depending on exactly what you mean by free will, this is not necessarily the case. For Eliezer's take, see, e.g., "Timeless Control," or for the philosophical mainstream, google compatibilism.

Comment by Z._M._Davis on Efficient Cross-Domain Optimization · 2008-10-28T21:31:18.000Z · LW · GW

Billswift, re rereading the series, check out Andrew Hay's list and associated graphs.

Comment by Z._M._Davis on Which Parts Are "Me"? · 2008-10-22T23:30:14.000Z · LW · GW

"Davis, what you were saying made sense to me, so I'm confused as to what you could be confused about."

I came up with a nice story (successful reflection decreases passion; failed reflection increases it) that seems to fit the data (Eliezer says reflection begets emotional detachment, whereas I try to reflect and drive myself mad), but my thought process just felt really (I can think of no better word:) muddled, so I'm wondering whether I should wonder whether I made a fake explanation.

Comment by Z._M._Davis on Which Parts Are "Me"? · 2008-10-22T23:00:27.000Z · LW · GW

"ZM, the question is whether being more reflective makes you less passionate, not so much the absolute level [...]"

But if that were the only issue at hand, then that would generate the prediction that I would be even more unstable (!) if I were less analytical, which is the opposite of what I would have predicted.

Yes, it could possibly be that it is this introspection/reflectivity dichotomy that's tripping me up. A deep conceptual understanding that one's self can be distinct from what-this-brain-is-thinking-and-feeling-right-now does not necessarily imply the ability to draw this distinction consistently and in real time. Maybe successful reflectivity decreases passion, but an awareness of, combined with an inability to reconcile, the morass of conflicting thoughts and desires, only inflames the passions?

Okay, now I'm really confused. I think.

Comment by Z._M._Davis on Which Parts Are "Me"? · 2008-10-22T21:56:53.000Z · LW · GW

I'm confused. Eliezer, you seem to be saying that reflectivity leads to distance from one's emotions, but this completely contradicts my experience: I'm constantly introspecting and analyzing myself, and yet I am also extremely emotional, not infrequently to the point of hysterical crying fits. Maybe I'm introspective but not reflective in the sense meant here? I will have to think about this for a while.

Comment by Z._M._Davis on Bay Area Meetup for Singularity Summit · 2008-10-21T01:58:56.000Z · LW · GW

RSVPing in the affirmative; thank you for getting this together.