Posts

Comments

Comment by Patrick_(orthonormal) on Formative Youth · 2009-02-25T04:56:23.000Z · LW · GW

Interesting. Since people are commenting on fiction vs. non-fiction, I'll note that my formative books were all non-fiction (paleontology, physics, mathematics, philosophy), and that I now find myself much more easily motivated to try understanding the problems of the world than to try fixing them.

Plural of anecdote, etc, etc.

Comment by Patrick_(orthonormal) on True Ending: Sacrificial Fire (7/8) · 2009-02-05T19:21:14.000Z · LW · GW

I'm not sure, but was this line:

But, from the first species, we learned a fact which this ship can use to shut down the Earth starline

supposed to read "the Huygens starline"?

Comment by Patrick_(orthonormal) on The Baby-Eating Aliens (1/8) · 2009-01-30T17:15:27.000Z · LW · GW

I was going to say that this (although very good) wasn't quite Weird enough for your purposes; the principal value of the Baby-Eaters seems to be "individual sacrifice on behalf of the group", which we're all too familiar with. I can grok their situation well enough to empathize quickly with the Baby-Eaters. I'd have hoped for something even more foreign at first sight.

Then I checked out the story title again.

Eagerly awaiting the next installments!

Comment by Patrick_(orthonormal) on Value is Fragile · 2009-01-30T01:33:40.000Z · LW · GW

(E.g. repeating the mantra "Politics is the Mind-Killer" when tempted to characterize the other side as evil)

Uh, I don't mean that literally, though doing up a whole Litany of Politics might be fun.

Comment by Patrick_(orthonormal) on Value is Fragile · 2009-01-30T01:30:51.000Z · LW · GW

Carl:

Those are instrumental reasons, and could be addressed in other ways.

I wouldn't want to modify/delete hatred for instrumental reasons, but on behalf of the values that seem to clash almost constantly with hatred. Among those are the values I meta-value, including rationality and some wider level of altruism.

I was trying to point out that giving up big chunks of our personality for instrumental benefits can be a real trade-off.

I agree with that heuristic in general. I would be very cautious regarding the means of ending hatred-as-we-know-it in human nature, and I'm open to the possibility that hatred might be integral (in a way I cannot now see) to the rest of what I value. However, given my understanding of human psychology, I find that claim improbable right now.

My first point was that our values are often the victors of cultural/intellectual/moral combat between the drives given us by the blind idiot god; most of human civilization can be described as the attempt to make humans self-modify away from the drives that lost in the cultural clash. Right now, much of this community values (for example) altruism and rationality over hatred where they conflict, and exerts a certain willpower to keep the other drive vanquished at times. (E.g. repeating the mantra "Politics is the Mind-Killer" when tempted to characterize the other side as evil).

So far, we haven't seen disaster from this weak self-modification against hatred, and we've seen a lot of good (from the perspective of the values we privilege). I take this as some evidence that we can hope to push it farther without losing what we care about (or what we want to care about).

Comment by Patrick_(orthonormal) on Value is Fragile · 2009-01-29T18:50:25.000Z · LW · GW

Carl:

I don't think that automatic fear, suspicion and hatred of outsiders is a necessary prerequisite to a special consideration for close friends, family, etc. Also, yes, outgroup hatred makes cooperation on large-scale Prisoner's Dilemmas even harder than it generally is for humans.

But finally, I want to point out that we are currently wired so that we can't get as motivated to face a huge problem if there's no villain to focus fear and hatred on. The "fighting" circuitry can spur us to superhuman efforts and successes, but it doesn't seem to trigger without an enemy we can characterize as morally evil.

If a disease of some sort threatened the survival of humanity, governments might put up a fight, but they'd never ask for (and wouldn't receive) the level of mobilization and personal sacrifice that they got during World War II— although if they were crafty enough to say that terrorists caused it, they just might. Concern for loved ones isn't powerful enough without an idea that an evil enemy threatens them.

Wouldn't you prefer to have that concern for loved ones be a sufficient motivating force?

Comment by Patrick_(orthonormal) on Value is Fragile · 2009-01-29T16:48:38.000Z · LW · GW

Roko:

Not so fast. We like some of our evolved values at the expense of others. Ingroup-outgroup dynamics, the way we're most motivated only when we have someone to fear and hate: this too is an evolved value, and most of the people here would prefer to do away with it if we can.

The interesting part of moral progress is that the values etched into us by evolution don't really need to be consistent with each other, so as we become more reflective and our environment changes to force new situations upon us, we realize that they conflict with one another. The analysis of which values have been winning and which have been losing (in different times and places) is another fascinating one...

Comment by Patrick_(orthonormal) on Investing for the Long Slump · 2009-01-22T18:43:54.000Z · LW · GW

Doug S:

If the broker believes some investment has a positive expectation this year (and is not very likely to crash terribly), he could advise John Smith to invest in it for a year minus a day, take the proceeds and go to Vegas. If he arrives with $550,000 instead of $500,000, there's a betting strategy more likely to wind up with $1,000,000 than the original plan.

The balance of risk and reward between the investment part and the Vegas part should have an optimal solution; but since anything over $1,000,000 doesn't factor nearly as much in John's utility function, I'd expect he's not going to bother with investment schemes that have small chances of paying off much more than $1,000,000, and he'd rather look for ones that have significant chances of paying off something in between.
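(A quick way to check that claim: the sketch below compares "bold play" toward the $1,000,000 target starting from $500,000 versus $550,000. The even-money double-zero-roulette odds, the bold-play rule, and the $500,000-in-one-bet baseline are my illustrative assumptions, not details from Doug S's scenario; it's a toy model, not a recommendation.)

```python
import random

P_WIN = 18 / 38          # even-money bet on double-zero roulette (an assumed, illustrative house edge)
TARGET = 1_000_000

def bold_play_success(bankroll, trials=200_000, max_bets=1_000):
    """Estimate P(reach TARGET) under bold play: always stake min(bankroll, TARGET - bankroll)."""
    wins = 0
    for _ in range(trials):
        b = bankroll
        for _ in range(max_bets):
            if b <= 0 or b >= TARGET:
                break
            stake = min(b, TARGET - b)
            b += stake if random.random() < P_WIN else -stake
        if b >= TARGET:
            wins += 1
    return wins / trials

print(bold_play_success(500_000))  # ~0.47: one all-or-nothing spin decides it
print(bold_play_success(550_000))  # ~0.52: a first-spin loss still leaves $100,000 to try again
```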

Comment by Patrick_(orthonormal) on Investing for the Long Slump · 2009-01-22T18:34:51.000Z · LW · GW

Given your actual reasons for wondering about the world economy in 2040 conditioned on there not having been an extinction/Singularity yet, the survivalist option is actually worth a small hedge bet. If you can go (or convince someone else to go) live in a very remote area, with sufficient skills and resources to continue working quietly on building an FAI if there's a non-existential global catastrophe, that looks like it has a strongly positive expectation (since in those circumstances there will probably be few if any competing AI attempts).

Now considering the Slump scenarios in which civilization stagnates but survives, it looks like there's not much prospect of winding up with extra capital in that situation, relative to others; but the capital you acquire might go relatively farther.

I have to say that the fact you're strongly considering these matters is a bit chilling. I'd be relieved if the reason were that you ascribed probability significantly greater than 1% to a Long Slump, but I suspect it's because you worry humanity will run out of time in many of the other scenarios before FAI work is finished, reducing you to looking at the Black Swan possibilities within which the world might just be saved.

Comment by Patrick_(orthonormal) on Building Weirdtopia · 2009-01-15T21:05:38.000Z · LW · GW

Sexual Weirdtopia: What goes on consensually behind closed doors doesn't (usually) affect the general welfare negatively, so it's not a matter of social concern. However, that particular bundle of biases known as "romantic love" has led to so much chaos in the past that it's become heavily regulated.

People start out life with the love-module suppressed; but many erstwhile romantics feel that in the right circumstances, this particular self-deception can actually better their lives. If a relationship is going well, the couple (or group, perhaps) can propose to fall in love, and ask the higher authorities for a particular love-mod for their minds.

Every so often, each loving relationship must undergo an "audit" in which the partners have the love-mods removed and decide whether to put them back in. No unrequited love is allowed; if one party ends it, the other must as well...

Comment by Patrick_(orthonormal) on Emotional Involvement · 2009-01-07T20:02:48.000Z · LW · GW

A rogue paperclipper in a mostly Friendly world can probably only be stopped by racial prejudice--to a rational creature, it's always easier to feed him your neighbor than it is to fight him.

A couple of problems with this statement, as I see it:

  1. The word "only". Forget five minutes— think for five seconds about Third Alternatives. At the very least, wouldn't an emotion for human-favoritism serve the goal better than an emotion for race-favoritism? Then everyone could cooperate more fully, not just each race by itself.

You could be using "racial prejudice" to mean "species prejudice" or something even wider, but that's not what the question's about. Your argument gives no reason for maintaining the current brain architecture, which creates these divisions of allegiance within the normal human race.

  2. Rational agents are doomed to fail because they won't cooperate enough? I stand with Eliezer: rational agents should WIN. If the inevitable result of noncooperation is eventual destruction, genuinely rational agents WILL find ways to cooperate; the Prisoner's Dilemma doesn't operate within every conceivable cooperative enterprise.

Comment by Patrick_(orthonormal) on Worse Than Random · 2008-11-11T19:36:56.000Z · LW · GW

You might want to footnote, before anyone starts making noise about the ant example, that colony selection is not a case of group selection but a case of individual selection on the queen (and drones), since the rest of the ants don't reproduce.

Comment by Patrick_(orthonormal) on Belief in Intelligence · 2008-10-27T01:06:33.000Z · LW · GW

Richard,

You're making the exact point Eliezer just did, about how modeling the effects of intelligence doesn't generally proceed by running a simulation forward. The "ordinarily" he speaks of, I assume, refers to the vast majority of physical systems in the Universe, in which there are no complicated optimization processes (especially intelligences) affecting outcomes on the relevant scales.

Comment by Patrick_(orthonormal) on You Provably Can't Trust Yourself · 2008-08-20T01:49:40.000Z · LW · GW

Vladimir,

Just to clarify (perhaps unnecessarily): by an attractor I mean a moral framework from which you wouldn't want to self-modify radically in any direction. There do exist many distinct attractors in the space of 'abstracted idealized dynamics', as Eliezer notes for the unfortunate Pebblesorters: they might modify their subgoals, but never approach a morality indifferent to the cardinality of pebble heaps.

Eliezer's claim of moral convergence and the CEV, as I understand it, is that most humans are psychologically constituted so that our moral frameworks lie in the 'basin' of a single attractor; thus the incremental self-modifications of cultural history have an ultimate destination which a powerful AI could deduce.

I suspect, however, that the position is more chaotic than this; that there are distinct avenues of moral progress which will lead us to different attractors. In your terms, since our current right is after all not entirely comprehensive and consistent, we could find that right1 and right2 are both right extrapolations from right, and that right can't judge unequivocally which one is better.

Comment by Patrick_(orthonormal) on You Provably Can't Trust Yourself · 2008-08-19T23:54:11.000Z · LW · GW

I agree— and I balk at the concept of "the" Coherent Extrapolated Volition precisely because I suspect there are many distinct attractors for a moral framework like ours. Since our most basic moral impulses come from the blind idiot god, there's no reason for them to converge under extrapolation; we have areas of agreement today on certain extrapolations, but the convergence seems to be more a matter of cultural communication. It's not at all inconceivable that other Everett branches of Earth have made very different forms of moral progress from us, no less consistent with reason or consequences or our moral intuitions.

I'd be very interested, of course, to hear Eliezer's reasons for believing the contrary.

Comment by Patrick_(orthonormal) on Setting Up Metaethics · 2008-07-28T18:10:37.000Z · LW · GW

Well, I find that my metamorality meets those criteria, with one exception.

To reiterate once, I think that the foundations of morality as we understand it are certain evolved impulses like the ones we can find in other primates (maternal love, desire to punish a cheater, etc); these are like other emotions, with one key difference, a social component: we expect and rely on others having the same reaction, and accordingly we experience other emotions as more subjective and our moral impulses as more objective.

Note that when I'm afraid of something, and you're not, this may surprise me but doesn't anger me; but if I feel moral outrage at something, and you don't, then I'm liable to get angry with you.

But of course our moralities aren't just these few basic impulses. Given our capacity for complex thought and for passing down complex cultures, we've built up many systems of morality that try to integrate all these impulses. It's a testament to the power of conscious thought to reshape our very perceptions of the world that we can get away with this— we foment one moral impulse to restrain another when our system tells us so, and we can work up a moral sentiment in extended contexts when our system tells us to do so. (When we fail to correctly extrapolate and apply our moral system, we later think of this as a moral error.)

Of course, some moral systems cohere logically better than others (which is good if we want to think of them as objective), some have better observable consequences, and some require less strenuous effort at reinterpreting experience. Moving from one moral system to another which improves in some of these areas is generally what we call "moral progress".

This account has no problems with #2 and #3; I don't see an "impossible question" suggesting itself (though I'm open to suggestions); the only divergence from your desired properties is that it claims merely that we can hardly help but believe that some things are right objectively, whether we want them or not. It's not impossible for an alien species to evolve to conscious thought without any such concept of objective morality, or with one that differs from ours on the most crucial of points (say, our immediate moral pain at seeing something like us suffer); and there'd be nothing in the universe to say which one of us is "right".

In essence, I think that Subhan is weakly on the right track, but he doesn't realize that there are some human impulses stronger than anything we'd call "preference", or that what's at stake is a mix of moral impulse, reasoning, and reclassification of experience, far more complex than the interactions he supposes. Since we as humans have in common both the first-order moral impulses and the perception that these are objective and thus ought to be logically coherent, we aren't in fact free to construct our moral systems with too many degrees of freedom.

Sorry for the overlong comment. I'm eager to see what tomorrow's post will bring...

Comment by Patrick_(orthonormal) on Fundamental Doubts · 2008-07-12T23:21:53.000Z · LW · GW

Hmm. These doubts might seem sophomoric to us, since the "idiot god" of evolution couldn't conspire against our reasoning with the thoroughness of the Dark Lords of the Matrix. But it makes sense to consider these questions in the course of programming an AI, which will have cause to wonder whether its creators might have intentionally circumscribed its reasoning faculties...

Also, the problem with "cogito, ergo sum" is that it tempts us to posit a self distinct from the act of thinking, thus an immaterial soul, when the best interpretation seems to be that there is no "I" apart from the activity of my brain. I agree with Nietzsche here when he calls it a seductive trick of grammar, imagining that a verb implies a subject in this way.

Comment by Patrick_(orthonormal) on The Fear of Common Knowledge · 2008-07-09T17:26:27.000Z · LW · GW

Silas: Some of the more progressive Christian denominations, perhaps? Most of the elite members have become entirely embarrassed to claim things like the unique divinity of Jesus, but manage to keep relatively quiet about it (with the partial exception of defectors like ex-Bishop Spong) so as not to offend the more traditional believers in their communion (who of course know about the elites' unbelief).

The Episcopal Communion, in particular, is sliding further into schism as more people reveal their real theologies.

Comment by Patrick_(orthonormal) on Moral Complexities · 2008-07-04T22:12:19.000Z · LW · GW

I fall closer to the morality-as-preference camp, although I'd add two major caveats.

One is that some of these preferences are deeply programmed into the human brain (i.e. "Punish the cheater" can be found in other primates too), as instincts which give us a qualitatively different emotional response than the instincts for direct satisfaction of our desires. The fact that these instincts feel different from (say) hunger or sexual desire goes a long way towards answering your first question for me. A moral impulse feels more like a perception of an external reality than a statement of a personal preference, so we treat it differently in argument.

The second caveat is that because these feel like perceptions, humans of all times and places have put much effort into trying to reconcile these moral impulses into a coherent perception of an objective moral order, denying some impulses where they conflict and manufacturing moral feeling in cases where we "should" feel it for consistency's sake. The brain is plastic enough that we can in fact do this to a surprising extent. Now, some reconciliations clearly work better than others from an interior standpoint (i.e. they cause less anguish and cognitive dissonance in the moral agent). This partially answers the second question about moral progress— the act of moving from one attempted framework to one that feels more coherent with one's stronger moral impulses and with one's reasoning.

And for the last question, the moral impulses are strong instincts, but sometimes others are stronger; and then we feel the conflict as "doing what we shouldn't".

That's where I stand for now. I'm interested to see your interpretation.

Comment by Patrick_(orthonormal) on What Would You Do Without Morality? · 2008-06-30T17:38:00.000Z · LW · GW

What would I do?

When faced with any choice, I'd try and figure out my most promising options, then trace them out into their different probable futures, being sure to include such factors as an action's psychological effect on the agent. Then I'd evaluate how much I prefer these futures, acknowledging that I privilege my own future (and the futures of people I'm close to) above others (but not unconditionally), and taking care not to be shortsighted. Then I'd try to choose what seems best under those criteria, applied as rationally as I'm capable of.

You know, the sort of thing that we all do anyway, but often without letting our conscious minds realize it, and thus often with some characteristic errors mixed in.

Comment by Patrick_(orthonormal) on The Moral Void · 2008-06-30T17:32:55.000Z · LW · GW

Eliezer,

Every time I think you're about to say something terribly naive, you surprise me. It looks like trying to design an AI morality is a good way to rid oneself of anthropomorphic notions of objective morality, and to try and see where to go from there.

Although I have to say the potshot at Nietzsche misses the mark; his philosophy is not a resignation to meaninglessness, but an investigation of how to go on and live a human or better-than-human life once the moral void has been recognized. I can't really explicate or defend him in such a short remark, but I'll say that most of the people who talk about Nietzsche (including, probably, me) read their own thoughts into his; be cautious, for that reason, about dismissing him before reading any of his major works.

Comment by Patrick_(orthonormal) on The Ultimate Source · 2008-06-15T21:12:30.000Z · LW · GW

Oh, dang it.

Comment by Patrick_(orthonormal) on The Ultimate Source · 2008-06-15T21:11:48.000Z · LW · GW

HA:

Those are interesting empirical questions. Why jump to the conclusion?

I didn't claim it was a proof that some sort of algorithm was running; but given the overall increased effectiveness at maximizing utility that seems to come with the experience of deliberation, I'd say it's a very strongly supported hypothesis. (And to abuse a mathematical principle, the Church-Turing Thesis lends credence to the hypothesis: you can't consistently compete with a good algorithm unless you're somehow running a good algorithm.)

Do you have a specific hypothesis you think is better, or specific evidence that contradicts the hypothesis that some good decision algorithm is generally running during a deliberation?

Also, I think it'll be instructive to check the latest neuroscience research on them. We no longer need to go straight to our intuitions as a beginning and end point.

Oh, I agree, and I'm fascinated too by modern neuroscientific research into cognition. It just seems to me that what I've read supports the hypothesis above.

I wonder if you're bothered by Eliezer's frequent references to our intuitions of our cognition rather than sticking to a more outside view of it. It seems to me that his picture of "free will as experience of a decision algorithm" does find support from the more objective outside view, but that he's also trying to "dissolve the question" for those whose intuitions of introspection make an outside account "feel wrong" at first glance. It doesn't seem that's quite the problem for you, but it's enough of a problem for others that I think he's justified in spending time there.

Secondly, an illusion/myth/hallucination may be that you have the ultimate capacity to choose between "deliberation" (running some sort of decision tree/algorithm) and a random choice process in each given life instance...

Again, I don't think that anyone actually chooses randomly; even the worst decisions come out with far too much order for that to be the case. There is a major difference in how aware people are of their real deliberations (which chiefly amounts to how honest they are with themselves), and those who seem more aware tend to make better decisions and be more comfortable with them. That's a reason why I choose to try and reflect on my own deliberations and deliberate more honestly.

I don't need some "ultimate capacity" to not-X in order for X to be (or feel like, if you prefer) my choice, though; I just need to have visualized the alternatives, seen no intrinsic impediments and felt no external constraints. That's the upshot of this reinterpretation of free will, which both coincides with our feeling of freedom and doesn't require metaphysical entities.

Comment by Patrick_(orthonormal) on The Ultimate Source · 2008-06-15T17:34:59.000Z · LW · GW

Usually I don't talk about "free will" at all, of course! That would be asking for trouble - no, begging for trouble - since the other person doesn't know about my redefinition.

Boy, have we ever seen that illustrated in the comments on your last two posts; just replace "know" with "care". I think people have been reading their own interpretations into yours, which is a shame: your explanation as the experience of a decision algorithm is more coherent and illuminating than my previous articulation of the feeling of free will (i.e. lack of feeling of external constraint). Thanks for the new interpretation.

Hopefully Anonymous:

If I understand you correctly on calling the feeling of deliberation an epiphenomenon, do you agree that those who report deliberating on a straightforward problem (say, a chess problem) tend to make better decisions than those who report not deliberating on it? Then it seems that some actual decision algorithm is operating, analogously to the one the person claims to experience.

Do you then think that moral deliberation is characteristically different from strategic deliberation? If so, then I partially agree, and I think this might be the crux of your objection: that in moral decisions, we often hide our real objectives from our conscious selves, and look to justify those hidden motives. While in chess there's very little sense of "looking for a reason to move the rook" as a high priority, the sort of motivated cognition this describes is pretty ubiquitous in human moral decision-making.

However, what I think Eliezer might reply to this is that there still is a process of deliberation going on; the ultimate decision does tend to achieve our goals far better than a random decision, and that's best explained by the running of some decision algorithm. The fact that the goals we pursue aren't always the ones we state— even to ourselves— doesn't prevent this from being a real deliberation; it just means that our experience of the deliberation is false to the reality of it.

Comment by Patrick_(orthonormal) on Thou Art Physics · 2008-06-06T22:47:12.000Z · LW · GW

If that was ambiguous, I meant that the falsehood was the positing of an "I" separate from the patterns of physical evolution of the brain.

Comment by Patrick_(orthonormal) on Thou Art Physics · 2008-06-06T22:45:22.000Z · LW · GW

...I actually can't see how the world would be different if I do have free will or if I don't. (Stephen Weeks)

In order for you to have free will, there has to be a "you" entity in the first place. . . (Matthew C.)

I have an idea where Eliezer is going with this, and I think the above comments are helpful in it.

Seems to me that the reason people intuitively feel there must be some such thing as free will is that there's a basic notion of free vs. constrained in social life, and that we project physical causality of our thoughts to be of the same form.

That is, we tend to think of physical determinism (or probabilistic determinism if we understand it) as if it were the same sort of thing as the way American law constrains our actions, or the way a psychopath holding a gun to our head would do the same. In either case, we can separate the self from the external constraint, and we directly feel that constraint. The fact that our thought processes don't feel constrained by an external agent, then, seems to indicate that they are free from any (deterministic or even probabilistic) necessity.

The falsehood here, as I see it, is that there is no "I" separate from the thoughts, emotions, actions, etc. that are all subject to the physical evolution of my brain; there's no separate thing which is "forced" to go along for the ride. But until we begin to really grasp that (and realize that Descartes was simply wrong in what he thought "Cogito, ergo sum" meant for the self), we have the false dilemma of "free will" versus "physics made me do it".

Comment by Patrick_(orthonormal) on Timeless Identity · 2008-06-03T22:36:23.000Z · LW · GW

David,

You're right not to feel a 'blow to your immortality' should that happen; but consider an alternate story:

You step into the teleport chamber on Earth and, after a weird glow surrounds you, you step out on Mars feeling just fine and dandy. Then somebody tells you that there was a copy of you left in the Earth booth, and that the copy was just assassinated by anti-cloning extremists.

The point of the identity post is that there's really no difference at all between this story and the one you just told, except that in this story you subjectively feel you've traveled a long way instead of staying in the booth on Earth.

Both of the copies are you (or, more precisely, before you step into the booth each copy is a future you); and to each copy, the other copy is just a clone that shares their memories up to time X.

Comment by Patrick_(orthonormal) on Timeless Identity · 2008-06-03T18:59:46.000Z · LW · GW

Dave,

Well, if you resolve not to sign up for cryonics and if the thinking on Quantum Immortality is correct, you might expect a series of weird (and probably painful) events to prevent you indefinitely from dying; while if you're signed up for it, the vast majority of the worlds containing a later "you" will be ones where you're revived after a peaceful death. So there's a big difference in the sort of experience you might anticipate, depending on whether you've signed up.

Comment by Patrick_(orthonormal) on A Premature Word on AI · 2008-06-02T19:04:00.000Z · LW · GW

Hang on, the automated manufacturing plant isn't quite what I mean by an optimization process of this sort. The "specialized intelligences" being discussed fit the bill better: something with strong optimizing powers but unambitious goals.

Comment by Patrick_(orthonormal) on A Premature Word on AI · 2008-06-02T17:39:00.000Z · LW · GW

Caledonian,

Oh, sure, ant colonies are optimization processes too. But there are a few criteria by which we can distinguish the danger of an ant colony from the danger of a human from the danger of an AGI. For example:

(1) How powerful is the optimization process— how tiny is the target it can achieve? A sophisticated spambot might reliably achieve proper English sentences, but I work towards a much smaller target (namely, a coherent conversation) which the spambot couldn't reliably hit.

Not counting the production of individual ants (which is the result of a much larger optimization process of evolution), the ant colony is able to achieve a certain social structure in the colony and to establish the same in a new colony. That's nice, but not really as powerful as it gets when compared to humans painting the Mona Lisa or building rockets.

(2) What are the goals of the process? An automated automobile plant is pretty powerful at hitting a small target (a constructed car of a particular sort, out of raw materials), but we don't worry about it because there's no sense in which the plant is trying to expand, reproduce itself, threaten humans, etc.

(3) Is the operation of the process going to change either of the above? This is, so far, only partially true for some advanced biological intelligences and some rudimentary machine ones (not counting the slow improvements of ant colonies under evolution); but a self-modifying AI has the potential to alter (1) and (2) dramatically in a short period of time.

Can you at least accept that a smarter-than-human AI able to self-modify would exceed anything we've yet seen on properties (1) and (3)? That's why the SIAI hopes to get (2) right, even given (3).

Comment by Patrick_(orthonormal) on Principles of Disagreement · 2008-06-02T17:12:05.000Z · LW · GW

Eliezer,

I also think that considering the particular topics is helpful here. In the math book, you were pretty confident the statement was wrong once you discovered a clear formal proof, because essentially there's nothing more to be said.

On the interpretation of quantum mechanics, since you believe we have almost all the relevant data we'll ever have (save for observed superpositions of larger and larger objects) and the full criteria to decide between these hypotheses given that information, you again think that disagreement is unfounded.

(I suggest you make an exception in your analysis for Scott Aaronson et al, whose view as I understand it is that progress in his research is more important than holding the Best Justified Interpretation at all times, if the different interpretations don't have consequences for that research; so he uses whatever one seems most helpful at the moment. This is more like asking a different valid question than getting the wrong answer to a question.)

But on the prospects for General AI in the next century, well, there's all sort of data you don't yet have that would greatly help, and others might have it; and updating according to Bayes on that data is intractable without significant assumptions. I think that explains your willingness to hear out Daniel Dennett (albeit with some skepticism).

Finally, I think that when it comes to religion you may be implicitly using the same second-order evaluation I've come around to. I still ascribe a nonzero chance to my old religion being true—I didn't find a knockdown logical flaw or something completely impossible in my experience of the world. I just came to the conclusion I didn't have a specific reason to believe it above others.

However, I'd refuse to give any such religion serious consideration from now on unless it became more than 50% probable to my current self, because taking up a serious religion changes one's very practice of rationality by making doubt a disvalue. Spending too much thought on a religion can get you stuck there, and it was hard enough leaving the first time around. That's a second-order phenomenon different from the others: taking the Copenhagen interpretation for a hypothesis doesn't strongly prevent you from discarding it later.

My best probability of finding the truth lies in the space of nonreligious answers instead of within any particular religion, so I can't let myself get drawn in. So I do form an object-level bias against religion (akin to your outright dismissal of Aumann), but it's one I think is justified on a meta-level.

Comment by Patrick_(orthonormal) on A Premature Word on AI · 2008-06-01T02:18:34.000Z · LW · GW

Caledonian, I think Eliezer's going off of his distinction (in Knowability of AI and elsewhere) between "optimal" and "optimized", which more colloquial senses of the words don't include. There may be more optimal ways of achieving our goals, but that doesn't take away from the fact that we regularly achieve results that

(1) we explicitly set out to do,
(2) we can distinguish clearly from other results, and
(3) would be incredibly unlikely to achieve by random effort.

I.e. this comment isn't close to optimal, but it's optimized enough as a coherent reply in a conversation that you'd ascribe a decent level of intelligence to whatever optimization process produced it. You wouldn't, say, wonder if I were a spambot, let alone a random word generator.

Comment by Patrick_(orthonormal) on That Alien Message · 2008-05-22T22:12:05.000Z · LW · GW

Bambi,

The 'you gotta believe me technology' remark was probably a reference to the AI-Box Experiment.

Phillip,

None of the defenses you mentioned are safe against something that can out-think their designers, any more than current Internet firewalls are really secure against smart and determined hackers.

And blocking protein nanotech is as limited a defense against AGI as prohibiting boxcutters on airplanes is against general terrorist attack. Eliezer promoted it as the first idea he imagined for getting into physical space, not the only avenue.

Comment by Patrick_(orthonormal) on When Science Can't Help · 2008-05-21T09:01:00.000Z · LW · GW

Frank, I think you have an idea that many-worlds means a bunch of parallel universes, each with a single past and future, like parallel train tracks. That is most emphatically not what the interpretation means. Rather*, all of the universes with my current state in their history are actual futures that the current me will experience (weighted by the Born probabilities).

If there's an event which I might or might not witness (but which won't interfere with my existence), then that's really saying that there are versions of me that witness it and versions of me that don't. But when it comes to death, the only versions of me that notice anything are the ones that notice they're still alive. So I really should anticipate waking up alive— but my family should anticipate me being dead the next day, because most of their future versions live in worlds where I've passed on.

The conclusion above is contentious even among those who believe the many-worlds interpretation; however, the rejection of the 'parallel tracks' analogy is not contentious in the least. If (as you indicate) you think that you have one future and that the version of you who will be miraculously cured overnight isn't the same you, then you have misunderstood the many-worlds interpretation.

*This is an oversimplification and falsification, of course, but it's a damn sight closer than the other image.

Comment by Patrick_(orthonormal) on When Science Can't Help · 2008-05-19T23:39:00.000Z · LW · GW

Bad analogy, actually. If I have an incurable terminal illness today and fall asleep, I'll still have an incurable terminal illness in most of the worlds in which I wake up— so I should assign a very low subjective probability to finding myself cured tomorrow. (Or, more precisely, the vast majority of the configurations that contain someone with all my memories up to that point will be ones in which I'm waking up the next day with the illness.)

I'm not quite sure how it might play out subjectively at the very end of life sans cryonics; this is where the idea of quantum suicide gets weird, with one-in-way-more-than-a-million chances subjectively coming to pass. However, if I'm signed up for cryonics, and if there's a significant chance I'll be revived someday, that probability by far overwhelms those weird possibilities for continued consciousness: in the vast majority of worlds where someone has my memories up to that point, that someone will be a revived post-cryonic me. Thus I should subjectively assign a high probability to being revived.

Or so I think.

Comment by Patrick_(orthonormal) on When Science Can't Help · 2008-05-18T11:57:00.000Z · LW · GW

Sorry to be late to the party— but has nobody yet mentioned the effect that MWI has on assessing cryonics from a personal standpoint; i.e. that your subjective probability of being revived should very nearly be your probability estimate that revival will happen in some universe? If 9/10 of future worlds destroy all cryogenic chambers, and 9/10 of the ones left don't bother to revive you, then it doesn't matter to you: you'll still wake up and find yourself in the hundredth world. Such factors only matter if you think your revival would be a significant benefit to the rest of humanity (rather unlikely, in my estimation).
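(A toy tally of the figures just cited, taking the 9/10 numbers as given and assuming, as the argument does, that post-cryonic revival is your only route to a future self:)

```python
# Hypothetical equally weighted future worlds, using the 9/10 figures quoted above.
branches = 100
chambers_destroyed = 90                                  # 9/10 destroy all cryonic chambers
never_revived = 9                                        # 9/10 of the rest don't bother reviving you
revived = branches - chambers_destroyed - never_revived  # the "hundredth world"

p_revival_somewhere = revived / branches                 # 0.01: the outside, whole-measure view
branches_with_a_future_you = revived                     # by assumption, only revival worlds contain one
p_subjective = revived / branches_with_a_future_you      # 1.0: what the woken-up "you" finds

print(p_revival_somewhere, p_subjective)                 # 0.01 1.0
```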

(Yes, there are quirks to be discussed in this idea. I've thought about some of them already, but I might have missed others. Anyhow, it's getting early.)

Comment by Patrick_(orthonormal) on The Failures of Eld Science · 2008-05-12T22:39:29.000Z · LW · GW

Does anyone else suspect that the last full paragraph is meant to give us the assignment for tomorrow morning?

As for my answers, I think that the particulars of this paradigm shift have to enter into it on some level— because as Eliezer pointed out earlier, the Schrödinger's Cat thought experiment really should have suggested the possibility of superposed observers to someone, and from there the MWI doesn't seem too remote.

So I'd have to ascribe the delay in the MWI proposal in great part to the fact that it doesn't immediately cohere with our subjective experience of consciousness, and that the physicists were culturally separated from other disciplines (including even philosophy and literature) that were proposing less naive interpretations of consciousness.

Comment by Patrick_(orthonormal) on Many Worlds, One Best Guess · 2008-05-12T02:32:38.000Z · LW · GW

Well, now I think I understand why you chose to do the QM series on OB. As it stands, the series is a long explication of one of the most subtle anthropocentric biases out there— the bias in favor of a single world with a single past and future, based on our subjective perception of a single continuous conscious experience. It takes a great deal of effort before most of us are even willing to recognize that assumption as potentially problematic.

Oh, and one doesn't even have to assume the MWI is true to note this; the single-world bias is irrationally strong in us even if it turns out to correspond to reality.

Comment by Patrick_(orthonormal) on On Being Decoherent · 2008-04-29T03:01:32.000Z · LW · GW

I just wanted to say I've benefited greatly from this series, and especially from the last few posts. I'd studied some graduate quantum mechanics, but bailed out before Feynman paths, decoherence, etc; and from what I'd experienced with it, I was beginning to think an intuitive explanation of (one interpretation of) quantum mechanics was nigh-impossible. Thanks for proving me wrong, Eliezer.

The argument (from elegance/Occam's Razor) for the many-worlds interpretation seems impressively strong, too. I'll be interested to read the exchanges when you let the one-world advocates have their say.

Comment by Patrick_(orthonormal) on Variable Question Fallacies · 2008-03-05T18:33:14.000Z · LW · GW

More to the point: (P or ~P) isn't a theorem; it's an axiom. It is (so far as we can tell) consistent with our other axioms and absolutely necessary for many important theorems (any proof by contradiction— and there are some theorems like Brouwer's Fixed Point Theorem which, IIRC, don't seem to be provable any other way), so we accept a few counterintuitive but consistent consequences like (G or ~G) as the price of doing business. (The Axiom of Choice with the Banach-Tarski Paradox is the same way.)

OK, I've said enough on that tangent.

Comment by Patrick_(orthonormal) on Variable Question Fallacies · 2008-03-05T17:51:31.000Z · LW · GW

Actually, you can't quite escape the problem of the excluded middle by asserting that "This sentence is false" is not well-formed or meaningful, because Gödel's sentence G is a perfectly well-formed (albeit horrifically complicated) statement about the properties of natural numbers which is undecidable in exactly the same way as Epimenides' paradox.

Mathematicians who prefer to use the law of excluded middle (i.e. most of us, including me) have to affirm that (G or ~G) is indeed a theorem, although neither G nor ~G is a theorem! (This doesn't lead to a contradiction within the system, fortunately, because it's also impossible to formally prove within the system that neither G nor ~G is a theorem.)
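(For concreteness, the standard statement of that situation, phrased for Peano Arithmetic; the choice of PA and the ω-consistency hedge on the second non-theorem are my additions:)

```latex
% Prov_PA is PA's arithmetized provability predicate; G is its Gödel sentence.
\begin{align*}
  &\text{Diagonal lemma:} & \mathrm{PA} &\vdash G \leftrightarrow \neg\,\mathrm{Prov}_{\mathrm{PA}}(\ulcorner G \urcorner)\\
  &\text{Excluded middle:} & \mathrm{PA} &\vdash G \lor \neg G\\
  &\text{First incompleteness:} & \mathrm{PA} &\nvdash G \;\text{(if PA is consistent)}, \qquad
    \mathrm{PA} \nvdash \neg G \;\text{(if PA is $\omega$-consistent)}
\end{align*}
```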

Comment by Patrick_(orthonormal) on Zut Allais! · 2008-01-21T16:38:21.000Z · LW · GW

This matters emotionally, even though it shouldn't (or seems like it shouldn't).

Hypothetical money is not treated as equivalent to possessed money.

My point exactly. It's perfectly understandable that we've evolved a "bird in the hand/two in the bush" heuristic, because it makes for good decisions in many common contexts; but that doesn't prevent it from leading to bad decisions in other contexts. And we should try to overcome it in situations where the actual outcome is of great value to us.

A utility function can take things other than money into account, you know.

As well it should. But how much utility should you assign to the psychological factors that make you treat two descriptions of the same set of outcomes differently? Enough to account for a difference of $100 in expected value? $10,000? 10,000 lives?

At some point, you have to stop relying on that heuristic and do the math if you care about making the right decision.

Comment by Patrick_(orthonormal) on Zut Allais! · 2008-01-20T21:15:08.000Z · LW · GW

How do the commenters who justify the usual decisions in the face of certainty and uncertainty with respect to gain and loss account for this part of the post?

There are various other games you can also play with certainty effects. For example, if you offer someone a certainty of $400, or an 80% probability of $500 and a 20% probability of $300, they'll usually take the $400. But if you ask people to imagine themselves $500 richer, and ask if they would prefer a certain loss of $100 or a 20% chance of losing $200, they'll usually take the chance of losing $200. Same probability distribution over outcomes, different descriptions, different choices.

Assuming that this experiment has actually been validated, there's hardly a clearer example of obvious bias than a person's decision in the exact same circumstances being determined by whether it's described as a certain vs. uncertain gain or a certain vs. uncertain loss.
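(The quoted payoffs can be checked mechanically; a minimal sketch, assuming only final wealth matters, showing that the two framings name the same distribution over outcomes:)

```python
from collections import Counter

def distribution(outcomes):
    """Collect (probability, final_wealth) pairs into a canonical distribution."""
    dist = Counter()
    for prob, wealth in outcomes:
        dist[wealth] += prob
    return dict(dist)

# Framing 1: a sure $400, versus an 80% chance of $500 and a 20% chance of $300.
gain_certain = distribution([(1.0, 400)])
gain_gamble  = distribution([(0.8, 500), (0.2, 300)])

# Framing 2: start $500 richer, then a sure loss of $100, versus a 20% chance of losing $200.
loss_certain = distribution([(1.0, 500 - 100)])
loss_gamble  = distribution([(0.8, 500 - 0), (0.2, 500 - 200)])

print(gain_certain == loss_certain)  # True
print(gain_gamble == loss_gamble)    # True
```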

And Eliezer, I have to compliment your writing skills: when faced with people positing a utility of certainty, the first thing that came to my mind was the irrational scale invariance such a concept must have if it fulfills the stated role. But if you'd just stated that, people would have argued to Judgment Day on nuances of the idea, trying to salvage it. Instead, you undercut the counterargument with a concrete reductio ad absurdum, replacing $24,000 with 24,000 lives, which you realized would make your interlocutors uncomfortable about making an incorrect decision for the sake of a state of mind. You seem to have applied a vital principle: we generally change our minds not when a good argument is presented to us, but when it makes us uncomfortable by showing how our existing intuitions conflict.

If and when you publish a book, if the writing is of this quality, I'll recommend it to the heavens.