Comments

Comment by Allan_Crossman on Three Worlds Decide (5/8) · 2009-02-03T11:32:39.000Z · LW · GW

... and stuns Akon (or everyone). He then opens a channel to the Superhappies, and threatens to detonate the star - thus preventing the Superhappies from "fixing" the Babyeaters, their highest priority. He uses this to blackmail them into fixing the Babyeaters while leaving humanity untouched.

Comment by Allan_Crossman on BHTV: Yudkowsky / Wilkinson · 2009-01-27T23:24:53.000Z · LW · GW

No, he says "you're the first person who etc..."

Comment by Allan_Crossman on Failed Utopia #4-2 · 2009-01-21T23:24:26.000Z · LW · GW

Is this a "failed utopia" because human relationships are too sacred to break up, or is it a "failed utopia" because the AI knows what it should really have done but hasn't been programmed to do it?

Comment by Allan_Crossman on Changing Emotions · 2009-01-05T18:00:57.000Z · LW · GW

that can support the idea that the much greater incidence of men committing acts of violence is "natural male aggression" that we can't ever eliminate.

The whole point of civilisation is to defeat nature and all its evils.

Comment by Allan_Crossman on Thanksgiving Prayer · 2008-11-28T23:46:14.000Z · LW · GW

... how isn't atheism a religion? It has to be accepted on faith, because we can't prove there isn't a magical space god that created everything.

I think there's a post somewhere on this site that makes the reasonable point that "is atheism a religion?" is not an interesting question. The interesting question is "what reasons do we have to believe or disbelieve in the supernatural?"

Comment by Allan_Crossman on BHTV: Jaron Lanier and Yudkowsky · 2008-11-01T21:56:25.000Z · LW · GW

My issue with this is that we don't, actually, have a philosophical/rational/scientific vision of capital-T Truth yet, despite all of our efforts. (Descartes, Spinoza, Kant, etc.)

Truth is whatever describes the world the way it is.

Even the capital-T Truth believers will admit that we don't know how to achieve an understanding of that truth, they'll just say that it's possible because there really is this kind of truth.

Do you mean an understanding of the way the world is, or an understanding of what truth is?

Isn't it the case, then, that your embracing this kind of objective truth is itself a "true because it's useful" kind of thinking, not a "true because it's true" kind of thinking?

You can of course define "truth" however you like - it's just a word. If you're expecting some sort of actual relationship to hold between - say - ink on a page saying "Jupiter has four large moons" and the moons of Jupiter themselves, then of course there's no such thing; the universe is just made of protons, electrons, and such mundane objects.

But there still really are four large moons of Jupiter.

Comment by Allan_Crossman on Prices or Bindings? · 2008-10-21T21:31:34.000Z · LW · GW

Paul, that's a good point.

Eliezer: If all I want is money, then I will one-box on Newcomb's Problem.

Mmm. Newcomb's Problem features the rather weird case where the relevant agent can predict your behaviour with 100% accuracy. I'm not sure what lessons can be learned from it for the more normal cases where this isn't true.

Comment by Allan_Crossman on Prices or Bindings? · 2008-10-21T18:03:55.000Z · LW · GW

If a serial killer comes to a confessional, and confesses that he's killed six people and plans to kill more, should the priest turn him in? I would answer, "No." If not for the seal of the confessional, the serial killer would never have come to the priest in the first place.

It's important to distinguish two ways this argument might work. The first is that the consequences of turning him in are bad, because future killers will be (or might be) less likely to seek advice from priests. That's a fairly straightforward utilitarian argument.

But the second is that turning him in is somehow bad, regardless of the consequences, because the world in which every "confessor" did as you do is a self-defeating, impossible world. This is more of a Kantian line of thought.

Eliezer, can you be explicit which argument you're making? I thought you were a utilitarian, but you've been sounding a bit Kantian lately. :)

Comment by Allan_Crossman on How Many LHC Failures Is Too Many? · 2008-09-22T15:33:00.000Z · LW · GW

Benja: But it doesn't follow that you should conclude that the other people are getting shot, does it?

I'm honestly not sure. It's not obvious to me that you shouldn't draw this conclusion if you already believe in MWI.

(Clearly you learned nothing about that, because whether or not they get shot does not affect anything you're able to observe.)

It seems like it does. If people are getting shot then you're not able to observe any decision by the guards that results in you getting taken away. (Or at least, you don't get to observe it for long - I don't think the slight time lag matters much to the argument.)

Comment by Allan_Crossman on How Many LHC Failures Is Too Many? · 2008-09-22T05:27:00.000Z · LW · GW

Benja: Allan, you are right that if the LHC would destroy the world, and you're a surviving observer, you will find yourself in a branch where LHC has failed, and that if the LHC would not destroy the world and you're a surviving observer, this is much less likely. But contrary to mostly everybody's naive intuition, it doesn't follow that if you're a surviving observer, LHC has probably failed.

I don't believe that's what I've been saying; the question is whether the LHC failing is evidence for the LHC being dangerous, not whether surviving is evidence for the LHC having failed.

Comment by Allan_Crossman on How Many LHC Failures Is Too Many? · 2008-09-22T05:13:00.000Z · LW · GW

Allan: your intuition is wrong here too. Notice that if Zeus were to have independently created a zillion people in a green room, it would change your estimate of the probability, despite being completely unrelated.

I don't see how, unless you're told you could also be one of those people.

Comment by Allan_Crossman on How Many LHC Failures Is Too Many? · 2008-09-21T23:25:49.000Z · LW · GW

Simon: As I say above, I'm out of my league when it comes to actual probabilities and maths, but:

P(W|F) = P(F|W)P(W)/P(F)

Note that none of these probabilities are conditional on survival.

Is that correct? If the LHC is dangerous and MWI is true, then the probability of observing failure is 1, since that's the only thing that gets observed.

An analogy I would give is:

You're created by God, who tells you that he has just created 10 people who are each in a red room, and depending on a coin flip God made, either 0 or 10,000,000 people who are each in a blue room. You are one of these people. You turn the lights on and see that you're one of the 10 people in a red room. Don't you immediately conclude that there are almost certainly only 10 people, with nobody in a blue room?

The red rooms represent Everett worlds where the LHC miraculously and repeatedly fails. The blue rooms represent Everett worlds where the LHC works. God's coin flip is whether or not the LHC is dangerous.

i.e. You conclude that there are no people in worlds where the LHC works (blue rooms), because they're all dead. The reasoning still works even if the coin is biased, as long as it's not too biased.
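To make the arithmetic of the analogy explicit, here's the Bayesian update under self-sampling-style reasoning, using the 10-vs-10,000,000 numbers above and a fair coin (whether this style of update is even legitimate is exactly what's being disputed in this thread, so treat it as arithmetic, not proof):

    # Toy Bayesian update for the red/blue room analogy - illustrative only.
    # Hypotheses: the coin came up "few" (just the 10 red-room people) or
    # "many" (the 10 red-room people plus 10,000,000 blue-room people).
    prior_few, prior_many = 0.5, 0.5          # fair coin, as in the analogy

    # Chance that *you* are one of the red-room people under each hypothesis,
    # treating yourself as a random sample from everyone who exists.
    p_red_given_few = 1.0
    p_red_given_many = 10 / 10_000_010

    posterior_few = (p_red_given_few * prior_few) / (
        p_red_given_few * prior_few + p_red_given_many * prior_many)
    print(posterior_few)   # about 0.999999 - almost certainly nobody is in a blue room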

Comment by Allan_Crossman on How Many LHC Failures Is Too Many? · 2008-09-21T22:44:56.000Z · LW · GW

Benja, I'm not really smart enough to parse the maths, but I can comment on the intuition:

The very small number of Everett branches that have the LHC non-working due to a string of random failures is the same in both cases [of LHC dangerous vs. LHC safe]

I see that, but if the LHC is dangerous then you can only find yourself in a world where lots of failures have occurred, whereas if the LHC is safe, it's extremely unlikely that you'll find yourself in such a world.

Thus, if all you know is that you are in an Everett branch in which the LHC is non-working due to a string of random failures, you have no information about whether the other Everett branches have the LHC happily chugging ahead, or dead.

The intuition on my side is that, if you consider yourself a random observer, it's amazing that you should find yourself in one of the extremely few worlds where the LHC keeps failing, unless the LHC is dangerous, in which case all observers are in such a world.

(I would like to stress for posterity that I don't believe the LHC is dangerous.)

Comment by Allan_Crossman on How Many LHC Failures Is Too Many? · 2008-09-21T17:21:51.000Z · LW · GW

Simon: the ex ante probability of failure of the LHC is independent of whether or not if it turned on it would destroy Earth.

But - if the LHC was Earth-fatal - the probability of observing a world in which the LHC was brought fully online would be zero.

(Applying anthropic reasoning here probably makes more sense if you assume MWI, though I suspect there are other big-world cosmologies where the logic could also work.)

Comment by Allan_Crossman on How Many LHC Failures Is Too Many? · 2008-09-21T08:54:01.000Z · LW · GW

Oh God I need to read Eliezer's posts more carefully, since my last comment was totally redundant.

Comment by Allan_Crossman on How Many LHC Failures Is Too Many? · 2008-09-20T23:38:27.000Z · LW · GW

First collisions aren't scheduled to have happened yet, are they? In which case, the failure can't be seen as anthropic evidence yet: we might as well be in a world where it hasn't failed, since such a world wouldn't have been destroyed yet in any case.

But if I'm not mistaken, even old failures will become evidence retrospectively once first collisions are overdue, since (assuming the unlikely case of the LHC actually being dangerous) all observers still alive would be in a world where the LHC failed; when it failed being irrelevant.

As much as the AP fascinates me, it does my head in. :)

Comment by Allan_Crossman on My Best and Worst Mistake · 2008-09-16T01:05:36.000Z · LW · GW

From your perspective, you should chalk this up to the anthropic principle: if I'd fallen into a true dead end, you probably wouldn't be hearing from me on this blog.

I'm not sure that can properly be called anthropic reasoning; I think you mean a selection effect. To count as anthropic, my existence would have to depend upon your intellectual development; which it doesn't, yet. :)

(Although I suppose my existence as Allan-the-OB-reader probably does so depend... but that's an odd way of looking at it.)

Comment by Allan_Crossman on The Truly Iterated Prisoner's Dilemma · 2008-09-05T02:42:58.000Z · LW · GW

I'm interested in the inconsistency of those who accept defection as the rational equilibrium in the one-shot PD, but find excuses to reject it in the finitely iterated known-horizon PD.

[...] What if neither party to the IPD thinks there's a realistic chance that the other party is stupid - if they're both superintelligences, say?

It's never worthwhile to cooperate in the one-shot case, unless the two players' actions are linked in some Newcomb-esque way.

In the iterated case, if there's even a fairly small chance that the other player will try to establish cooperation, then it's worthwhile to cooperate on move 1. And since both players are superintelligences, surely they both realise that there is indeed a sufficiently high chance, since they're both likely to be thinking this. Is this line of reasoning really an "excuse"?
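A rough expected-value sketch of that claim, with toy payoffs (5/3/1/0), 100 rounds, and tit-for-tat standing in for "tries to establish cooperation" - all of these are my own illustrative assumptions, not anything from the post:

    # If the other player plays tit-for-tat with probability q and always
    # defects otherwise, when does opening with cooperation (here: playing
    # tit-for-tat yourself) beat always defecting? Illustrative numbers only.
    PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

    def total_score(me, them, rounds=100):
        """Total payoff to `me`; each strategy sees the opponent's previous move."""
        score, my_last, their_last = 0, None, None
        for _ in range(rounds):
            my_move, their_move = me(their_last), them(my_last)
            score += PAYOFF[(my_move, their_move)]
            my_last, their_last = my_move, their_move
        return score

    def tit_for_tat(opp_last):
        return "C" if opp_last in (None, "C") else "D"

    def always_defect(opp_last):
        return "D"

    for q in (0.001, 0.01, 0.1):
        ev_tft = (q * total_score(tit_for_tat, tit_for_tat)
                  + (1 - q) * total_score(tit_for_tat, always_defect))
        ev_alld = (q * total_score(always_defect, tit_for_tat)
                   + (1 - q) * total_score(always_defect, always_defect))
        print(q, ev_tft, ev_alld)
    # With these payoffs the crossover is around q = 0.005: even a half-percent
    # chance of meeting a would-be cooperator makes cooperating on move 1 pay.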

One more thing; could something like the following be made respectable?

  1. The prior odds of the other guy defecting in round 1 are .999
  2. But if he knows that I know fact #1, the odds become .999 x .999
  3. But if he knows that I know facts #1 and #2, the odds become .999 x .999 x .999

Etc...

Or is this nonsense?
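Purely to show the arithmetic of that (possibly nonsensical) scheme - each extra level of "he knows that I know" just multiplies in another .999, so it takes a great many levels before the product moves much:

    # Hypothetical scheme from the numbered list above: one factor of 0.999
    # per level of mutual knowledge. Illustrative arithmetic only.
    p = 0.999
    for levels in (1, 10, 100, 1000, 5000):
        print(levels, round(p ** levels, 3))
    # 1 0.999, 10 0.99, 100 0.905, 1000 0.368, 5000 0.007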

Comment by Allan_Crossman on The Truly Iterated Prisoner's Dilemma · 2008-09-04T20:47:02.000Z · LW · GW

Carl - good point.

I shouldn't have conflated perfectly rational agents (if there are such things) with classical game-theorists. Presumably, a perfectly rational agent could make this move for precisely this reason.

Probably the best situation would be if we were so transparently naive that the maximizer could actually verify that we were playing naive tit-for-tat, including on the last round. That way, it would cooperate for 99 rounds. But with it in another universe, I don't see how it can verify anything of the sort.

(By the way, Eliezer, how much communication is going on between us and Clippy? In the iterated dilemma's purest form, the only communications are the moves themselves - is that what we are to assume here?)

Comment by Allan_Crossman on The True Prisoner's Dilemma · 2008-09-04T20:34:00.000Z · LW · GW

Vladimir: In case of prisoner's dilemma, you are penalized by ending up with (D,D) instead of better (C,C) for deciding to defect

Only if you have reason to believe that the other player will do whatever you do. While that's the case in Simpleton's example, it's not the case in Eliezer's.

Comment by Allan_Crossman on The Truly Iterated Prisoner's Dilemma · 2008-09-04T19:55:34.000Z · LW · GW

If it's actually common knowledge that both players are "perfectly rational" then they must do whatever game theory says.

But if the paperclip maximizer knows that we're not perfectly rational (or falsely believes that we're not) it will try and achieve a better score than it could get if we were in fact perfectly rational. It will do this by cooperating, at least for a time.

I think correct strategy gets profoundly complicated when one side believes the other side is not fully rational.

Comment by Allan_Crossman on The True Prisoner's Dilemma · 2008-09-04T19:47:00.000Z · LW · GW

Chris: Sorry Allan, that you won't be able to reply. But you did raise the question before bowing out...

I didn't bow out, I just had a lot of comments made recently. :)

I don't like the idea that we should cooperate if it cooperates. No, we should defect if it cooperates. There are benefits and no costs to defecting.

But if there are reasons for the other to have habits that are formed by similar forces

In light of what I just wrote, I don't see that it matters; but anyway, I wouldn't expect a paperclip maximizer to have habits so ingrained that it can't ever drop them. Even if it routinely has to make real trade-offs, it's presumably smart enough to see that - in a one-off interaction - there are no drawbacks to defecting.

Simpleton: No line of causality from one to the other is required.

Yeah, I get your argument now. I think you're probably right, in that extreme case.

Comment by Allan_Crossman on The True Prisoner's Dilemma · 2008-09-04T12:51:05.000Z · LW · GW

Psy-Kosh: They don't have to believe they have such causal powers over each other. Simply that they are in certain ways similar to each other.

I agree that this is definitely related to Newcomb's Problem.

Simpleton: I earlier dismissed your idea, but you might be on to something. My apologies. If they were genuinely perfectly rational, or both irrational in precisely the same way, and could verify that fact in each other...

Then they might be able to know that they will both do the same thing. Hmm.

Anyway, my 3 comments are up. Nothing more from me for a while.

Comment by Allan_Crossman on The True Prisoner's Dilemma · 2008-09-04T12:32:24.000Z · LW · GW

[D,C] will happen only if the other player assumes that the first player bets on cooperation

No, it won't happen in any case. If the paperclip maximizer assumes I'll cooperate, it'll defect. If it assumes I'll defect, it'll defect.

I debug my model of decision-making policies [...] by requiring the outcome to be stable even if I assume that we both know which policy is used by another player

I don't see that "stability" is relevant here: this is a one-off interaction.

Anyway, let's say you cooperate. What exactly is preventing the paperclip maximizer from defecting?

Comment by Allan_Crossman on The True Prisoner's Dilemma · 2008-09-04T12:05:38.000Z · LW · GW

simpleton: won't each side choose to cooperate, after correctly concluding that it will defect iff the other does?

Only if they believe that their decision somehow causes the other to make the same decision.

CarlJ: How about placing a bomb on two piles of substance S and giving the remote for the human pile to the clipmaximizer and the remote for its pile to the humans?

It's kind of standard in philosophy that you aren't allowed solutions like this. The reason is that Eliezer can restate his example to disallow this and force you to confront the real dilemma.

Vladimir: It's preferrable to choose (C,C) [...] if we assume that other player also bets on cooperation.

No, it's preferable to choose (D,C) if we assume that the other player bets on cooperation.

decide self.C; if other.D, decide self.D

We're assuming, I think, that you don't get to know what the other guy does until after you've both committed (otherwise it's not the proper Prisoner's Dilemma). So you can't use if-then reasoning.

Comment by Allan_Crossman on The True Prisoner's Dilemma · 2008-09-04T02:00:02.000Z · LW · GW

Michael: This is not a prisoner's dilemma. The nash equilibrium (C,C) is not dominated by a pareto optimal point in this game.

I don't believe this is correct. Isn't the Nash equilibrium here (D,D)? That's the point at which neither player can gain by unilaterally changing strategy.
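A quick way to see it, with conventional textbook payoffs (my own illustrative numbers, not Michael's or Eliezer's): check every outcome for profitable unilateral deviations.

    # Illustrative Prisoner's Dilemma payoffs: (row player, column player).
    PAYOFFS = {
        ("C", "C"): (3, 3),
        ("C", "D"): (0, 5),
        ("D", "C"): (5, 0),
        ("D", "D"): (1, 1),
    }

    def is_nash(profile):
        """True if neither player gains by unilaterally switching their move."""
        a, b = profile
        row_payoff, col_payoff = PAYOFFS[profile]
        best_row = max(PAYOFFS[(x, b)][0] for x in "CD")
        best_col = max(PAYOFFS[(a, y)][1] for y in "CD")
        return row_payoff == best_row and col_payoff == best_col

    for profile in PAYOFFS:
        print(profile, "Nash equilibrium" if is_nash(profile) else "not an equilibrium")
    # Only ("D", "D") passes: from (C, C) either player gains by switching to D,
    # so (C, C) is Pareto-better but not a Nash equilibrium.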

Comment by Allan_Crossman on The True Prisoner's Dilemma · 2008-09-04T00:29:50.000Z · LW · GW

Prase, Chris, I don't understand. Eliezer's example is set up in such a way that, regardless of what the paperclip maximizer does, defecting gains one billion lives and loses two paperclips.

Basically, we're being asked to choose between a billion lives and two paperclips (paperclips in another universe, no less, so we can't even put them to good use).

The only argument for cooperating would be if we had reason to believe that the paperclip maximizer will somehow do whatever we do. But I can't imagine how that could be true. Being a paperclip maximizer, it's bound to defect, unless it had reason to believe that we would somehow do whatever it does. I can't imagine how that could be true either.

Or am I missing something?

Comment by Allan_Crossman on The True Prisoner's Dilemma · 2008-09-03T22:14:43.000Z · LW · GW

Damnit, Eliezer nitpicked my nitpicking. :)

Comment by Allan_Crossman on The True Prisoner's Dilemma · 2008-09-03T22:01:41.000Z · LW · GW

I agree: Defect!

Clearly the paperclip maximizer should just let us have all of substance S; but a paperclip maximizer doesn't do what it should, it just maximizes paperclips.

I sometimes feel that nitpicking is the only contribution I'm competent to make around here, so... here you endorsed Steven's formulation of what "should" means; a formulation which doesn't allow you to apply the word to paperclip maximizers.

Comment by Allan_Crossman on Magical Categories · 2008-08-25T04:16:42.000Z · LW · GW

Plato had a concept of "forms". Forms are ideal shapes or abstractions: every dog is an imperfect instantiation of the "dog" form that exists only in our brains.

Mmm. I believe Plato saw the forms as being real things existing "in heaven" rather than merely in our brains. It wasn't a stupid theory for its day; in particular, a living thing growing into the right shape or form must have seemed utterly mysterious, and so the idea that some sort of blueprint was laid out in heaven must have had a lot of appeal.

But anyway, forms as ideas "in our brains" isn't really the classical forms theory.

it is not difficult to believe in the existence of a "good" form.

In our brains, just maybe.

If we assume an AI that can develop its own forms, then it should be able to discover the Form of the Good.

Do you mean by looking into our brains, or by just arriving at it on its own?

Comment by Allan_Crossman on The Cartoon Guide to Löb's Theorem · 2008-08-18T02:53:16.000Z · LW · GW

Boiling it down to essentials, it looks to me like the key move is this:

  • If we can prove X, then we can prove Y.
  • Therefore, if X is true, then we can prove Y.

But this doesn't follow - X could be true but not provable.

Is that right? It's ages since I did logic, and never to a deep level, so excuse me if this is way off.
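Trying to put my own summary into standard provability-logic notation (so this may well be where I'm going wrong):

    \begin{align*}
    \text{``If we can prove } X \text{, then we can prove } Y\text{''}
        &:\quad \Box X \rightarrow \Box Y \\
    \text{``If } X \text{ is true, then we can prove } Y\text{''}
        &:\quad X \rightarrow \Box Y
    \end{align*}

Getting from the first to the second would seem to need X → □X ("truth implies provability"), which isn't a theorem in general; all we're entitled to is the necessitation rule, from ⊢ X infer ⊢ □X. If that's right, the gap is exactly "X could be true but not provable".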

Comment by Allan_Crossman on The Bedrock of Morality: Arbitrary? · 2008-08-15T23:22:13.000Z · LW · GW

Eliezer, I think I kind-of understand by now why you don't call yourself a relativist. Would you say that it's the "psychological unity of mankind" that distinguishes you from relativists?

A relativist would stress that humans in different cultures all have different - though perhaps related - ideas about "good" and "right" and so on. I believe your position is that the bulk of human minds are similar enough that they would arrive at the same conclusions given enough time and access to enough facts; and therefore, that it's an objective matter of fact what the human concepts of "right" and "good" actually mean.

And since we are human, there's no problem in us continuing to use those words.

Am I understanding correctly?

It seems like your position would become more akin to relativism if the "psychological unity" turned out to be dubious, or if our galaxy turned out to be swarming with aliens, and people were forced to deal with genuinely different minds. In those cases, would there still be anything to separate you from actual relativists?

(In either case, it would still be an objective matter of fact what any given mind would call "good" if given enough time - but that would be a much less profound fact than it is for a species all alone and in a state of psychological unity.)

Comment by Allan_Crossman on Inseparably Right; or, Joy in the Merely Good · 2008-08-10T03:00:40.000Z · LW · GW

It's a datum (which any adequate metaethical theory must account for) that there can be substantive moral disagreement. When Bob says "Abortion is wrong", and Sally says, "No it isn't", they are disagreeing with each other.

I wonder though: is this any more mysterious than a case where two children are arguing over whether strawberry or chocolate ice cream is better?

In that case, we would happily say that the disagreement comes from their false belief that it's a deep fact about the universe which ice cream is better. If Eliezer is right (I'm still agnostic about this), wouldn't moral disagreements be explained in an analogous way?

Comment by Allan_Crossman on Sorting Pebbles Into Correct Heaps · 2008-08-10T02:38:42.000Z · LW · GW

This post hits me far more strongly than the previous ones on this subject.

I think your main point is that it's positively dangerous to believe in an objective account of morality if you're trying to build an AI, because you will then falsely believe that a sufficiently intelligent AI will be able to determine the correct morality - so you don't have to worry about programming it to be friendly (or Friendly).

I'm sure you've mentioned this before, but this is more forceful, at least to me. Thanks.

Personally, even though I've mentioned that I thought there might be an objective basis for morality, I've never believed that every mind (or even a large fraction of minds) would be able to find it. So I'm in total agreement that we shouldn't just assume a superintelligent AI would do good things.

In other words, this post drives home to me that, pragmatically, the view of morality you propose is the best one to have, from the point of view of building an AI.

Comment by Allan_Crossman on Morality as Fixed Computation · 2008-08-08T04:02:42.000Z · LW · GW

And I may not know what this question is, actually; I may not be able to print out my current guess nor my surrounding framework; but I know, as all non-moral-relativists instinctively know, that the question surely is not just "How can I do whatever I want?"

I'm not sure you've done enough to get away from being a "moral relativist", which is not the same as being an egoist who only cares about his own desires. "Moral relativism" just means this (Wikipedia):

In philosophy, moral relativism is the position that moral or ethical propositions do not reflect objective and/or universal moral truths [...] Moral relativists hold that no universal standard exists by which to assess an ethical proposition's truth.

Unless I've radically misunderstood, I think that's close to your position. Admittedly, it's an objective matter of fact whether some action is good according to the "blob of a computation" (i.e. set of ethical concerns) that any specific person cares about. But there's no objective way to determine that one "blob" is any more correct than another - except by the standards of those blobs themselves.

(By the way, I hope this isn't perceived as particular hostility on my part: I think some very ethical and upstanding people have been relativists. It's also not an argument that your position is wrong.)

Comment by Allan_Crossman on Hiroshima Day · 2008-08-07T19:57:59.000Z · LW · GW

Myself: I can't help but wonder about anthropic effects here. It might be the case that nuclear-armed species annihilate themselves with high probability (say 50% per decade), but of course, all surviving observers live on planets where it hasn't happened through sheer chance.

Just to expand on this (someone please stop me if this sort of speculative post is irritating):

Imagine there are a hundred Earths (maybe because of MWI, or because the universe is infinite, or whatever). Let's say there's a 90% chance of nuclear war before 2008, and such a war would reduce the 2008 population by 90%. In that case, you still end up with 53% of observers in 2008 living on an Earth where nuclear war didn't occur.
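Making that 53% explicit (same numbers as above):

    # 100 Earths: 90 have a nuclear war that cuts the 2008 population to 10%,
    # 10 avoid war and keep their full population (1 unit each).
    no_war_observers = 10 * 1.0
    war_observers = 90 * 0.1
    print(no_war_observers / (no_war_observers + war_observers))   # about 0.53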

This implies that we might be overconfident, and assign too low a probability to nuclear war, just because we've survived as long as we have.

But: The argument seems to implicitly assume that I am a random observer in 2008. I'm not sure this is legitimate. Anthropic reasoning is irritatingly tricky.

Comment by Allan_Crossman on Hiroshima Day · 2008-08-07T16:11:41.000Z · LW · GW

Time has passed, and we still haven't blown up our world, despite a close call or two.

I can't help but wonder about anthropic effects here. It might be the case that nuclear-armed species annihilate themselves with high probability (say 50% per decade), but of course, all surviving observers live on planets where it hasn't happened through sheer chance.

(Though on the other hand, if an all-out nuclear war is survivable for a species like ours, then this line of thought wouldn't work.)

Comment by Allan_Crossman on No Logical Positivist I · 2008-08-04T13:35:49.000Z · LW · GW

Poke, can you expand a little on what you're driving at?

Also, Steven, how on Earth is that statement true under MWI? :)

Comment by Allan_Crossman on The Meaning of Right · 2008-07-30T21:43:00.000Z · LW · GW

We do not know very well how the human mind does anything at all. But that the human mind comes to have preferences that it did not have initially, cannot be doubted.

I believe Eliezer is trying to create "fully recursive self-modifying agents that retain stable preferences while rewriting their source code". Like Sebastian says, getting the "stable preferences" bit right is presumably necessary for Friendly AI, as Eliezer sees it.

(This clause "as Eliezer sees it" isn't meant to indicate dissent, but merely my total incompetence to judge whether this condition is strictly necessary for friendly AI.)

Comment by Allan_Crossman on The Meaning of Right · 2008-07-30T13:56:00.000Z · LW · GW

I am assuming [the AI] acts, and therefore makes choices, and therefore has preferences, and therefore can have preferences which conflict with the preferences of other minds (including human minds).

An AI can indeed have preferences that conflict with human preferences, but if it doesn't start out with such preferences, it's unclear how it comes to have them later.

On the other hand, if it starts out with dubious preferences, we're in trouble from the outset.

Comment by Allan_Crossman on The Meaning of Right · 2008-07-29T18:47:00.000Z · LW · GW

Eliezer [in response to me]: This just amounts to defining should as an abstract computation, and then excluding all minds that calculate a different rule-of-action as "choosing based on something other than morality". In what sense is the morality objective, besides the several senses I've already defined, if it doesn't persuade a paperclip maximizer?

I think my position is this:

If there really was such a thing as an objective morality, it would be the case that only a subset of possible minds could actually discover or be persuaded of that fact.

Presumably, for any objective fact, there are possible minds who could never be convinced of that fact.

Comment by Allan_Crossman on The Meaning of Right · 2008-07-29T15:04:16.000Z · LW · GW

Eliezer: It's because when I say right, I am referring to a 1-place function

Like many others, I fall over at this point. I understand that Morality_8472 has a definite meaning, and therefore it's a matter of objective fact whether any act is right or wrong according to that morality. The problem is why we should choose it over Morality_11283.

Of course you can say, "according to Morality_8472, Morality_8472 is correct" but that's hardly helpful.

Ultimately, I think you've given us another type of anti-realist relativism.

Eliezer: But if you were stepping outside the human and hoping for moral arguments that would persuade any possible mind, even a mind that just wanted to maximize the number of paperclips in the universe, then sorry - the space of possible mind designs is too large to permit universally compelling arguments.

It's at least conceivable that there could be objective morality without universally compelling moral arguments. I personally think there could be an objective foundation for morality, but I wouldn't expect to persuade a paperclip maximizer.

Comment by Allan_Crossman on Religion's Claim to be Non-Disprovable · 2008-07-27T13:43:00.000Z · LW · GW

I hope the priests of Baal checked that it was indeed water, and not some sort of accelerant.

Comment by Allan_Crossman on Can Counterfactuals Be True? · 2008-07-24T21:04:23.000Z · LW · GW

In more detail, suppose there was in fact no conspiracy and Oswald was a lone, self-motivated individual. It might still turn out that the simplest way to imagine what would have happened if Oswald had not killed Kennedy, would be to imagine that there was in fact a conspiracy, and they found someone else, who did the job in the same way. That would arguably be the change which would minimize total forward and backward alterations to the timeline.

Hal: what you describe is called "backtracking" in the philosophical literature. It's not usually seen as legitimate, I think mostly because it doesn't correspond to what a sentence like "if X had occurred, then Y would have occurred" actually means in normal usage.

I mean, it's a really weird analysis that says "there really was no conspiracy, so if Oswald hadn't shot Kennedy, there would have been a conspiracy, and Kennedy would have been shot." :)

Comment by Allan_Crossman on Can Counterfactuals Be True? · 2008-07-24T12:01:31.000Z · LW · GW

Hmm, the second bit I just wrote isn't going to work, I suppose, since your knowledge of what came after the event will affect whether you believe in a conspiracy or not...

Comment by Allan_Crossman on Can Counterfactuals Be True? · 2008-07-24T11:53:13.000Z · LW · GW

Oh, and to talk about "the probability that John F. Kennedy was shot, given that Lee Harvey Oswald didn't shoot him", we write:

P(Kennedy_shot|Oswald_not)

If I've understood you, this is supposed to be a high value near 1. I'm just a noob at Bayesian analysis or Bayesian anything, so this was confusing me until I realised I also had to include all the other information I know: i.e. all the reports I've heard that Kennedy actually was shot, that someone else became president, and so on.

It seems like this would be a case where it's genuinely helpful to include that background information:

P(Kennedy_shot | Oswald_not & Reports_of_Kennedy_shot) = 1 or thereabouts


And to talk about "the probability that John F. Kennedy would have been shot, if Lee Harvey Oswald hadn't shot him", we write:

P(Oswald_not []-> Kennedy_shot)

Presumably this is the case where we pretend that all that background knowledge has been discarded?

P(Kennedy_shot | Oswald_not & no_knowledge_of_anything_after_October_1963) = 0.05 or something?
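To make those two quantities concrete, here's a toy joint distribution - the structure, the 0.05 conspiracy prior and the 0.9 "Oswald shoots" probability are all made-up illustrative numbers:

    import itertools

    # Toy model: a conspiracy exists with prior 0.05; Oswald shoots with
    # probability 0.9, independently of the conspiracy (for simplicity);
    # Kennedy is shot iff Oswald shoots or a conspiracy supplies a backup
    # shooter; the reports we've all heard are treated as perfectly reliable,
    # so "reports" just equals "shot". All numbers are made up.
    P_CONSPIRACY, P_OSWALD = 0.05, 0.9

    def world_prob(conspiracy, oswald):
        p = P_CONSPIRACY if conspiracy else 1 - P_CONSPIRACY
        return p * (P_OSWALD if oswald else 1 - P_OSWALD)

    def conditional(target, condition):
        """P(target | condition), summing over the four (conspiracy, oswald) worlds."""
        num = den = 0.0
        for c, o in itertools.product([True, False], repeat=2):
            shot = o or c
            world = {"conspiracy": c, "oswald": o, "shot": shot, "reports": shot}
            p = world_prob(c, o)
            if condition(world):
                den += p
                if target(world):
                    num += p
        return num / den

    # P(Kennedy_shot | Oswald_not & Reports_of_Kennedy_shot): exactly 1 here,
    # because the toy reports are perfectly reliable.
    print(conditional(lambda w: w["shot"], lambda w: not w["oswald"] and w["reports"]))

    # P(Kennedy_shot | Oswald_not), with no post-1963 knowledge: 0.05, i.e. just
    # the conspiracy prior - closer in spirit to the counterfactual reading.
    print(conditional(lambda w: w["shot"], lambda w: not w["oswald"]))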

Comment by Allan_Crossman on The Gift We Give To Tomorrow · 2008-07-17T21:53:48.000Z · LW · GW

None of us can say what our descendants will or will not do, but there is no reason to believe that any particular part of human nature will be worthy in their eyes. [emphasis mine]

I can see one possible reason: we might have some influence over what they think.

Comment by Allan_Crossman on Probability is Subjectively Objective · 2008-07-14T15:48:15.000Z · LW · GW

there is no such thing as a probability that isn't in any mind.

Hmm. Doesn't quantum mechanics (especially if we're forgetting about MWI) give us genuine, objective probabilities?

Comment by Allan_Crossman on Rebelling Within Nature · 2008-07-13T14:31:31.000Z · LW · GW

Once you unwind past evolution and true morality isn't likely to contain [...]

I think either a word has been missed out here, or "and" should be "then".

If I recall correctly, I did ask myself that, and sort of waved my hands mentally and said, "It just seems like one of the best guesses - I mean, I don't know that people are valuable, but I can't think of what else could be."

I find this fairly ominous, since that handwaved belief happens to be my current belief: that conscious states are the only things of (intrinsic) value, because only conscious states can contain affirmations or denials that whatever they're experiencing has value.

Comment by Allan_Crossman on The Fear of Common Knowledge · 2008-07-11T18:41:55.000Z · LW · GW

ES: for the puzzle to make sense we have to assume that the islanders have no memory of exactly when they came to be on the island.

I don't see what difference that makes. All that matters is that everyone is present before it's announced that someone has blue eyes, and everyone has made an accurate count of how many other people have blue eyes, and nobody knows their own eye colour.