Comments

Comment by Paul_Gowder on Shut up and do the impossible! · 2008-10-11T21:25:00.000Z · LW · GW

He put up a very good fight.

Comment by Paul_Gowder on The Truly Iterated Prisoner's Dilemma · 2008-09-04T22:14:02.000Z · LW · GW

Eliezer: the rationality of defection in these finitely repeated games has come under some fire, and there's a HUGE literature on it. Reading some of the more prominent examples may help you sort out your position on it.

Start here:

Robert Aumann. 1995. "Backward Induction and Common Knowledge of Rationality." Games and Economic Behavior 8:6-19.

Cristina Bicchieri. 1988. "Strategic Behavior and Counterfactuals." Synthese 76:135-169.

Cristina Bicchieri. 1989. "Self-Refuting Theories of Strategic Interaction: A Paradox of Common Knowledge." Erkenntnis 30:69-85.

Ken Binmore. 1987. "Modeling Rational Players I." Economics and Philosophy 3:9-55.

Jon Elster. 1993. "Some Unresolved Problems in the Theory of Rational Behaviour." Acta Sociologica 36:179-190.

Philip Reny. 1992. "Rationality in Extensive-Form Games." The Journal of Economic Perspectives 6:103-118.

Philip Pettit and Robert Sugden. 1989. "The Backward Induction Paradox." The Journal of Philosophy 86:169-182.

Brian Skyrms. 1998. "Subjunctive Conditionals and Revealed Preference." Philosophy of Science 65:545-574.

Robert Stalnaker. 1999. "Knowledge, Belief and Counterfactual Reasoning in Games." in Cristina Bicchieri, Richard Jeffrey, and Brian Skyrms, eds., The Logic of Strategy. New York: Oxford University Press.

Comment by Paul_Gowder on Whither Moral Progress? · 2008-07-16T19:53:02.000Z · LW · GW

Nick,

Fair enough, but consider the counterfactual case: suppose we believed that there were some fact about a person that would permit enslaving that person, but learned that the set of people to whom those facts applied was the null set. It seems like that would still represent moral progress in some sense.

Perhaps not the sort that Eliezer is talking about, though. But I'm not sure that the two can be cleanly separated. Consider slavery again, or the equality of humanity in general. Much of the moral movement there can be seen as changing interpretations of Christianity -- that is, people thought the Bible justified slavery, then they stopped thinking that. Is that a purely moral change? Or is that a better interpretation of a body of religious thought?

Comment by Paul_Gowder on Whither Moral Progress? · 2008-07-16T18:59:28.000Z · LW · GW

Nick:

I don't think discovering better instrumental values toward the same terminal values you always had counts as moral progress, at least if those terminal values are consciously, explicitly held.

Why on earth not? Aristotle thought some people were naturally suited for slavery. We now know that's not true. Why isn't that moral progress?

(Similarly, general improvements in reasoning, to the extent they allow us to reject bad moral arguments as well as more testable kinds of bad arguments, could count as moral progress.)

Comment by Paul_Gowder on Whither Moral Progress? · 2008-07-16T09:08:04.000Z · LW · GW

One possibility: we can see a connection between morality and certain empirical facts -- for example, if we believe that more moral societies will be more stable, we might think we can see moral progress in the form of changes brought about by the instability of earlier, less moral arrangements. That's not very clear -- but a much clearer and more sophisticated variant on that idea can perhaps be seen in an old paper by Joshua Cohen, "The Arc of the Moral Universe" (Google Scholar will get it, and definitely read it, because a) it's brilliant, and b) I'm not representing it very well).

Or we might think that some of our morally relevant behaviors consistently depend on empirical facts, which we might make progress in finding out. For example, we might have always thought that beings who are as intelligent as we are, and who have social and emotional lives as complex as ours, deserve to be treated as equals. Suppose we think the above at year 1 and at year 500, but at year 500 we discover that some group of entities X (which could include fellow humans, as with the slaves, or other species) is as intelligent, etc., and act accordingly. Then it seems like we've made clearly directional moral progress -- we've learned to more accurately make the empirical judgments on which our unchanged moral judgment depends.

Comment by Paul_Gowder on Is Morality Given? · 2008-07-06T17:02:10.000Z · LW · GW

So here's a question Eliezer: is Subhan's argument for moral skepticism just a concealed argument for universal skepticism? After all, there are possible minds that do math differently, that do logic differently, that evaluate evidence differently, that observe sense-data differently...

Either Subhan can distinguish his argument from an argument for universal skepticism, or I say that it's refuted by reductio, since universal skepticism falls to the complete impossibility of asserting it consistently, plus things like Moorean facts.

Comment by Paul_Gowder on The Bedrock of Fairness · 2008-07-03T22:47:57.000Z · LW · GW

Suppose that 98% of humans, under 98% of the extrapolated spread, would both choose a certain ordering of arguments, and also claim that this is the uniquely correct ordering. Is this sufficient to just go ahead and label that ordering the rational one? If you refuse to answer that question yourself, what is the procedure that answers it?

Again, this is why it's irreducibly social. If there isn't a procedure that yields a justified determinate answer to the rationality of that order, then the best we can do is take what is socially accepted at the time and in the society in which such a superintelligence is created. There's nowhere else to look.

Comment by Paul_Gowder on The Bedrock of Fairness · 2008-07-03T20:19:43.000Z · LW · GW

Eliezer,

Things like the ordering of arguments are just additional questions about the rationality criteria, and my point above applies to them just as well -- either there's a justifiable answer ("this is how arguments are to be ordered,") or it's going to be fundamentally socially determined and there's nothing to be done about it. The political is really deeply prior to the workings of a superintelligence in such cases: if there's no determinate correct answer to these process questions, then humans will have to collectively muddle through to get something to feed the superintelligence. (Aristotle was right when he said politics was the ruling science...)

On the humans for humans point, I'll appeal back to the notion of modeling minds. If we take P to be a reason, then all we have to be able to tell the superintelligence is "simulate us and consider what we take to be reasons," and, after simulating us, the superintelligence ought to know what those things are, what we mean when we say "take to be reasons," etc. Philosophy written by humans for humans ought to be sufficient once we specify the process by which reasons that matter to humans are to be taken into account.

Comment by Paul_Gowder on The Bedrock of Fairness · 2008-07-03T19:34:53.000Z · LW · GW

Right, but those questions are responsive to reasons too. Here's where I embrace the recursion. Either we believe that ultimately the reasons stop -- that is, that after a sufficiently ideal process, all of the minds in the relevant mind design space agree on the values, or we don't. If we do, then the superintelligence should replicate that process. If we don't, then what basis do we have for asking a superintelligence to answer the question? We might as well flip a coin.

Of course, the content of the ideal process is tricky. I'm hiding the really hard questions in there, like what counts as rationality, what kinds of minds are in the relevant mind design space, etc. Those questions are extra-hard because we can't appeal to an ideal process to answer them on pain of circularity. (Again, political philosophy has been struggling with a version of this question for a very long time. And I do mean struggling -- it's one of the hardest questions there is.) And the best answer I can give is that there is no completely justifiable stopping point: at some point, we're going to have to declare "these are our axioms, and we're going with them," even though those axioms are not going to be justifiable within the system.

What this all comes down to is that it's all necessarily dependent on social context. The axioms of rationality and the decisions about what constitute relevant mind-space for any such superintelligence would be determined by the brute facts of what kind of reasoning is socially acceptable in the society that creates such a superintelligence. And that's the best we can do.

Comment by Paul_Gowder on The Bedrock of Fairness · 2008-07-03T18:40:57.000Z · LW · GW

Eliezer,

The resemblance between my second suggestion and your thing didn't go unnoticed -- I had in fact read your coherent extrapolated volition thing before (there's probably an old e-mail from me to you about it, in fact). I think it's basically correct. But the method of justification is importantly different, because the idea is that we're trying to approximate something with epistemic content -- we're not just trying to do what you might call a Xannon thing -- we're not just trying to model what humans would do. Rather, we're trying to model and improve a specific feature of humanity that we see as morally relevant -- responsiveness to reasons.

That's really, really important.

In the context of your dialogue above, it's what reconciles Xannon and Yancy: even if Yancy can't convince Xannon that there's some kind of non-subjective moral truth, he ought to be able to convince Xannon that moral beliefs should be responsive to reasons -- and likewise, even if Xannon can't convince Yancy that what really matters, morally, is what people can agree on, he should be able to convince Yancy that the best way to get at it in the real world is by a collective process of reasoning.

So you see that this method of justification does provide a way to answer questions like "friendliness to whom." I know what I'm doing, Eliezer. :-)

Comment by Paul_Gowder on The Bedrock of Fairness · 2008-07-03T09:13:40.000Z · LW · GW

That's a really fascinating question. I don't know that there'd be a "standard" answer to this -- were the questions taken up, they'd be subject to hot debate.

Are we specifying that this ultrapowerful superintelligence has mind-reading power, or the closest non-magical equivalent: access to every mental state that an arbitrary individual human has, even the stuff that now gets lumped under the label "qualia" (that is, the ability to perfectly simulate the neurobiology of such an individual)?

If so, then two approaches seem defensible to me. First: let's assume there is an answer out there to moral questions, in a form that is accessible to a superintelligence, and let's just assume the hard problem away, viz., that the questioners know how to tell the superintelligence where to look (or the superintelligence can figure it out itself).

We might not be able to produce a well-formed specification of what is to be computed when we're talking about moral questions (it's easy to think that any attempt to do so would rig the answer in advance -- for example, if you ask it for universal principles, you're going to get something different from what you'd get if you left the universality variable free...). But if the superintelligence could simulate our mental processes such that it could tell what it is that we want (for some appropriate value of "we," like the person asking, or the whole of humanity if there were any consensus -- which I doubt), then in principle it could simply answer by declaring the truth of the matter with respect to whatever it has determined that we desire.

That assumes the superintelligence has access to moral truth, but once we grant that, I think the standard arguments against "guardianship" (e.g. the first few chapters of Robert Dahl, Democracy and Its Critics) fail to apply: if they're true -- if people are really better off deciding for themselves (the standard argument), and making people better off is what is morally correct -- then we can expect the superintelligence to return "you figure it out." And then the answer to "friendly to whom" or "so you get to decide what's friendly" is simply to point to the fact that the superintelligence has access to moral truth.

The more interesting question perhaps is what should happen if the superintelligence doesn't have access to moral truth (either because there is no such thing in the ordinary sense, or because it exists but is unobservable). I assume here that being responsive to reasons is an appropriate way to address moral questions (if not, all bets are off). Then the superintelligence loses one major advantage over ordinary human reasoning (access to the truth on the question), but not the other (while humans are responsive to reasons in a limited and inconsistent sense, the superintelligence is ideally responsive to reasons). For this situation, I think the second defensible outcome would be that the superintelligence should simulate ideal democracy. That is, it should simulate all the minds in the world, and put them into an unlimited discussion with one another, as if they were Bayesians with infinite time. The answers it would come up with would be the equivalent of the most legitimate conceivable human decisional process, but better...

I'm pretty sure this is a situation that hasn't come under sustained discussion in the literature as such (in superintelligence terms -- though it has come up in discussions of benevolent dictators and the value of democracy), so I'm talking out my ass a little here, but drawing on familiar themes. Still, the argument defending these two notions -- especially the second -- isn't a blog comment, it's a series of long articles or more.

Comment by Paul_Gowder on The Bedrock of Fairness · 2008-07-03T08:25:31.000Z · LW · GW

Eliezer, to the extent I understand what you're referencing with those terms, the political philosophy does indeed go there (albeit in very different vocabulary). Certainly, the question of the extent to which ideas of fairness are accessible at what I guess you'd call the object level is constantly treated. Really, it's one of the biggest issues out there -- the extent to which reasonable disagreement on object-level issues (disagreement that we think we're obligated to respect) can be resolved at the meta-level (see Waldron, Law and Disagreement, and, for an argument that this leads into just the infinite recursion you suggest, at least in the case of democratic procedures, see the review of the same by Christiano, which Google Scholar will turn up easily).

I think the important thing is to separate two questions: 1. what is the true object-level statement, and 2. to what extent do we have epistemic access to the answer to 1? There may be an objectively correct answer to 1, but we might not be able to get sufficient grip on it to legitimately coerce others to go along -- at which point Xannon starts to seem exactly right.

Oh, hell, go read Ch. 5 of Hobbes, Leviathan. And both of Rawls's major books.

I mean, Xannon's position has been around for hundreds of years. Here's Hobbes, from the previous cite:

But no one mans Reason, nor the Reason of any one number of men, makes the certaintie; no more than an account is therefore well cast up, because a great many men have unanimously approved it. And therfore, as when there is a controversy in account, the parties must by their own accord, set up for right Reason, the Reason of some Arbitrator, or Judge, to whose sentence they will both stand, or their controversie must either come to blowes, or be undecided, for want of a right Reason constituted by Nature...

Comment by Paul_Gowder on The Bedrock of Fairness · 2008-07-03T06:33:53.000Z · LW · GW

What's the point?

You realize, incidentally, that there's a huge literature in political philosophy about what procedural fairness means. Right? Right?

Comment by Paul_Gowder on Where Philosophy Meets Science · 2008-04-17T22:18:30.000Z · LW · GW

gaaahhh. I stop reading for a few days, and on return, find this...

Eliezer, what do these distinctions even mean? I know philosophers who do scary bayesian things, whose work looks a lot -- a lot -- like math. I know scientists who make vague verbal arguments. I know scientists who work on the "theory" side whose work is barely informed by experiments at all, I know philosophers who are trying to do experiments. It seems like your real distinction is between a priori and a posteriori, and you've just flung "philosophy" into the former and "science" into the latter, basically at random.

(I defy you to find an experimental test for Bayes Rule, incidentally -- or to utter some non-question-begging statistical principle by which the results could be evaluated.)

Comment by Paul_Gowder on Zombie Responses · 2008-04-05T15:39:52.000Z · LW · GW

I think part of the problem is that your premise 3 is question-begging: it assumes away epiphenomenalism on the spot. An epiphenomenalist has to bite the bullet that our feeling that we consciously cause things is false. (Also, what could it mean to have an empirical probability over a logical truth?)

Comment by Paul_Gowder on Hand vs. Fingers · 2008-03-31T07:18:00.000Z · LW · GW

Unknown: that's not an ontological claim (at least for the dangerous metaethical commitments I mentioned in the caveat above).

Comment by Paul_Gowder on Hand vs. Fingers · 2008-03-31T05:48:00.000Z · LW · GW

Richard: the claim I'm trying out depends on us not being able to learn that information, for if we could learn it, the claim would have some observable content, and thereby have scientific implications.

Comment by Paul_Gowder on Hand vs. Fingers · 2008-03-30T21:53:56.000Z · LW · GW

Richard: I'm making a slightly stronger claim, which is that ontological claims with no scientific implications aren't even relevant for philosophical issues of practical reason, so, for example, the question of god's existence has no relevance for ethics (contra, e.g., Kant's second critique). (Of course, to make this fly at all, I'm going to have to say that metaethical positions aren't ontological claims, so I'm probably getting all kinds of commitments I don't want here, and I'll probably have to recant this position upon anything but the slightest scrutiny, but it seems like it's worth considering.)

Comment by Paul_Gowder on Hand vs. Fingers · 2008-03-30T19:41:58.000Z · LW · GW

Although I prefer an even weaker kind of scientism: scientism'': an ontological claim is boring if it has no scientific implications. By boring, I mean, tells us nothing relevant to practical reason. Which is why I'm happy to take Richard's property dualism: I accept scientism'', ergo, it doesn't matter.

Comment by Paul_Gowder on Hand vs. Fingers · 2008-03-30T18:59:52.000Z · LW · GW

Richard:

How about scientism': an ontological claim is coherent only if it has scientific implications?

Eliezer:

I doubt you can conceive a non-prime number as prime. I think that the best way to think of "can conceive" here would be "can fully contemplate, without anything blowing up." So I can conceive of a zombie world, but I can't conceive of a world where, say, P is both true and not true, because I'd not know how to evaluate anything there. Likewise, I can't conceive of, say, 4, as a prime number, because I can't understand 4 except as it implies 2x2. That might strengthen your perceived connection between conceivability and logical possibility...

Comment by Paul_Gowder on Initiation Ceremony · 2008-03-29T07:39:50.000Z · LW · GW

If you guys are going to rig elections, I want in.

Comment by Paul_Gowder on Scarcity · 2008-03-27T17:59:38.000Z · LW · GW

I agree with Bobvis: a LOT of this is rational:

When University of North Carolina students learned that a speech opposing coed dorms had been banned, they became more opposed to coed dorms (without even hearing the speech). (Probably in Ashmore et. al. 1971.)

This seems straight Bayes to me. The banning of the speech counts as information about the chance that you'll agree with it, and for a reasonably low probability of banning speech that isn't dangerous to the administration (i.e. speech that won't convince), Everyone's Favorite Probability Rule kicks in and makes it totally rational to become more opposed to coed dorms -- assuming, that is, that you believe your chance of being convinced comes largely from rational sources (a belief that practical agents are at least somewhat committed to having).
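To make the update concrete, here's a minimal Bayes'-rule sketch in Python. The numbers are made up for illustration -- they're mine, not anything from the study.

```python
# Minimal Bayes'-rule sketch with made-up numbers (not from the study).
# H = "the banned speech would have convinced me to oppose coed dorms"

p_h = 0.3                 # prior probability that the speech is convincing
p_ban_given_h = 0.8       # administrators usually ban speech they fear will convince
p_ban_given_not_h = 0.1   # they rarely bother banning harmless speech

p_ban = p_ban_given_h * p_h + p_ban_given_not_h * (1 - p_h)
posterior = p_ban_given_h * p_h / p_ban

print(f"P(convincing | banned) = {posterior:.2f}")  # ~0.77, up from the 0.30 prior
```

On those stipulated numbers, merely learning of the ban more than doubles the probability that the speech would have moved you -- which is all the students' shift requires.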

When a driver said he had liability insurance, experimental jurors awarded his victim an average of four thousand dollars more than if the driver said he had no insurance. If the judge afterward informed the jurors that information about insurance was inadmissible and must be ignored, jurors awarded an average of thirteen thousand dollars more than if the driver had no insurance. (Broeder 1959.)

This too seems rational, though in this case only mostly, not totally. We can understand jurors as trying to balance the costs and the benefits of the award (not their legal job, but a perfectly sane thing to do). And the diminishing marginal utility of wealth suggests that imposing a large judgment on an insurance company causes less disutility to the person paying (or people, distributing that over the company's clients) than imposing it on a single person. As for the judge's informing the jurors that insurance information is inadmissible, well, again, they can interpret that instruction as information about the presence of insurance and update accordingly. (Although that might not be accurate in the context of how judges give instructions, jurors need not know that.) Of course, it seems like they updated too much, since they increased their awards much more when p(insurance) increased but was still less than 1 than they did when they learned that p(insurance)=1. So it's still probably partially irrational. But not an artifact of some kind of magical scarcity effect.
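And here's a toy version of the diminishing-marginal-utility point, assuming a log utility function and made-up wealth figures of my own choosing -- a sketch of the direction of the effect, nothing more.

```python
import math

def disutility(wealth, loss):
    """Utility lost when a log-utility agent pays `loss` out of `wealth`."""
    return math.log(wealth) - math.log(wealth - loss)

judgment = 13_000.0
wealth = 20_000.0          # made-up wealth per person

# The whole judgment falls on one uninsured driver.
single = disutility(wealth, judgment)

# The same judgment spread across an insurer's policyholders.
policyholders = 100_000
pooled = policyholders * disutility(wealth, judgment / policyholders)

print(f"disutility, one payer     : {single:.2f}")   # ~1.05
print(f"disutility, pooled payers : {pooled:.2f}")   # ~0.65
```

Same total dollars, less total utility lost when the loss is spread thin -- which is part of what the jurors' behavior seems to be tracking.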

Comment by Paul_Gowder on Leave a Line of Retreat · 2008-02-27T22:59:48.000Z · LW · GW

I'm skeptical about the possibility of really carrying out this kind of visualization (or, more broadly, imaginary leap). Here's why.

I might be able to say that I can imagine the existence of a god, and what the world would be like if, say, it were the Christian one. But I can't imagine myself in that world -- in that world, I'm a different person. For in that world, either I hold the counterfactually true belief that there is such a god, or I don't. If I don't hold that belief, then my response to that world is the same as my response to this world. If I do hold it, well, how can I model that?

This point is related to a point that Eliezer made in the comments, that I think just absolutely nails the problem, for a narrower class of the true set of states for which the problem exists:

You can invent all kinds of Gods and demand that I visualize the case of their existence, or of their telling me various things, but you can't necessarily force me to visualize the case where I accept their statement that killing babies is a good idea - not unless you can argue it well enough to create a real moral doubt in my mind.

Exactly.

But I maintain that you can't model the existence of a God with the right properties (including omnipotence, omniscience, and omnibenevolence) without being able to model that acceptance.

And likewise, the woman who believed in the soul couldn't model her reaction to a world without a soul without being able to experience herself as a person who genuinely doesn't believe in a soul. But she can only have that experience by becoming such a person.

I think this is just a limitation of human psychology. Cf. Thomas Nagel's great article, What is it like to be a bat? The argument doesn't directly apply, but the intuition does.

Comment by Paul_Gowder on Buy Now Or Forever Hold Your Peace · 2008-02-05T16:28:15.000Z · LW · GW

(And by "expected utility" in the above comment, I meant "expected value" not taking into account risk attitude. One must be precise about such things.)

Comment by Paul_Gowder on Buy Now Or Forever Hold Your Peace · 2008-02-05T16:23:31.000Z · LW · GW

What if one thinks (as do I) that not only do prediction markets do badly, but so do I? If neither I nor the market is doing better than random, do I have positive expected utility for betting?

Also, I'm not sure how intrade's payoff calculation works -- how much does one stand to gain per dollar on a bet at those odds? I think I'm pretty risk-averse if I'm gambling $250.00 for a $10.00 gain.
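For what it's worth, here's a toy expected-value calculation using the stake/gain figures above; this is just arithmetic on my reading of the odds, not a claim about how Intrade's contracts actually settle.

```python
# Toy expected-value check for risking $250 to gain $10 (illustrative only;
# not a model of Intrade's actual contract mechanics).
stake, gain = 250.0, 10.0

# Win probability at which the bet has zero expected value.
p_breakeven = stake / (stake + gain)
print(f"break-even win probability: {p_breakeven:.3f}")  # ~0.962

# Expected value if you think the true probability is, say, 0.97.
p = 0.97
ev = p * gain - (1 - p) * stake
print(f"expected value at p = 0.97: ${ev:.2f}")  # about $2.20
```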

Anyway. My cash-free prediction is Obama by 2 points in general.

Comment by Paul_Gowder on Extensions and Intensions · 2008-02-04T23:57:38.000Z · LW · GW

Silas, that's actually a pretty good way to capture some of the major theories about color -- ostensive definition for a given color solves a lot of problems.

But I wish Eliezer had pointed out that intensional definitions allow us to use kinds of reasoning that extensional definitions don't ... how do you do deduction on an extensional definition?

Also, extensional definitions are harder to use for interpersonal communication. I can wear two shirts, both of which I would call "purple," and someone else would call one "mauve" and the other "taupe" (or something like that -- I'm not even sure what those last two colors are). Whereas if we'd defined the colors by wavelengths of light, well, we know what we're talking about. It's harder to get overlap between people on extensional than on intensional definitions.

Comment by Paul_Gowder on Newcomb's Problem and Regret of Rationality · 2008-02-01T08:16:42.000Z · LW · GW

I do understand. My point is that we ought not to care whether we're going to consider all the possibilities and benefits.

Oh, but you say, our caring about our consideration process is a determined part of the causal chain leading to our consideration process, and thus to the outcome.

Oh, but I say, we ought not to care* about that caring. Again, recurse as needed. Nothing you can say about the fact that a cognition is in the causal chain leading to a state of affairs counts as a point against the claim that we ought not to care about whether or not we have that cognition if it's unavoidable.

Comment by Paul_Gowder on Newcomb's Problem and Regret of Rationality · 2008-02-01T07:28:13.000Z · LW · GW

Unknown: your last question highlights the problem with your reasoning. It's idle to ask whether I'd go and jump off a cliff if I found my future were determined. What does that question even mean?

Put a different way, why should we ask an "ought" question about events that are determined? If A will do X whether or not it is the case that a rational person will do X, why do we care whether or not it is the case that a rational person will do X? I submit that we care about rationality because we believe it'll give us traction on our problem of deciding what to do. So assuming fatalism (which is what we must do if the AI knows what we're going to do, perfectly, in advance) demotivates rationality.

Here's the ultimate problem: our intuitions about these sorts of questions don't work, because they're fundamentally rooted in our self-understanding as agents. It's really, really hard for us to say sensible things about what it might mean to make a "decision" in a deterministic universe, or to understand what that implies. That's why Newcomb's problem is a problem -- because we have normative principles of rationality that make sense only when we assume that it matters whether or not we follow them, and we don't really know what it would mean to matter without causal leverage.

(There's a reason free will is one of Kant's antinomies of reason. I've been meaning to write a post about transcendental arguments and the limits of rationality for a while now... it'll happen one of these days. But in a nutshell... I just don't think our brains work when it comes down to comprehending what a deterministic universe looks like on some level other than just solving equations. And note that this might make evolutionary sense -- a creature who gets the best results through a [determined] causal chain that includes rationality is going to be selected for the beliefs that make it easiest to use rationality, including the belief that it makes a difference.)

Comment by Paul_Gowder on Newcomb's Problem and Regret of Rationality · 2008-02-01T06:27:37.000Z · LW · GW

Eliezer: whether or not a fixed future poses a problem for morality is a hotly disputed question which even I don't want to touch. Fortunately, this problem is one that is pretty much wholly orthogonal to morality. :-)

But I feel like in the present problem the fixed future issue is a key to dissolving the problem. So, assume the box decision is fixed. It need not be the case that the stress is fixed too. If the stress isn't fixed, then it can't be relevant to the box decision (the box is fixed regardless of your decision between stress and no-stress). If the stress IS fixed, then there's no decision left to take. (Except possibly whether or not to stress about the stress, call that stress*, and recurse the argument accordingly.)

In general, for any pair of actions X and Y, where X is determined, either X is conditional on Y, in which case Y must also be determined, or not conditional on Y, in which case Y can be either determined or non-determined. So appealing to Y as part of the process that leads to X doesn't mean that something we could do to Y makes a difference if X is determined.

Comment by Paul_Gowder on Newcomb's Problem and Regret of Rationality · 2008-02-01T03:52:44.000Z · LW · GW

I don't know the literature around Newcomb's problem very well, so excuse me if this is stupid. BUT: why not just reason as follows:

  1. If the superintelligence can predict your action, one of the following two things must be the case:

a) the state of affairs whether you pick the box or not is already absolutely determined (i.e. we live in a fatalistic universe, at least with respect to your box-picking)

b) your box picking is not determined, but it has backwards causal force, i.e. something is moving backwards through time.

If a), then practical reason is meaningless anyway: you'll do what you'll do, so stop stressing about it.

If b), then you should be a one-boxer for perfectly ordinary rational reasons, namely that it brings it about that you get a million bucks with probability 1.

So there's no problem!

Comment by Paul_Gowder on OB Meetup: Millbrae, Thu 21 Feb, 7pm · 2008-02-01T03:45:50.000Z · LW · GW

There should be a "yes, but I'll be late" option. (I selected "maybe" as a proxy for that.)

(Speaking of late things, I think I owe you a surreply on the utilitarianism/specks debate... it might take a while, though. Really busy.)

Comment by Paul_Gowder on Rationality Quotes 7 · 2008-01-27T09:03:07.000Z · LW · GW

Ben: you're supposed to recoil. The point is that some things are pre-analytically evil. No matter how much we worry at the concept, slavery and genocide are still evil -- we know these things more strongly than we know the preconditions for the reasoning process to the contrary -- I submit that there is simply no argument sufficiently strong to overturn that judgment.

Comment by Paul_Gowder on Circular Altruism · 2008-01-27T03:46:00.000Z · LW · GW

Unknown: I didn't deny that they're comparable, at least in the brute sense of my being able to express a preference. But I did deny that any number of distributed dust specks can ever add up to torture. And the reason I give for that denial is just that distributive problem. (Well, there are other reasons too, but one thing at a time.)

Comment by Paul_Gowder on Circular Altruism · 2008-01-26T07:24:00.000Z · LW · GW

Eliezer -- depends again on whether we're aggregating across individuals or within one individual. From a utilitarian perspective (see The Post That Is To Come for a non-utilitarian take), that's my big objection to the specks thing. Slapping each of 100 people once each is not the same as slapping one person 100 times. The first is a series of slaps. The second is a beating.

Honestly, I'm not sure if I'd have given the same answer to all of those questions w/o having heard of the dust specks dilemma. I feel like that world is a little too weird -- the thing that motivates me to think about those questions is the dust specks dilemma. They're not the sort of things practical reason ordinarily has to worry about, or that we can ordinarily expect to have well-developed intuitions about!

Comment by Paul_Gowder on Circular Altruism · 2008-01-26T05:58:00.000Z · LW · GW

Eliezer -- no, I don't think there is. At least, not if the dust specks are distributed over multiple people. Maybe localized in one person -- a dust speck every tenth of a second for a sufficiently long period of time might add up to a toe stub.

Comment by Paul_Gowder on Rationality Quotes 7 · 2008-01-26T05:52:36.000Z · LW · GW

Peter: Slavery. Genocide.

(Cf. Moore: "here is a hand.")

Comment by Paul_Gowder on Circular Altruism · 2008-01-26T02:14:00.000Z · LW · GW

Tcpkac: wonderful intuition pump.

Gary: interesting -- my sense of the nipple piercing case is that yes, there's a number of unwilling nipple piercings that does add up to 50 years of torture. It might be a number larger than the earth can support, but it exists. I wonder why my intuition is different there. Is yours?

Comment by Paul_Gowder on Circular Altruism · 2008-01-25T15:55:00.000Z · LW · GW

TGGP -- how about internal consistency? How about formal requirements, if we believe that moral claims should have a certain form by virtue of their being moral claims? Those two have the potential to knock out a lot of candidates...

Comment by Paul_Gowder on Circular Altruism · 2008-01-24T21:09:00.000Z · LW · GW

I've written and saved a(nother) response; if you'd be so kind as to approve it?

Comment by Paul_Gowder on Circular Altruism · 2008-01-23T23:49:00.000Z · LW · GW

I think I'm going to have to write another of my own posts on this (hadn't I already?), when I have time. Which might not be for a while -- which might be never -- we'll see.

For now, let me ask you this Eliezer: often, we think that our intuitions about cases provide a reliable guide to morality. Without that, there's a serious question about where our moral principles come from. (I, for one, think that question has its most serious bite right on utilitarian moral principles... at least Kant, say, had an argument about how the nature of moral claims leads to his principles.)

So suppose -- hypothetically, and I do mean hypothetically -- that our best argument for the claim "one ought to maximize net welfare" comes by induction from our intuitions about individual cases. Could we then legitimately use that principle to defend the opposite of our intuitions about cases like this?

More later, I hope.

Comment by Paul_Gowder on The Allais Paradox · 2008-01-19T09:20:38.000Z · LW · GW

I confess, the money pump thing sometimes strikes me as ... well... contrived. Yes, in theory, if one's preferences violate various rules of rationality (acyclicity being the easiest), one could conceivably be money-pumped. But, uh, it never actually happens in the real world. Our preferences, once they violate idealized axioms, lead to messes only in highly unrealistic situations. Big deal.
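Here's the textbook construction I have in mind, as a minimal sketch -- a toy agent with cyclic preferences being cycled out of its money, not a claim about any real chooser.

```python
# Minimal money-pump sketch: an agent with cyclic strict preferences
# A > B > C > A pays a small fee for each "upgrade" and can be cycled forever.

prefers = {("A", "B"), ("B", "C"), ("C", "A")}  # (x, y) means x is strictly preferred to y

def accepts_trade(current, offered):
    """The agent trades (and pays the fee) iff it strictly prefers the offer."""
    return (offered, current) in prefers

wealth, holding, fee = 1.00, "C", 0.01
for offered in ["B", "A", "C"] * 10:   # the pump cycles the agent C -> B -> A -> C -> ...
    if accepts_trade(holding, offered) and wealth >= fee:
        holding, wealth = offered, wealth - fee

print(f"after 30 offers: holding {holding}, wealth ${wealth:.2f}")  # back to C, $0.30 poorer
```

Every individual trade looks like an improvement to the agent; only the bank balance ever goes wrong -- and, as I say, nobody actually runs this hustle outside the textbook.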

Comment by Paul_Gowder on Rationality Quotes 3 · 2008-01-18T04:46:06.000Z · LW · GW

Hofstadter just gained a bunch of points with me.

Comment by Paul_Gowder on Rationality Quotes 1 · 2008-01-18T00:44:47.000Z · LW · GW

Ben: what kind of duties might there be other than moral ones?

Comment by Paul_Gowder on Rationality Quotes 1 · 2008-01-17T04:47:05.000Z · LW · GW

Leo, hmm... I see the point, but it's gotta be an error. It's a straightforward instance of the genetic fallacy to reason from "our moral intuitions have biological origins" to "therefore, it makes no sense to speak of 'moral duties.'" It might make no sense to speak of religious moral duties -- but surely that's because there's no god, and not because the source of our moral intuitions is otherwise. The quoted sentence seems to equivocate between religious claims of moral duty -- which was the topic of the rest of the surrounding paragraphs -- and [deontological?] claims about moral duty generally.

Comment by Paul_Gowder on Rationality Quotes 1 · 2008-01-16T18:45:27.000Z · LW · GW

Also, what is Harris's quote supposed to mean? (About the moral duty to save children, that is. Not the god one, which is wholly unobjectionable.) I want to interpret it as some kind of skepticism about normative statements, but if that's what he means, it's very oddly expressed. Perhaps it's supposed to be some conceptual analysis about "duty?"

I mean, one ought to understand a syllogism, just as one ought to save the drowning child... no?

Comment by Paul_Gowder on Rationality Quotes 1 · 2008-01-16T18:40:16.000Z · LW · GW

Memo to Jaynes: please don't generalize beyond statistics. Cough... mixed strategy equilibria in game theory.

Comment by Paul_Gowder on Beautiful Probability · 2008-01-16T18:32:16.000Z · LW · GW

Cyan, I've been mulling this over for the last 23 hours or so -- and I think you've convinced me that the frequentist approach has worrisome elements of subjectivity too. Huh. Which doesn't mean I'm comfortable with the whole priors business either. I'll think about this some more. Thanks.

Comment by Paul_Gowder on Beautiful Probability · 2008-01-15T07:00:40.000Z · LW · GW

Cyan, that source is slightly more convincing.

Although I'm a little concerned that it, too, is attacking another strawman. At the beginning of chapter 37, it seems that the author just doesn't understand what good researchers do. In the medical example given at the start of the chapter (458-462ish), many good researchers would use a one-sided hypothesis rather than a two-sided hypothesis (I would), which would better catch the weak relationship. One can also avoid false negatives by measuring the power of one's test. MacKay also claims that "this answer does not say how much more effective A is than B." But that's just false: one can get an idea of the size of the effect with either sharper techniques (like doing a linear regression, getting beta values and calculating r-squared) or just by modifying one's null hypothesis (i.e. demanding that a datum improve on control by X amount before it counts in favor of the alternative hypothesis).
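To illustrate the one-sided-test and effect-size points, here's a minimal sketch using SciPy on a hypothetical 2x2 table of my own -- these are not MacKay's numbers.

```python
from scipy.stats import fisher_exact

# Hypothetical 2x2 outcome table (illustrative numbers, not MacKay's):
#              [died, survived]
treatment_a = [1, 29]
treatment_b = [3, 7]
table = [treatment_a, treatment_b]

odds_ratio, p_two_sided = fisher_exact(table, alternative="two-sided")
_,          p_one_sided = fisher_exact(table, alternative="less")  # H1: A has lower death odds

print(f"two-sided p = {p_two_sided:.3f}, one-sided p = {p_one_sided:.3f}")
print(f"odds ratio  = {odds_ratio:.2f}  (a rough effect-size estimate)")
```

The one-sided test is the sharper instrument when the hypothesis really is directional, and the odds ratio gives at least a crude answer to "how much more effective."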

Given all that, I'm going to withhold judgment. MacKay's argument on the coin flip example is convincing on the surface. But given his history from the prior pages of understating the counterarguments, I'm not going to give it credence until I find a better statistician than I to give me the response, if any, from a "sampling theory" perspective.

Comment by Paul_Gowder on Beautiful Probability · 2008-01-15T03:06:20.000Z · LW · GW

Uh, strike the "how would the math change?" question -- I just read the relevant portion of Jaynes's paper, which gives a plausible answer to that. Still, I deny that an actual practicing frequentist would follow his logic and treat n as the random variable.

(ALSO: another dose of unreality in the scenario: what experimenter who decided to play it like that would ever reveal the quirky methodology?)

Comment by Paul_Gowder on Beautiful Probability · 2008-01-15T02:56:47.000Z · LW · GW

I have to say, the reason the example is convincing is because of its artificiality. I don't know many old-school frequentists (though I suppose I'm a frequentist myself, at least so far as I'm still really nervous about the whole priors business -- but not quite so hard as all that), but I doubt that, presented with a stark case like the one above, they'd say the results would come out differently. For one thing, how would the math change?
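For the curious, the textbook illustration of how the frequentist math does change with the stopping rule, while the likelihood doesn't, goes roughly like this -- the numbers below are made up for the sketch, not taken from the post.

```python
from scipy.stats import binom, nbinom

# Made-up data: 12 patients, 9 cures, 3 failures. Null hypothesis: cure rate 0.5.
cures, failures, theta0 = 9, 3, 0.5
n = cures + failures

# Researcher 1 fixed n = 12 in advance: one-sided p-value is P(at least 9 cures in 12).
p_fixed_n = binom.sf(cures - 1, n, theta0)

# Researcher 2 planned to stop at the 3rd failure, so the random quantity is the
# number of cures seen before then: p-value is P(at least 9 cures before the 3rd failure).
p_stop_rule = nbinom.sf(cures - 1, failures, 1 - theta0)

# Either way the likelihood is proportional to theta^9 * (1 - theta)^3, so the
# likelihood ratio (and any Bayesian posterior) is identical across the two designs.
lr = (0.7**cures * 0.3**failures) / (theta0**cures * (1 - theta0)**failures)

print(f"fixed-n p-value       : {p_fixed_n:.3f}")   # ~0.073
print(f"stopping-rule p-value : {p_stop_rule:.3f}") # ~0.033
print(f"likelihood ratio, theta 0.7 vs 0.5 (either design): {lr:.2f}")
```

Whether that counts as a point for the Bayesian or as a reductio of the stopping rule is, of course, exactly what's at issue.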

But the case would never come up -- that's the thing. It's empty counterfactual analysis. Nobody who is following a stopping rule as ridiculous as the one offered would be able to otherwise conduct the research properly. I mean, seriously. I think Benquo nailed it: the second researcher's stopping rule ought to rather severely change our subjective probability in his/her having used a random sample, or for that matter not committed any number of other research sins, perhaps unconsciously. And that in turn should make us less confident about the results.