Comments

Comment by Jadagul on Harry Potter and the Methods of Rationality discussion thread, part 26, chapter 97 · 2013-08-16T22:40:12.505Z · LW · GW

Canon is fairly clear that Hogwarts is the only game in Britain. It also leads to the glaring inconsistencies in scale you just pointed out. (Rowling originally said that Hogwarts had about 700 students, and then fans started pointing out that that was wildly inconsistent with the school as she described it. And even that's too small to make things really work.)

But the evidence, from HP7 (page 210 of my first-run American hardback copy):

Lupin is talking about Voldemort's takeover of Wizarding society, to Harry and the others.

"Attendance is now compulsory for every young witch and wizard," he replied. "That was announced yesterday. It's a change, because it was never obligatory before. Of course, nearly every witch and wizard in Britain has been educated at Hogwarts, but their parents had the right to teach them at home or send them abroad if they preferred. This way, Voldemort will have the whole Wizarding population under his eye from a young age."

"Most wizards" in Britain were educated at Hogwarts, and the exceptions were homeschooled or sent abroad. It's really hard to read that to imply that there's another British wizarding school anywhere.

Comment by Jadagul on Harry Potter and the Methods of Rationality discussion thread, part 26, chapter 97 · 2013-08-16T09:28:03.331Z · LW · GW

There's another big pile of gold, about 7,000 tonnes, in the New York Fed--that's actually where a lot of foreign countries keep a large fraction of their gold supply. It's open to tourists and you can walk in and look at the big stacks of gold bars. It does have fairly impressive security, but that security could plausibly be defeated by a reasonably competent wizard.

Comment by Jadagul on Harry Potter and the Methods of Rationality discussion thread, part 26, chapter 97 · 2013-08-15T14:03:16.836Z · LW · GW

I believe this is a misreading; Winky was there, but the Dark Mark was cast by Barty Crouch Jr. From the climax of Book 4, towards the end of Chapter 35:

I wanted to attack them for their disloyalty to my master. My father had left the tent; he had gone to free the Muggles. Winky was afraid to see me so angry. She used her own brand of magic to bind me to her. She pulled me from the tent, pulled me into the forest, away from the Death Eaters. I tried to hold her back. I wanted to return to the campsite. I wanted to show those Death Eaters what loyalty to the Dark Lord meant, and to punish them for their lack of it. I used the stolen wand to cast the Dark Mark into the sky.

Comment by Jadagul on Stupid Questions Open Thread Round 2 · 2012-04-25T02:37:40.619Z · LW · GW

The claim wasn't that it happens too often to attribute to computation error, but that the types of differences seem unlikely to stem from computational errors.

Comment by Jadagul on Stupid Questions Open Thread Round 2 · 2012-04-22T23:22:26.886Z · LW · GW

You're...very certain of what I understand. And of the implications of that understanding.

More generally, you're correct that people don't have a lot of direct access to their moral intuitions. But I don't actually see any evidence for the proposition that they should converge sufficiently, other than a lot of handwaving about the fundamental psychological similarity of humankind, which is more-or-less true but probably not true enough. In contrast, I've seen lots of people with deeply, radically separated moral beliefs, enough so that it seems implausible that these are all attributable to computational error.

I'm not disputing that we share a lot of mental circuitry, or that we can basically understand each other. But we can understand without agreeing, and be similar without being the same.

As for the last bit--I don't want to argue definitions either. It's a stupid pastime. But to the extent Eliezer claims not to be a meta-ethical relativist he's doing it purely through a definitional argument.

Comment by Jadagul on Stupid Questions Open Thread Round 2 · 2012-04-22T06:19:17.821Z · LW · GW

This comment may be a little scattered; I apologize. (In particular, much of this discussion is beside the point of my original claim that Eliezer really is a meta-ethical relativist, about which see my last paragraph).

I certainly don't think we have to escalate to violence. But I do think there are subjects on which we might never come to agreement even given arbitrary time and self-improvement and processing power. Some of these are minor judgments; some are more important. But they're very real.

In a number of places Eliezer commented that he's not too worried about, say, two systems morality_1 and morality_2 that differ in the third decimal place. I think it's actually really interesting when they differ in the third decimal place; it's probably not important to the project of designing an AI but I don't find that project terribly interesting so that doesn't bother me.

But I'm also more willing to say to someone, "We have nothing to argue about [on this subject], we are only different optimization processes." With most of my friends I really do have to say this, as far as I can tell, on at least one subject.

However, I really truly don't think this is as all-or-nothing as you or Eliezer seem to paint it. First, because while morality may be a compact algorithm relative to its output, it can still be pretty big, and disagreeing seriously about one component doesn't mean you don't agree about the other several hundred. (A big sticking point between me and my friends is that I think getting angry is in general deeply morally blameworthy, whereas many of them believe that failing to get angry at outrageous things is morally blameworthy; and as far as I can tell this is more or less irreducible in the specification for all of us). But I can still talk to these people and have rewarding conversations on other subjects.

Second, because I realize there are other means of persuasion than argument. You can't argue someone into changing their terminal values, but you can often persuade them to do so through literature and emotional appeal, largely due to psychological unity. I claim that this is one of the important roles that story-telling plays: it focuses and unifies our moralities through more-or-less arational means. But this isn't an argument per se, and there's no particular reason to expect it to converge on a particular outcome--among other things, the result is highly contingent on what talented artists happen to believe. (See Rorty's Contingency, Irony, and Solidarity for discussion of this).

Humans have a lot of psychological similarity. They also have some very interesting and deep psychological variation (see e.g. Haidt's work on the five moral systems). And it's actually useful to a lot of societies to have variation in moral systems--it's really useful to have some altruistic punishers, but not really for everyone to be an altruistic punisher.

But really, this is beside the point of the original question, whether Eliezer is really a meta-ethical relativist, because the limit of this sequence which he claims converges isn't what anyone else is talking about when they say "morality". Because generally, "morality" is defined more or less to be a consideration that would/should be compelling to all sufficiently complex optimization processes. Eliezer clearly doesn't believe any such thing exists. And he's right.

Comment by Jadagul on Stupid Questions Open Thread Round 2 · 2012-04-21T19:27:48.710Z · LW · GW

Hm, that sounds plausible, especially your last paragraph. I think my problem is that I don't see any reason to suspect that the expanded-enlightened-mature-unfolding of our present usages will converge in the way Eliezer wants to use as a definition. See for instance the "repugnant conclusion" debate; people like Peter Singer and Robin Hanson think the repugnant conclusion actually sounds pretty awesome, while Derek Parfit thinks it's basically a reductio on aggregate utilitarianism as a philosophy and I'm pretty sure Eliezer agrees with him, and has more or less explicitly identified it as a failure mode of AI development. I doubt these are beliefs that really converge with more information and reflection.

Or in steven's formulation, I suspect that relatively few agents actually have Ws in common; his definition presupposes that there's a problem structure "implicitly defined by the machinery shared by X and Y which they both use to make desirability judgments". I'm arguing that many agents have sufficiently different implicit problem structures that, for instance, by that definition Eliezer and Robin Hanson can't really make "should" statements to each other.

Comment by Jadagul on Stupid Questions Open Thread Round 2 · 2012-04-21T09:43:39.147Z · LW · GW

I'm pretty sure Eliezer is actually wrong about whether he's a meta-ethical relativist, mainly because he's using words in a slightly different way from the way meta-ethical relativists use them. Or rather, he thinks that meta-ethical relativism (MER) is using one specific word in a way that isn't really kosher. (A statement which I think he's basically correct about, but it's a purely semantic quibble and so a stupid thing to argue about.)

Basically, Eliezer is arguing that when he says something is "good" that's a factual claim with factual content. And he's right; he means something specific-although-hard-to-compute by that sentence. And similarly, when I say something is "good" that's another factual claim with factual content, whose truth is at least in theory computable.

But importantly, when Eliezer says something is "good" he doesn't mean quite the same thing I mean when I say something is "good." We actually speak slightly different languages in which the word "good" has slightly different meanings. Meta-Ethical Relativism, at least as summarized by Wikipedia, describes this fact with the sentence "terms such as "good," "bad," "right" and "wrong" do not stand subject to universal truth conditions at all." Eliezer doesn't like that because in each speaker's language, terms like "good" do stand subject to universal truth conditions. But each speaker speaks a slightly different language, in which the word represented by the string "good" is subject to a slightly different set of universal truth conditions.

For an analogy: I apparently consistently define "blonde" differently from almost everyone I know. But it has an actual definition. When I call someone "blonde" I know what I mean, and people who know me well know what I mean. But it's a different thing from what almost everyone else means when they say "blonde." (I don't know why I can't fix this; I think my color perception is kinda screwed up). An MER guy would say that whether someone is "blonde" isn't objectively true or false because what it means varies from speaker to speaker. Eliezer would say that "blonde" has a meaning in my language and a different meaning in my friends' language, but in either language whether a person is "blonde" is in fact an objective fact.

And, you know, he's right. But we're not very good at discussing phenomena where two different people speak the same language except one or two words have different meanings; it's actually a thing that's hard to talk about. So in practice, "'good' doesn't have an objective definition" conveys my meaning more accurately to the average listener than "'good' has one objective meaning in my language and a different objective meaning in your language."
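Here's a minimal sketch of that picture, purely illustrative (the word lists and function names are made up): within each speaker's language, whether something is "good" is an objectively checkable fact; the two languages just attach different truth conditions to the same string.

# Illustrative only: two hypothetical speakers whose languages give the string
# "good" different (but individually objective) truth conditions.

GOOD_IN_A = {"helping strangers", "telling the truth"}   # speaker A's definition
GOOD_IN_B = {"telling the truth", "punishing outrages"}  # speaker B's definition

def is_good(action, truth_conditions):
    # Objective within a language: just check that language's truth conditions.
    return action in truth_conditions

action = "helping strangers"
print(is_good(action, GOOD_IN_A))  # True  -- an objective fact in A's language
print(is_good(action, GOOD_IN_B))  # False -- an objective fact in B's language

The disagreement isn't over any fact statable inside either language; it's over what the string "good" picks out in the first place.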

Comment by Jadagul on Harry Potter and the Methods of Rationality discussion thread, part 9 · 2011-09-26T10:59:45.412Z · LW · GW

I was a grad student at Churchill, and we mostly ignored such things, but my girlfriend was an undergrad and felt compelled to educate me. I recall Johns being the rich kids and Peterhouse being the gay men (not sure if that's for an actual reason or just the obvious pun), plus a couple of others that I can't remember off the top of my head.

Comment by Jadagul on Harry Potter and the Methods of Rationality discussion thread, part 9 · 2011-09-18T21:00:48.958Z · LW · GW

It's mentioned, just not dwelled on. It's mentioned once in passing in each of the first two books:

Sorcerer's Stone:

They piled so much homework on them that the Easter holidays weren't nearly as much fun as the Christmas ones.

Chamber of Secrets:

The second years were given something new to think about during the Easter holidays.

And so on. It's just that I don't think anything interesting ever happens during them.

Comment by Jadagul on The Fun Theory Sequence · 2009-01-25T14:10:13.000Z · LW · GW

It occurred to me at some point that Fun Theory isn't just the correct reply to theodicy; it's also a critical component of any religious theodicy program. And one of the few ways I could conceive of someone providing major evidence of God's existence.

That is, I'm fairly confident that there is no god. But if I worked out a fairly complete version of Fun Theory, and it turned out that this really was the best of all possible worlds, I might have to change my mind.

Comment by Jadagul on Disappointment in the Future · 2008-12-01T17:24:07.000Z · LW · GW

I would agree with Karo, I think. I'm actually surprised by how accurate this list of predictions is; it's not at 50% but I'm not sure why we would expect it to be with predictions this specific. (I'm not saying he was epistemically justified, just that he's more accurate than I would have expected).

Following up on Eliezer's point, his core claims seem to be:

1) Computers will become smaller and people will have access to them basically 24/7. If you remember that even my cell phone, which is a total piece of crap and cost $10, would look like a computer to someone from 1999, this seems fairly accurate.

2) Keyboards will cease to be the primary method of interaction with computers. We're getting there--slowly--between improved voice recognition and the growth of tablet PCs. But we're not there yet. I wouldn't be surprised at all if this were true by 2020 (wouldn't be surprised if it weren't, either. I don't know how far off we are from speech recognition good enough that people can assume it will work).

3) People will start using computers for things that in 1999 had to be done in hard copy. This is starting to happen, but we're not there yet. Again, wouldn't surprise me either way in 2020.

4) People will be able to use computers as their primary means of interaction with the world. Some basement-dwelling geeks like myself aside, not quite true. People like dealing with other people. I think this is the least likely to be true ten years from now.

Comment by Jadagul on Magical Categories · 2008-08-25T21:45:53.000Z · LW · GW

Shane, the problem is that there are (for all practical purposes) infinitely many categories the Bayesian superintelligence could consider. They all "identify significant regularities in the environment" that "could potentially become useful." The problem is that we as the programmers don't know whether the category we're conditioning the superintelligence to care about is the category we want it to care about; this is especially true with messily-defined categories like "good" or "happy." What if we train it to do something that's just like good except it values animal welfare far more (or less) than our conception of good says it ought to? How long would it take for us to notice? What if the relevant circumstance didn't come up until after we'd released it?

Comment by Jadagul on No License To Be Human · 2008-08-22T07:50:20.000Z · LW · GW

This talk about metaethics is trying to justify building castles in the clouds by declaring the foundation to be supported by the roof. It doesn't deal with the fundamental problem at all - it makes it worse.

Caledonian, I don't want to speak for Eliezer. But my contention, at least, is that the fundamental problem is insoluble. I claim, not that this particular castle has a solid foundation, but that there exist no solid foundations, and that anywhere you think you've found solid earth there's actually a cloud somewhere beneath it. The fact that you're reacting so strongly makes me think you're interpreting Eliezer as saying what I believe. Similarly,

Why should we care about a moral code that Eliezer has arbitrarily chosen to call right? What relevance does this have to anything?

There's no particular reason we should care about a moral code Eliezer has chosen. You should care about the moral code you have arbitrarily chosen. I claim, and I think Eliezer would too, that there will be a certain amount of overlap because you're both human (just as you both buy into Occam because you're both human). But we couldn't give, say, a pebblesorter any reason to care about Eliezer's moral code.

Larry D'ana: Is anyone who does not believe in universally compelling arguments a relativist?

Is anyone who does not believe that morality is ontologically primitive a relativist?

Yeah, pretty much.

If there are no universally compelling arguments, then there's no universally compelling moral code. Which means that whatever code compels you has to compel relative to who you are; thus it's a relativist position.

Eliezer tries to get around this by saying that he has this code he can state (to some low degree of precision), and everyone can objectively agree on whether or not some action comports with this code. Or at least that perfect Bayesian superintelligences could all agree. (I'm not entirely sold on that, but we'll stipulate). I claim, though, that this isn't the way most people (including most of us) use the words 'morality' and 'right'; I think that if you want your usage to comport with everyone else's, you would have to say that the pebblesorters have 'a' moral code, and that this moral code is "Stack pebbles in heaps whose sizes are prime numbers."

In other words, in general usage a moral code is a system of rules that compels an agent to action (and has a couple other properties I haven't figured out how to describe without self-reference). A moral absolutist claims that there exists such a system of rules that is rightly binding and compelling to all X, where X is usually some set like "all human beings" or "all self-aware agents." (Read e.g. Kant who claimed that the characteristic of a moral rule is that it is categorically binding on all rational minds). But Eliezer and I claim that there are no universally compelling arguments of any sort. Thus in particular there are no universally compelling injunctions to act, and thus no absolute moral code. Instead, the injunction to act that a particular agent finds compelling varies with the identity of the agent; thus 'morality' is relative to the agent. And thus I'm a moral relativist.

Now, it's possible that you could get away with restricting X to "human beings"; if you then claimed that humans had enough in common that the same moral code was compelling to all of them, you could plausibly reclaim moral objectivism. But I think that claim is clearly false; Eliezer seems to have rejected it (or at least refused to defend it) as well. So we don't get even that degree of objectivity; the details of each person's moral code depend on that person, and thus we have a relative standard. This is what has Caledonian's knickers in such a twist.

Kenny: exactly. That's why we're morally relative.

Comment by Jadagul on No License To Be Human · 2008-08-21T06:48:14.000Z · LW · GW

Eliezer: Good post, as always. I'll repeat that I think you're closer to me in moral philosophy than anyone else I've talked to, with the probable exception of Richard Rorty, from whom I got many of my current views. (You might want to read Contingency, Irony, and Solidarity; it's short, and it talks about a lot of the stuff you deal with here). That said, I disagree with you in two places. Reading your stuff and the other comments has helped me refine what I think; I'll try to state it here as clearly as possible.

1) I think that, as most people use the words, you're a moral relativist. I understand why you think you're not. But the way most people use the word 'morality,' it would only apply to an argument that would persuade the ideal philosopher of perfect emptiness. You don't believe any such arguments exist; neither do I. Thus neither of us think that morality as it's commonly understood is a real phenomenon. Think of the priest in War of the Worlds who tried to talk to the aliens, explaining that since we're both rational beings/children of God, we can persuade them not to kill us because it's wrong. You say (as I understand you) that they would agree that it's wrong, and just not care, because wrong isn't necessarily something they care about. I have no problem with any claim you've made (well, that I've made on your behalf) here; but at this point the way you're using the word 'moral' isn't a way most people would use it. So you should use some other term altogether.

2) I like to maintain a clearer focus on the fact that, if you care about what's right, I care about what's right_1, which is very similar to but not the same as what's right. Mainly because it helps me to remember there are some things I'm just not going to convince other people of (e.g. I don't think I could convince the Pope that God doesn't exist. There's no fact pattern that's wholly inconsistent with the property god_exists, and the Pope has that buried deep enough in his priors that I don't think it's possible to root it out). But (as of reading your comment on yesterday's post) I don't think we disagree on the substance, just on the emphasis.

Thanks for an engaging series of posts; as I said, I think you're the closest or second-closest I've ever come across to someone sharing my meta-ethics.

Comment by Jadagul on You Provably Can't Trust Yourself · 2008-08-21T06:44:22.000Z · LW · GW

Ah, thanks Eliezer, that comment explains a lot. I think I mostly agree with you, then. I suspect (on little evidence) that each one of us would, extrapolated, wind up at his own attractor (or at least at a sparsely populated one). But I have no real evidence for this, and I can't imagine off the top of my head how I would find it (nor how I would find contradictory evidence), and since I'm not trying to build fAI I don't need to care. But what you've just sketched out is basically the reason I think we can still have coherent moral arguments; our attractors have enough in common that many arguments I would find morally compelling, you would also find morally compelling (as in, most of us have different values but we (almost) all agree that the random slaughter of innocent three-year-olds is bad). Thanks for clearing that up.

Comment by Jadagul on You Provably Can't Trust Yourself · 2008-08-20T05:14:32.000Z · LW · GW

Especially given that exposure to different fact patterns could push you in different directions. E.g. suppose right now I try to do what is right_1 (subscripts on everything to avoid appearance of claim to universality). Now, suppose that if I experience fact pattern facts_1 I conclude that it is right_1 to modify my 'moral theory' to right_2, but if I experience fact pattern facts_2 I conclude that it is right_1 to modify to right_3.

Now, that's all well and good. Eliezer would have no problem with that, as long as the diagram commutes: that is, if it's true that (if I've experienced facts_1 and moved to right_2, and then I experience facts_2, I will move to right_4), it must also be true that (if I've experienced facts_2 and moved to right_3, and then experience facts_1, I will move to right_4).
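To put the same condition compactly (my own notation, purely for illustration): write U(m, f) for the moral code you land on after starting from code m and experiencing fact pattern f. The diagram commutes exactly when the order of experiences doesn't matter:

U(U(right_1, facts_1), facts_2) = U(U(right_1, facts_2), facts_1) = right_4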

I suppose that at least in some cases this is true, but I see no reason why in all cases it ought to be. Especially if you allow human cognitive biases to influence the proceedings; but even if you don't (and I'm not sure how you avoid it), I don't see any argument why all such diagrams should commute. (this doesn't mean they don't, of course. I invite Eliezer to provide such an argument).

I still hold that Eliezer's account of morality is correct, except his claim that all humans would reflectively arrive at the same morality. I think foundations and priors are different enough that, functionally, each person has his own morality.

Comment by Jadagul on Hot Air Doesn't Disagree · 2008-08-16T09:49:04.000Z · LW · GW

But Mario, why not? In J-morality it's wrong to hurt people, both because I have empathy towards people and so I like them, and because people tend to create net positive externalities. But that's a value judgment. I can't come up with any argument that would convince a sociopath that he "oughtn't" kill people when he can get away with it. Even in theory.

There was nothing wrong with Raskolnikov's moral theory. He just didn't realize that he wasn't a Napoleon.

Comment by Jadagul on The Bedrock of Morality: Arbitrary? · 2008-08-16T04:13:00.000Z · LW · GW

Eliezer, I think you come closer to sharing my understanding of morality than anyone else I've ever met. Places where I disagree with you:

First, as a purely communicative matter, I think you'd be clearer if you replaced all instances of "right" and "good" with "E-right" and "E-good."

Second, as I commented a couple threads back, I think you grossly overestimate the psychological unity of humankind. Thus I think that, say, E-right is not at all the same as J-right (although they're much more similar than either is to p-right). The fact that our optimization processes are close enough in many cases that we can share conclusions and even arguments doesn't mean that they're the same optimization process, or that we won't disagree wildly in some cases.

Simple example: I don't care about the well-being of animals. There's no comparison in there, and there's no factual claim. I just don't care. When I read the famous ethics paper about "would it be okay to torture puppies to death to get a rare flavor compound," my response was something along the lines of, "dude, they're puppies. Who cares if they're tortured?" I think anyone who enjoys torturing for the sake of torturing is probably mentally unbalanced and extremely unvirtuous. But I don't care about the pain in the puppy at all. And the only way you could make me care is if you showed that causing puppies pain came back to affect human well-being somehow.

Third, I think you are a moral relativist, at least as that claim is generally understood. Moral absolutists typically claim that there is some morality demonstrably binding upon all conscious agents. You call this an "attempt to persuade an ideal philosopher of perfect emptiness" and claim that it's a hopeless and fundamentally stupid task. Thus you don't believe what moral absolutists believe; instead, you believe different beings embody different optimization processes (which is the name you give to what most people refer to as morality, at least in conscious beings). You're a moral relativist. Which is good, because it means you're right.

Excuse me. It means you're J-right.

Comment by Jadagul on Moral Error and Moral Disagreement · 2008-08-14T03:31:00.000Z · LW · GW

Caledonian and Tim Tyler: there are lots of coherent defenses of Christianity. It's just that many of them rest on statements like, "if Occam's Razor comes into conflict with Revealed Truth, we must privilege the latter over the former." This isn't incoherent; it's just wrong. At least from our perspective. Which is the point I've been trying to make. They'd say the same thing about us.

Roko: I sent you an email.

Comment by Jadagul on Moral Error and Moral Disagreement · 2008-08-13T00:26:00.000Z · LW · GW

Doug raises another good point. Related to what I said earlier, I think people really do functionally have prior probability=1 on some propositions. Or act as if they do. If "The Bible is the inerrant word of God" is a core part of your worldview, it is literally impossible for me to convince you this is false, because you use this belief to interpret any facts I present to you. Eliezer has commented before that you can rationalize just about anything; if "God exists" or "The Flying Spaghetti Monster exists" or "reincarnation exists" is part of the machinery you use to interpret your experience, in a deep enough way, your experiences can't disprove it.

Comment by Jadagul on Moral Error and Moral Disagreement · 2008-08-11T06:22:54.000Z · LW · GW

Eliezer: for 'better' vs 'frooter,' of course you're right. I just would have phrased it differently; I've been known to claim that the word 'better' is completely meaningless unless you (are able to) follow it with "better at or for something." So of course, Jadagul_real would say that his worldview is better for fulfilling his values. And Jadagul_hypothetical would say that his worldview is better for achieving his values. And both would (potentially) be correct. (Or potentially wrong. I never claimed to be infallible, either in reality or in hypothesis). But phrasing issues aside, I do think this happens more often than you think it does.

Sebastian Hagen: That's actually a very good question. So a few answers. First is that I tend to go back and forth on whether by 'happiness' I mean something akin to "net stimulation of pleasure centers in brain," or to "achievement of total package of values" (at which point the statement nears tautology, but I think doesn't actually fall into it). But my moral code does include such statements as "you have no fundamental obligation to help other people." I help people because I like to. So I lean towards formulation 1; but I'm not altogether certain that's what I really mean.

Second is that your question, about the sociopath pill, is genuinely difficult for me. It reminds me of Nozick's experience machine thought experiment. But I know that I keep getting short-circuited by statements like, "but I'd be miserable if I were a sociopath," which is of course false by hypothesis. I think my final answer is that I'm such a social person and take such pleasure in people that were I to become a sociopath I would necessarily be someone else. That person wouldn't be me. And while I care about whether I'm happy, I don't know that I care about whether he is.

Of course, this all could be "I know the answer and now let me justify it." On the other hand, the point of the exercise is to figure out what my moral intuitions are...

Comment by Jadagul on Moral Error and Moral Disagreement · 2008-08-11T01:14:27.000Z · LW · GW

Steven: quite possibly related. I don't think they're exactly the same (the classic comic book/high fantasy "I'm evil and I know it" villain fits A2, but I'd describe him as amoral), but it's an interesting parallel.

Eliezer: I'm coming more and more to the conclusion that our main area of disagreement is our willingness to believe that someone who disagrees with us really "embodies a different optimization process." There are infinitely many self-consistent belief systems and infinitely many internally consistent optimization processes; while I believe mine to be the best I've found, I remain aware that if I held any of the others I would believe exactly the same thing. And that I would have no way of convincing the anti-Occam intelligence that Occam's Razor was a good heuristic, or of convincing the psychopath who really doesn't care about other people that he 'ought' to. So I hesitate to say that I'm right in any objective sense, since I'm not sure exactly what standard I'm pointing to when I say 'objective.'

And I've had extended moral conversations with a few different people that led to us, eventually, concluding that our premises were so radically different that we really couldn't have a sensible moral conversation. (to wit: I think my highest goal in life is to make myself happy. Because I'm not a sociopath making myself happy tends to involve having friends and making them happy. But the ultimate goal is me. Makes it hard to talk to someone who actually believes in some form of altruism).

Comment by Jadagul on I'd take it · 2008-07-02T10:00:54.000Z · LW · GW

Paul Crowley: remember that US markets are much larger than, say, the US economy. From the article:

It depends on the comparison. U.S. GDP is $12 trillion, the total value of traded securities (debt and equity) denominated in U.S. dollars is estimated to be more than $50 trillion, and the global value of traded securities is about $165 trillion.

And $10 trillion isn't where they are now, it's where they will be in four years or so. So while it's a bloody large amount of money, it's unlikely to be more than, say, 5% of traded securities on the market. And that doesn't include stuff like currency holdings.
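As a rough check on that figure, using the article's numbers: $10 trillion / $165 trillion ≈ 6% of today's global traded securities, so a few more years of market growth would put the holdings somewhere around the 5% mark.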

Comment by Jadagul on What Would You Do Without Morality? · 2008-06-29T05:48:18.000Z · LW · GW

Eliezer: I'm finding this one hard, because I'm not sure what it would mean for you to convince me that nothing was right. Since my current ethics system goes something like, "All morality is arbitrary, there's nothing that's right-in-the-abstract or wrong-in-the-abstract, so I might as well try to make myself as happy as possible," I'm not sure what you're convincing me of--that there's no particular reason to believe that I should make myself happy? But I already believe that. I've chosen to try to be happy, but I don't think there's a good 'reason' for it.

On the other hand, maybe I right now am the end result you're looking for. In which case, yes, I do tip cabdrivers; no, I don't cheat; and usually I'd pull the kid off, if there weren't much risk to me.

Comment by Jadagul on Possibility and Could-ness · 2008-06-15T09:09:36.000Z · LW · GW

Joseph: I don't think I added more constraints, though it's a possibility. What extra constraints do you think I added?

As for not salvaging it, I can see why you would say that, but what word should be used to take its place? Mises commented somewhere in Human Action that we can be philosophical monists and practical dualists; I believe that everything is ultimately reducible to (quasi?-)deterministic quantum physics, but that doesn't mean that's the most efficient way to analyze most situations. When I'm trying to catch a ball I don't try to model the effect of the weak nuclear force on the constituent protons. When I'm writing a computer program I don't even try to simulate the logic gates, much less the electromagnetic reactions that cause them to function. And when I'm dealing with people it's much more effective to say to myself, "given these options, what will he choose?" This holds even though I could in principle, were I omniscient and unbounded in computing power, calculate this deterministically from the quantum state of his brain.

Or, in other words, we didn't throw out Newtonian mechanics when we discovered Relativity. We didn't discard Maxwell when we learned about quantum electrodynamics. Why should this be different?

Comment by Jadagul on Possibility and Could-ness · 2008-06-15T01:23:54.000Z · LW · GW

Joseph Knecht: I think you're missing the point of Eliezer's argument. In your hypothetical, to the extent Eliezer-as-a-person exists as a coherent concept, yes he chose to do those things. Your hypothetical is, from what I can tell, basically, "If technology allows me to destroy Eliezer-the-person without destroying the outer, apparent shell of Eliezer's body, then Eliezer is no longer capable of choosing." Which is of course true, because he no longer exists. Once you realize that "the state of Eliezer's brain" and "Eliezer's identity" are the same thing, your hypothetical doesn't work any more. Eliezer-as-a-person is making choices because the state of Eliezer's brain is causing things to happen. And that's all it means.

Comment by Jadagul on Possibility and Could-ness · 2008-06-14T06:09:10.000Z · LW · GW

Eliezer: I'll second Hopefully Anonymous; this is almost exactly what I believe about the whole determinism-free will debate, but it's devilishly hard to describe in English because our vocabulary isn't constructed to make these distinctions very clearly. (Which is why it took a 2700-word blog post). Roland and Andy Wood address one of the most common and silliest arguments against determinism: "If determinism is true, why are you arguing with me? I'll believe whatever I'll believe." The fact that what you'll believe is deterministically fixed doesn't affect the fact that this argument is part of what fixes it.

Comment by Jadagul on Timeless Physics · 2008-05-27T10:22:26.000Z · LW · GW

Interestingly (at least, I think it's interesting), I'd always felt that way about time, before I learned about quantum mechanics. That's what a four-dimensional spacetime means, isn't it? And so science fiction stories that involve, say, changing the past have never made any sense to me. You can't change the past; it is. And no one can come from the future to change now, because the future is as well. Although now that I think about it more, I realize how this makes slightly more sense in this version of many-worlds than it does in a collapse theory.

Comment by Jadagul on The Quantum Arena · 2008-04-16T07:25:45.000Z · LW · GW

Eliezer: why uncountably infinite? I find it totally plausible that you need an infinite-dimensional space to represent all of configuration space, but needing uncountability seems, at least initially, to be unlikely.

Of course, it would be the mathematician who asks this question...

Comment by Jadagul on The "Intuitions" Behind "Utilitarianism" · 2008-01-29T04:15:12.000Z · LW · GW

Sean: why is that "what utils do"? To the extent that we view utils as the semi-scientific concept from economics, they don't "just sum linearly." To economists utils don't sum at all; you can't make interpersonal comparisons of utility. So if you claim that utils sum linearly, you're making a claim of moral philosophy, and haven't argued for it terribly strongly.

Comment by Jadagul on The "Intuitions" Behind "Utilitarianism" · 2008-01-28T22:15:56.000Z · LW · GW

Eliezer: after wrestling with this for a while, I think I've identified at least one of the reasons for all the fighting. First of all, I agree with you that the people who say, "3^^^3 isn't large enough" are off-base. If there's some N that justifies the tradeoff, 3^^^3 is almost certainly big enough; and even if it isn't, we can change the number to 4^^^4, or 3^^^^3, or Busy Beaver (Busy Beaver (3^^^3)), or something, and we're back to the original problem.
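(For a sense of scale, in standard Knuth up-arrow notation: 3^^3 = 3^(3^3) = 3^27 = 7,625,597,484,987, and 3^^^3 = 3^^(3^^3) is a power tower of 3s that many levels high.)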

For me, at least, the problem comes down to what 'preference' means. I don't think I have any coherent preferences over the idea of 3^^^3 dust specks. Note, I don't mean that I think my preferences are inconsistent, or poorly-formed, or that my intuition is bad. I don't think that talking about my preferences on that issue has any meaning.

Basically, I don't believe there's any objective standard of value. Even preferences like "I think as many people should die as painfully as possible" aren't wrong, per se; they just put you beyond the bounds of civilized society and make me have no desire to interact with you any more. So asking which of two circumstances is 'really better' doesn't have any meaning; 'better' only makes sense when you ask 'better to whom.' Which leads to two problems.

First is that the question tends to slip over to "which choice would you make." But once I start phrasing it in terms of me making a choice, all my procedural safeguards start kicking in. First, if you're a true deontologist your mental side constraints start jumping in. Even if you're sort-of utilitarian, like I am, the mental rules that say things like "we can't be sure that 3^^^3 people are actually going to suffer" and "helping to forge a society that considers torture acceptable leads to horrifying long-term consequences" kick in. I agree those are outside of the parameters of the original question; but the original question was ill-posed, and this is one of the places it slips to in translation.

But even if you avoid that, you still come to the question of what it means to prefer A over B, when you have no meaningful choice in the matter. I can't imagine a situation in which I could cause 3^^^3 people any coherent result. I'm not sure I believe there are or ever will be 3^^^3 moral agents. And do I have a coherent preference over circumstances that I will never know have occurred? Even if 3^^^3 people suffer, I'm not going to know that they do. It won't affect me, and I won't know that it affected anyone else, either.

Basically, moral questions that involve wildly unlikely or outright impossible scenarios don't tend to be terribly enlightening. If we lived in a world where we could reliably benefit unimaginably large numbers of people by causing vast pain to a few, maybe that would be okay. But since we don't, I think hypotheticals like this are more likely to short-circuit on the bounds of our extremely useful assumptions about the nature of the world than they are to tell us anything interesting.

Comment by Jadagul on "Science" as Curiosity-Stopper · 2007-09-04T06:42:17.000Z · LW · GW

Eliezer: have you really never heard the "10% of the brain" myth? Here's a link. You can get more by googling the phrase "ten percent brain."

Lots of people who believe in psychic phenomena will make arguments like, "studies show we only use ten percent of our brains. People with psychic powers are probably the ones who've figured out how to use more," or something like that.

And I agree that I've never heard the word 'science' used as a curiosity stopper. It doesn't make sense in context (as opposed to something like "this nifty gadget." Have you ever heard anyone answer a question with the word 'science!'?). The lightbulb was a better example, but also I think wrong: when I say electricity makes it work, I'm referencing a culturally understood bundle of information. And really, no one does ask that question in the way you mean, in our culture; everyone's seen lightbulbs before. The only way this question makes sense is if you're talking to someone who's never seen electricity in action before, and in that case the answer 'electricity' is highly unlikely to satisfy them.

The general problem I have with this series of posts is that you seem to conflate three different phenomena, two of which are useful. The first is actual non-answers, a la Feynman's Wakalixes. The second is brief answers that are actually placeholders for larger discussions; 'electricity' is a good example of this. If your response about the light is "LEDs and batteries," that's just two words but it serves as an actual explanation if you know what those two things are. And third is rational ignorance; as I said earlier, you ask questions until either you understand or you decide that further understanding isn't worth the effort.

And finally, to be blunt, it's fine for you to say that your purpose here is to focus on writing speed without worrying about quality, and therefore our complaints that the quality isn't very high are beside the point; but it doesn't really give us any reason to hang around.

Comment by Jadagul on "Science" as Curiosity-Stopper · 2007-09-04T00:16:35.000Z · LW · GW

Eliezer: I think another factor is that different kinds of answers are differently useful. If you cast your spell on the train, I might come over and ask you how you did it. I can guarantee that "science" or "technology" wouldn't satisfy my curiosity (partly, I'm sure, because I'm a nerd and enjoy technology). But if you said, "It's this cool device I ordered from Sharper Image for $10,000," that would probably satisfy me, because it answers the relevant question. I can come up with mechanisms by which you could do things like that, though it would be expensive; if you tell me you bought a very expensive item, that both tells me "it fits into the types of explanation you're familiar with" and "if you want to do this too, here's how."

I think that in a lot of cases, the inquiry stopper is the answer that convinces us of one of two things: either that we now know what we need to know to use the phenomenon, or that any further explanation would go over our heads and/or confuse us.

Comment by Jadagul on The Futility of Emergence · 2007-08-27T23:14:42.000Z · LW · GW

Eliezer: Here's another example similar to ones other people have raised, a story I heard once, that might explain why I think it's an important and useful concept.

Supposedly, in the early nineties when the Russians were trying to transition to a capitalist economy, a delegation from the economic ministry went to visit England, to see how a properly market-based economy would work. The British took them on a tour, among other things, of an open-air fresh foods market. The Russians were shown around the market, and were appropriately impressed. Afterwards, one of the senior delegation members approached one of his escorts: "So, who sets the price for rice in this market?" The escort was puzzled a bit, and responded, "No one sets the price. It's set on the market." And the Russian responded, "Yes, yes, I know, of course that's the official line. But who really sets the price of rice?"

The Russian couldn't conceive that an organization as complex as the open air market could have assembled itself; he was sure someone must have designed it in order for it to work. It had to have been set up. But markets and prices are an emergent phenomenon; the price isn't set by one person and doesn't have any one cause. And yet the markets function.

Similarly, a lot of people seem to have a mental model of democratic institutions that says it's a non-emergent phenomenon: if you write a constitution and hold elections, you get a democracy with the rule of law. Others (including myself) claim that democracy and rule-of-law are emergent phenomena: if they don't exist, there's no specific set of actions a central actor can take that will cause them to exist. They exist because of millions of decentralized and uncoordinated actions of individuals without specific direction. If you hold the first view, projects like the establishment of the new Iraqi government make sense: we set up a government with a constitution and elections, so it should become a free democratic state. If you hold the second view, the project is insane: freedom and democracy require millions of individual and low-level cultural shifts that can't be imposed from above, so there's no way for us to turn the nation into a democracy. My point here isn't that one view is right or wrong, although I have a firm belief. My point is that it's highly relevant to our foreign policy to ask whether democracy is emergent or not.

Usually when you say, "You can't just impose X from above," you're claiming X is an emergent phenomenon; the hallmark of a non-emergent phenomenon is that it's possible for a single actor to take a series of actions that either cause or prevent it.

Comment by Jadagul on The Futility of Emergence · 2007-08-27T05:45:15.000Z · LW · GW

Eliezer: I generally like your posts, but I disagree with you here. I think that there's at least one really useful definition of the word emergence (and possibly several useless ones).

It's true, of course (at least to a materialist like me), that every phenomenon emerges from subatomic physics, and so can be called 'emergent' in that sense. But if I ask you why you made this post, your answer isn't going to be, "That's how the quarks interacted!" Our causal models of the world have many layers between subatomic particles and perceived phenomena. Emergence refers to the relationship between a phenomenon and its immediate cause.

So, for instance, suppose I'm on the interstate and I get caught in a traffic jam. I might wonder why there's a huge jam on the road. It's possible that there's a simple, straightforward explanation: "There's a ten-car pileup a mile further on, and five of the six lanes are shut down. That's why there's a traffic jam." Obviously we could get far more reductionist--both in terms of "why is there a pileup" and "why does a pileup cause a traffic jam"--but for the conceptual level we're operating on, the pileup is a full and complete answer. And thus the traffic jam isn't an 'emergent' phenomenon; it has one major identifiable cause.

In contrast, a lot of traffic jams 'just happen.' The previous sentence is false, strictly speaking; the jams come from somewhere. But you can't point to an individual cause of them; they arise from the local effects of millions of local actions taken by individual drivers. Removing any one of these actions wouldn't eliminate the jam; it's a cumulative product of all of them. So people searching for an explanation of why it takes two hours to drive ten miles in rush hour get really frustrated, because there's no good explanation to give them. And people trying to fix rush hour get even more frustrated, because there's no good angle to attack the problem from.
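A toy illustration of that "jams just happen" point, a minimal sketch in the spirit of the Nagel-Schreckenberg traffic model (all the numbers here are made up): cars on a circular road follow simple local rules, nobody crashes and no lane closes, and stop-and-go waves still appear.

# Minimal sketch: a toy Nagel-Schreckenberg-style traffic model on a circular road.
# No single car "causes" the jam; slowdowns emerge from many local interactions.
import random

ROAD_LEN, N_CARS, V_MAX, P_SLOW = 100, 35, 5, 0.3

random.seed(0)
positions = sorted(random.sample(range(ROAD_LEN), N_CARS))
speeds = [0] * N_CARS

def step(positions, speeds):
    n = len(positions)
    new_speeds = []
    for i in range(n):
        # distance to the car ahead (cars keep their cyclic order, so it's car i+1)
        gap = (positions[(i + 1) % n] - positions[i] - 1) % ROAD_LEN
        v = min(speeds[i] + 1, V_MAX, gap)      # speed up, but never into the car ahead
        if v > 0 and random.random() < P_SLOW:  # occasional driver hesitation
            v -= 1
        new_speeds.append(v)
    new_positions = [(p + v) % ROAD_LEN for p, v in zip(positions, new_speeds)]
    return new_positions, new_speeds

for _ in range(200):
    positions, speeds = step(positions, speeds)

print(sum(1 for v in speeds if v == 0), "of", N_CARS, "cars are stopped")

Delete any single driver's hesitation from the history and a jam typically still forms; it's the accumulation of all of them that produces it.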

So emergence, in this sense, means that a phenomenon has many intertwined causes, rather than one or two identifiable and major causes. It turns out, of course, that most interesting phenomena are emergent (non-emergent phenomena are, by definition, boring, since their causes are straightforward). But "emergence" is useful as a shorthand for "the causes are complicated and interconnected, and I can't pick one out and tell you, 'here it is, this is why that happened.'" It's important not to get confused, and not to think an explanation of why we don't understand something is the same as an explanation of that thing. But as long as you keep that in mind, it's a useful concept to have.