Comments
For the record, Thom_Blake is thomblake.
I must ask - what is the purpose of 'Overcoming Bias' now that 'Less Wrong' is launched? Why post this here instead of there?
retired urologist,
There's a distinction to be made between altruism (ethical theory) and altruism (social science). The sense of altruism you use seems to agree more with the former; it seems like Eliezer prefers the latter. To summarize:
Altruism (ethical theory) is just like utilitarianism, except that good for oneself is entirely discounted.
Altruism (social sciences) is a 'selfless concern for others', in which one helps other people without conscious concern for one's personal interests (at least some of the time). It does not require that one abandon one's own interests in the pursuit of helping others all of the time.
Note that the latter is merely descriptive of behavior. Thus Eliezer can say "I behave altruistically" and "I am a utilitarian" (probably not direct quotes) simultaneously without contradiction.
It's getting to the point where ethicists have to define 'ethical x' for all 'x' to distinguish it from its use in other fields.
Eliezer,
I prefer this style. It's a much more interesting and entertaining read. It has a 'wisdom of the ancients' feel which, while obviously meant to be ironic, has (I think) a greater chance of being remembered in 1000 years.
I'm not sure about the judges example. There have been judges who've taken sides on high-profile issues (like abortion or gay marriage), and their reputations consequently turned to mud among those on the other side of the issue.
EY, but you are a moral realist (or at least a moral objectivist, which ought to refer to the same thing). There's a fact about what's right, just like there's a fact about what's prime or what's baby-eating. It's a fact about the universe, independent of what anyone has to say about it. If we were human' we'd be moral' realists talking about what's right'. ne?
Anonymous, that sound you hear is probably people rushing to subscribe. http://www.rifters.com/crawl/?p=266 - note the comments.
Sebastian,
Here there is an ambiguity between 'bias' and 'value' that is probably not going to go away. EY seems to think that bias should be eliminated but values should be kept. That might be most of the distinction between the two.
Nick,
There is a tendency for some folks to distinguish between descriptive and normative statements, in the sense of 'one cannot derive an ought from an is' and whatnot. A lot of this comes from hearing about the "naturalistic fallacy" and believing this to mean that naturalism in ethics is dead. Naturalists in turn refer to this line of thinking as the "naturalistic fallacy fallacy", since even the strong version of the naturalistic fallacy does not imply that naturalism in ethics is wrong.
As for the fallacy you mention, I disagree that it's a fallacy. It makes more sense to me to take "I value x" and "I act as though I value x" to be equivalent when one is being honest, and to take both of those as different from (an objective statement of) "x is good for me". This analysis of course only counts if one believes in akrasia - I'm really still on the fence on that one, though I lean heavily towards Aristotle.
Manon, thanks for pointing that out - I'd left that out of my analysis entirely. I too would like Untranslatable 2. It doesn't change my answer though, as it turns out.
Nick,
Behavior isn't an argument (except when it is), but it is evidence. And it's akrasia when you say, "Man, I really think spending this money on saving lives is the right thing to do, but I just can't stop buying ice cream" - not when you say "buying ice cream is the right thing to do". Even if you are correct in your disagreement with Simon about the value of ice cream, that would be a case of Simon being mistaken about the good, not a case of Simon suffering from akrasia. And I think it's pretty clear from context that Simon believes he values ice cream more.
And it sounds like that first statement is an attempt to invoke the naturalistic fallacy fallacy. Was that it?
I prefer the ending where we ally ourselves with the babyeaters to destroy the superhappies. We realize that we have more in common with the babyeaters, since they have notions of honor and justified suffering and whatnot, and encourage the babyeaters to regard the superhappies as flawed. The babyeaters will gladly sacrifice themselves blowing up entire star systems controlled by the superhappies to wipe them out of existence due to their inherently flawed nature. Then we slap all of the human bleeding-hearts that worry about babyeater children, we come up with a nicer name for the babyeaters, and they (hopefully) learn to live with the fact that we're a valuable ally that prefers not to eat babies but could probably be persuaded given time.
P.S. anyone else find it ironic that this blog has measures in place to prevent robots from posting comments?
Julian,
And possibly billions of Huygens humans. Don't forget those.
Humanity could always offer to sacrifice itself. Compare the world where humanity compromises with both the Babyeaters and the Super Happy, versus one where we convince them to not compromise and instead make everybody Super Happy.
Of course, I'm just guessing, since I'm not a Utilitarian.
Rudd-O,
That's not the idea I'm getting at all (free retaliation, etc). It seems more to me that these people can't imagine intentionally hurting or being distrustful of each other, and so when they say 'rape', think 'tickle fight'.
spriteless,
That's what I was thinking. Perhaps the newcomer engineered this meetup somehow to see whether the two species are safe to contact.
This makes eudaimonist egoism seem simpler, more elegant by comparison. I don't need a stream of victims now, and I won't need them post-Singularity.
Doug S,
Indeed. The AI wasn't paying attention if he thought bringing me to this place was going to make me happier. My stuff is part of who I am; without my stuff he's quite nearly killed me. Even more so when 'stuff' includes my wife and friends.
But then, he was raised by one person, so there's no reason to think he wouldn't believe in a mistaken metaphysics of self.
James,
I wonder the same thing. Given that reality is allowed to kill us, it seems that this particular dystopia might be close enough to good. How close to death do you need to be before unleashing the possibly-flawed genie?
Eliezer,
I must once again express my sadness that you are devoting your life to the Singularity instead of writing fiction. I'll cast my vote towards the earlier suggestion that perhaps fiction is a good way of reaching people and so maybe you can serve both ends simultaneously.
Julian,
Agreed. Utilitarians are not to be trusted.
kekeke
I don't find this surprising at all, other than that it occurred to a consequentialist. Being a virtue ethicist and something of a Romantic, it seems to me that the best world will be one of great and terrible events, where a person has the chance to be truly and tragically heroic. And no, that doesn't sound comfortable to me, or a place where I'd particularly thrive.
Jed, your comment (the second example, specifically) reminds me of the story about how the structure of DNA was discovered. Apparently the 'Eureka' moment actually came after the researchers obtained better materials for modeling.
Tilden is another roboticist who's gotten rich and famous off of unintelligent robots: BEAM robotics
Interesting idea... though I still think you're wrong to step away from anthropomorphism, and 'necessary and sufficient' is a phrase that should probably be corralled into the domain of formal logic.
And I'm not sure this adds anything to Sternberg and Salter's definition: 'goal-directed adaptive behavior'.
I've yet to hear of anyone turning back successfully, though I think some have tried, or wished they could.
It seems to be one interpretation of the Buddhist project.
Regarding self, I tend to include much more than my brain in "I" - but then, I'm not one of those who thinks being 'uploaded' makes a whole lot of sense.
Anonymous: torture's inefficacy was well-known by the fourteenth century; Bernardo Gui, a famous inquisitor who supervised many tortures, argued against using it because it is only good at getting the tortured to say whatever will end the torture. I can't seem to find the citation, but here is someone who refers to it: http://www.ewtn.com/library/ANSWERS/INQUIS2.htm
Toby,
You should never, ever murder an innocent person who's helped you, even if it's the right thing to do
You should never, ever do X, even if you are exceedingly confident that it is the right thing to do
I believe a more sensible interpretation would be, "You should have an unbreakable prohibition against doing X, even in cases where X is the right thing to do" - the issue is not that you might be wrong about it being the right thing to do, but rather that not having the prohibition is a bad thing.
pdf, the only reason that suggestion works is that we're not in the business of bombing headquarters at 2AM on a weekend. If both sides were scheduling bombings at 2AM, I'd bet they'd be at work at 2AM.
"Everyone has a right to their own opinion" is largely a product of its opposite. For a long period many people believed "If my neighbor has a different opinion than I do, then I should kill him". This led to a bad state of affairs and, by force, a less lethal meme took hold.
Russell, I don't think that necessarily specifies a 'cheap trick'. If you start with a rock on the "don't let the AI out" button, then the AI needs to start by convincing the gatekeeper to take the rock off the button. "This game has serious consequences and so you should really play rather than just saying 'no' repeatedly" seems to be a move in that direction that keeps with the spirit of the protocol, and is close to Silas's suggestion.
I'm with Kaj on this. Playing the AI, one must start with the assumption that there's a rock on the "don't let the AI out" button. That's why this problem is impossible. I have some ideas about how to argue with 'a rock', but I agree with the sentiment of not telling.
This doesn't seem to mesh with the Friendly AI goal of getting it perfectly right on the first try.
Do we accept some uncertainty and risk to do something extraordinary now, or do we take the slow, calm, deliberative course that stands a chance of achieving perfection?
Is there any chance of becoming a master of the blade without beginning to cut?
If history remembers you, I'd bet it will be for the journey more than its end. If the interesting introspective bits get published in a form that gets read, I expect it will be memorable in the way that Laozi or Sunzi is memorable. In case the Singularity / Friendly AI stuff doesn't work out, please keep up the good work anyway.
scott clark,
I think your history is a bit off. The plan wasn't 'originally' for Luke to kill Vader, his father; it wasn't until midway through filming Empire (or at least, after the release of A New Hope) that Lucas decided that Vader was Luke's father.
Fer a bit thar I were thinkin' that ye'd be agreein' with that yellow-bellied skallywag Hanson. Yar, but the Popperians ha' it! A pint of rum fer ol' Eliezer!
Avast! But 'ought' ain't needin' to be comin' from another 'ought', if it be arrived at empirically. Yar.
Several places in the US did have regulations protecting the horse industry from the early automobile industry - I'm not sure what "the present system" refers to as opposed to that sort of thing.
But if there are repeatable psi experiments, then why hasn't anyone won the million dollars? (or even passed the relatively easy first round?)
You're forgetting the philosopher's dotted-line trick of making it clearer by saying it in a foreign language. "Oh, you thought I meant 'happiness' which is ill-defined? I actually meant 'eudaimonia'!"
@Eliezer I mostly agree with Caledonian here. I disagree with much of what you say, and it has nothing to do with being 'fooled'. Censoring the few dissenters who actually comment is not a good idea if you have any interest in avoiding an echo chamber. You're already giving off the Louis Savain vibe pretty hard.
Aristotelians may not be teaching physics courses (though I know of no survey showing that), but they do increasingly teach ethics courses. It makes sense to think of what qualities are good for a fox or good for a rabbit, and so one can speak about them with respect to ethics.
However, there is no reason to think that such animals disagree about ethics, since disagreement is a social activity that is seldom shared between species, and ethics requires actually thinking about what one has most reason to do or want. While it makes sense to adopt the intentional stance toward such animals to predict their behavior (a la Dennett), that still leaves us with little reason to regard them as things that reason about morality.
That said, there is good reason to think that dogs, for instance, have disagreements about ethics. Dogs have a good sense of what it takes to be a good dog, and will correct each other's behavior if it falls out of line (as mentioned w.r.t. wolves above).