Comment by Thom_Blake on Off Topic Thread: May 2009 · 2009-06-11T18:26:43.416Z · LW · GW

For the record, Thom_Blake is thomblake.

Comment by Thom_Blake on Another Call to End Aid to Africa · 2009-04-03T20:39:00.000Z · LW · GW

I must ask - what is the purpose of 'overcoming bias' now that 'less wrong' is launched? Why post this here instead of there?

Comment by Thom_Blake on Formative Youth · 2009-02-25T18:08:00.000Z · LW · GW

retired urologist,

There's a distinction to be made between altruism (ethical theory) and altruism (social science). The sense of altruism you use seems more to agree with the former. It seems like Eliezer prefers the latter. To summarize:

Altruism (ethical theory) is just like utilitarianism, except that good for oneself is entirely discounted.

Altruism (social sciences) is a 'selfless concern for others', in which one helps other people without conscious concern for one's personal interests (at least some of the time). It does not require that one abandon one's own interests in the pursuit of helping others all of the time.

Note that the latter is merely descriptive of behavior. Thus Eliezer can say "I behave altruistically" and "I am a utilitarian" (probably not direct quotes) simultaneously without contradiction.

It's getting to the point where ethicists have to define 'ethical x' for all 'x' to distinguish it from its use in other fields.

Comment by Thom_Blake on Wise Pretensions v.0 · 2009-02-20T18:58:14.000Z · LW · GW

I prefer this style. It's a much more interesting and entertaining read. It has a 'wisdom of the ancients' feel which, while obviously meant to be ironic, has (I think) a greater chance of being remembered in 1000 years.

Comment by Thom_Blake on Pretending to be Wise · 2009-02-20T15:03:56.000Z · LW · GW

I'm not sure about the judges example. There have been certain judges who've taken sides on high-profile issues (like abortion or gay marriage) and consequently their reputation turned to mud amongst those on the other side of the issue.

Comment by Thom_Blake on Epilogue: Atonement (8/8) · 2009-02-07T20:08:00.000Z · LW · GW

EY, but you are a moral realist (or at least a moral objectivist, which ought to refer to the same thing). There's a fact about what's right, just like there's a fact about what's prime or what's baby-eating. It's a fact about the universe, independent of what anyone has to say about it. If we were human', we'd be moral' realists talking about what's right', ne?

Comment by Thom_Blake on Epilogue: Atonement (8/8) · 2009-02-06T16:18:02.000Z · LW · GW

Anonymous, that sound you hear is probably people rushing to subscribe - note the comments.

Comment by Thom_Blake on True Ending: Sacrificial Fire (7/8) · 2009-02-06T14:34:00.000Z · LW · GW

Here there is an ambiguity between 'bias' and 'value' that is probably not going to go away. EY seems to think that bias should be eliminated but values should be kept. That might be most of the distinction between the two.

Comment by Thom_Blake on True Ending: Sacrificial Fire (7/8) · 2009-02-05T22:45:00.000Z · LW · GW

There is a tendency for some folks to distinguish between descriptive and normative statements, in the sense of 'one cannot derive an ought from an is' and whatnot. A lot of this comes from hearing about the "naturalistic fallacy" and believing this to mean that naturalism in ethics is dead. Naturalists in turn refer to this line of thinking as the "naturalistic fallacy fallacy", as the strong version of the naturalistic fallacy does not imply that naturalism in ethics is wrong.

As for the fallacy you mention, I disagree that it's a fallacy. It makes more sense to me to take "I value x" and "I act as though I value x" to be equivalent when one is being honest, and to take both of those as different from (an objective statement of) "x is good for me". This analysis of course only counts if one believes in akrasia - I'm really still on the fence on that one, though I lean heavily towards Aristotle.

Comment by Thom_Blake on True Ending: Sacrificial Fire (7/8) · 2009-02-05T21:36:01.000Z · LW · GW

Manon, thanks for pointing that out - I'd left that out of my analysis entirely. I too would like untranslatable 2. It doesn't change my answer though, as it turns out.

Comment by Thom_Blake on True Ending: Sacrificial Fire (7/8) · 2009-02-05T17:38:49.000Z · LW · GW

Behavior isn't an argument (except when it is), but it is evidence. And it's akrasia when you say, "Man, I really think spending this money on saving lives is the right thing to do, but I just can't stop buying ice cream" - not when you say "buying ice cream is the right thing to do". Even if you are correct in your disagreement with Simon about the value of ice cream, that would be a case of Simon being mistaken about the good, not a case of Simon suffering from akrasia. And I think it's pretty clear from context that Simon believes he values ice cream more.

And it sounds like that first statement is an attempt to invoke the naturalistic fallacy fallacy. Was that it?

Comment by Thom_Blake on True Ending: Sacrificial Fire (7/8) · 2009-02-05T16:22:27.000Z · LW · GW

I prefer the ending where we ally ourselves with the babyeaters to destroy the superhappies. We realize that we have more in common with the babyeaters, since they have notions of honor and justified suffering and whatnot, and encourage the babyeaters to regard the superhappies as flawed. The babyeaters will gladly sacrifice themselves blowing up entire star systems controlled by the superhappies to wipe them out of existence due to their inherently flawed nature. Then we slap all of the human bleeding-hearts that worry about babyeater children, we come up with a nicer name for the babyeaters, and they (hopefully) learn to live with the fact that we're a valuable ally that prefers not to eat babies but could probably be persuaded given time.

P.S. anyone else find it ironic that this blog has measures in place to prevent robots from posting comments?

Comment by Thom_Blake on True Ending: Sacrificial Fire (7/8) · 2009-02-05T14:39:45.000Z · LW · GW

And possibly billions of Huygens humans. Don't forget those.

Comment by Thom_Blake on Three Worlds Decide (5/8) · 2009-02-03T19:56:00.000Z · LW · GW

Humanity could always offer to sacrifice itself. Compare the world where humanity compromises with both the Babyeaters and the Super Happy, versus one where we convince them to not compromise and instead make everybody Super Happy.

Of course, I'm just guessing, since I'm not a Utilitarian.

Comment by Thom_Blake on Interlude with the Confessor (4/8) · 2009-02-02T17:24:18.000Z · LW · GW

That's not the idea I'm getting at all (free retaliation, etc). It seems more to me that these people can't imagine intentionally hurting or being distrustful of each other, and so when they say 'rape', think 'tickle fight'.

Comment by Thom_Blake on War and/or Peace (2/8) · 2009-01-31T20:05:41.000Z · LW · GW

That's what I was thinking. Perhaps the newcomer engineered this meetup somehow to see whether the two species are safe to contact.

Comment by Thom_Blake on Higher Purpose · 2009-01-23T14:34:48.000Z · LW · GW

This makes eudaimonist egoism seem simpler, more elegant by comparison. I don't need a stream of victims now, and I won't need them post-Singularity.

Comment by Thom_Blake on Failed Utopia #4-2 · 2009-01-22T19:44:00.000Z · LW · GW

Doug S,

Indeed. The AI wasn't paying attention if he thought bringing me to this place was going to make me happier. My stuff is part of who I am; without my stuff he's quite nearly killed me. Even more so when 'stuff' includes wife and friends.

But then, he was raised by one person, so there's no reason to think he wouldn't believe in a wrong metaphysics of self.

Comment by Thom_Blake on Failed Utopia #4-2 · 2009-01-21T20:54:28.000Z · LW · GW

I wonder the same thing. Given that reality is allowed to kill us, it seems that this particular dystopia might be close enough to good. How close to death do you need to be before unleashing the possibly-flawed genie?

Comment by Thom_Blake on Failed Utopia #4-2 · 2009-01-21T14:12:33.000Z · LW · GW

I must once again express my sadness that you are devoting your life to the Singularity instead of writing fiction. I'll cast my vote towards the earlier suggestion that perhaps fiction is a good way of reaching people and so maybe you can serve both ends simultaneously.

Comment by Thom_Blake on Sympathetic Minds · 2009-01-19T14:43:21.000Z · LW · GW

Agreed. Utilitarians are not to be trusted.

Comment by Thom_Blake on Eutopia is Scary · 2009-01-12T15:07:04.000Z · LW · GW

I don't find this surprising at all, other than that it occurred to a consequentialist. Being a virtue ethicist and something of a Romantic, it seems to me that the best world will be one of great and terrible events, where a person has the chance to be truly and tragically heroic. And no, that doesn't sound comfortable to me, or a place where I'd particularly thrive.

Comment by Thom_Blake on Sustained Strong Recursion · 2008-12-05T23:49:01.000Z · LW · GW

Jed, your comment (the second example, specifically) reminds me of the story about how the structure of DNA was discovered. Apparently the 'Eureka' moment actually came after the researchers obtained better materials for modeling.

Comment by Thom_Blake on Selling Nonapples · 2008-11-13T21:57:29.000Z · LW · GW

Tilden is another roboticist who's gotten rich and famous off unintelligent robots: BEAM robotics

Comment by Thom_Blake on Efficient Cross-Domain Optimization · 2008-10-28T17:08:17.000Z · LW · GW

Interesting idea... though I still think you're wrong to step away from anthropomorphism, and 'necessary and sufficient' is a phrase that should probably be corralled into the domain of formal logic.

And I'm not sure this adds anything to Sternberg and Salter's definition: 'goal-directed adaptive behavior'.

Comment by Thom_Blake on Which Parts Are "Me"? · 2008-10-22T19:29:34.000Z · LW · GW

I've yet to hear of anyone turning back successfully, though I think some have tried, or wished they could.

It seems to be one interpretation of the Buddhist project.

Regarding self, I tend to include much more than my brain in "I" - but then, I'm not one of those who thinks being 'uploaded' makes a whole lot of sense.

Comment by Thom_Blake on Ethics Notes · 2008-10-22T11:34:57.000Z · LW · GW

Anonymous: torture's inefficacy was well-known by the fourteenth century; Bernardo Gui, a famous inquisitor who supervised many tortures, argued against using it because it is only good at getting the tortured to say whatever will end the torture. I can't seem to find the citation, but here is someone who refers to it.

Comment by Thom_Blake on Ethical Injunctions · 2008-10-21T14:32:16.000Z · LW · GW

You should never, ever murder an innocent person who's helped you, even if it's the right thing to do

You should never, ever do X, even if you are exceedingly confident that it is the right thing to do

I believe a more sensible interpretation would be, "You should have an unbreakable prohibition against doing X, even in cases where X is the right thing to do" - the issue is not that you might be wrong about it being the right thing to do, but rather that not having the prohibition is a bad thing.

Comment by Thom_Blake on Ethical Inhibitions · 2008-10-20T13:59:48.000Z · LW · GW

pdf, the only reason that suggestion works is that we're not in the business of bombing headquarters at 2AM on a weekend. If both sides were scheduling bombings at 2AM, I'd bet they'd be at work at 2AM.

Comment by Thom_Blake on Dark Side Epistemology · 2008-10-19T02:26:00.000Z · LW · GW

"Everyone has a right to their own opinion" is largely a product of its opposite. For a long period many people believed "If my neighbor has a different opinion than I do, then I should kill him". This led to a bad state of affairs and, by force, a less lethal meme took hold.

Comment by Thom_Blake on Shut up and do the impossible! · 2008-10-09T16:48:11.000Z · LW · GW

Russell, I don't think that necessarily specifies a 'cheap trick'. If you start with a rock on the "don't let the AI out" button, then the AI needs to start by convincing the gatekeeper to take the rock off the button. "This game has serious consequences and so you should really play rather than just saying 'no' repeatedly" seems to be a move in that direction that keeps with the spirit of the protocol, and is close to Silas's suggestion.

Comment by Thom_Blake on Shut up and do the impossible! · 2008-10-09T13:58:06.000Z · LW · GW

I'm with Kaj on this. Playing the AI, one must start with the assumption that there's a rock on the "don't let the AI out" button. That's why this problem is impossible. I have some ideas about how to argue with 'a rock', but I agree with the sentiment of not telling.

Comment by Thom_Blake on Make an Extraordinary Effort · 2008-10-07T17:17:16.000Z · LW · GW

This doesn't seem to mesh with the Friendly AI goal of getting it perfectly right on the first try.

Do we accept some uncertainty and risk to do something extraordinary now, or do we take the slow, calm, deliberative course that stands a chance of achieving perfection?

Is there any chance of becoming a master of the blade without beginning to cut?

Comment by Thom_Blake on On Doing the Impossible · 2008-10-06T16:19:44.000Z · LW · GW

If history remembers you, I'd bet it will be for the journey more than its end. If the interesting introspective bits get published in a form that gets read, then I'd bet it will be memorable in the way that Lao zi or Sun zi is memorable. In case the Singularity / Friendly AI stuff doesn't work out, please keep up the good work anyway.

Comment by Thom_Blake on Use the Try Harder, Luke · 2008-10-02T18:15:11.000Z · LW · GW

scott clark,

I think your history is a bit off. The plan wasn't 'originally' for Luke to kill Vader, his father; it wasn't until midway through filming Empire (or at least, after the release of A New Hope) that Lucas decided that Vader was Luke's father.

Comment by Thom_Blake on Say It Loud · 2008-09-19T18:14:38.000Z · LW · GW

Fer a bit thar I were thinkin' that ye'd be agreein' with that yellow-bellied skallywag Hanson. Yar, but the Popperians ha' it! A pint of rum fer ol' Eliezer!

Comment by Thom_Blake on The Sheer Folly of Callow Youth · 2008-09-19T14:52:02.000Z · LW · GW

Avast! But 'ought' ain't needin' to be comin' from another 'ought', if it be arrived at empirically. Yar.

Comment by Thom_Blake on Raised in Technophilia · 2008-09-17T14:02:17.000Z · LW · GW

Several places in the US did have regulations protecting the horse industry from the early automobile industry - I'm not sure what "the present system" refers to as opposed to that sort of thing.

Comment by Thom_Blake on Psychic Powers · 2008-09-12T20:35:23.000Z · LW · GW

But if there are repeatable psi experiments, then why hasn't anyone won the million dollars (or even passed the relatively easy first round)?

Comment by Thom_Blake on Qualitative Strategies of Friendliness · 2008-08-30T03:36:05.000Z · LW · GW

You're forgetting the philosopher's dotted-line trick of making it clearer by saying it in a foreign language. "Oh, you thought I meant 'happiness' which is ill-defined? I actually meant 'eudaimonia'!"

Comment by Thom_Blake on Mirrors and Paintings · 2008-08-24T19:49:44.000Z · LW · GW

@Eliezer I mostly agree with Caledonian here. I disagree with much of what you say, and it has nothing to do with being 'fooled'. Censoring the few dissenters who actually comment is not a good idea if you have any interest in avoiding an echo chamber. You're already giving off the Louis Savain vibe pretty hard.

Comment by Thom_Blake on Hot Air Doesn't Disagree · 2008-08-16T15:44:27.000Z · LW · GW

Aristotelians may not be teaching physics courses (though I know of no survey showing that), but they do increasingly teach ethics courses. It makes sense to think of what qualities are good for a fox or good for a rabbit, and so one can speak about them with respect to ethics.

However, there is no reason to think that they disagree about ethics, since disagreement is a social activity that is seldom shared between species, and ethics requires actually thinking about what one has most reason to do or want. While it makes sense to attribute the intentional stance to such animals to predict their behavior (a la Dennett), that still leaves us with little reason to regard them as things that reason about morality.

That said, there is good reason to think that dogs, for instance, have disagreements about ethics. Dogs have a good sense of what it takes to be a good dog, and will correct each other's behavior if it falls out of line (as mentioned w.r.t. wolves above).