Comments

Comment by Kip_Werking on Nonperson Predicates · 2008-12-27T03:06:53.000Z · LW · GW

Note that there's a similar problem in the free will debate:

Incompatibilist: "Well, if a godlike being can fix the entire life story of the universe, including your own life story, just by setting the rules of physics, and the initial conditions, then you can't have free will."

Compatibilist: "But in order to do that, the godlike being would have to model the people in the universe so well that the models are people themselves. So there will still be un-modeled people living in a spontaneous way that wasn't designed by the godlike being. (And if you say that the godlike being models the models, too, the same problem arises in another iteration; you can't win that race, incompatibilist; it's turtles all the way down.)"

Incompatibilist: I'm not sure that's true. Maybe you can have models of human behavior that don't themselves result in people. But even if that's true, people don't create themselves from scratch. Their entire life stories are fixed by their environment and heredity, so to speak. You may have eliminated the rhetorical device used to make my point; but the point itself remains true.

At which point, the two parties should decide what "free will" even means.

Comment by Kip_Werking on Nonperson Predicates · 2008-12-27T03:00:17.000Z · LW · GW

The "problem" seems based on several assumptions:

  1. that there is an objectively best state of the world, to which a Friendly AI should steer the universe
  2. that pulling the plug on a Virtual Universe containing persons is wrong
  3. that there is something special about "persons," and we should try to keep them in the universe and/or make more of them

I'm not sure any of these are true. Regarding 3, even if there is an X that is special, and that we should keep in the universe, I'm not sure "persons" is it. Maybe it is simpler: "pleasure-feeling-stuff" or "happiness-feeling-stuff." Even if there is a best state of the universe, I'm not sure there are any persons in it at all. Or perhaps only one.

In other words, our ethical views (to the extent that godlike minds can sustain any) might find that "persons" are coincidental containers for ethically-relevant-stuff, and not the ethically-relevant-stuff itself.

The notion that we should try to maximize the number of people in the world, perhaps in order to maximize the amount of happiness in the world, has always struck me as taking the Darwinian carrot-on-the-stick one step too far.

Comment by Kip_Werking on The Bedrock of Morality: Arbitrary? · 2008-08-17T22:25:00.000Z · LW · GW

Michael Anissimov, August 14, 2008 at 10:14 PM asked me to expound.

Sure. I don't want to write smug little quips without explaining myself. Perhaps I'm wrong.

It's difficult to engage Eliezer in debate/argument, even in a constructive as opposed to adversarial way, because he writes so much material, and uses so many unfamiliar terms. So, my disagreement may just be based on an inadequate appreciation of his full writings (e.g. I don't read every word he posts on overcomingbias; although I think doing so would probably be good for my mind, and I eagerly look forward to reading any book he writes).

Let me just say that I'm a skeptic (or "anti-realist") about moral realism. I think there is no fact of the matter about what we should or should not do. In this tradition, I find the most agreement with Mackie (historically) and Joshua Greene at Harvard (today). I think Eliezer might benefit greatly from reading both of them. You can find Greene's Ph.D. thesis here:

http://www.wjh.harvard.edu/~jgreene/GreeneWJH/Greene-Dissertation.pdf

It's worth reading in its entirety.

Why am I a moral skeptic? Before I give good reasons, let me suggest some possibly bad ones: it's a "shocking" and unpopular position, and I certainly love to be a gadfly. So, if Eliezer and I do have a real disagreement here, it may be drawn along the same lines as our disagreement in the free will debate: Eliezer seems to have strong compatibilist leanings, and I'm more inclined towards non-realism about free will. Thus, Eliezer may be inclined to resist shocking or uncomfortable truths, or I may be overly eager to find them. That's one possible reason for my moral skepticism.

I certainly believe that any philosophical investigations which lead people to generally safe and comfortable positions, in which common sense is vindicated, should give us pause. And people who see their role as philosopher as vindicating common sense, and making cherished beliefs safe for the world, are dishonoring the history of philosophy, and doing a disservice to themselves and the world. To succeed in that project, fully at least, one must engage in the sort of rationalization Eliezer has condemned over and over.

Now let me give my good reasons:

P1. An essential aspect of what it means for something to be morally right is that it is not morally right just because everyone agrees that it is. Thus, everyone agrees that if giving to charity, or sorting pebbles, is morally right, it is not right merely because everyone says so. It is right in some deeper sense.

P2. But, all we have to prove that giving to charity, etc., is right, is that everyone thinks it is (to the extent they do, which is not 100%).

You might say: well, giving to charity increases the sum amount of happiness in the world, or is more fair, or follows some Kantian rule. But, then again, we ask: why? And the only answer seems to be that everyone agrees that happiness should be maximized, or fairness maximized, or that rule followed. But, as we said when we started, the fact that everyone agreed wasn't a good enough reason.

So we're left with reasons which we already agree are not good enough. We can only get around this through fancy rationalization, and in particular by forgetting P1.

Eliezer offers his own reasons for believing something is right:

"The human one, of course; not because it is the human one, but because it is right. I do not know perfectly what is right, but neither can I plead entire ignorance."

What horribly circular logic is that? It's right because it's right?

The last few words present a link to another article. And there you find quotes like these:

"Why not accept that, ceteris paribus, joy is preferable to sorrow?"

"You might later find some ground within yourself or built upon yourself with which to criticize this - but why not accept it for now? Not just as a personal preference, mind you; but as something baked into the question you ask when you ask "What is truly right"?"

"Are you willing to relinquish your Socratean ignorance?"

This is special pleading. It is hand waving. It is the sort of insubstantial, waxing poetic that pastors use to captivate their audiences, and young men use to romance young women. It is a sweet nothing. It should make you feel like you're being dealt with by a used car salesman; that's how I feel when I read it.

The question isn't "why not prefer joy over sorrow?" That's a wild card that can justify anything (just flip it around: "why not prefer sorrow over joy?"). You might not find a decisive reason against preferring joy to sorrow, but that's just because you're not going to find a decisive reason to believe anything is right or wrong. Any given thing might make the world happier, or follow a popular rule, but what makes that "right"? Nothing. The problem above, involving P1 and P2, does not go away.

The content of morality is not baked into the definitions of words in our moral vocabulary, either (as Eliezer implies when he writes: "you will have problems with the meaning of your words, not just their plausibility"---another link). Definitions are made by agreement and, remember, P1 says that something can't be moral just because everyone agrees that it is. The language of morality just refers to what we should do. The words themselves, and their definitions, are silent about what the content of that morality is, about what the things are that we should actually do.

So I seem to disagree with Eliezer quite substantially about morality, and in a similar way to how we disagree about free will.

Finally, I can answer the question: what scares me about Eliezer's view? Certainly not that he loves joy and abhors suffering so much. Believe me when I say, about his mission to make the universe one big orgasm: godspeed.

Rather, it's his apparent willingness to compromise his rationalist and critical thinking principles in the process. The same boy who rationalized a way into believing there was a chocolate cake in the asteroid belt, should know better than to rationalize himself into believing it is right to prefer joy over sorrow.

What he says sounds nice, and sexy, and appealing. No doubt many people would like for it to be true. As far as I can tell, it generally vindicates common sense. But at what cost?

Joy feels better than sorrow. We can promote joy instead of sorrow. We will feel much better for doing so. Nobody will be able to criticize us for doing the wrong thing. The world will be one big orgasm. Let's satisfy ourselves with that. Let's satisfy ourselves with the merely real.

Comment by Kip_Werking on The Bedrock of Morality: Arbitrary? · 2008-08-15T01:55:22.000Z · LW · GW

I find Eliezer's seemingly-completely-unsupported belief in the rightness of human benevolence, as opposed to sorting pebbles, pretty scary.

Comment by Kip_Werking on Could Anything Be Right? · 2008-07-18T19:55:01.000Z · LW · GW

"I can't abjure my own operating system."

We don't need to get into thorny issues involving free will and what you can or can't do.

Suffice it to say that something's being in our DNA is neither sufficient nor necessary for it to be moral. The tablet and our DNA are relevantly similar in this respect.

Comment by Kip_Werking on Could Anything Be Right? · 2008-07-18T15:21:45.000Z · LW · GW

I should add: when discussing morality, I think it's important to give the anti-realist's position some consideration (which doesn't seem to happen in the post above). See Joshua Greene's The Terrible, Horrible, No Good, Very Bad Truth About Morality and What To Do About It, and J.L. Mackie's Ethics: Inventing Right and Wrong.

Comment by Kip_Werking on Could Anything Be Right? · 2008-07-18T15:03:47.000Z · LW · GW

As far as I can tell, Eliezer is concluding that he should trust part of his instincts about morality because, if he doesn't, then he won't know anything about it.

There are multiple arguments here that need to be considered:

  1. If one doesn't know anything about morality, then that would be bad; I wanna know something about morality, therefore it's at least somewhat knowable. This argument is obviously wrong, when stated plainly, but there are hints of it in Eliezer's post.

  2. If one doesn't know anything about morality, then that can't be morality, because morality is inherently knowable (or knowable by definition). But why is morality inherently knowable? I think one can properly challenge this idea. It seems prima facie plausible that morality, and/or its content, could be entirely unknown, at least for a brief period of time.

  3. If one doesn't know anything about morality, then morality is no different than a tablet saying "thou shalt murder." This might be Eliezer's primary concern. However, this is a concern about arbitrariness, not a concern about knowability. The two concerns seem to me to be orthogonal to each other (although I'd be interested to hear reasons why they are not). An easy way to see this is to recognize that the subtle intuitions Eliezer wants to sanction as "moral" are just as arbitrary as the "thou shalt murder" precept on the tablet. That is, there seems to be no principled reason for regarding one, and not the other, as non-arbitrary. In both cases, the moral content is discovered, not chosen; one just happens to be discovered in our DNA, the other in a tablet.

So, in view of all three arguments, it seems to me that morality, in the strong sense Eliezer is concerned with, might very well be unknowable, or at least is not, in principle, always partly known. (And we should probably concern ourselves with the strong sense, even if it is more difficult to work with, if our goal is to build an AI to rewrite the entire universe according to our moral code of choice, whatever that may turn out to be.) This was his original position, it seems, and it was motivated by concerns about "mere evolution" that I still find quite compelling.

Note that, if I understand Eliezer's view correctly, he currently plans on using a "collective volition" approach to friendly AI, whereby the AI will want to do whatever very-very-very-very smart future versions of human beings want it to do (this is a crude paraphrasing). I think this would resolve the concerns I raise above: such a smart AI would recognize the rightness or wrongness of any arguments against his view, like those I raise above, as well as countless other arguments, and respond appropriately.

Comment by Kip_Werking on Will As Thou Wilt · 2008-07-09T02:28:02.000Z · LW · GW

This is one of my favorite quotes (and one of only two I post on my facebook page, the other being "The way to love something is to realize that it might be lost", which is cited at the top of the scarcity chapter in Cialdini's Influence).

I'm not sure if I interpret it the same way as Schopenhauer (who was batsh** crazy as far as I can tell), but I take it to mean this:

Control bottoms out. In the race between A, "things influencing/determining how you decide/think/act" and B, "your control over these things that influence/determine how you decide/think/act", A will always win. The desire for infinite control, control that doesn't bottom out, that bootstraps itself out of nothingness (what some people have associated with free will), is doomed to frustration.

[In fact, Einstein cites exactly this quote in explaining why he didn't believe in free will: "In human freedom in the philosophical sense I am definitely a disbeliever. Everybody acts not only under external compulsion but also in accordance with inner necessity. Schopenhauer's saying, that "a man can do as he will, but not will as he will," has been an inspiration to me since my youth up, and a continual consolation and unfailing well-spring of patience in the face of the hardships of life, my own and others'. This feeling mercifully mitigates the sense of responsibility which so easily becomes paralyzing, and it prevents us from taking ourselves and other people too seriously; it conduces to a view of life in which humour, above all, has its due place."]

Schopenhauer draws the line between action and will: we choose how we act, given our will, but we don't choose how we will. Many would take issue with that. But it doesn't really matter where you draw the line; the point is that eventually the line will be drawn. Someone might say: "oh, I choose how I will!" And then Schopenhauer might say (I like to think): "oh really, and what is this choice based on? Did you choose that?"

To some people, the fact that we don't have this ultimate control (free will, if you like) is obvious. "Of course we don't have that kind of free will, it's obviously non-existent, because it's logically impossible." But not all necessary truths are obvious, and most people are happy to believe in logical impossibilities---just pick up a philosophy of religion book and read about the many paradoxes associated with a perfectly loving, just, omnipresent, and omnipotent (etc.) God.

Note also that Schopenhauer's insight has a consequence: because everything we do, our entire lives, can be traced back to things entirely outside of our control, it follows that a sufficiently powerful and intelligent being could design our entire lives before we are born. Our entire life story, down to the last detail, could have been predetermined and preprogrammed (assuming the universe is deterministic in the right way). Most people don't realize how interesting Schopenhauer's insight is, or at least the kernel of truth I think it captures, until you phrase it in those dramatic terms.

Comment by Kip_Werking on What Would You Do Without Morality? · 2008-06-29T06:05:33.000Z · LW · GW

I'm already convinced that nothing is right or wrong in the absolute sense most people (and religions) imply.

So what do I do? Whatever I want. Right now, I'm posting a comment to a blog. Why? Not because it's right. Right or wrong has nothing to do with it. I just want to.

Comment by Kip_Werking on The Ultimate Source · 2008-06-16T01:00:54.000Z · LW · GW

"You might as well say that you can't possibly choose to run into the burning orphanage, because your decision was fully determined by the future fact that the child was saved."

I don't see how that even begins to follow from what I've said, which is just that the future is fixed2 before I was born. The fixed2 future might be that I choose to save the child, and that I do so. That is all consistent with my claim; I'm not denying that anyone chooses anything.

"If you are going to talk about causal structure, the present screens off the past."

If only that were true! Unfortunately, even non-specialists have no difficulty tracing causal chains well into the past. The present might screen off the past if the laws of physics were asymmetrical (if multiple pasts could map onto the same future)---but this is precisely what you deny in the same comment. The present doesn't screen off the past. A casual observation of a billiards game shows this: ball A causes ball B to move, which hits ball C, causing ball C to move and hit ball D, etc. (Caledion makes the same point above).

I'm not sure how long you're willing to keep the dialogue going (as Honderich says, "this problem gets a hold on you" and doesn't let go), but I appreciate your responses. There's a link from the Garden of Forking Paths here now, too.

Comment by Kip_Werking on The Ultimate Source · 2008-06-15T22:13:27.000Z · LW · GW

Eliezer,

The subtle ambiguity here is between two meanings of "is fixed":

  1. going from a state of being unfixed to a state of being fixed
  2. being in a state of being fixed

I think you are interpreting me to mean 1. I only meant 2, and that's all that I need. That the future is fixed2, before I am born, is what disturbs people, regardless of when the moment of fixing1 happens (if any).

KTW

Comment by Kip_Werking on The Ultimate Source · 2008-06-15T21:43:15.000Z · LW · GW

Eliezer,

"I am not attached to the phrase "free will", though I do take a certain amount of pride in knowing exactly which confusion it refers to, and even having saved the words as still meaning something. Most of the philosophical literature surrounding it - with certain exceptions such as your own work! - fails to drive at either psychology or reduction, and can be discarded without loss."

Your modesty is breathtaking!

"Fear of being manipulated by an alien is common-sensically in a whole different class from fear of being deterministic within physics. You've got to worry about what else the alien might be planning for you; it's a new player on the board, and a player who occupies an immensely superior position."

Sure, they are not identical. But they are relevantly similar, because whether the-world-before-you-were-born or God/aliens/machine did the work, it wasn't you, and that's what people want: they want to be the ones doing whatever it is that the-world-before-you-were-born or God/aliens/machine did. At least, they want it to be the case that they are not vulnerable to the whims of these entities. If a God/alien/machine might be saintly or malicious, and design my life accordingly, the thousand monkeys of natural selection banging away on their typewriters hardly makes people feel better about free will.

"Your entire life destiny is deterministic(ally branching) given the past, but it was not written before you were born."

I didn't say "written", I said "fixed". And it clearly is fixed. Given determinism, there is only one future, and that future is fixed/settled/decided/unchangeable - however you want to say it - given the laws of nature and initial state.

"So you can see why I might want to rescue even "free will" and not just the sensation of freedom; what people fear, when they fear they do not have free will, is not the awful truth."

Well, this is an empirical claim, and data may one day decide it. It seems to me that your view is:

  1. People think of free will as that power which prevents it from being the case that state A of the universe determines state C regardless of what person B, in between, does.

While I think:

  1. People think of free will as that power which prevents it from being the case that our destinies are fixed before we are born.

Regarding 1, I think non-specialists (and specialists) can make huge mistakes when thinking about free will. But I don't think this is one of them. I don't think anybody worries about it being the case that:

"If I stop typing this post right now, the post will still get typed, because, damn it, I don't have free will! No matter what, the post will get typed. I could go take a shower, wash my hair, and drive to Maryland, but those keys will still be magically clicking away. And that terrifies me! I hope I have free will, so I can prevent that from happening."

Nobody thinks that, just as nobody thinks "I have a desire to pick up an apple, I'm not sure exactly where it came from, but it sure is powerful (perhaps not so powerful as to leave no doubt in my mind about whether I will pick it up---if that detail concerns you), but powerful enough, and look, lo and behold, I am exercising my local control over the apple, to satisfy my desire, wherever it came from, and now I am picking it up! What should I call this marvelous, wonderful power? Let's call it free will." Nobody said that either.

The one thing people have said, since the Greeks, through the Middle Ages, through the Scientific Revolution, and onward is: if determinism is true, my life destiny is fixed before I am born, and if indeterminism is true, that doesn't help. I sure hope I have free will, so I can prevent this from being the case.

But I don't have any data to support these assertions about what people think when they worry about free will and use the term. I don't think anybody has that data, and the controversy may not be resolved until someone does (and perhaps not even then).

Comment by Kip_Werking on The Ultimate Source · 2008-06-15T15:21:14.000Z · LW · GW

Eliezer,

You may be referring to my draft paper "THE VIEW FROM NOWHERE THROUGH A DISTORTED LENS: THE EVOLUTION OF COGNITIVE BIASES FAVORING BELIEF IN FREE WILL". I don't think I've bothered to keep the paper online, but I remember you having read at least part of it, and the latest draft distinguishes between "actual control" and "novelist control". I believe earlier drafts referred to "control" and "control*".

I'm really glad to see someone as bright as you discussing free will. Here are some comments on this post:

  1. Like you, I think "The phrase means far too many things to far too many people, and you could make a good case for tossing it out the window." And like you, I nevertheless find myself strongly pulled towards one view in the debate, and writing page after page defending it. Maybe saying that free will is poorly defined just isn't enough fun to satisfy me.

On the Garden of Forking Paths I said something to the effect (can't find the post now): Mathematicians, because they strictly define their terms, have no difficulty admitting when a problem is too vague to have a solution. Just look at the list of Hilbert's 23 problems:

http://en.wikipedia.org/wiki/Hilbert's_problems

Many of them, like the 4th and 21st problems, are resolved and the answer is "we don't know! you have to be more precise! what exactly are you asking?"

Philosophers do not seem to have the same ability. I can't think of a single problem, involving any of philosophers' favorite fuzzy words like "God", "soul", "evil", "consciousness", "right", "wrong", "knowledge", where philosophers have said, with consensus, "actually, we figured out that the particular question doesn't have an answer, because you have to be more precise with your terms." And philosophers don't like to argue about terms that refer uncontroversially (or much less controversially) to things we can inspect in the real world, like the laptop on which I'm writing this post. They prefer to argue about things that remain arguable.

(It makes me wonder whether philosophers have perverse incentives, like in the medical profession, to actually not solve problems, but keep them alive and worked on.)

  2. Personally, I lean towards no-free-will views. And, in doing that, I defend what I call a cognitive-biases+semantic-ambiguity view. The semantic ambiguity part is, as I just discussed, the idea that "free will" is too vague to work with.

[On this note, we shouldn't just stop when we come to this conclusion, and defend our pet-favorite-definition, or lack thereof, without convincing anybody else. If we say "free will" is poorly defined, and nobody believes us, because they all prefer their favorite definitions of free will, with which their positions in the debate win, we won't get anywhere. Instead, what are needed, I think, are large scale studies/surveys investigating how people use 'free will', and what they think the term means. Such studies should show, if we are right, that there is enormous variation in how people use the term, and what they think it means, and that people hardly use the term at all anyway. Then we would have knock-down evidence that should persuade many or most of the (more reasonable) philosophers working on this topic.]

The other part of my view, the cognitive biases view, is the part that pulls me to no-free-will-ism. This is what I discuss in my paper, mentioned above, about novelist control. I remember you rightly accusing me of having thrown "the kitchen sink" at the problem. While there is certainly a kernel of truth to that, and I would like to rewrite several paragraphs in the paper, I stand by most of what I wrote, and note in my defense that I only discuss about 15 of the approximately 100 biases listed on Wikipedia---I tried to leave much of the sink alone.

And while I see that you discuss a few cognitive biases / confusing sensations related to "free will", you don't mention ones I would consider important: the fundamental attribution error, the illusion of control, the just-world phenomenon, positive-outcome bias, etc.

  3. My pet definition of free will. You seem to have your own favorite definition of free will, with which compatibilism wins (and an extreme one at that, based on your comment in the other post about a person still being responsible despite just being instantiated a couple of seconds ago to commit some good/bad deed). Although I think the meaning of "free will" should be determined by how people tend to use the term, I have my own favorite definition, on which we don't have free will. I prefer my definition to yours for at least the following reasons:

A. On your definition, free will is something that people uncontroversially have. Nobody ever doubted that people have the sort of local control you discuss. Nobody ever doubted that people are more like computers than rocks. So, compatibilist definitions of free will are boring, and odd, to me for at least that reason.

In contrast, although it would be absurd for people to believe they have novelist control or something like it, it is not absurd to believe that people often believe absurdities, especially positive, anthropocentric ones about themselves, their special possessions, powers, and abilities, and their place in the universe. This is the same species that believed that the sun revolved around the earth, that a loving God created us and wants us to worship him, that we all possess immaterial souls, etc.

Thus, if you're willing to say that God, souls, etc., do not exist, but draw the line and say "wait a minute, I'm willing to deny the existence of all of these other absurdities, but I'm not going to give you free will. [Maybe adding: that cuts too close.] I'm even willing to redefine the term, as Dennett does, before admitting defeat", then you fit Tamler Sommers's wonderful observation that "[p]hilosophers who reject God, Cartesian dualism, souls, noumenal selves, and even objective morality, cannot bring themselves to do the same for the concepts of free will and moral responsibility." There seems to be some tension here.

B. On my pet definition of free will, the one I came into the debate with, and strongly feel pulled towards, free will is that power which solved an apparent problem: that my entire life destiny was fixed, before I was born, by circumstances outside of my control. This is what disturbed me (or relieved me, depending on my mood, I suppose), when I first considered the problem. And, more importantly, this is what I think motivated most people, today and throughout history, when discussing free will. Going all the way back to the Greeks, then to Augustine and the Middle Ages, through the scientific revolution, when people were talking about free will, they were generally talking about this problem: that our fate is fixed before we are born (at least if the world is deterministic, as seemed plausible for so long and even today; and if it isn't deterministic, that doesn't seem to help).

In other words, when people were talking about free will, they were not considering the uncontroversial, local control and powers they have. Nobody said "hmm, even if an alien created me five seconds ago to pick up this apple, and implanted within me a desire to pick up this apple, and therefore now I have that desire, and look, lo and behold, I am picking up the apple. What should I call this amazing, beautiful, wonderful power? I know, let's call it free will!" Admitting that this is a bit of a straw man, but with a good point behind it, I submit that nobody ever talked about free will in a way even remotely close to this.

The point is this: you, Eliezer (and Dennett, McKenna etc.) might be cool customers, but the idea of an alien/God/machine creating me five seconds ago, implanting within me a desire/value to pick up an apple, and then having the local control to act on that desire/value SCARES THE LIVING FU** OUT OF PEOPLE—and not just because of the alien/God/machine.

Nobody, except for a handful of clever intellectuals like yourself, ever thought that free will was supposed to be consistent with situations like that. Rather, my strong suspicion (the reason I lean towards "free will doesn't exist" instead of "what is free will? tell me what it means and I'll tell you if it exists") is that "free will" was designed and intended to protect us from exactly and precisely that vulnerability.

Of course, nothing can protect us from that vulnerability. We can't build our own lives/characters, even with a time machine; we're denied by logic even more than physics. So free will never developed a clear definition. In accordance with the law of conjunction, the more philosophers said about free will (or God), the more details crafty philosophers were able to knock out. And so the term shed more and more of itself (like the Y chromosome) until it was little more than a LISP token: that thing that protects us from our fates being fixed before we're born. How? "Shhhhh. Silly child, we're not supposed to ask such questions."

This is at least a rough sketch of where I stand on the free will debate, one of the few intellectual topics on which I feel knowledgeable enough to really engage you. I work a lot, and don't read about free will as much as I used to, but this is my current position. I think we just need more data.

Comment by Kip_Werking on Penguicon & Blook · 2008-03-13T21:56:55.000Z · LW · GW

I died laughing.

Comment by Kip_Werking on Posting on Politics · 2008-01-01T19:10:26.000Z · LW · GW

Not even Ron Paul?

Comment by Kip_Werking on Think Like Reality · 2007-05-02T23:03:04.000Z · LW · GW

This is a great post. I just want to add: we might fail to understand physics and mass murderers for different reasons. When a terrorist slams a jet into a skyscraper, someone can say "I don't understand why that person did that. It's bizarre." But they seem to fail to understand because victims have a biased recall of transgressions (according to the work by Baumeister on the myth of pure evil). Perpetrators seem to actually have more accurate and complete understandings of transgressions. This is one of my favorite findings from social science.

In contrast, we seem to think physics is bizarre for different reasons.

Comment by Kip_Werking on Knowing About Biases Can Hurt People · 2007-04-06T03:42:42.000Z · LW · GW

As someone who seems to have "thrown the kitchen sink" of cognitive biases at the free will problem, I wonder if I've suffered from this meta-bias myself. I find only modest reassurance in the facts that: (i) others have agreed with me and (ii) my challenge for others to find biases that would favor disbelief in free will has gone almost entirely unanswered.

But this is a good reminder that one can get carried away...

Comment by Kip_Werking on Just Lose Hope Already · 2007-02-25T02:44:41.000Z · LW · GW

Good point. Robin's comment, and Eliezer's post, remind me of this excellent article at The Situationist:

http://thesituationist.wordpress.com/2007/02/20/dispositionist-situational-characters/

Comment by Kip_Werking on Just Lose Hope Already · 2007-02-25T01:49:31.000Z · LW · GW

Excellent post. And very relevant, after Valentine's Day.