Comments

Comment by David_J._Balan on Underconstrained Abstractions · 2008-12-15T19:13:38.000Z · LW · GW

This reminds me of the bit in Steven Landsburg's (excellent) book "The Armchair Economist" in which he makes the point that data on what happens on third down in football games is a very poor guide to what would happen on third down if you eliminated fourth down.

Comment by David_J._Balan on Inner Goodness · 2008-11-14T18:18:27.000Z · LW · GW

It seems to me that what's internal about morality is the buy-in, the acceptance that I ought to care about the other fella at all. But much of what remains is external in the sense that the specific rules of morality to which I subject myself are (at least to a large extent) the product of objective reasoning.

Comment by David_J._Balan on The Ritual · 2008-10-22T03:50:42.000Z · LW · GW

I like the idea of a fictional sequence involving a rationality master and students. But I can't stand the Jeffreyssai character. He's just so intolerably smug and self-satisfied, very much in the mold of some of the martial arts instructors I had when I was young. More recently I took boxing classes, and the teacher was like Mickey from the Rocky movies. Much better persona; Jeffreyssai should take note.

Comment by David_J._Balan on Rationality Quotes 15 · 2008-09-20T17:22:32.000Z · LW · GW

After I explained "percentile", he said "One in three hundred", so I laughed briefly and said "Yes."

The "Yes" part is fine. The "I laughed briefly" part would be better done away with.

Comment by David_J._Balan on Is Fairness Arbitrary? · 2008-08-18T19:18:48.000Z · LW · GW

My sister used to be a teacher in a special education school. She would sometimes let some kids do things that other kids weren't allowed to do; a kid particularly prone to some kind of negative reaction to an otherwise mandatory activity might be allowed not to participate (I don't recall exactly). When the other kids protested that it wasn't fair, she would reply: "fair is when everyone gets what they need, not when everyone gets the same." Not totally satisfactory, but in my mind not totally bogus either. How hungry each person is does have some bearing on what's a fair division of the pie.

Comment by David_J._Balan on Inseparably Right; or, Joy in the Merely Good · 2008-08-12T01:30:04.000Z · LW · GW

It seems to me like the word "axioms" belongs in here somewhere.

Comment by David_J._Balan on The Gift We Give To Tomorrow · 2008-07-18T03:26:14.000Z · LW · GW

Of course the feeling of love had to evolve, and of course it had to evolve from something that was not love. And of course the value of the love that we feel is not woven into the fabric of the universe; it's only valuable to us. But it's still a very happy thing that love exists, and it's also sort of a lucky thing; it is not woven into the fabric of the universe that intelligent beings (or any beings for that matter) have to have anything that feels as good as love does to us. This luck may or may not be "surprising" in the sense that it may or may not be the case that the evolution of love (or something else that feels as good to the one who feels it) is highly likely conditional on evolving intelligence. I don't know the actual answer to this, but the point is that I can at least conceive of a sense in which the existence of love might be regarded as surprising.

BTW, the same point can be made about the religious (specifically Protestant) origins of the Enlightenment. The Enlightenment wasn't always there, and it didn't fall from the sky, so it had to have its origins in something that wasn't the Enlightenment. To the extent that Protestantism had some attributes that made it fertile soil for the Enlightenment to grow from, great. But that doesn't make old-timey Protestantism liberal or good, and it certainly doesn't entitle contemporary Protestantism to a share of the credit for the Enlightenment's achievements.

Comment by David_J._Balan on The Genetic Fallacy · 2008-07-13T16:29:18.000Z · LW · GW

It is for this reason that Robert Aumann, super-smart as he is, should be entirely ignored when he opines about the Palestinian-Israeli conflict.

http://www.overcomingbias.com/2007/07/theyre-telling-.html

Comment by David_J._Balan on Where Recursive Justification Hits Bottom · 2008-07-13T02:34:29.000Z · LW · GW

It's clear that there are some questions to which there are no (and likely never will be) fully satisfactory answers, and it is also clear that there is nothing to be done about it but to soldier on and do the best you can (see http://www.overcomingbias.com/2007/05/doubting_thomas.html). However, there really are at least a few things that pretty much have to be assumed without further examination. I have in mind the basic moral axioms, like the principle that the other guy's welfare is something that you should be at all concerned about in the first place.

Comment by David_J._Balan on No Universally Compelling Arguments · 2008-06-29T20:58:42.000Z · LW · GW

I seem to recall that there is a strand of philosophy that tries to figure out what unproven axioms would be the minimum necessary foundation on which to build up something like "conventional" morality. They felt the need to do this precisely because of the multi-century failure of philosophers to come up with a basis for morality that was unarguable "all the way down" to absolute first principles. I don't know anything about AI, but it sounds like what Eliezer is talking about here has something of the same flavor.

Comment by David_J._Balan on Ghosts in the Machine · 2008-06-26T03:21:44.000Z · LW · GW

Along the lines of some of the commenters above, it's surely not telling Eliezer anything he doesn't already know to say that there are lots of reasons to be scared that a super-smart AI would start doing things we wouldn't like, even without believing that an AI is necessarily a fundamentally malevolent ghost that will wriggle out of whatever restraints we put it in.

Comment by David_J._Balan on OB Meetup: Millbrae, Thu 21 Feb, 7pm · 2008-02-01T18:27:11.000Z · LW · GW

Thursday, February 21st at 7:00 pm? Why bother? Surely the singularity thing will have happened by then. :)

Comment by David_J._Balan on Every Cause Wants To Be A Cult · 2007-12-12T19:38:00.000Z · LW · GW

Eliezer's reminder that even rationalists are human, and so are subject to human failings such as turning a community into a cult, is welcome. But it's a big mistake to dismiss explanations such as "Perhaps only people with innately totalitarian tendencies would try to become the world's authority on everything." There is a huge degree of heterogeneity across people in every relevant metric, including a tendency toward totalitarianism. I can't imagine that anyone disputes this. And if the selection process for being in a certain position tends to advantage people with those tendencies, so that they are selected into them, that might well explain a large part of how people in those positions behave.

Comment by David_J._Balan on The Robbers Cave Experiment · 2007-12-10T19:03:27.000Z · LW · GW

Why the hating on summer camp? The good ones are wonderful.

Comment by David_J._Balan on Applause Lights · 2007-09-11T18:53:06.000Z · LW · GW

The democracy booster probably meant that people with little political power should not be ignored. And that's not an empty statement; people with little political power are ignored all the time.

Comment by David_J._Balan on The Futility of Emergence · 2007-08-27T14:13:34.000Z · LW · GW

I'm pretty out of my depth here, but I'll echo what some people have said above. Before people started scientifically doing either one, would it have been obvious that a simple model would be very successful at predicting the behavior of, say, subatomic particles but would be very unsuccessful at predicting the weather? That is, it seems like there really are some phenomena where it is more true, and others where it is less true, that predictions can be generally and successfully made using straightforward intuitive models. It seems like "emergent" is just a (useful) label for the stuff where this can't be done.

Comment by David_J._Balan on Semantic Stopsigns · 2007-08-25T21:57:26.000Z · LW · GW

I think some theists would say that the "who made God" question is a semantic stop sign, but that this is OK. That is, they would say that they are not capable of probing into the question any further, but that the leaders of their religion (with the help of the sacred texts) are capable of doing so, and they bring back from the other side the answer that the religion is true and everything is OK.

As for liberal democracy, it's clearly an error to assert without further argument that liberal democracy will solve all future problems. But it is not a mistake to say that it is far and away the most successful thing that humans have ever come up with, and therefore the best framework in which to try to address future problems.

Comment by David_J._Balan on Hindsight bias · 2007-08-23T17:23:02.000Z · LW · GW

Of course it's always hard to know what the truth is in situations like this, but there appears to be evidence that the people who were actually in charge of preventing terrorism were actively worried about something much like what actually happened, and were ignored by their superiors.

Comment by David_J._Balan on Religion's Claim to be Non-Disprovable · 2007-08-08T15:54:39.000Z · LW · GW

Eliezer, you shouldn't have chased Anna away.

Comment by David_J._Balan on Two More Things to Unlearn from School · 2007-07-13T00:35:57.000Z · LW · GW

I'm very sympathetic to the idea contained in the post. In fact, I used to say something similar in my graduate student gig as a freshman orientation guy. But teaching and learning real thought are hard. Can everybody teach it and learn it? Is there any sense in which what goes on now is not optimal, but is (or at least is not too far from) constrained optimal?

Comment by David_J._Balan on Scope Insensitivity · 2007-05-14T03:26:05.000Z · LW · GW

The same idea goes for insisting that the charity you donate to is actually good at its mission. If you get your warm glow from the image of yourself as a good person, and if your dollars follow your glow, then competition among charitable organizations will take the form of trying to get good at triggering that self-image. If you get your glow from results, and if your dollars follow that, then charities will have much better incentives.

Comment by David_J._Balan on Just Lose Hope Already · 2007-02-26T21:34:22.000Z · LW · GW

There is a story for very young children called "The Carrot Seed" in which a little boy plants a carrot seed, waters it, and pulls out the weeds around it, and keeps on doing so even though everyone keeps telling him nothing will grow. At the end, of course, a giant carrot comes up. I've always had mixed feelings about reading that story. On the one hand, you don't want to send the message that things are true just because you believe them, and that evidence to the contrary doesn't matter. On the other hand, you do want to inoculate the kid against excessive self-doubt, and against taking too seriously people who, out of malice, or out of instinctual aversion to different ideas, or because the idea of someone else succeeding is an implicit rebuke to them for not having tried, love to tell people what they can't do.

Comment by David_J._Balan on Politics is the Mind-Killer · 2007-02-19T19:19:53.000Z · LW · GW

There is no doubt that politics gets people fired up, which makes dispassionate reasoning about it hard. On the other hand, politics is important, which makes dispassionate reasoning about it important as well. There is nothing wrong with deciding that this particular blog will not focus on politics. But to the extent that we do want to talk about politics here, I don't think the trick of finding some neutral historical example to argue about is going to work. First, historical examples that are obscure enough not to arouse passions one way or the other are exactly those things that most people don't know much about. Second, it's usually pretty obvious which side in the "neutral" example corresponds to the arguer's preferred side in the contemporary one, so the arguer is likely to just adopt that position and then claim to have derived it from first principles based on a neutral example. I agree that neutral exercises can have some use in uncovering subtle biases in people who are sincerely trying to avoid them, but they won't get rid of the flamers.