## Posts

## Comments

**gerg** on Simpson's Paradox · 2011-01-13T17:56:59.967Z · score: 3 (3 votes) · LW · GW

> Second, I don't believe you. I say it's always smarter to use the partitioned data than the aggregate data. If you have a data set that includes the gender of the subject, you're always better off building two models (one for each gender) instead of one big model. Why throw away information?

If you believe the OP's assertion

> Similarly, for just about any given set of data, you can find some partition which reverses the apparent correlation

then it is demonstrably false that your strategy always improves matters. Why do you believe that your strategy is better?
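The reversal itself is easy to demonstrate with the classic kidney-stone treatment numbers (a standard textbook illustration, not data from this thread): treatment A beats B within each subgroup, yet loses to B once the subgroups are pooled.

```python
# Illustrative (success, total) counts per treatment, by stone size.
groups = {
    "small": {"A": (81, 87),   "B": (234, 270)},  # A 93% vs B 87%
    "large": {"A": (192, 263), "B": (55, 80)},    # A 73% vs B 69%
}
for name, g in groups.items():
    print(name, {k: round(s / t, 2) for k, (s, t) in g.items()})

# Pooling the subgroups reverses the ordering:
for k in ("A", "B"):
    s = sum(groups[size][k][0] for size in groups)
    t = sum(groups[size][k][1] for size in groups)
    print("pooled", k, round(s / t, 2))  # A 0.78, B 0.83
```

The reversal happens because A was tried mostly on the hard (large-stone) cases, so the pooled comparison mixes unlike populations.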

**gerg** on Rationality Quotes: January 2011 · 2011-01-10T10:09:01.976Z · score: 0 (0 votes) · LW · GW

Ha - that post refers to Diax's Rake, which is what happened to spur me to find the Thucydides quote in the first place!

In other news, I've invented this incredible device I call a "wheel".

**gerg** on Rationality Quotes: January 2011 · 2011-01-03T21:08:12.670Z · score: 13 (15 votes) · LW · GW

> It is a habit of mankind to entrust to careless hope what they long for, and to use sovereign reason to thrust aside what they do not desire.

-- Thucydides

**gerg** on Rationality Quotes: December 2010 · 2010-12-09T02:48:28.937Z · score: 0 (0 votes) · LW · GW

Interesting nuance. You have taken "loses" to mean "defeated", presumably leading to "and therefore updated"; I agree that this is by no means an automatic process. But I took "loses" to mean "is less accurate" (which of course makes my interpretation more tautological).

**gerg** on Rationality Quotes: December 2010 · 2010-12-03T16:21:35.690Z · score: 6 (8 votes) · LW · GW

My first reading of this quote was essentially "the map loses to the terrain". I interpreted "theory" as "our beliefs" and "practice" as "reality".

**gerg** on Have no heroes, and no villains · 2010-11-12T07:57:02.975Z · score: 0 (0 votes) · LW · GW

Possibly, yes; but reading a discussion about a topic I know nothing about is hard, so I'm less likely to get anything out of it, even though the information is there in what you wrote. I'm claiming that the additional "distracting" material would actually serve as a hook, getting the reader interested enough to put effort into understanding the point of the post.

**gerg** on Have no heroes, and no villains · 2010-11-09T01:35:42.640Z · score: 1 (1 votes) · LW · GW

This post, which concentrated on people's commentary about a field of inquiry, could have been improved by including some summary of the field being commented on.

**gerg** on Politics as Charity · 2010-09-27T16:27:10.479Z · score: 1 (1 votes) · LW · GW

I'd need to read it again, with pen and paper, to gain an understanding of why the Student-t distribution is the right thing to compute. At the very least I can say this: the probability of one's vote tilting the election is certainly higher in very close elections (as measured beforehand by polls, say) than in an election such as Obama-McCain 2008. The article you quoted suggests the difference in probabilities is much higher than I anticipated. (Unless my calculation, which models the closest possible election, is incorrect.)

Edited to add: Okay, I've incorporated the probability p that the coin lands heads into the calculation. Even when p = 50.05% instead of 50% (closer than any presidential election since Garfield/Hancock), the chance of one vote tilting the election drops by over four orders of magnitude. So for practical purposes, my initial calculation is irrelevant. At least this was a good lesson in bias: this argument was easy to find once Wei's comment got me to consider the alternative in the first place.
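The size of that drop can be checked directly. In the 20,000,000-swing-voter model from the earlier comment, a tie needs exactly N = 10,000,000 heads out of 2N flips, and the tie probability at heads-probability p equals the fair-coin value times (4p(1-p))^N, so the drop factor is:

```python
import math

# Drop factor in tie probability when the coin's heads probability moves
# from 1/2 to p, with N = half of 20,000,000 swing voters.
N = 10_000_000
p = 0.5005
drop = math.exp(N * math.log(4 * p * (1 - p)))  # (4pq)^N, computed in log space
print(drop)  # ≈ 4.5e-5: the tie probability falls by over four orders of magnitude
```

With 4pq = 0.999999, the factor is essentially e^(-10), which is where "over four orders of magnitude" comes from.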

**gerg** on Politics as Charity · 2010-09-26T22:25:53.111Z · score: 0 (0 votes) · LW · GW

Jane estimates the probability of her vote tilting the presidential election at 1 in 1,000,000; Eric estimates the probability of his vote tilting the presidential election at 1 in 100,000,000. I find both of these estimates orders of magnitude too low.

Eric presumably is modeling the election by saying that with 100,000,000 voters (besides himself), there are 100,000,001 outcomes of their votes, only one of which is a tie which his vote will break. But his conclusion that the odds of deciding the election are about 1 in 100,000,000 assumes that all of these outcomes are equally probable, which is a hard-to-defend assumption.

If every other voter is flipping a fair coin to determine their vote, for example, then the probability of a tied vote is exactly 100,000,000! / [ (50,000,000!)^2 * 2^100,000,000], which is approximately 1/12,500. Moreover, I estimate that a solid 40% of the voters will vote Republican no matter what, and a solid 40% will vote Democrat no matter what. If the other 20,000,000 voters flip their fair coins, now the probability of a tied vote is approximately 1/5,600.

This model is oversimplified, of course, because factors that tend to bias individual votes (such as the current economy) will tend to bias many votes in the same direction. Still, I am much more confident in a 1-in-10,000 chance to affect the presidential election outcome than I am in 1-in-100,000,000.
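Both figures can be reproduced with a log-space binomial computation (a sketch of the calculation above; the quoted 1/12,500 and 1/5,600 are rounded):

```python
import math

def tie_prob(n_voters):
    """P[exactly n_voters/2 heads in n_voters fair coin flips],
    computed via lgamma to avoid overflowing the factorials."""
    half = n_voters // 2
    log_p = (math.lgamma(n_voters + 1) - 2 * math.lgamma(half + 1)
             - n_voters * math.log(2))
    return math.exp(log_p)

print(1 / tie_prob(100_000_000))  # ≈ 12,533: every voter flips a coin
print(1 / tie_prob(20_000_000))   # ≈ 5,605: only the 20M swing voters flip
```

(Equivalently, by Stirling's approximation the tie probability for 2N coin-flippers is about 1/sqrt(pi*N).)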

I also agree with Kaj's comment that my vote influences other people to vote, which would make the odds of affecting the outcome better still.

**gerg** on Rationality Lessons in the Game of Go · 2010-08-22T02:51:44.864Z · score: 0 (0 votes) · LW · GW

Don't worry: I don't know the rules of Go. I went to the linked site, but all I could find was a link to a link to a video tutorial, not a list of rules, so I stopped trying.

**gerg** on Bayes' Theorem Illustrated (My Way) · 2010-06-04T08:35:11.748Z · score: 2 (2 votes) · LW · GW

A presentation critique: psychologically, we tend to compare the relative *areas* of shapes. Your ovals in Figure 1 are scaled so that their *linear* dimensions (width, for example) are in the ratio 2:5:3; however, what we see are ovals whose areas are in the ratio 4:25:9, which isn't what you're trying to convey. I think this happens for later shapes as well, although I didn't check them all.
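The usual fix (a general plotting rule, not something from the post) is to scale each shape's linear dimensions by the square root of its weight, so that the areas come out in the intended ratio:

```python
import math

# Hypothetical weights matching Figure 1's intent: convey 2:5:3 by *area*.
weights = [2, 5, 3]
scales = [math.sqrt(w) for w in weights]  # linear dimensions to draw
areas = [s * s for s in scales]           # resulting areas: back to 2:5:3
print([round(s, 3) for s in scales])      # [1.414, 2.236, 1.732]
```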

**gerg** on Navigating disagreement: How to keep your eye on the evidence · 2010-04-26T05:52:13.425Z · score: 3 (3 votes) · LW · GW

> If my estimate is 1000, and someone else's is 300, that's too big a discrepancy to explain by minor variations. It casts doubt on the assumption of identical thermometers. Assuming that I only have the other people's estimates, and there's no opportunity for discussion, I'll search for reasons why we might have come up with completely different answers, but if I find no error in my own, I'll discard all such outliers.

What if *everyone* else's estimate is between 280 and 320? Do you discard your own estimate if it's an outlier? Does the answer depend on whether you can find an error in your reasoning?

**gerg** on Navigating disagreement: How to keep your eye on the evidence · 2010-04-26T05:46:22.497Z · score: 1 (1 votes) · LW · GW

I'm mathematically interested in this procedure; can you please provide a reference?

**gerg** on Frequentist Magic vs. Bayesian Magic · 2010-04-10T20:25:30.128Z · score: 1 (1 votes) · LW · GW

Good thing I didn't go with the username "this.is.she"!

**gerg** on Frequentist Magic vs. Bayesian Magic · 2010-04-10T01:38:32.803Z · score: -4 (10 votes) · LW · GW

> This is a belated reply to cousin_it's 2009 post Bayesian Flame

Not to be a grammar Nazi, but I believe it should be **cousin_its**....

**gerg** on Individual vs. Group Epistemic Rationality · 2010-03-04T19:32:46.593Z · score: 3 (3 votes) · LW · GW

Nominating adversarial legal systems as role models for rational groups, given how well they function in practice, seems a bit misplaced.

**gerg** on Test Your Calibration! · 2009-11-13T03:06:20.503Z · score: 5 (5 votes) · LW · GW

Part of the output of your quizzes is a line of the form "Your chance of being well calibrated, relative to the null hypothesis, is 50.445538580926 percent." How is this number computed?

I chose "25% confident" for 25 questions and got 6 of them (24%) right. That seems like a pretty good calibration ... but 50.44% chance of being well calibrated relative to null doesn't seem that good. Does that sentence mean that an observer, given my test results, would assign a 50.44% probability to my being well calibrated and a 49.56% probability to my not being well calibrated? (or to my randomly choosing answers?) Or something else?
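I don't know what formula the quiz actually uses, but under a plain binomial model (an assumption on my part), 6 correct out of 25 at 25% confidence is the single most likely outcome:

```python
from math import comb

# Assumed model: each 25%-confidence answer is an independent Bernoulli(0.25) trial.
p6 = comb(25, 6) * 0.25**6 * 0.75**19
print(round(p6, 3))  # ≈ 0.183, and 6 is the mode of Binomial(25, 0.25)
```

So on this model the result looks like excellent calibration, which makes the 50.44% figure all the more puzzling.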

**gerg** on Let them eat cake: Interpersonal Problems vs Tasks · 2009-10-07T17:55:40.533Z · score: 16 (18 votes) · LW · GW

> Taskifaction doesn't destroy romance any more than it destroys music or dance.

This one sentence alone is worth my upvote for its sheer truth. (Although

> Sucking at stuff is not sublime.

is a close second.)

**gerg** on Why Real Men Wear Pink · 2009-08-06T18:33:28.123Z · score: 7 (7 votes) · LW · GW

Poor kids had ghetto clothes first; rich kids had the clothes second, but ghetto *fashion* first.