Please avoid the biased default of Alice (female) being the assistant and Bob (male) being the higher-ranking person. Varying names in general is desirable, not only to avoid these pitfalls, but also to force ourselves to recognize that we tend to choose stereotypically white names that are not even representative of our own communities, much less the global community.
Second, I don't believe you. I say it's always smarter to use the partitioned data than the aggregate data. If you have a data set that includes the gender of the subject, you're always better off building two models (one for each gender) instead of one big model. Why throw away information?
If you believe the OP's assertion
Similarly, for just about any given set of data, you can find some partition which reverses the apparent correlation
then it is demonstrably false that your strategy always improves matters. Why do you believe that your strategy is better?
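For concreteness, here is a toy illustration of the reversal the OP describes (a minimal sketch with invented numbers, the familiar Simpson's-paradox pattern): within each subgroup the x-y relationship is negative, yet the pooled data show a positive correlation, so the single aggregate model and the two partitioned models give opposite answers.

```python
import numpy as np

# Invented toy data: two subgroups, each with a clearly *negative* x-y
# relationship, but the subgroup with the larger x values also sits higher in y.
group_a = np.array([[1, 10], [2, 9], [3, 8]])
group_b = np.array([[7, 16], [8, 15], [9, 14]])
pooled = np.vstack([group_a, group_b])

def corr(data):
    return np.corrcoef(data[:, 0], data[:, 1])[0, 1]

print(corr(group_a), corr(group_b))  # -1.0 and -1.0: negative within each subgroup
print(corr(pooled))                  # ~ +0.86: the sign reverses in the aggregate
```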
Ha - that post refers to Diax's Rake, which is what spurred me to find the Thucydides quote in the first place!
In other news, I've invented this incredible device I call a "wheel".
It is a habit of mankind to entrust to careless hope what they long for, and to use sovereign reason to thrust aside what they do not desire.
-- Thucydides
Interesting nuance. You have taken "loses" to mean "defeated", presumably leading to "and therefore updated"; I agree that this is by no means an automatic process. But I took "loses" to mean "is less accurate" (which of course makes my interpretation more tautological).
My first reading of this quote was essentially "the map loses to the terrain". I interpreted "theory" as "our beliefs" and "practice" as "reality".
Possibly, yes; but reading a discussion about a topic I don't know anything about is hard, so I'm less likely to get anything out of it, even though it's there in what you wrote. I'm claiming that the additional "distracting" material would actually serve as a hook to get the reader interested in putting effort into understanding the point of the post.
This post, which concentrated on people's commentary about a field of inquiry, could have been improved by including some summary of the field being commented on.
I'd need to read it again, with pen and paper, to gain an understanding of why the Student-t distribution is the right thing to compute. At the very least I can say this: the probability of one's vote tilting the election is certainly higher in very close elections (as measured beforehand by polls, say) than in an election such as Obama-McCain 2008. The article you quoted suggests the difference in probabilities is much higher than I anticipated. (Unless my calculation, which models the closest possible election, is incorrect.)
Edited to add: Okay, I've incorporated the probability p that the coin lands heads into the calculation. Even when p=50.05% instead of 50% (closer than any presidential election since Garfield/Hancock), the chance of one vote tilting the election drops by over four orders of magnitude. So for practical purposes, my initial calculation is irrelevant. - At least this was a good lesson in bias: this argument was easy to find, once Wei's comment got me to consider the alternative in the first place.
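(A minimal sketch of that calculation, assuming the 20,000,000 effectively coin-flipping voters from the model in the next comment: relative to a fair coin, a tie becomes less likely by the factor (4p(1-p))^(n/2).)

```python
from math import exp, log

# Assumption: n = 20,000,000 swing voters, as in the model in the next comment.
# Relative to p = 0.5, the tie probability shrinks by a factor of (4*p*(1-p))**(n/2).
n, p = 20_000_000, 0.5005
print(exp((n / 2) * log(4 * p * (1 - p))))  # ~4.5e-05: a bit more than four orders of magnitude
```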
Jane estimates the probability of her vote tilting the presidential election at 1 in 1,000,000; Eric estimates the probability of his vote tilting the presidential election at 1 in 100,000,000. I find both of these estimates orders of magnitude too low.
Eric presumably is modeling the election by saying that with 100,000,000 voters (besides himself), there are 100,000,001 outcomes of their votes, only one of which is a tie which his vote will break. But his conclusion that the odds of deciding the election are about 1 in 100,000,000 assumes that all of these outcomes are equally probable, which is a hard-to-defend assumption.
If every other voter is flipping a fair coin to determine their vote, for example, then the probability of a tied vote is exactly 100,000,000! / [ (50,000,000!)^2 * 2^100,000,000], which is approximately 1/12,500. Moreover, I estimate that a solid 40% of the voters will vote Republican no matter what, and a solid 40% will vote Democrat no matter what. If the other 20,000,000 voters flip their fair coins, now the probability of a tied vote is approximately 1/5,600.
This model is oversimplified, of course, because factors that tend to bias individual votes (such as the current economy) will tend to bias many votes in the same direction. Still, I am much more confident in a 1-in-10,000 chance to affect the presidential election outcome than I am in 1-in-100,000,000.
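(For anyone who wants to check these numbers, here is a minimal sketch that evaluates the exact binomial tie probability via log-gamma; it is an illustration, not part of the original argument.)

```python
from math import exp, lgamma, log

def tie_probability(n, p=0.5):
    """Chance that n voters, each independently voting one way with
    probability p, split exactly evenly (n must be even)."""
    k = n // 2
    log_pmf = lgamma(n + 1) - 2 * lgamma(k + 1) + k * (log(p) + log(1 - p))
    return exp(log_pmf)

print(tie_probability(100_000_000))  # ~8.0e-05, roughly 1 in 12,500
print(tie_probability(20_000_000))   # ~1.8e-04, roughly 1 in 5,600
```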
I also agree with Kaj's comment that my vote influences other people to vote, which would make the odds of affecting the outcome better still.
Don't worry: I don't know the rules of Go. I went to the linked site, but all I could find was a link to a link to a video tutorial, not a list of rules, so I stopped trying.
A presentation critique: psychologically, we tend to compare the relative areas of shapes. Your ovals in Figure 1 are scaled so that their linear dimensions (width, for example) are in the ratio 2:5:3; however, what we see are ovals whose areas are in ratio 4:25:9, which isn't what you're trying to convey. I think this happens for later shapes as well, although I didn't check them all.
If my estimate is 1000, and someone else's is 300, that's too big a discrepancy to explain by minor variations. It casts doubt on the assumption of identical thermometers. Assuming that I only have the other people's estimates, and there's no opportunity for discussion, I'll search for reasons why we might have come up with completely different answers, but if I find no error in my own, I'll discard all such outliers.
What if everyone else's estimate is between 280 and 320? Do you discard your own estimate if it's an outlier? Does the answer depend on whether you can find an error in your reasoning?
I'm mathematically interested in this procedure; can you please provide a reference?
Good thing I didn't go with the username "this.is.she"!
This is a belated reply to cousin_it's 2009 post Bayesian Flame.
Not to be a grammar Nazi, but I believe it should be cousin_its....
Nominating adversarial legal systems as role models of rational groups, knowing how well they function in practice, seems a bit misplaced.
Part of the output of your quizzes is a line of the form "Your chance of being well calibrated, relative to the null hypothesis, is 50.445538580926 percent." How is this number computed?
I chose "25% confident" for 25 questions and got 6 of them (24%) right. That seems like a pretty good calibration ... but 50.44% chance of being well calibrated relative to null doesn't seem that good. Does that sentence mean that an observer, given my test results, would assign a 50.44% probability to my being well calibrated and a 49.56% probability to my not being well calibrated? (or to my randomly choosing answers?) Or something else?
Taskification doesn't destroy romance any more than it destroys music or dance.
This one sentence alone is worth my upvote for its sheer truth. (Although
Sucking at stuff is not sublime.
is a close second.)
Poor kids had ghetto clothes first; rich kids had the clothes second, but ghetto fashion first.