I worry about "predictive" history classes being even more like indoctrination than current ones, if implemented with tests on obscure historical examples as you suggest. Explicitly teaching students about general historical lessons which extend to the future can easily turn into politics. There are strong incentives to pick and choose historical examples which generalize to the lessons the teacher or administration supports politically.
Current history classes at least have the strength that they teach students how to conduct research about known, factual questions. Even if they are only "studying to the test," students usually have to spend some portion of time writing research papers which demonstrate an understanding, based on evidence, of some historical phenomenon. For students not interested in STEM, this is usually the only serious training they will get in searching for evidence and interpreting it.
Even now, with history classes mainly focused on learning banal facts, I notice students often try to write research papers that say what the teacher wants to hear. If the subject of their research were instead the more politicized question of "which general patterns are at play here and are useful for future prediction and decision-making," this guess-the-teacher's-password effect could be supercharged.
Thank you for writing this post. This is a phenomenon I've also noticed, and it applies not just to arguing but to anything to do with reasoning about groups of people. Mistakes of the type "mistakenly attributed characteristic of group to person" are common. As you said in a comment, the way that we group people is usually very lossy. This is especially frustrating for those who have to deal with the same mistaken assumption being made about them often. Making inferences about specific people based on group generalizations is useful sometimes, but acting on them wrongly often has steep costs in misunderstanding and conflict. It's good to be reminded to keep close track of where you're making this type of inference.
Thank you for the suggestion! I moved the context out of the footnote.
I agree that most of the bets here accurately indicate probability: most are in the range of $20-$1000, and there is also a culture of honesty that seems like it would prevent someone from offering a bet that didn't reflect their probability of an event while representing that it did.
The most common case I see where betting odds and probability don't match is with really small values.
It seems to be encouraged here to spontaneously make friendly bets in person. In my experience, this usually involves pretty small amounts ($0-$10) paid in cash. If I won a small bet like that I might buy a bag of chips or something, but if I won twice as much I wouldn't buy two bags of chips, and it probably wouldn't be worth the effort to save it, so I would mostly just forget about it.
Another time this comes up is when people bet at fairly high odds ratios, for example $100 to $1. The winnings on the $1 side end up pretty low, because the alternative is scaling up the $100 side to something much larger, which is no longer a casual amount to bet. A lot of the time this might as well be $100 to $0: the median hourly wage in the US is around $17, so it's not even worth four minutes of effort to get the $1 into your account. What you mostly win is pride.
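To make the arithmetic concrete, here is a minimal sketch of the implied probability behind such a lopsided bet and the time cost of collecting the small side. The only figures used are the $100/$1 stakes and ~$17/hour wage above; the linear-utility, no-transaction-cost framing is my own simplification:

```python
# Rough illustration (my own, not from the original comment) of the implied
# probability behind a lopsided "$100 to $1" bet, and why the $1 payoff
# barely matters in practice.

def implied_probability(stake_a: float, stake_b: float) -> float:
    """Probability at which the person risking stake_a (to win stake_b)
    breaks even, assuming linear utility and no transaction costs."""
    # Indifference condition: p * stake_b - (1 - p) * stake_a = 0
    return stake_a / (stake_a + stake_b)

p = implied_probability(100, 1)
print(f"Implied confidence of the $100 side: {p:.3f}")  # ~0.990

# If collecting a $1 win takes even 4 minutes at a ~$17/hour wage,
# the transaction cost alone exceeds the payoff:
time_cost = 17 / 60 * 4
print(f"Cost of 4 minutes at $17/hour: ${time_cost:.2f}")  # ~$1.13 > $1.00
```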
I have also seen some cases here of people discussing really big bets, with amounts in the tens of thousands or higher, while assuming that the betting odds will still correspond directly to probability.
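For a rough sense of why very large bets can also break the equivalence, here is a sketch under the standard (and here assumed) model of a risk-averse bettor with logarithmic utility of wealth; the bankroll and stake numbers are hypothetical:

```python
# Hedged illustration (my own example) of why large stakes decouple betting
# odds from probability: with a concave (logarithmic) utility of wealth, a
# bettor needs more than 50% confidence to accept an even-odds bet that is
# large relative to their bankroll.

import math

def min_prob_to_accept(bankroll: float, stake: float) -> float:
    """Smallest win-probability at which an even-odds bet of `stake`
    is worth taking for a log-utility bettor with `bankroll`."""
    # Indifference condition: p*ln(W + s) + (1 - p)*ln(W - s) = ln(W)
    return (math.log(bankroll) - math.log(bankroll - stake)) / (
        math.log(bankroll + stake) - math.log(bankroll - stake)
    )

# Hypothetical numbers: a $20,000 even-odds bet against a $50,000 bankroll.
print(f"{min_prob_to_accept(50_000, 20_000):.2f}")  # ~0.60, not 0.50
```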
Mostly I'm trying to say that it seems pretty automatic here to equate betting odds with probability, when really there are some very common circumstances where this is not the case (in the spirit of https://www.lesswrong.com/posts/zhRgcBopkR5maBcau/always-know-where-your-abstractions-break).