Comments
A mistake is not a thing in and of itself; it's just the entire space of possible games outside the very narrow subset that leads to victory.
Minor nitpick: surely you mean possible moves rather than possible games? The set of games that lead to defeat is necessarily symmetrical with the set of games that lead to victory, aside from the differences between black and white.
Is the Prisoner's Dilemma really the right metaphor here? I don't really get what the defector gains. Sure, I like them better for being so accommodating, but meanwhile they're paying the costs of giving me what I want, and if they try to invoke some kind of quid pro quo, then all the positive feelings go out the window when I find out they were misleading me.
Silver's model already at least attempts to account for fundamentals and reversion to the mean, though. You could argue that the model still puts too much weight on polls over fundamentals, but I don't see a strong reason to prefer that over the first interpretation of just taking it at face value.
Nate Silver's model also moved toward Obama, so it's probably reflecting something real to some extent.
Decius is right that there aren't really spoilers, but I would argue that your time would be better spent reading HP:MOR than the discussion.
Something tells me that the note would be more likely to say something like "DO NOT MESS WITH TIME".
At least for myself, I first heard of Eliezer via the HPMOR TV Tropes page. There's a good chance I would have read the sequences sooner or later even if I hadn't (my brother found them independently and recommended them), but it definitely helped.
And I wouldn't say I was an idiot before, but twenty minutes of conversation with myself from a couple years ago might change my mind. And of course it's hard to tell how much of the difference is LW's influence and how much is just a matter of being older and wiser.
I would say that he was making at least the argument that "this level of responsibility is something you should adopt if you want to be a hero", and probably the more general argument that "people should adopt this attitude toward responsibility."
"We need to switch to alternative energies such as wind, solar, and tidal. The poor are lazy ... Animal rights"
I don't think these fit. Regardless of whether you agree with them, they are specific assertions, not general claims about reasoning with consistently anti-epistemological effects.
At what point will you check the Karma value? The end of the year?
We think we'd be better at running the world because we think rationalists should be better at pretty much everything that benefits from knowing the truth. If we didn't believe that, we wouldn't be (aspiring) rationalists. And just because we couldn't do it perfectly doesn't mean we're not better than the alternatives.
Sorry for the confusion.
It was meant as a joint position of the insane people and myself, but on further consideration I'm abandoning it.
However, I don't think it's that unlikely that e.g. racial differences are fairly minimal if they exist at all, at least in terms of genetic rather than cultural/environmental/whatever differences. To the best of my knowledge, races aren't all that distinct on a genetic level, so I wouldn't call it "overwhelmingly improbable" that they would turn out to be close to indistinguishable in terms of intelligence.
That might be wishful thinking at play, but it seems sound to me. Not to say that it's not worth doing a serious investigation of the possibility that there really are such differences.
I mostly agree with you; I was just stating my impression of the attitudes of those raising the objections in the first place (note the quotation marks). And to be fair to them, it's really more, "believing this would cause other people to act horribly, so let's keep them from believing it."
It could also be a moral value in your utility function, in which case what looks like bias mostly falls under wishful thinking.
We are left deciding what's good and evil because if we don't, who will tell us? And even if someone did, how could we trust them? The nature of morality is such that everyone has to decide for themselves, at least to the extent of deciding who to listen to. If a god has some higher purpose, they should explain it to us, and if they can't explain it in a way that makes us agree, it's not right.
Just because feedback loops happen doesn't mean they're a good thing even when they happen to animals. We should be exempt under EY's definition of should, and anyone who disagrees is either using a different definition or is just not worth arguing with.
I think Eliezer is missing the main cause of the uproar in cases like this. The stance of the uproarers is not that "If this were true, it would be horrible, so let's not believe it." It's more like, "believing this is true would cause people to act horribly, so let's not believe it."
Claims of innate racial and sexual differences in intelligence have historically been baseless rationalizations that attempt to justify oppressing the group in question. So now when anyone raises the question, they are shouted down because they are tarred with the same brush. The objectors are not saying that, if true, group intelligence differences would be worse than individual intelligence differences, but that saying there are group differences is worse than saying there are individual differences, because while individual differences clearly exist, group differences probably don't and are usually postulated by people who are motivated to invent them. This may be irrational, but not in the way this post focuses on.
My parents taught me about God the same way that they did about Santa and the Tooth Fairy, and I don't think it did me any harm. I decided for myself that God didn't exist before I figured out that my parents were atheists too, but I don't have any especially strong memories of figuring out any of the three.
The last, long one is basically saying "shut up and multiply." Going with your gut intuitions might make you feel better about your decisions, but it won't really get you a better outcome. In your own life, if you want to pay a premium for that feeling of (unjustified) confidence, that's one thing, but when other people's lives are at stake you have to be cold about it if you want to do what's really right.
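For a concrete illustration of the multiplication (my own numbers, not necessarily the post's): given a choice between saving 400 lives for certain and a 90% chance of saving 500, the gamble is worth 0.9 × 500 = 450 expected lives, so the cold calculation says take the gamble even though the sure thing feels safer.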
The second and third are about how rationality for its own sake is futile. Rationality is good because it makes you better at what really matters. Your goal is to win; being rational is just the best way to do that.
The first one, I believe, relates back to the last. The point of altruism is not to make yourself feel warm and fuzzy; it's to help others. So don't go saving the cute puppies that you can see when you can get utils much cheaper somewhere else.
This is all my own opinion; I apologize if I misinterpreted any of it.
But if there are no moral truths, then there's nothing morally wrong with being factually wrong, so who cares if you're factually wrong?