I'm pretty sure that programming and reading about programming are much better ways of improving at programming than reading about rationality is.
right, that's what motivated the post. I feel like spending time learning "domain specific knowledge" is much more effective than "general rationality techniques". like even if you want to get better at three totally different things over the course of a few years, the time spent on the general technique (that could help all three) might not help as much as the same time spent exclusively on specific techniques.
still, I tend to have faith in abstractions/generality, as my mind has good long-term memory and bad short-term memory. I guess this is... a crisis of faith, if you will. in "recursive personal cognitive enhancement" (lol).
Haskell <3
Rational!Jesus
We have the next HPMOR.
can you explain? Sounds interesting.
anecdote: David Ingram (who claims to be enlightened) came to a cogsci lab at my school, and was able to perceive some normally-imperceptible "subliminal" visual stimuli (i.e. X milliseconds long flash or whatever). I heard it from a friend who administered the test, I don't have the raw data or an article, grains of salt and all that.
"... must relinquish bodily pain, embarrassment, and romantic troubles."
that's worse than letting billions of children be tortured to death every year. that's worse than dying from a supernova. that's worse than dying from mass suicide. that's worse than dying because you can't have sex with geniuses to gain their minds and thus avert the cause of death that you die from.
you really think existence without pain is that bad? you really think they are not "true humans"?
what about the 3WC humans? are they not "true humans" either? only us?
what about those with CIP? what about cold people? are they not "true humans"?
do you think there should be less but non-zero pain in our minds? how much?
ignore the loophole. explain why this superhappy ending is worse than the supernova ending.
literally unbelievable.
"You can never have enough big white belts, remember that."
maybe that's why he hasn't killed harry after hearing the prophecy.
he needs help in finding his horcrux.
which he won't, because space is huge.
quirrel wants to protect himself from death. and gains the power to do it.
I didn't cry in "Humanism." I didn't cry in "Stanford Prison Experiment." I didn't even cry when Hermione died. But this chapter finally did it for me.
me too :)
why would voldemort need to possess harry? he's already stronger and smarter. and immortal.
well, it's the tone too.
e.g. say sauvine.com had said: "this is why i think the scientists who believe in global warming have formed a BEC..."
i bet people would downvote, but i doubt they would label them as a troll.
as 5-HTP is metabolized to melatonin, i wonder how much of the effect comes from melatonin itself.
right: Worst Argument In The World.
tl;dr
Roots of Empathy says caring for babies nurtures empathy.
you can only horcrux matter, not "minds".
what? no. maybe only strong "compassionate"/"nurturing" females can keep groups of hundreds together without fragmentation.
even if poison were cheap, every fight has a risk. better to neither fight nor flee.
"we irrationally find present costs more salient than future costs"
Present Bias is not always irrational!
it can be rationalized (as in, "find rational cause" not "make up excuse") as hedging against uncertainty. the future is never certain. our predictions about the future aren't even probable. if you save your money instead of spending it, you might lose it all to madoff. if you don't use that giftcard to some restaurant, your tastes might change and it won't be worth anything.
in fact, Geometric Discounting maximizes expected (undiscounted) utility if, at every moment in time, there is some fixed probability that you will transition to a state where you won't ever be able to get more utility. i think of it as the Apocalypse. then the discount is less about preference and more about an uncertain future.
even better, let's say you know THAT there is some "Apocalypse probability", but not WHAT it is. put a beta distribution on it, a natural prior on probabilities. then every day, when you wake up (i.e. the Coin Of Fates lands heads), it's a little more likely that the daily apocalypse is less likely (e.g. think about how unlikely it is for a fair coin to land heads 365 times in a row; you'd be a fool not to lower your estimate of the tails odds). update by bayes, you get laplace's rule, and Hyperbolically Discounted reward. it's like the Anthropic Principle.
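(to be concrete, here's a minimal sketch of both claims in my own notation, not from any particular paper: u_t is the reward at time t, p is a fixed per-period survival probability, h is the unknown per-period hazard, and the uniform prior is just the simplest beta.)

    \mathbb{E}\Big[\sum_{t \ge 0} u_t \,\mathbf{1}\{\text{alive at } t\}\Big] \;=\; \sum_{t \ge 0} p^{t}\, u_t \qquad \text{(fixed per-period survival probability } p\text{: geometric discounting)}

    \mathbb{E}\big[(1-h)^{t}\big] \;=\; \int_0^1 (1-h)^{t}\, dh \;=\; \frac{1}{1+t} \qquad \text{(uniform prior on the hazard } h\text{: hyperbolic discounting)}

(with a general Beta(a,b) prior the survival weight becomes B(a, b+t)/B(a,b), which still falls off like a power of t rather than geometrically.)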
i had to put in the math there to say that present bias can be rational and logical, and this can be shown formally and precisely. but really, it comes from common sense. just because a behavioral economist tells you that they'll give you money tomorrow (and you know they're telling the truth, since, unlike psychology journals, economics journals won't accept deceptive experiments), doesn't mean you'll get the money (the world changes, e.g. they forget or err in mailing the check), and it doesn't mean you'll want the money (you change, e.g. you win the lottery). shit happens. people change.
having said all that, it's safe to say that most of present bias is irrational. this is obvious from the frequent feelings of frustration with our present problems and anger against our past self for not solving them. at least, for me.
it's just i've been smelling this Fetish lately for hating heuristics, biases, and intuition. but really, these things work really well much of the time for many tasks. and that's often the first thing we hear in informed discussions, but i think people get caught up and forget about it (not saying lukeprog did, just making a big deal about one word he used).
(it's like Lazy Evaluation. haskell is often fast despite, not because of, it. but sometimes, you really didn't need to do something, and since everything is like a generator, you save big on computation.)
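(a toy haskell example of the "generator" point, my own and not from the post: the list below is infinite, but laziness means only the elements we actually demand ever get computed.)

    expensive :: Int -> Int
    expensive n = sum [1 .. n * 1000000]   -- stands in for real work we'd rather skip

    main :: IO ()
    main = print (take 3 (map expensive [1 ..]))
    -- `map expensive [1 ..]` describes infinitely many computations,
    -- but only the three that `take 3` demands are ever forced

so when the "you really didn't need to do something" case comes up, you pay nothing for the rest.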
anyway, great post! (i stopped reading it halfway through because of the silliness of reading the internets to procrastinate my chores, and finished after :) i need to keep rereading it and thinking about it until i can figure out a way to remember and implement these things in my own mind.
ps check out "Strotz Meets Allais: Diminishing Impatience and the Certainty Effect"
i think by 'emergence' you just mean 'implication'
have you succeeded in chaining these "one-inference-steps"?
that is, have you found you can take people with different beliefs / less domain knowledge, in casual conversation, and quickly explain things one inference at a time? i've found that i can only pull off a few of those, even if they follow along and are delightfully surprised by each one, before i start sounding too weird.
i like what you said about fiction perceived as distant reality. "long long ago in a galaxy far far away".
indeed.
if we decompose the cost of caching into "was true but is false" and "was never true", it may be that one dominates the other in likelihood. so maybe the most efficient solution to the "cached thought" problem is not rethinking things, but ignoring most things by default. this, however, has the opportunity cost of false negatives.
i've personally found that i am very dependent on cached thoughts when learning/doing something new (not necessarily bad). like breadth over depth. what i do is try to force each cached thought to have a contradictory, or at least very different, twin.
e.g. though i have never coded in it, if i hear "C++", i'll (try to) think both "not worth it, too unsafe and error-prone" and "so worth it, speed and libraries". whenever i don't have enough data to have a strong opinion, i must say that i am ok with caching thoughts, as long as i know they are cached and i try to cache "contradictory twins" together.
awesome post, eliezer. you sound like quirrel.
precision is hard. you know, until i started studying math, i didn't even know what "be precise" really means.
i'm involved with a startup. there's so much well-intentioned bullshit and it's the founders who harm themselves more than any user or any investor.
i watched the video and felt something was wrong; then i read your article, where you dissected it mercilessly and nailed it precisely.
""And," her voice said, "if you want to break school rules or something, you can ask me about it, I promise I won't just say no.""
perhaps eliezer is not outlining her faults but "fixing" them. by the end of ch75, hermione seems to have experienced a crisis of faith and become morally more like harry.
harry carries around a small boulder as a ring. the transfiguration could be finite incantatem'd before battle. although quirrel did say that most magic battles are actually ambushes.
not me. there was consent and the capacity for consent, so the kiss was wistful at worst.