Hal, sorting by "popular" doesn't seem to do what you think it does (what it actually does I can't figure out). Sort by "Top" to get ordered by score.
Also, anecdotally, I can tell the difference between "agree" and "high quality" to some extent. I've up-voted several comments I disagreed with because I thought they presented their idea well.
The general rule in groups with reasonably intelligent discussion and community moderation, once a community consensus is reached on a topic, is that:
1. Agreement with consensus, well articulated, will be voted up strongly
2. Disagreement with consensus, well articulated, will be voted up and start a lengthy discussion
3. Agreement with consensus, expressed poorly, will be voted up weakly or ignored
4. Disagreement with consensus, expressed poorly, will be voted down viciously
People who complain about groupthink are typically in the habit of doing #4 and then getting upset because they don't get easy validation of their opinions the way people who agree inarticulately do.
As an example on LW, consider Annoyance, who does both #2 and #4 with some regularity and gets wildly varying comment scores because of it.
I agree, it would be interesting to have more results than just right vs. wrong.
Exempli gratia, with Scoble, one prediction was arguably half-right (minus the stylus pen, the iPhone essentially qualifies), one that was correct at a later date (as Joe said), one that is marked correct already (RSS becoming mainstream), and one that is simply wrong (re: friendfeed).
I also agree that selection bias could skew results badly, but the idea overall is excellent.
Also, props for the disclaimer on the page! "Past performance is no guarantee of future results." Almost Hofstadterian levels of indirect self-reference.
This probably cuts both ways, actually; the other common reaction is wholesale rejection of childhood experience and values, especially if it causes too much inconvenience or cognitive dissonance later in life. e.g., how many people with strong political opinions hold them because their parents held equally strong, opposed views?
Anecdotally speaking it seems to me that, for instance, more staunch atheists come from conservative/very orthodox/fundamentalist religious households than one would reasonably expect from chance; whereas children of, say, the type of Christians who only think about God on Christmas and Easter tend to pretty reliably be "Holiday-only Christians" as well.
Zargon, I think the time given was how long it would take from beginning the feedback loop to the actual supernova, and they began the process the moment they arrived. If they could have destroyed the star immediately, they would have done so, but with the delay they encouraged as many people as possible to flee.
At least, that's how it sounded to me.
This comparison brings to mind a possible... experiment, of sorts. Create two blogs, anonymously, and otherwise unconnected to one's prior writings. Prepare a series of posts, communicating the same concepts, with different degrees of emotion, rhetorical flourish, and eloquence. Promote the two blogs in an identical manner, but never in the same place as each other.
Then, at the end of some length of time, one could compare metrics, such as number of readers and comments left, frequency of agreement/disagreement in comments, or possibly degree of communication accuracy through some means (soliciting guest posts on the blog's theme, degree of comprehension in comments, &c.?)
What might the results of such an experiment be? I suspect the consensus here would be to expect that the "flashier" blog would get more readers, comments, and agreement, but a lower median comprehension level.
" Certainly arrogance does not help in overcoming bias."
On the contrary; when dealing with deep-seated, common, and possibly hard-wired cognitive biases, I'd say it actually requires a certain degree of hubris to even consider attempting to overcome them.
Good heavens, Eliezer. Rationality is Serious Business. Grow up and stop acting like you... enjoy it or something!
I'm pretty sure Eliezer is familiar with 4chan, given his explicit reference to it a couple weeks ago...
The weird hive mind culture and memetic cesspit that is /b/ is actually interesting as well, in its own way, if one can tolerate looking at it (I heard that "lieking mudkipz" helps).
That's a bit too strong, I think.
I am my mind, of course. My body is just a convenient support system for the mind. But I'm not indifferent to it, much as I am not indifferent to the computer I am typing this post on--it's a useful tool, to which I hold a certain sentimental attachment, and with which my mind can accomplish things.
Another possibly relevant aspect of crypto is that it is fantastically, painfully difficult, due partly to the real-world consequences and partly to the inherent challenge of the maths involved.
As a first hypothesis, perhaps the degree of difficulty makes the entire field exclusive enough that people feel less need to hide behind obfuscating verbiage and can let their ability to handle the maths speak for itself; i.e., they have less to prove, in the same manner as high-status academics in other fields.
This would imply some degree of inverse correlation between the inherent rigor and difficulty of a field and the formality expected from the average researcher. Are, say, physicists or engineers typically more informal than, say, psychologists or economists? (disclaimer: this is not an attack on practitioners of the latter fields, just an observation that they are more removed from easily tested real world consequences, meaning it's easier to bluff)
There's nothing wrong with "empirical" research in computer programs, especially with complex systems. If you can get something that is closer to what you want, you can study its behavior and analyze the results, looking for patterns or failures in order to design a better version.
I know Eliezer hates the word "emergent", but the emergent properties of complex systems are very difficult to theorize about without observation or simulation, and with computer programs there's precious little difference between those and just running the damn program. Could you design a glider gun after reading the rules of Conway's game of life, without ever having run it?
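To make the glider point concrete, here is a minimal sketch of Conway's rules (the coordinates are just the standard glider pattern, not anything from the original discussion). The rules themselves say nothing about "movement," yet running them reveals a pattern that translates itself diagonally:

```python
from collections import Counter

def step(live):
    """One generation of Conway's Game of Life over a set of live cells."""
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Birth on exactly 3 neighbors; survival on 2 or 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# The standard glider. After four generations the identical shape
# reappears shifted one cell down-right -- an "emergent" property you
# would be hard pressed to predict from the rules alone.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
pattern = glider
for _ in range(4):
    pattern = step(pattern)
print(pattern == {(x + 1, y + 1) for (x, y) in glider})  # True
```

Nothing in `step` mentions gliders, direction, or period; the movement exists only at the level of the running system, which is exactly why observation and simulation earn their keep here.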
It's no way to write a safely self-modifying AI, to be sure, but it might be a valid research tool with which to gain insight on the overall problem of AI.
Robin, fiction also has the benefit of being more directly accessible; i.e., people who would not or could not read an explicit academic argument can read a short story that grapples with moral issues and get a better sense of the conflict than they would otherwise. Even with the extremely self-selected audience of this blog, compare the comments the story got vs. many other posts.
And while of course the story was influenced by Eliezer's beliefs, the amount of arguing about the endings suggests that it was not so cut and dried as simply "moving readers toward his beliefs".
Tiiba: Somewhere between a Gundam nerd and a literature professor, I expect. Since the main real differences between the two in our current world are 1) lit profs get more cultural respect 2) people actually enjoy Gundam, the combination makes a fair amount of sense.
Reading the comments, I find that I feel more appreciation for the values of the Superhappies than I do for the values of some OB readers.
This probably mostly indicates that Eliezer's aliens aren't all that terribly alien, I suppose.
There seems to be a fairly large contingent of humanity who regard self-determination as the most significant terminal value to roughly the same single-minded extent that the Babyeaters view baby eating; including a willingness to sacrifice every other moral value to very large degrees in its favor. I assume many of the suicides fell into this group.
While not as universal among humanity as baby eating is among the Babyeaters, the concept should have been fairly explicit in at least some of the cultural material transmitted. I wonder, were the Superhappies being willfully oblivious to this value, considering the extent to which they willingly violate it?
michael vassar: The situation is more symmetrical than that, I think.
The babyeaters, I imagine, don't like suffering either. That is, I doubt they would inflict suffering on their children outside of the winnowing, and would likely act to prevent suffering where possible. But, while suffering is certainly bad, it would be far worse to violate the much higher moral value of eating the young--that imperative is far greater than some suffering, no matter how great, isn't it?
Humans, of course, don't like suffering. They certainly wouldn't inflict it needlessly. But eliminating a little bit of suffering isn't necessarily worth what it would take--altering ourselves, perhaps, and losing our humanity in the process. And besides, who are they to decide for us? Humanity's moral right to self-determination is far more important than some minor suffering... right?
His voice, as opposed to other people's voices, I assume; i.e., the Confessor's warning was not transmitted.
Arguably unclear wording, though.
Just send the aliens a clip of Monty Python's "Every Sperm is Sacred" and convince them we're so advanced we kill thousands of children before they're even conceived. Problem solved! ;)
Eliezer, so how do you account for chess being fun?
Also, I think you're failing to account for the fun of various forms of "metagaming". Among at least some players and in some games, a large amount of the enjoyment comes not from finding out about the skills, or acquiring them, or even using them--instead, it comes from planning and setting goals within the framework provided. When the enjoyment lies in the planning, I'm not convinced that the usual heuristic of "more choices = less fun" is applicable.
Note that this won't apply in cases where the resource is not limited (i.e., you can get every skill eventually) or when choices are not permanent (buying and selling equipment or items, instead of taking skills). Limited, irreplaceable resources combined with limited information is what will lead to the agonizing Emile mentioned, at least in my own experience.
As a matter of comparison, look at something like Magic: The Gathering, where you have three levels of abstraction in play:
1. The game -- drawing and playing cards, beating your opponent.
2. A solo meta-game -- planning your deck.
3. A competitive meta-game -- figuring out what other players' decks look like.
...and the first level is always the least important, while the third is the most important in tournament play.
Note that this combines very high surprise value in the most immediate level with no surprises and total, perfect information in the second.
@Stuart Armstrong: First of all, the strongest influence on future success in society is whether or not one is already successful (most easily accomplished by having successful parents). One would also expect some percentage of non-rationalists to succeed anyways simply through chance. Assuming that non-rationalists substantially outnumber rationalists, it isn't terribly surprising to see more of the former among successful people. Rather than looking at how many successful people are rationalists, it would be more informative to look at rational people and see how many become more successful over their lives compared to average. Or, you could try and estimate the likelihoods of being rational, being successful, and being rational given success, then apply Bayes' law...
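The Bayes' law suggestion can be sketched directly; the numbers below are deliberately made up for illustration (rationalists rare at 2% of the population, but twice as likely to succeed):

```python
def p_rational_given_success(p_rational, p_success_given_rational, p_success_given_not):
    """Bayes' law: P(R|S) = P(S|R)P(R) / P(S), expanding P(S) by the
    law of total probability over rational / not-rational."""
    p_success = (p_success_given_rational * p_rational
                 + p_success_given_not * (1 - p_rational))
    return p_success_given_rational * p_rational / p_success

# Hypothetical numbers: 2% of people are rationalists; they succeed
# at 10% vs. 5% for everyone else.
posterior = p_rational_given_success(0.02, 0.10, 0.05)
print(round(posterior, 3))  # 0.039
```

Even with rationality doubling the odds of success, under these assumptions only about 4% of successful people are rationalists, which is the point: counting rationalists among the successful tells you mostly about base rates, not about whether rationality helps.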
Also, if rationalists seem more skilled at avoiding failure than at winning, perhaps that merely suggests that failure is more predictable than success?
As with jsalvati's comment, upon reading this post I was convinced of one of three things: that I was hallucinating, that almost the entirety of this post had appeared elsewhere before, or that I've read so much of Eliezer's writing that I'm anticipating his arguments in detail before he makes them.
Fortunately, it turns out that the second (and least disturbing) option was correct, in that a substantial amount of this post already existed at: [link deleted]
Not that I'm complaining, but it was seriously bothering me that I knew I'd heard this stuff before, but Google said it hadn't previously been posted on Overcoming Bias, and I was starting to doubt my sanity...