Don't Revere The Bearer Of Good Info
post by CarlShulman · 2009-03-21T23:22:50.348Z · LW · GW · Legacy · 72 comments
Follow-up to: Every Cause Wants To Be A Cult, Cultish Countercultishness
One of the classic demonstrations of the Fundamental Attribution Error is the 'quiz study' of Ross, Amabile, and Steinmetz (1977). In the study, subjects were randomly assigned to either ask or answer questions in quiz show style, and were observed by other subjects who were asked to rate them for competence/knowledge. Even knowing that the assignments were random did not prevent the raters from rating the questioners higher than the answerers. Of course, when we rate individuals highly the affect heuristic comes into play, and if we're not careful that can lead to a super-happy death spiral of reverence. Students can revere teachers or science popularizers (even devotion to Richard Dawkins can get a bit extreme at his busy web forum) simply because the former only interact with the latter in domains where the students know less. This is certainly a problem with blogging, where the blogger chooses to post in domains of expertise.
Specifically, Eliezer's writing at Overcoming Bias has provided nice introductions to many standard concepts and arguments from philosophy, economics, and psychology: the philosophical compatibilist account of free will, utility functions, standard biases, and much more. These are great concepts, and many commenters report that they have been greatly influenced by their introductions to them at Overcoming Bias, but the psychological default will be to overrate the messenger. This danger is particularly great given his engaging writing style, and given that when a point is already extant in the literature, and is either being relayed or reinvented, this often isn't noted. To address a few cases of the latter: Gary Drescher covered much of the content of Eliezer's Overcoming Bias posts (mostly very well), from timeless physics to Newcomb's Problem to quantum mechanics, in a book back in May 2006, while Eliezer's irrealist meta-ethics would be very familiar to modern philosophers like Don Loeb or Josh Greene, and isn't so far from the 18th-century philosopher David Hume.
If you're feeling a tendency to cultish hero-worship, reading such independent prior analyses is a noncultish way to defuse it, and the history of science suggests that this procedure will be applicable to almost anyone you're tempted to revere. Wallace invented the idea of evolution through natural selection independently of Darwin, and Leibniz and Newton independently developed calculus. With respect to our other host, Hans Moravec came up with the probabilistic Simulation Argument long before Nick Bostrom became known for reinventing it (possibly with forgotten influence from reading the book, or its influence on interlocutors). When we post here we can make an effort to find and explicitly acknowledge such influences or independent discoveries, to recognize the contributions of Rational We, as well as Me.
Even if you resist revering the messenger, a well-written piece that purports to summarize a field can leave you ignorant of your ignorance. If you only read the National Review or The Nation you will pick up a lot of political knowledge, including knowledge about the other party/ideology, at least enough to score well on political science surveys. However, that very knowledge means that missing pieces favoring the other side can be more easily ignored: someone might not believe that the other side is made up of Evil Mutants with no reasons at all, and might be tempted to investigate, but ideological media can provide reasons that are plausible yet not so plausible as to be tempting to their audience. A truth-seeker should beware of a speaker's explanations of that speaker's own opponents.
This sort of intentional slanting and misplaced trust is less common in more academic sources, but it does occur. For instance, top philosophers of science have been caught failing to beware of Stephen Jay Gould, copying his citations and misrepresentations of work by Arthur Jensen without having read either the work in question or the more scrupulous treatments in the writings of Jensen's leading scientific opponents, the excellent James Flynn and Richard Nisbett. More often, space constraints mean that a work will spend more words and detail on the view being advanced (Near) than on those rejected (Far), and limited knowledge of the rejected views will lead to omissions. Without reading the major alternative views to those of the one who introduced you to a field in their own words or, even better, neutral textbooks, you will underrate opposing views.
What do LW contributors recommend as the best articulations of alternative views to OB/LW majorities or received wisdom, or neutral sources to put them in context? I'll offer David Chalmers' The Conscious Mind for reductionism, this article on theistic modal realism for the theistic (not Biblical) Problem of Evil, and David Cutler's Your Money or Your Life for the average (not marginal) value of medical spending. Across the board, the Stanford Encyclopedia of Philosophy is a great neutral resource for philosophical posts.
Offline Reference:
Ross, L. D., Amabile, T. M. & Steinmetz, J. L. (1977). Social roles, social control, and biases in social-perceptual processes. Journal of Personality and Social Psychology, 35, 485-494.
Comments sorted by top scores.
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-03-21T23:40:41.545Z · LW(p) · GW(p)
I was actually quite amazed to find how far Gary Drescher had gotten, when someone referred me to him as a similar writer - I actually went so far as to finish my free will stuff before reading his book (am actually still reading) because after reading the introduction, I decided that it was important for the two of us to write independently and then combine independent components. Still ended up with quite a lot of overlap!
But I think it probably is unfair to judge Drescher as being at all representative of what ordinary philosophers can do. Drescher is an AI guy. And the comments on the back of his book seem to indicate that he was writing in a mode that philosophical readers found startling and new.
Drescher is not alternative mainstream philosophy. Drescher is alternative Yudkowsky.
I've referred to Drescher and SEP a few times. The main reason I don't refer more to conventional philosophy is that it doesn't seem very good as a field at distinguishing good ideas from bad ones. If you can't filter your good ideas and present them in a non-needlessly-complicated fashion, is there much point in pointing a reader to it?
But I've taken into account that Greene was able to rescue Roko where I could not, and I've promoted him on my list of things to read.
↑ comment by CarlShulman · 2009-03-21T23:55:43.568Z · LW(p) · GW(p)
"But I think it probably is unfair to judge Drescher as being at all representative of what ordinary philosophers can do."
I agree, and I didn't do so (I used Dennett-type compatibilism in my list of representative views that you conveyed). Even when you do something exceptionally good independently, it can help to defuse affective death spirals to make clear that it's not quite unique.
"If you can't filter your good ideas and present them in a non-needlessly-complicated fashion, is there much point in pointing a reader to it?"
This is an authorial point of view. Readers need heuristics to confirm that this is what is going on, and not something less desirable, for particular authors and topics. If they can randomly check some of your claims against the leading rival views and see that the latter are weak, that's useful.
↑ comment by TheAncientGeek · 2014-05-02T15:39:20.440Z · LW(p) · GW(p)
Alternatively, philosophy is good at finding arguments for a wide variety of ideas, which is evidence against the meta-level idea that there is a simplistic distinction between good and bad ideas. What is the evidence for that meta-level idea?
comment by pjeby · 2009-03-22T00:25:51.807Z · LW(p) · GW(p)
I don't think I agree with your conclusion. It seems to assume that ideas are somehow representation-independent -- and in practical programming as well as practical psychology, that idea is a non-starter.
Or to put it another way, someone who can state a point more eloquently than its originator knows something that its originator does not. Sure, the communicator shouldn't get all the credit... but more than a little is due.
↑ comment by A1987dM (army1987) · 2012-04-21T09:07:02.340Z · LW(p) · GW(p)
Cf. Feynman stating that since he wasn't able to explain the spin-statistics theorem at a freshman level, that meant he hadn't actually fully understood it himself.
comment by AnnaSalamon · 2009-03-22T01:19:50.783Z · LW(p) · GW(p)
How much non-Eliezer stuff is there on the practical "how to" of rationality, e.g. on techniques for improving one's accuracy in the manner that Something to protect, Leave a line of retreat, The bottom line, and taking care not to rehearse the evidence might?
EDIT: Sorry to add to the comment after Carl's response. I had the above list in there already, but omitted an "http://", which caused the second half of my sentence to somehow not show up.
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-09-15T14:18:33.741Z · LW(p) · GW(p)
There's Robyn Dawes's Rational Choice in an Uncertain World which is highly similar in spirit, intent, and style to the kind of defeat-the-bias writing in the Sequences. (And it's quoted accordingly when I borrow something.)
↑ comment by CarlShulman · 2009-03-22T01:23:31.120Z · LW(p) · GW(p)
It's balkanized, but a lot of psychologists have written on overcoming the biases they study, e.g. the Implicit Association Test researchers suggesting that people keep around pictures of outstanding individuals from groups for which they have a negative IAT, or Cialdini talking about how to avoid being influenced by the social psychology effects he discusses in Influence.
↑ comment by AnnaSalamon · 2009-03-22T01:29:41.173Z · LW(p) · GW(p)
Okay, yes, I've read some of that. But how much of rationality were you practicing before you ran into Eliezer's work? And where did you learn it? Also, are there other attempts at general textbooks?
Double also, are there sources of rationality "how to" content you'd recommend whose content I probably haven't learned from Eliezer's posts, besides the academic heuristics and biases literature?
↑ comment by CarlShulman · 2009-03-22T01:37:35.040Z · LW(p) · GW(p)
I read decision theory, game theory, economics, evolutionary biology, epistemology, and psychology (including the heuristics and biases program), then tried to apply them to everyday life.
I'm not aware of any general rationality textbooks or how-to guides, although there are sometimes sections discussing elements in guides for other things. There are pop science books on rationality research, like Dan Ariely's Predictably Irrational, but they're rarely 'how-to' focused to the same extent as OB/LW.
↑ comment by Paul Crowley (ciphergoth) · 2009-03-22T11:52:25.989Z · LW(p) · GW(p)
Yes - where else is there an attempt to put together a coherent and in some sense complete account of what rationality is and how to reach it?
comment by gjm · 2009-03-22T22:40:34.126Z · LW(p) · GW(p)
The article on theistic modal realism is ingenious. (One-sentence summary: God's options when creating should be thought of as ensembles of worlds, and most likely he'd create every world that's worth creating, so the mere fact that ours is far from optimal isn't strong evidence that it didn't arise by divine creation.)
I don't find the TMR hypothesis terribly plausible in itself -- my own intuitions about what a supremely good and powerful being would do don't match Kraay's -- but of course a proponent of TMR could always just reject my intuitions as I'd reject theirs.
However, I think the TMR hypothesis should be strongly rejected on empirical grounds.
It is notable -- and this is one element of a typical instance of the Argument From Evil -- that our world appears to be governed by a bunch of very strict laws, which it obeys with great precision in ways that make substantial divine intervention almost impossible. It seems that there are many many many more possible worlds in which this property fails than in which it holds, simply because the more scope there is for intervention the more ways there are for things to happen. Therefore, unless the sort of lawlikeness we observe is so extraordinarily valuable that tiny changes in it make a world far less likely to be worth creating, we should expect that "most" worlds in the TMR ensemble would be much less lawlike than ours: e.g., we might expect prayers to be commonly answered in clearly detectable ways. So how come we're in such an atypical world?
Generalizing: I think we should expect that for most measures of goodness X, worlds with higher values of X should be dramatically more numerous in the TMR ensemble unless increasing X reduces the number/measure of possible worlds much more drastically than for most other choices of X. (Because when you increase X, you get the chance to reduce Y or Z or ... a bit. More choices.) Therefore, we should expect that for measures of goodness X where "better" doesn't imply "much more constrained" most worlds (hence, in particular, ours, with high probability) should have values of X that are close to optimal, or at least far from marginally acceptable. This doesn't seem to be true.
It seems to me that counter-arguments to these are likely to be basically the same as counter-arguments to the original argument from evil.
The other thing about TMR is that it undermines any version of theism that expects God to behave as if he cares about us. If TMR is right, then any time God has the option of doing something to make your life better, he forks the universe vastly many ways and tries out every possible option (including lots of ways of doing nothing, and even ways of deliberately making things worse for you) apart from ones that make the whole universe not worthwhile. As mentioned above, it seems to me that this should make us expect that visible divine intervention should be pretty common, but in any case it's not terribly inspiring. A bit like having a "friend" who, any time she interacts with you, rolls dice and chooses a random way of behaving subject only to the constraint that it doesn't cause the extinction of all human life. Similarly, you've got no reason to trust any alleged divine revelation unless its wrongness would be so awful as to make the world not worth creating. (These arguments are again closely parallel to ones that come up with the ordinary argument from evil, in response to responses that basically take the form of radical skepticism.)
↑ comment by Alexei · 2010-08-06T22:02:04.090Z · LW(p) · GW(p)
Granted that god exists, cares about us, and can change the world, even in tiny aspects, it's very likely god will use those small aspects as a base to create the perfect world (kind of like AI FOOM). It follows that any world where god has some minimum level of control will converge to the perfect world. Given that we are not in the perfect world, we can conclude that god does not have that minimum level of control.
comment by Andy_McKenzie · 2009-03-22T04:10:12.445Z · LW(p) · GW(p)
Excellent post. Having just read The Adapted Mind (and earlier The Moral Animal), I can see where Eliezer got a lot of his stuff on evolutionary psychology from.
However, all authors must walk a fine line between appeasing the Carl Shulmans of the world, who have read everything, and introducing some background for naive readers beyond telling them to simply catch up on their own. I think he in general does a good job of erring on the side of more complexity, which is what I appreciate, so I of course forgive him. :)
A niche that a good author might consider filling is actually including the numbers from the experiments they reference, i.e., the experimental scores and their standard errors, etc. It might turn off the innumerate, but I think that pure numbers and effect sizes are grossly underreported by science writers.
↑ comment by CarlShulman · 2009-03-22T04:38:36.658Z · LW(p) · GW(p)
"However, all authors must walk a thin rope between appeasing the Carl Shulman's of the world who have read everything and introducing some background for naive readers beyond telling them to simply catch up on their own."
I don't need to be appeased, and I strongly endorse the project of providing that introduction. My post was about ways for readers and authors to manage some of the drawbacks of success.
I agree that sample and effect sizes are grossly under-reported, often concealing that an experiment with a sexy conclusion had almost no statistical power, or that an effect was statistically significant but too small to matter. It seems possible for this to become a general science journalism norm, but only if a word-efficient and consistent format for in-text description can be devised, like conflict-of-interest acknowledgments.
↑ comment by pnrjulius · 2012-05-28T23:21:49.304Z · LW(p) · GW(p)
How about this? "The study had 27 participants, 15 women and 12 men. The difference between men and women was on average 2 points on a questionnaire ranging from 0 to 100 points." This clearly explains the (small) sample size and (weak) effect size without requiring any complicated statistics.
↑ comment by Benya (Benja) · 2012-08-16T19:10:47.594Z · LW(p) · GW(p)
Actually, it doesn't tell you the effect size, since it doesn't include information about how much individuals in each group differ from each other. If the difference between the group means is 2 points and the standard deviation in each group is 5 points, that's the same effect size (in the technical Cohen sense) as if the difference is 10 points and the standard deviation is 25 points.
I think a useful way to report data like this might be a variation on, "If you chose one of the women at random and one of the men at random, the probability that the woman would have a higher score would be 53%."
Aaaand in order not to completely miss the point of the original article, ETA: I'm not sure how much of that suggestion is my own thinking, but I was certainly influenced by reading about the binomial effect size display which solves a related problem in a similar way, and after I had the idea myself I came across something very similar in Rebecca Jordan-Young's Brainstorm (p.52, and endnote 4 on p.299): mental rotation ability is "considered to be the largest and most reliable gender difference in cognitive ability"; using data from a meta-analysis, she notes that if you tried to guess someone's gender based on their score in a mental rotation test, using your best strategy you'd get it right 60% of the time. (I checked that math a while ago and got the same result, assuming normal distributions with equal variances in each group, with Cohen's d=.56; the meta-analysis is Voyer, Voyer & Bryden, 1995.)
It's annoying that IIRC, "guess the gender" and "in a random pair, who has the higher score" don't give the same number, though. Average readers will probably just see a percentage in each case and derive some measure of affect from the number, whichever interpretation you give.
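To make the two measures concrete, here is a minimal sketch in Python, assuming normally distributed scores with equal variances in each group (the same assumption stated above). The d = .56 figure is the one cited above from Voyer, Voyer & Bryden (1995); the conversions Φ(d/√2) and Φ(d/2) are the standard ones for these two questions under that normality assumption:

```python
# A minimal sketch (not from the original thread): converting Cohen's d into
# the two percentages discussed above, assuming normal score distributions
# with equal variances in each group.
from math import erf, sqrt

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

# Scale invariance of d, the point made above: 2/5 and 10/25 both give d = 0.4.
assert abs(2 / 5 - 10 / 25) < 1e-12

d = 0.56  # mental rotation, from the meta-analysis cited above

# "In a random pair, who has the higher score?"
# P(X > Y) for independent X ~ N(d, 1), Y ~ N(0, 1) is Phi(d / sqrt(2)).
prob_superiority = phi(d / sqrt(2.0))

# "Guess the group from the score": the best strategy is a threshold at the
# midpoint of the two group means, giving accuracy Phi(d / 2).
guess_accuracy = phi(d / 2.0)

print(f"P(random pair ordered as expected) = {prob_superiority:.3f}")  # ~0.654
print(f"Best-strategy guessing accuracy    = {guess_accuracy:.3f}")   # ~0.610
```

The output shows the annoyance noted above: the two phrasings really do give different numbers, with Φ(d/2) ≈ .61 matching the roughly 60% guessing figure and Φ(d/√2) ≈ .65 answering the random-pair question.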
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-03-21T23:57:30.478Z · LW(p) · GW(p)
EDIT: I agree with your conclusion, but...
(Checks Don Loeb reference.)
While, unsurprisingly, we end up adding to the same normality, I would not say that these folks have the same metaethics I do. Certainly Greene's paper title "The Terrible, Horrible, No Good, Very Bad Truth About Morality" was enough to tell me that he probably didn't have exactly the same metaethics and interpretation I did. I would not feel at all comfortable describing myself as a "moral irrealist" on the basis of what I've seen so far.
Drescher one-boxes on Newcomb's Problem, but doesn't seem to have invented quite the same decision theory I have.
I don't think Nick ever claimed to have invented the Simulation Argument - he would probably be quite willing to credit Moravec.
On many other things, I have tried to use standard terminology where I actually agree with standard theories, and provide a reference or two. Where I am knowingly being just a messenger, I do usually try to convey that. But you may be reading too much into certain similarities that also have important points of difference or further development.
EDIT2: I occasionally notice the problem you point to, and write a blog post telling people to read more textbooks. Probably this is not enough. I'll try to reach a higher standard in any canonicalized versions.
↑ comment by thomblake · 2009-04-02T14:12:49.633Z · LW(p) · GW(p)
I think the biggest issue here is your tendency to not cite sources other than yourself, which is an immediate turn-off to academics. To an academic, it suggests the following questions (amongst others): If your ideas are so good, why hasn't anyone else thought of them? Doesn't anyone else have an opinion on this - do you have a response to their arguments? Are you actually doing work in your field without having read enough to cite those who agree or disagree with you?
(I know this isn't a new issue, but it seems it bears repeating.)
↑ comment by wedrifid · 2010-08-09T02:36:39.569Z · LW(p) · GW(p)
Other questions that are implicitly asked:
- Why are you not signalling in group status?
- Why are you not signalling alliance with me or my allies by inventing excuses to refer to us?
- Are you an outsider trying to claim our territory in cognitive space?
- Are you talking about topics that are reserved for those with higher status in our group than we assign you?
↑ comment by TheAncientGeek · 2014-05-02T15:51:03.308Z · LW(p) · GW(p)
This point could count against any amateur philosopher.
What is more pertinent: why insist you are doing better than the professionals? You should assume you are making mistakes and reinventing wheels.
Why not learn the standard jargon? You may not have the time or inclination to learn the whole subject, but the jargon is the most valuable thing to learn, because it enables you to communicate with professionals who can help you, if you are able to admit to yourself that, as an amateur, you might need help.
There are some failure modes that are part and parcel of being an amateur, and some further ones that take you into crank territory.
↑ comment by steven0461 · 2009-03-22T01:06:48.571Z · LW(p) · GW(p)
I took a look at Greene's dissertation when Roko started pushing it, but I don't think Greene's views are much like Eliezer's at all. Specifically he doesn't seem to emphasize what Eliezer calls the "subjectively objective" quality of morality, or the fact that people may be mistaken as to what their morality says. Correct me if I'm wrong.
I agree with the rest of the original post.
↑ comment by CarlShulman · 2009-03-22T01:41:43.470Z · LW(p) · GW(p)
I agree about the difference of emphasis but I don't think they have a major substantive disagreement on those issues. You can check with Owain Evans, who knows him.
↑ comment by CarlShulman · 2009-03-22T00:04:28.002Z · LW(p) · GW(p)
Greene doesn't really think it's horrible, just that people mistakenly think it's horrible and recoil from irrealism about XML 'rightness tags' on actions because they think it would mean that they should start robbing and murdering. Nick does acknowledge Moravec on his website now, after being informed about it (he wasn't aware of the precedent before).
Perhaps I shouldn't have covered both being a messenger and acknowledgment of related independent work in the same post.
↑ comment by Roko · 2009-03-22T00:29:53.381Z · LW(p) · GW(p)
Greene doesn't really think it's horrible, just that people mistakenly think it's horrible and recoil from irrealism
yes, I detected a hint of irony in the title. The thesis is that it isn't actually that horrible, rather that people don't want to face up to the truth, and it is because of this somewhat irrational fear that even considering the possibility of antirealism is avoided.
↑ comment by Roko · 2009-03-22T00:26:04.424Z · LW(p) · GW(p)
Certainly Greene's paper title "The Terrible, Horrible, No Good, Very Bad Truth About Morality" was enough to tell me that he probably didn't have exactly the same metaethics and interpretation I did.
Have you read it? It takes about a day and a half to read, and I think that he points out an error with the position that you took in the "p-right" etc discussions on OB. Would it be off topic for me to do a post on this?
Other than that, he takes the same position you do. I recommend that you read his dissertation, and then email him to discuss the application of this set of ideas to transhumanism/singularity. He would probably be interested.
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-03-22T00:37:36.719Z · LW(p) · GW(p)
In this case, I'd actually say email me first with a quickie description.
↑ comment by CarlShulman · 2009-03-22T00:46:11.553Z · LW(p) · GW(p)
Roko exaggerates. It's only 377 pages and written in an accessible style.
It summarizes the ethical literature on moral realism, and takes the irrealist view that XML tags on actions don't exist, and that even if they did exist we wouldn't care about them. It then goes into the psychology literature (Greene does experimental philosophy, e.g. finding that people misinterpret utility as having diminishing marginal utility in contravention to experimental instructions), e.g. Haidt's work on social intuitionism, to explain why it is that we think there are these moral properties 'out there' when there aren't any. Lastly, he argues that we can get on with pursuing our concerns (reasoning about conflicts between our concerns, implications, instrumental questions, etc), but suggests that awareness of the absence of XML tags can help us to better understand and deal with those with differing moral views.
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-03-22T01:46:49.179Z · LW(p) · GW(p)
people misinterpret utility as having diminishing marginal utility in contravention to experimental instructions
This explains a LOT.
↑ comment by CarlShulman · 2009-03-22T04:13:48.065Z · LW(p) · GW(p)
http://www.wjh.harvard.edu/~jgreene/GreeneWJH/Greene-Baron-JBDM-01.pdf
Enjoy.
↑ comment by PhilGoetz · 2009-03-22T01:17:47.352Z · LW(p) · GW(p)
XML tags on actions don't exist, and that even if they did exist we wouldn't care about them.
?
↑ comment by Vladimir_Nesov · 2009-03-22T01:26:52.375Z · LW(p) · GW(p)
There is no objective truth about which actions are the right ones, no valuation inherent in the actions themselves. And even if there were, even if you could build a right-a-meter and check which actions are good, you wouldn't care about what it says, since it's still you that draws the judgment.
comment by bentarm · 2009-03-22T22:33:34.294Z · LW(p) · GW(p)
Is the idea here to counsel us against some sort of halo effect? Eliezer Yudkowsky has told me a lot of interesting things about heuristics and biases, and about how intelligence works, but I shouldn't let this affect my judgement too much if he recommends a movie?
Or is it more than that - just that I should be careful when reading anything by Eliezer, and take into account the fact that I'm probably slightly too inclined to trust it, because I've liked what came before? Because then of course, we have the issue that I should be more likely to trust an author who is usually right - and this just says that I should be careful not to trust them too much more.
↑ comment by Paul Crowley (ciphergoth) · 2009-03-22T23:21:25.144Z · LW(p) · GW(p)
For much of what EY is setting out, trust isn't an appropriate relationship to have with it. You trust that he's not misrepresenting the research or his knowledge of it, and you have a certain confidence that it will be interesting, so if an article doesn't seem rewarding at first you're more likely to put work in to squeeze the goodness out. But most of it is about making an argument for something, so the caution is not to trust it at all but to properly evaluate its merits. To trust it would be to fail to understand it.
↑ comment by CarlShulman · 2009-03-23T00:37:25.470Z · LW(p) · GW(p)
"Because then of course, we have the issue that I should be more likely to trust an author who is usually right - and this just says that I should be careful not to trust them too much more."
Right.
comment by haig · 2009-03-22T10:51:55.719Z · LW(p) · GW(p)
I like EY's writings, but don't hold them up as gospel. For instance, I think this guy's summary of Bayes' Theorem (http://betterexplained.com/articles/an-intuitive-and-short-explanation-of-bayes-theorem) is much more readable and succinct than EY's much longer essay (http://yudkowsky.net/rational/bayes).
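For readers comparing the two explanations, the underlying arithmetic is small enough to show directly. A minimal sketch, using mammography-style numbers of the kind such introductions work through (the specific rates below are assumptions of this sketch, not a summary of either linked essay):

```python
# A minimal Bayes' theorem sketch (illustrative assumed numbers):
# 1% base rate, 80% true-positive rate, 9.6% false-positive rate.
p_disease = 0.01
p_pos_given_disease = 0.80
p_pos_given_healthy = 0.096

# P(positive) by the law of total probability.
p_pos = (p_pos_given_disease * p_disease
         + p_pos_given_healthy * (1.0 - p_disease))

# Bayes' theorem: P(disease | positive) = P(pos | disease) * P(disease) / P(pos).
p_disease_given_pos = p_pos_given_disease * p_disease / p_pos

print(f"P(disease | positive test) = {p_disease_given_pos:.3f}")  # ~0.078
```

The point such introductions emphasize is visible in the output: even after a positive result the posterior is under 8%, because the 1% prior is swamped by false positives from the much larger healthy population.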
comment by Entraya · 2014-02-20T17:52:07.812Z · LW(p) · GW(p)
The reason i love Elizier is how many people he must have attracted to this art of rationality, and that without him and this site i wouldn't even know where to begin or where to look, and how he is one of surprisingly few people able to convey the information in such tasty little bits. He may not be the smartest in his field, and may 'just' be passing on things he learned from others, but his work is super valuable, for he does what the others don't. Also, Methods of Rationality happens to be on my top 3 list of Greatest Pieces of Writing IMO, so that adds a bit. I just have to separate and clear the lines between the two kinds of reverence, which can be surprisingly difficult if you don't pay attention to it, which this post reminded me of, so thanks
↑ comment by Lumifer · 2014-02-20T19:13:44.823Z · LW(p) · GW(p)
The reason i love Elizier
Do you love him enough to spell his name right..?
↑ comment by Entraya · 2014-05-02T14:45:55.064Z · LW(p) · GW(p)
I would like it if you didn't linger so much on a mere spelling mistake. I had no muscle memory for how to spell this entirely foreign name. Eliezer Yudkowski; i hope you are satisfied, for thy name is surely glorious and worthy of praise.
I've also only just discovered the mail-notification system, which is why it took me so long to respond
↑ comment by polymathwannabe · 2014-02-20T20:13:37.095Z · LW(p) · GW(p)
I Googled "Elizier Yudkowsky" and the first suggestion was an OKCupid profile. Talk about loving him!
↑ comment by Vulture · 2014-02-20T21:26:09.692Z · LW(p) · GW(p)
I might have found your comment witty if you hadn't also downvoted. Don't be a jerk.
(My apologies if it wasn't your downvote. Obviously I'm pretty confident, though)
↑ comment by Lumifer · 2014-02-20T21:56:48.738Z · LW(p) · GW(p)
Sigh. I very rarely downvote any comments in the subthreads I participate in. I did not vote, up or down, on any comment in this subthread.
Want to recalibrate your confidence? :-P
↑ comment by Vulture · 2014-02-20T21:59:01.785Z · LW(p) · GW(p)
Gladly. In retrospect, my comment was obnoxious even if it had been right. In the future I'll try to realize this without being wrong first. edit: Out of curiosity, why do you usually not vote in threads you're participating in?
↑ comment by Lumifer · 2014-02-21T07:35:24.628Z · LW(p) · GW(p)
why do you usually not vote in threads you're participating in?
If I am already talking to people, I can explain my likes and dislikes in words -- without using the crude tool of votes. For me comments and votes are two alternate ways of expressing my attitude, it is rare that I want to use both.
Besides, it feels more "proper", in the vaguely ethical way, to not up- or down-vote people with whom I am conversing. Not that I think it should be a universal rule, that's just a quirk of mine.
↑ comment by Vaniver · 2014-02-21T21:39:39.571Z · LW(p) · GW(p)
Besides, it feels more "proper", in the vaguely ethical way, to not up- or down-vote people with whom I am conversing. Not that I think it should be a universal rule, that's just a quirk of mine.
I have no qualms about upvoting people that I'm responding to or who have responded to me, but I have a much higher threshold for downvoting responses to my comments and posts, both to try to compensate for the human tendency to get defensive and to increase the probability that the conversation is pleasant.
↑ comment by TheOtherDave · 2014-02-22T00:42:42.337Z · LW(p) · GW(p)
Agreed with all of this, but the last bit makes me curious... does downvoting someone who is involved in an exchange with a third party decrease the probability that the conversation is pleasant for the two of them?
comment by MichaelAnissimov · 2009-05-14T22:00:48.355Z · LW(p) · GW(p)
I recently read Greene's essay and I thought it was a nice buttressing of ideas that I was originally exposed to in 2001 while reading "Beyond anthropomorphism". The challenge with Eliezer's earlier writing is that it is too injected with future shock to be comfortable for most non-transhumanists to read. The challenge with Eliezer's more recent writing is that it is too long for a blog format and much more suited for a book, which forces people to focus on the one thing.
The title of Greene's thesis is tongue-in-cheek. Based on my understanding of Eliezer's conception of morality, I would definitely call him irrealist.
comment by Roko · 2009-03-22T00:48:10.319Z · LW(p) · GW(p)
The work of Jon Haidt is very enlightening.
This evening I had the pleasure of reading his Edge article on the benefits of religion, where he takes on some prominent new atheists - Myers, Sam Harris, etc. I quote:
When hurricane Katrina struck, religious groups across the country organized quickly to send volunteers and supplies. Like fraternities, religions may generate many positive externalities, including charity, social capital (based on shared trust), and even team spirit (patriotism). If all religious people lost their faith overnight and abandoned their congregations, I think the net results would probably be bad, at least in America where (in contrast to European nations) our enormous size, short history, great diversity, and high mobility make it harder for us to overcome individualism and feel that we are all part of one community. In conclusion, I believe that Enlightenment 2.0 requires Morality 2.0: more cognizant of the limitations of reason, more open to multilevel approaches in which groups are sometimes units of analysis, and more humble in its assertion that the individualist and contractualist morality of the scientific community is right, and is right for everyone.
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-03-22T03:56:38.723Z · LW(p) · GW(p)
No, Enlightenment 2.0 requires rationalist task forces as tightly-knit, dedicated, and fast-responding as religious task forces, better coordinated and better targeted, maybe even more strongly motivated, to do every good thing that religion ever did and more.
IMHO.
↑ comment by Roko · 2009-03-22T11:52:01.736Z · LW(p) · GW(p)
I think that Haidt underestimates the power of irrationality as a force for evil and chaos, which is a point that people like you make very well. The point he makes well is the power of religion to bring out our "inner bee" and just make us co-operate.
This underlines a point I made earlier about the power of generalists. If Richard Dawkins, Josh Greene, Jon Haidt, Marvin Minsky, Gary Drescher, and Tversky and Kahneman could put all of their brains together into one big head, they'd have all of your insights plus more.
But they're separate, isolated specialists, so the world had to wait for a generalist. IMO modern academia's largest problem is its specialization.
comment by Nominull · 2009-03-21T23:33:11.439Z · LW(p) · GW(p)
It's really helpful to have good info borne to me, though, in a readable and engaging fashion. For some reason I never wound up reading the Stanford Encyclopedia of Philosophy, but I did read Eliezer's philosophical zombie movie script.
↑ comment by CarlShulman · 2009-03-21T23:42:31.015Z · LW(p) · GW(p)
You can appreciate the communication of the info, just don't blur your valuation of the info itself with your valuations of the communication and communicator.
comment by JulianMorrison · 2009-03-22T11:59:23.361Z · LW(p) · GW(p)
That pointer to Gary Drescher is much appreciated. Eliezer's explanations about determinism and QM make me feel "aha, now it's obvious, how could it be any other way", but I hate single-sourcing knowledge.
comment by billswift · 2009-03-22T00:14:13.151Z · LW(p) · GW(p)
Just a brief mention since we're supposed to avoid AI for a while, but it is too relevant to this post to totally ignore: I just finished J. Storrs Hall's "Beyond AI"; the overlap and differences with Eliezer's FAI are very interesting, and it is a very readable book.
EDIT: You all might notice I did write "overlap and differences"; I noticed the differences, but I do think they are interesting; not least because they seem similar to some of Robin's criticisms of Eliezer's FAI.
↑ comment by MichaelGR · 2009-03-22T19:16:18.518Z · LW(p) · GW(p)
I've read it too, but made the mistake of reading it right after Gödel, Escher, Bach. Hard to compare.
What surprised me most was how much of what was written in a book published in 2007 was more or less the same as in a book published in 1979. I expected more new promising developments since then, and that was a bit of a downer.
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-03-22T00:24:11.253Z · LW(p) · GW(p)
I think I see a lot more difference between my own work and others' work than some of my readers may.
↑ comment by PhilGoetz · 2009-03-22T00:37:18.250Z · LW(p) · GW(p)
I think that's inevitable, if for no other reason that someone reading two treatments of one subject that they don't completely understand is likely to interpret them in a correlated way. They may make similar assumptions in both cases; or they may understand the one they read first, and try to interpret the one they read second in a similar way.
↑ comment by CarlShulman · 2009-03-22T00:38:04.981Z · LW(p) · GW(p)
Hall gives a passable history of AI, acts as a messenger for a lot of standard AI ideas, including the Dennett compatibilist account of free will and some criticisms of nonreductionist accounts of consciousness, and acts as a messenger for a stew of social science ideas, e.g. social capital and transparent motivations, although the applicability of the latter is often questionable. Those sections aren't bad.
It's only when he gets to considering the dynamics of powerful intelligences and offers up original ideas that he makes glaring errors. Since that's your specialty, those mistakes stand out as horribly egregious, while casual readers might miss them or think them outweighed by the other sections of the book.
I see differences between you and Drescher, or you and Greene, both in substance (e.g. some clear errors in Drescher's book when he discusses the ethical value of rock-minds, neglecting the possibility that happy experiences of others could figure in our utility functions directly, rather than only through game theoretic interactions with powerful agents) and in presentation/formalization/frameworks.
We could try to quantify percentage overlap in views on specific questions.
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-03-22T00:23:35.147Z · LW(p) · GW(p)
This is a good example of why I don't bother to cite what others perceive as "related work", frankly.
comment by Liron · 2009-03-22T06:47:30.478Z · LW(p) · GW(p)
The "textbooks" link is broken.
↑ comment by CarlShulman · 2009-03-22T06:55:19.637Z · LW(p) · GW(p)
Fixed.
comment by [deleted] · 2012-05-27T19:28:59.989Z · LW(p) · GW(p)
I have never really regarded EY as anything other than the guy who wrote a bunch of good ideas in one place. The ideas are good on their own merits, and after being made aware that Quine(?) invented that "Philosophy = Psychology" thing, I have had some healthy 'he's right but probably not original.' And really, who cares? He is right, but don't shoot the messenger; ad hominem is still ad hominem even if it is positive. Empty agreements are as bad as empty dismissals.
Isn't this intuitively obvious? Or am I just very, very rational?
comment by [deleted] · 2011-10-05T19:10:24.328Z · LW(p) · GW(p)
Dead link: http://philosophy.wisc.edu/info/2006%20Metaethics%20Workshop/loeb.doc
comment by lsparrish · 2010-08-09T03:01:29.033Z · LW(p) · GW(p)
Terry Pratchett is another good person who seems to want to go out on his own terms.