Secure Your Beliefs
post by lukeprog · 2011-02-12T16:53:24.060Z · LW · GW · Legacy · 48 comments
When I was 12, my cousin Salina was 15. She was sitting in the back seat of a car with the rest of her family when a truck carrying concrete pipes came around the turn. The trucker had failed to secure his load properly, and the pipes broke loose. One of them smashed into Salina's head. My family has never wept as deeply as we did during the slideshow at her funeral.
The trucker didn't want to kill Salina. We can't condemn him for murder. Instead, we condemn him for negligence. We condemn him for failing to care enough for others' safety to properly secure his load. We give out the same condemnation to the aircraft safety inspector who skips important tests on his checklist because it's cold outside. That kind of negligence can kill people, and people who don't want their loved ones harmed have strong reasons to condemn such a careless attitude.
Social tools like praise and condemnation can change people's attitudes and desires. I was still a fundamentalist Christian when I went to college, but well-placed condemnation from people I respected changed my attitude toward gay marriage pretty quickly. Most humans care what their peers think of them. That's why public praise for those who promote a good level of safety, along with public condemnation for those who are negligent, can help save lives.
Failure to secure a truck load can be deadly. But failure to secure one's beliefs can be even worse.
Again and again, people who choose to trust intuition and anecdote instead of the replicated scientific evidence about vaccines have caused reductions in vaccination rates, which are then followed by deadly epidemics of easily preventable disease. Anti-vaccination activists are negligent with their beliefs. They fail to secure their beliefs in an obvious and clear-cut case. People who don't want their loved ones to catch polio or diphtheria from a neighbor who didn't vaccinate their children have reasons to condemn - and thereby decrease - such negligence.
People often say of false or delusional beliefs: "What's the harm?" The answer is "lots." WhatsTheHarm.net collects incidents of harm from obvious products of epistemic negligence like AIDS denial, homeopathy, exorcism, and faith healing. As of today they've counted up more than 300,000 injuries, 300,000 deaths, and $2 billion in economic damages due to intellectual recklessness. And WhatsTheHarm.net lists only a small fraction of those harmed by such epistemic negligence, so the problem is actually much, much worse than that.
Failure to secure one's beliefs can lead to misery on a massive scale. That is why your rationality is my business.
48 comments
Comments sorted by top scores.
comment by Vladimir_M · 2011-02-13T04:24:09.932Z · LW(p) · GW(p)
lukeprog:
WhatsTheHarm.net collects incidents of harm from obvious products of epistemic negligence like AIDS denial, homeopathy, exorcism, and faith healing.
That is all very nice, but I notice that among the beliefs criticized on that website, there does not appear to be a single one that is nowadays widespread in reputable academic circles and other influential elite institutions, or that would enjoy such respect and high status that attacking it might be dangerous for one's reputation, career, or worse.
Therefore, the obvious question is: are we indeed lucky to live in a society whose intellectual elites and respectable shapers of public opinion harbor no significant dangerous false beliefs -- or are the authors of this website themselves in the business of perpetuating a harmful and irresponsible delusion, namely that when it comes to dangerous false beliefs, we have nothing to worry about except for these petty low-status superstitions?
(Not that there aren't people who are harmed by these petty superstitions, but compared to the delusions held by the elites in charge, it's like comparing the personal shortcomings of the passengers on the Titanic with the captain's delusions on the matters of navigation. Some perspective is definitely in order in each case.)
Replies from: Will_Sawin, army1987
↑ comment by Will_Sawin · 2011-02-13T05:11:21.145Z · LW(p) · GW(p)
If I go around saying "This belief is wrong! Hey everyone, did you know that this belief is wrong?" and it's a low-status belief, high-status people are likely to ask "What's the harm?" If it's a high-status belief, due to correctness of the Titanic analogy, very few high-status people will ask that. They are much more likely, instead, to criticize your argument that the belief is wrong.
What's The Harm is not an introduction to the practice of rationality. It is a response to a specific argument. It makes that counter-argument best and most clearly by only including beliefs that are almost never held by high-status individuals.
(In addition, the whole methodology of collecting individual examples works well for distributed mistakes like the passengers', but not for large, single mistakes like the captain's.)
Replies from: Vladimir_M
↑ comment by Vladimir_M · 2011-02-13T19:24:25.410Z · LW(p) · GW(p)
Will Sawin:
If it's a high-status belief, due to correctness of the Titanic analogy, very few high-status people will ask that [what's the harm]. They are much more likely, instead, to criticize your argument that the belief is wrong.
Would that it were so! When it comes to really pervasive and established high-status delusional beliefs, with very few exceptions, what you'll get is at best a criticism whose content is far below the usual scholarly standards, and at worst just mindless sneering and moral indignation.
This holds both for those high-status false beliefs that are a matter of ideological orthodoxy and those that are a matter of venal interest. (The overlap between those is, of course, larger than the pure part of either category, and people have no problem coming up with honest rationalizations for their professional, ideological, and other interests.)
(In addition, the whole methodology of collecting individual examples works well for distributed mistakes like the passengers', but not for large, single mistakes like the captain's.)
In many cases, high-status delusional beliefs don't result in a single identifiable disaster, but rather in lots of widely distributed harm and suffering. (In this sense, the Titanic analogy breaks down.)
In these cases, however, a collection of touching human-interest stories will likely fail to strike the intended note among high-status readers, and will instead be dismissed as nefarious extremist propaganda.
If I go around saying "This belief is wrong! Hey everyone, did you know that this belief is wrong?" and it's a low-status belief, high-status people are likely to ask "What's the harm?"
Not really. It depends on the exact way the belief in question is perceived in high-status circles. In some cases, you'll win status points out of all proportion with the actual importance of the problem and without much scrutiny of the accuracy of your arguments (the phrase "raising awareness" comes to mind). In other cases, you won't register on high-status people's radar even if you have a solid case, simply because the issue doesn't happen to be a status-fertile cause. In yet other cases, it may happen that while a belief is low-status, it is also considered uncouth to attack it all-out; one is supposed to scoff at it in more subtle and oblique ways instead.
Replies from: Will_Sawin, NMJablonski
↑ comment by Will_Sawin · 2011-02-15T03:03:35.428Z · LW(p) · GW(p)
Would that it were so! When it comes to really pervasive and established high-status delusional beliefs, with very few exceptions, what you'll get is at best a criticism whose content is far below the usual scholarly standards, and at worst just mindless sneering and moral indignation.
Would you argue that this consists of people "asking what's the harm" rather than "criticizing your argument"? I never said that they would criticize your argument well.
In many cases, high-status delusional beliefs don't result in a single identifiable disaster, but rather in lots of widely distributed harm and suffering. (In this sense, the Titanic analogy breaks down.)
The point stands. Low-status beliefs come in clear pockets with clear examples. Widely distributed harm and suffering usually has the property that you can't blame any single instance of suffering on it.
Not really. It depends on the exact way the belief in question is perceived in high-status circles. In some cases, you'll win status points out of all proportion with the actual importance of the problem and without much scrutiny of the accuracy of your arguments (the phrase "raising awareness" comes to mind). In other cases, you won't register on high-status people's radar even if you have a solid case, simply because the issue doesn't happen to be a status-fertile cause. In yet other cases, it may happen that while a belief is low-status, it is also considered uncouth to attack it all-out; one is supposed to scoff at it in more subtle and oblique ways instead.
Indeed, there are many possible responses. "likely" was not intended to mean "overwhelmingly probable". It's just a likely occurrence.
↑ comment by NMJablonski · 2011-02-14T04:05:16.395Z · LW(p) · GW(p)
You've left me very curious as to what high status beliefs you think are inaccurate.
I myself find that "right thinking" academic elites are blisteringly wrong on many things, and I would be interested to see where others are willing to go out on a limb and challenge orthodoxy.
Replies from: Vladimir_M, Dan_Moore
↑ comment by Vladimir_M · 2011-02-15T04:05:42.495Z · LW(p) · GW(p)
NMJablonski:
You've left me very curious as to what high status beliefs you think are inaccurate.
Trouble is, the fact that they are high-status means that contradicting them without a very good supporting argument (and usually even otherwise) will make one sound like a crackpot or extremist of some sort.
Nevertheless, I think there are some examples that shouldn't sound too controversial. Take for example modern economics, and macroeconomics in particular. Our governments do many things based on what passes for professional scientific expertise in this field, and if this supposed expertise is detached from reality, the policies based on it can result in catastrophically bad consequences in many imaginable ways. Arguably, this has happened in many times and places historically, some arguable examples being the Great Depression and the present global economic crisis.
Now, to put it bluntly, I see no rational reason to believe that macroeconomists have any clue about anything. The greatest luminaries of scholarship in this field always espouse theories that are suspiciously convenient for their ideological agenda, and are apt to dismiss their equally prestigious ideological opponents as crackpots (more or less diplomatically, depending on the occasion). What's more, even a casual inquiry into the epistemological standards in the field reveals an awfully bad situation, with all signs of cargo cult science plainly obvious.
Accordingly, one is tempted to conclude that all these sophisticated and supposedly scientific economic policies have never been much more than ideologically-driven dilettantism (except for a few elementary principles of political economy that have been well understood since antiquity), and we're just lucky that the economy is resilient enough not to be damaged by it too catastrophically. But even if one doesn't draw such a strong conclusion, it certainly seems to me the height of irrationality to worry about petty folkish superstitions that anyone with any intellectual status scoffs at, while at the same time our prosperity is in the hands of people who dabble with it using "expertise" that at least partly consists of evident pseudoscience, but nevertheless gets to be adorned with the most prestigious academic titles.
I can think of many other examples, most of which are likely to be more controversial.
Replies from: lessdazed, wallowinmaya
↑ comment by lessdazed · 2011-07-27T22:40:06.942Z · LW(p) · GW(p)
I can think of many other examples, most of which are likely to be more controversial.
One way to initiate discussion about them without lowering your status would be to include them among a larger list of high-status beliefs, with the list sufficiently large that everyone thinks the experts wrong on some of them.
E.g., say "I think that one or more of the following fifty high-status opinions are wrong," and then make your list.
↑ comment by David Althaus (wallowinmaya) · 2011-07-27T21:24:20.093Z · LW(p) · GW(p)
I can think of many other examples, most of which are likely to be more controversial.
I've already read many of your excellent comments and now I'm just too curious about your ideas. Would you mind writing me a PM elucidating your theories?
↑ comment by Dan_Moore · 2011-02-14T17:20:41.378Z · LW(p) · GW(p)
You've left me very curious as to what high status beliefs you think are inaccurate.
I'm planning a LW post about a high-status belief I believe to be inaccurate. I call this belief 'the strong law of one price' or SLOOP, a currently high-status belief in financial economics. SLOOP essentially says that the fair value of an instrument (e.g., a contingent set of cash flows) does not depend on whether the market for that instrument is well modeled by a Walrasian auction. Examples of instruments that are not well modeled by a Walrasian auction include demand deposits and pension liabilities.
It's my opinion that all the arguments in favor of SLOOP involve either an invalid form of inductive logic or circular reasoning. I plan on doing some DIY science by conducting an online experiment. I'm going to offer $1,000 to anyone who can present, in support of SLOOP, either empirical evidence or a valid deductive logical argument.
Replies from: Zack_M_Davis
↑ comment by Zack_M_Davis · 2011-02-14T22:35:53.357Z · LW(p) · GW(p)
I'm going to offer $1,000 to anyone who can present, in support of SLOOP, either empirical evidence or a valid deductive logical argument.
You might consider making your offer much more precise, specifying in advance exactly what sort of evidence you would find convincing. Do you really mean any evidence? Even if the SLoOP isn't the best model of reality, there could still be probabilistic evidence that favors the SLoOP over some rival theories.
↑ comment by A1987dM (army1987) · 2013-01-08T09:51:40.489Z · LW(p) · GW(p)
WhatsTheHarm.net collects incidents of harm from obvious products of epistemic negligence like AIDS denial, homeopathy, exorcism, and faith healing.
That is all very nice, but I notice that among the beliefs criticized on that website, there does not appear to be a single one that is nowadays widespread in reputable academic circles and other influential elite institutions
[emphasis added] If the Catholic Church doesn't count as an influential elite institution, I don't know what does...
comment by timtyler · 2011-02-12T17:42:50.211Z · LW(p) · GW(p)
http://whatstheharm.net/ is interesting - but not very balanced.
I was reminded of http://www.quackwatch.com/ ... I looked at:
It is trivial to compile a list of harms done by conventional medicine as well. Doctors kill many people each year. What patients want is a comparison of harms vs. benefits. Lists of harms are anecdotal evidence, and alas are of very little use.
Replies from: Yvain, lukeprog
↑ comment by Scott Alexander (Yvain) · 2011-02-12T22:28:57.864Z · LW(p) · GW(p)
Yes; note also their page on Moon Landing Denial.
↑ comment by lukeprog · 2011-02-12T18:02:16.047Z · LW(p) · GW(p)
True.
Do you think it's likely that the benefits of AIDS denial, homeopathy, exorcism, and faith healing come close to outweighing the harms?
Note that I also did not list possible benefits of anti-vaccination.
Replies from: Vladimir_Nesov
↑ comment by Vladimir_Nesov · 2011-02-12T19:09:13.241Z · LW(p) · GW(p)
You don't compare harms and benefits of the same process. You compare properties of the alternative processes. Comparing harms with benefits of a single possible decision is a heuristic for comparing the decision with the default of doing nothing, but often there is no such default option.
(This remark is not relevant to the conversation, just a technical note on your usage of harms vs. benefits.)
Replies from: Tyrrell_McAllister
↑ comment by Tyrrell_McAllister · 2011-02-12T20:01:02.353Z · LW(p) · GW(p)
Comparing harms with benefits of a single possible decision is a heuristic for comparing the decision with the default of doing nothing, but often there is no such default option.
There is another good reason to analyze one process into harms and benefits, even if we don't know how to avoid taking them as a package. The reason is that we may someday figure out a way to modify the process so that we get the benefits with less of the harm. Such a subtle refinement of the process may be beyond us now, but we are more likely to gain that capability someday if we perform the analysis now. That is one way that we learn of the worthwhile technical problems to work on.
This analysis wouldn't make sense if we thought that the concomitance of the harm and benefit were a law of nature or logic. Otherwise, it is always possible that we will figure out a way to separate them.
In the meantime, a side benefit of this analysis is that its results are ready made to plug into expected-utility calculations. I think that that is a good reason for comparing harms and benefits of the same process.
Replies from: Vladimir_Nesov
↑ comment by Vladimir_Nesov · 2011-02-12T20:23:43.362Z · LW(p) · GW(p)
But then you won't compare harms with benefits, you compare a particular harm, or a particular benefit, with its alternative counterpart achieved by changing the process. The error is in comparing different factors with each other, like comparing the price of a good with cost of delivery, not in paying attention to detail of all the different factors. So you do want to compile lists of harms and benefits, as instrumentally or morally relevant characteristics of a plan that you want to optimize, but not for the purpose of comparing them to each other.
Replies from: Tyrrell_McAllister
↑ comment by Tyrrell_McAllister · 2011-02-12T22:54:08.377Z · LW(p) · GW(p)
So you do want to compile lists of harms and benefits, as instrumentally or morally relevant characteristics of a plan that you want to optimize, but not for the purpose of comparing them to each other.
By "comparing harms with benefits", I mean computing the values of the benefits, harms, and U = (benefit) – (harm) that follow from an action. Of course, these values alone don't determine whether you ought to take the action. You need to compare U to the analogous quantity for the other available actions first. And of course the quantities of (benefit) and (harm) have to be measured with respect to some arbitrary "zero", which doesn't have to be a "default" as this word is intuitively understood. You could set U itself equal to zero. The computation of U is only an intermediate step to action.
Nonetheless, these are all handy quantities to store, because you can cheaply store them and compare them to the analogous quantities for other actions, including ones that you haven't thought of yet.
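A minimal sketch of the bookkeeping described above, in Python (the actions and their benefit/harm figures are hypothetical, chosen purely for illustration): store benefit and harm per action, compute U = benefit - harm as an intermediate quantity, and make the decision by comparing U across all the alternatives rather than by weighing any single action's harm against its own benefit.

```python
# Hypothetical actions with made-up benefit/harm figures (arbitrary units).
# "do_nothing" is listed as just another alternative, not a privileged default.
actions = {
    "do_nothing": {"benefit": 0.0, "harm": 0.0},
    "process_A":  {"benefit": 10.0, "harm": 4.0},
    "process_B":  {"benefit": 7.0,  "harm": 2.0},
}

# U = benefit - harm, measured from an arbitrary zero; computing it is only
# an intermediate step, not the decision itself.
utilities = {name: a["benefit"] - a["harm"] for name, a in actions.items()}

# The sign of U only says whether an action beats "do_nothing"; the decision
# comes from comparing U across *all* the alternatives.
for name, u in sorted(utilities.items(), key=lambda kv: -kv[1]):
    print(f"{name}: U = {u:+.1f}")
print("choose:", max(utilities, key=utilities.get))
```

On these made-up numbers, process_A is chosen even though its harm (4.0) exceeds process_B's (2.0): harms and benefits are stored per action, but the comparison that drives the decision is across actions, which is the point of the exchange above.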
Replies from: Vladimir_Nesov
↑ comment by Vladimir_Nesov · 2011-02-12T23:29:00.185Z · LW(p) · GW(p)
Of course. But I still can't convince my mother that paying $10 for delivery of a $3 peg is better than driving all the way to a remote storehouse to get it yourself, even though the cost of the delivery is significantly greater than the cost of the goods and hence feels "not worth it".
It's useful to hold all sorts of quantities in your mind while considering a decision, but it's important to know what they're for, so that they aid in accuracy of the decision, and not confuse you instead. To avoid this confusion, first thing is to know a reliable (if inefficient) methodology, and only then develop advanced tricks.
Replies from: Tyrrell_McAllister
↑ comment by Tyrrell_McAllister · 2011-02-12T23:58:03.259Z · LW(p) · GW(p)
But I still can't convince my mother that paying $10 for delivery of a $3 peg is better than driving all the way to a remote storehouse to get it yourself, even though the cost of the delivery is significantly greater than the cost of the goods and hence feels "not worth it".
I agree that that is a common and buggy way to think. I probably tend to do that myself when I'm not careful, so I concede that it is a problem. But your example has someone comparing part of the cost of an action with another part of the cost of that same action. I'm talking about comparing the cost with the benefit of the same action. Are you saying that even doing that could put one at more risk of making the mistaken comparison that you describe?
At any rate, that wasn't the kind of comparison that lukeprog was doing. Could you elaborate on the kinds of mistakes that you see flowing from what he said? One possible confusion is using the sign of (benefit) – (cost) of an action (measured in lives saved, say) to decide whether to do it. Do you see others?
Replies from: bcoburn
↑ comment by bcoburn · 2011-02-20T15:54:48.389Z · LW(p) · GW(p)
That kind of comparison just completely ignores opportunity costs, so it will result in mistakes any time they are significant.
Replies from: Tyrrell_McAllister
↑ comment by Tyrrell_McAllister · 2011-02-20T18:36:10.931Z · LW(p) · GW(p)
That kind of comparison just completely ignores opportunity costs, so it will result in mistakes any time they are significant.
Making the comparison is not the last step before decision. The comparison itself ignores opportunity costs, but it doesn't keep you from going on to perform an opportunity-cost check. The output of the comparison can then be combined with the output of the opportunity-cost check to determine a decision.
comment by steven0461 · 2011-02-12T22:36:22.576Z · LW(p) · GW(p)
Condemning people for holding particular beliefs can be useful, or it can be anti-useful when the person you're condemning is right, or it can be bad diplomacy that backfires, or it can erode the standards of debate, depending on the particular belief and other circumstances. Can you propose a criterion that allows me to tell when condemnation is net useful? Without such a criterion, the post strikes me as just saying "boo irrationality".
Replies from: lukeprog
↑ comment by lukeprog · 2011-02-12T22:55:40.614Z · LW(p) · GW(p)
This post doesn't advocate condemning certain beliefs. It advocates condemning epistemic negligence that often leads to harmfully false beliefs.
As for your request for more detail, I take that to be an empirical question that is beyond the (very modest) scope of the present post. This post does indeed just say "boo irrationality." It is meant as the kind of very short post to which you can send all the people who say "Why does it matter to you what I believe? Why are you trying to get me to give up these beliefs?"
Replies from: steven0461
↑ comment by steven0461 · 2011-02-12T23:03:57.781Z · LW(p) · GW(p)
I see. I certainly don't see anything wrong with condemning epistemic negligence in the abstract. I guess it can't hurt to make the point again.
This part put me on the wrong foot:
well-placed condemnation from people I respected changed my attitude toward gay marriage pretty quickly
Surely they were condemning your specific beliefs here.
Replies from: lukeprog
↑ comment by lukeprog · 2011-02-12T23:59:49.773Z · LW(p) · GW(p)
As a historical matter, that may be what happened, but that's not what I've explicitly advocated here.
Replies from: None
↑ comment by [deleted] · 2011-02-13T06:06:31.794Z · LW(p) · GW(p)
Choose your anecdotes carefully.
While I don't think anyone here opposes extending the legal category of marriage to homosexual couples (or rather, the ones that do probably do so on Libertarian grounds, wanting to abolish state-sanctioned marriage altogether), your use of it basically conveyed "boo Abrahamic faiths" or "boo nonconformity".
Replies from: David_Gerard
↑ comment by David_Gerard · 2011-02-13T10:15:23.591Z · LW(p) · GW(p)
Choose your anecdotes carefully.
+1
Remember that if someone is very fond of a really stupid belief, they will attack anything coming anywhere near it.
Replies from: khafra
↑ comment by khafra · 2011-02-14T12:04:55.660Z · LW(p) · GW(p)
I have a gut feeling that lukeprog was still fairly fond of his really stupid beliefs when he arrived in college, but his newfound peers (1) didn't criticize anything until some mutual levels of respect and liking had built up, and (2) criticized the most peripheral manifestations of those beliefs, like gender/sexual politics, not the existence of a deity.
Regardless of that gut feeling, I didn't get "boo nonconformity" at all from his anecdote; I got "be aware of peer pressure, which is in some circumstances a pressure so deep as to affect your very belief system."
Replies from: David_Gerard
↑ comment by David_Gerard · 2011-02-14T12:06:45.466Z · LW(p) · GW(p)
"be aware of peer pressure, which is in some circumstances a pressure so deep as to affect your very belief system."
Lots of cases. Possibly the usual way people change their beliefs.
comment by Tyrrell_McAllister · 2011-02-12T23:08:54.470Z · LW(p) · GW(p)
Social tools like praise and condemnation can change people's attitudes and desires. I was still a fundamentalist Christian when I went to college, but well-placed condemnation from people I respected changed my attitude toward gay marriage pretty quickly.
It's worth emphasizing that condemnation from people that you don't respect would probably fail to change your attitude, provided that you have a "tribe" of people who share your views.
If you (the general you) are contemplating using condemnation to encourage people to be more pro-gay-marriage, ask yourself, "Do these people in fact care about gaining my respect? Or am I just offering them an opportunity to earn more status-points from their fellow tribe-members by counter-condemning me?"
comment by avalot · 2011-02-12T17:27:33.888Z · LW(p) · GW(p)
Anti-vaccination activists base their beliefs not on the scientific evidence, but on the credibility of the source. Not having enough scientific education to be able to tell the difference, they have to go to plan B: Trust.
The medical and scientific communities in the USA are not as well-trusted as they should be, for a variety of reasons. One is that the culture is generally suspicious of intelligence and education, equating them with depravity and elitism. Another is that some doctors and scientists in the US ignore their responsibility to preserve the profession's credibility, and sell out big time.
Chicken, meet egg.
So if my rationality is your business, you're going to have to get in the business of morality... Because until you educate me, I'll have to rely on trusting the most credible self-proclaimed paragon of virtue, and proto-scientific moral relativism doesn't even register on that radar.
Replies from: Jayson_Virissimo
↑ comment by Jayson_Virissimo · 2011-02-12T17:45:57.015Z · LW(p) · GW(p)
The medical and scientific communities in the USA are not as well-trusted as they should be...
I disagree. I think Americans are far too trusting of medical professionals. So much of what has been recommended to me by doctors is useless or even harmful to my recovery. Ever tried to talk to your doctor about conditional probabilities? Also, I don't think we should associate the medical profession so closely with "science".
Replies from: jimmy, Costanza
↑ comment by jimmy · 2011-02-12T20:59:06.930Z · LW(p) · GW(p)
It's entirely possible that most Americans should trust doctors more (and homeopaths and themselves less) while above-average Americans should trust doctors less.
I absolutely agree that most smart people trust doctors too much. If I trusted doctors more, I'd have gotten a completely unnecessary total disc replacement at the ripe old age of 22. As it was, I trusted him too much; I'm ashamed that I actually considered it, even though I understood all the reasons behind the next doctor's judgement of "total disc replacement!?!? He's insane!"
↑ comment by Costanza · 2011-02-15T00:56:08.041Z · LW(p) · GW(p)
Starting on page 114 of his book about randomness, The Drunkard's Walk, physicist Leonard Mlodinow tells a real-life story about being told by his doctor that the results of a blood test showed that there was a 999 out of 1,000 chance he had AIDS and would be "dead within a decade." Mlodinow (who did not have AIDS) uses this as an introduction to Bayes' Theorem. His doctor had made exactly the error described in An Intuitive Explanation of Bayes' Theorem.
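For readers who want the arithmetic behind that error spelled out, here is a small sketch in Python. The prevalence, sensitivity, and false-positive figures are illustrative assumptions, not necessarily the book's exact values:

```python
# Assumed illustrative numbers: a test with a 1-in-1,000 false-positive rate,
# near-perfect sensitivity, applied to a low-risk group where about 1 in
# 10,000 people actually has the disease.
prevalence = 1 / 10_000           # P(disease)
sensitivity = 1.0                 # P(positive | disease), assumed ~perfect
false_positive_rate = 1 / 1_000   # P(positive | healthy)

# Bayes' theorem: P(disease | positive)
p_positive = sensitivity * prevalence + false_positive_rate * (1 - prevalence)
p_disease_given_positive = sensitivity * prevalence / p_positive

print(f"P(disease | positive) = {p_disease_given_positive:.1%}")  # about 9%
```

With those assumed numbers the posterior is only about 9%: most positives in a low-risk population are false positives. The doctor's "999 out of 1,000" treats the test's false-positive rate as if it were the probability of disease given a positive result, which is exactly the confusion the Intuitive Explanation walks through.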
Replies from: Eliezer_Yudkowsky
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2011-02-15T01:52:43.299Z · LW(p) · GW(p)
Well, to be precise, the Intuitive Explanation describes exactly this error, previously found to have been made by doctors.
Replies from: Jonathan_Graehl
↑ comment by Jonathan_Graehl · 2011-02-16T08:53:27.512Z · LW(p) · GW(p)
You're objecting that this particular instance wasn't described in IEBT, but rather its class was? Otherwise I'm not sure what distinction you're drawing. Or am I missing an edit by the parent?
Replies from: Eliezer_Yudkowsky
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2011-02-16T15:33:50.333Z · LW(p) · GW(p)
I'm saying that the doctors are the original and IEBT is the copy, not the other way around.
comment by Paul Crowley (ciphergoth) · 2011-02-13T21:44:17.310Z · LW(p) · GW(p)
The thing I worry about with this line of argument is that it can create a solemn vow to pursue truth, but not a burning itch to know. Still, perhaps the former can lead to the latter.
Replies from: lukeprog
↑ comment by lukeprog · 2011-02-18T20:27:33.027Z · LW(p) · GW(p)
Good point!
Replies from: Swimmer963
↑ comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2011-03-09T20:26:00.700Z · LW(p) · GW(p)
Still, isn't a solemn vow to pursue truth better than no vow at all? Even if a burning itch to know is better... How DO you create a burning itch to know in someone who doesn't have one?
comment by Daniel_Burfoot · 2011-02-13T04:19:11.970Z · LW(p) · GW(p)
That is why your rationality is my business.
You're preaching to the choir. The real question is: how are you going to compel people to be rational who aren't otherwise inclined to do so? Demagoguery?
Replies from: Vladimir_M
↑ comment by Vladimir_M · 2011-02-13T04:39:24.975Z · LW(p) · GW(p)
And an even more difficult question: it's easy to preach against those sorts of irrationality whose denunciation will win you status points in reputable, high-status circles -- but what about the irrationalities and delusions that are themselves a matter of consensus and strong moral feeling among high-status people? Who will have the courage to attack those with equal decisiveness and fervor?
Replies from: David_Gerard
↑ comment by David_Gerard · 2011-02-13T10:13:19.201Z · LW(p) · GW(p)
Who will have the courage to attack those with equal decisiveness and fervor?
The level does not have to be equal. It just has to be effective. A quiet word in one place can have more resonance than a rant in another.
comment by JamesAndrix · 2011-02-12T17:07:51.554Z · LW(p) · GW(p)
So when is your book on rationality coming out?
Replies from: David_Gerard
↑ comment by David_Gerard · 2011-02-13T00:20:27.546Z · LW(p) · GW(p)
When he's posted here daily for a couple of years, obviously ;-)
comment by Jonathan_Graehl · 2011-02-16T08:46:32.051Z · LW(p) · GW(p)
I approve. Also, I wonder why your personal anecdote is such good rhetoric. I guess it makes me care more about what you're advocating, because it makes me care for you.
Replies from: Swimmer963
↑ comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2011-03-09T20:25:00.567Z · LW(p) · GW(p)
I think that's generally true of using anecdotes to introduce general topics. It's harder work for people to visualize generalities, but anecdotes are concrete.