Spreading the word?
post by Paul Crowley (ciphergoth) · 2009-04-19T19:25:32.850Z
This has been discussed some, but I don't think it's been the sole subject of a top-level post. I want to find out other people's ideas rather than steering the discussion toward my own, so I'm asking the question in a very general form, and holding off on my own answers:
- Should we be trying to spread the word?
- If so, what is the word, and how should we be trying to spread it?
Comments sorted by top scores.
comment by Paul Crowley (ciphergoth) · 2009-04-19T20:45:33.942Z · LW(p) · GW(p)
I'll have a go at this. I don't think I'm in a tiny minority in anything I say here, but I'm not hugely confident of my answers; that's one reason I'm setting them out explicitly.
- There is a word, or rather, there's a programme
- the word is that there's such a thing as rationality (epistemic and instrumental)
- it is amenable to mathematical modelling
- but it is different in important systematic ways from our natural reasoning capacity (biases)
- since that is the result of natural selection, not design
- science can tell us a lot about these biases
- we can rationally correct for these biases, by understanding the model and the psychology
- the skill of making these corrections is communicable
- Eliezer Yudkowsky has written a lot of essays that are helpful in this task
- greater rationality has the potential to ameliorate not only all sorts of harm and suffering, but also existential risk
- so it's worth trying to talk to people who might be receptive to these ideas
- and trying to craft materials that will help bring people on board
- increasing world rationality is a subgoal of other programs for participants here, including one important one that cannot be named.
Where it goes from there is more controversial. I can imagine us going down a road in which we create organisations with paid staff who push for rationalism to be taught in schools and universities; others have discussed the idea of formal rationalism courses ("dojos") that we could join ourselves and encourage others to join. Or, we could decide that informal word-of-mouth combined with making materials available online is the best way forward.
I mention Eliezer's essays specifically, which might be controversial. Here and on Overcoming Bias, there is a lot of excellent writing about rationality, but it seems to me that Eliezer is the only one who is explicitly trying to develop a complete and coherent programme for advancing the art, and though we get a lot out of all the authors, it's his writings that are actually bringing us together around a programme here. This of course can and must change if the programme is to succeed; I wanted to say it explicitly because it feels like a bit of an elephant in the room otherwise.
Some people are very skeptical about the very idea of spreading the word, but it seems to me like one of the more plausible ideas I've heard for saving the world, so I'm surprised more folk aren't for it.
Replies from: byrnema
comment by gaffa · 2009-04-19T21:27:17.136Z · LW(p) · GW(p)
On several occasions I've wanted to introduce people to Eliezer's writings (and OB/LW as well), but due to their disorganized and heavily-dependent-on-other-material-in-a-great-messy-web-like nature, I have feared that just a "hey check this guy out" would most likely just result in the person reading a few random essays, saying "yeah I guess that's pretty interesting" and then forgetting about it. Right before LW launched, I seem to recall Eliezer talking about how the LW architecture would allow better organization and that maybe he would do something to make his material more accessible. I haven't heard anything since then, but if something like that were done I think that would be great.
(sorry if I'm being rude by focusing on just Eliezer's material when we're discussing the greater LW picture, but this is just a situation that I've found myself in a few times and I think it's still relevant to this topic. I also second ciphergoth's point about the elephant)
Replies from: Eliezer_Yudkowsky
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-04-20T17:50:45.744Z · LW(p) · GW(p)
My original plan was to organize things into sequences, but I now think that converting to a wiki/post model (concepts defined in wiki, which organizes posts, which link back to wiki) is an even bigger win, though sequence browsing would still be nice.
Thing is, while I was actually writing all this stuff, I needed to just focus on writing it, as much as possible, even at the expense of usability - and go back and fix the accessibility afterward. It's not optimal but it got the job done.
Replies from: blogospheroid
↑ comment by blogospheroid · 2009-04-21T04:29:24.533Z · LW(p) · GW(p)
A big positive here would be the book being published. The Freakonomics blog took off after Freakonomics got popular, and then they added more contributors, who would post research of interest to Freakonomics readers.
Replies from: mattnewport
↑ comment by mattnewport · 2009-04-21T04:37:21.241Z · LW(p) · GW(p)
I believe the Freakonomics blog was started after the book was published. The blog was a spin-off from the book, not the other way around.
Replies from: blogospheroid
↑ comment by blogospheroid · 2009-04-21T04:54:12.182Z · LW(p) · GW(p)
That was precisely my point. After the book gets published, the wiki and the blog will get popular. People who are successful in other fields and who have become interested in x-rationality will have the background to understand posts better, and will contribute to better discussions.
comment by Paul Crowley (ciphergoth) · 2009-04-20T08:06:14.755Z · LW(p) · GW(p)
One other quick remark.
We have talked about it quite a bit, and I don't believe that we're a cult. However, in every conversation I play out in my head where I try to talk about what we're doing here with someone who's not part of it, they start to think we're a cult within about thirty seconds. I'm almost thinking of using "I've joined a cult" as my opening line, to get it out of the way.
Replies from: pre, David_Gerard, pjeby, SoullessAutomaton, andrewc
↑ comment by pre · 2009-04-20T09:45:59.774Z · LW(p) · GW(p)
That's almost exactly the phrase I used when I pointed this place out to my friends. I added one word, though: "I've joined another cult," I said.
I find that if I talk as though all my groups of friends are cults of various kinds, it takes the "you're in a cult" wind out of their sails.
"Yes, I'm in lots of cults, including this one here with you in it too."
Don't think any of the members of my other cults have wandered in this direction yet, though.
↑ comment by David_Gerard · 2011-04-13T13:52:04.071Z · LW(p) · GW(p)
I'm almost thinking of using "I've joined a cult" as my opening line, to get it out of the way.
I have been tempted to say "Oh, that cult Paul joined? I joined it too. It's pretty good. ... No, it's a bit complicated. You probably don't want to join."
(I swear I have not done this.)
↑ comment by pjeby · 2009-04-20T15:49:43.358Z · LW(p) · GW(p)
However, in every conversation I play out in my head where I try to talk about what we're doing here with someone who's not part of it, they start to think we're a cult within about thirty seconds.
No, no, no... It's not a cult, you've just joined the "movement" to "promote greater awareness" of the "cause" of rationality. ;-)
↑ comment by SoullessAutomaton · 2009-04-20T09:53:03.886Z · LW(p) · GW(p)
Speculation: The best way to avoid looking like a cult may, in fact, be to call ourselves one in an obviously joking fashion. Something that's half Discordianism, half Bayesian Conspiracy.
Replies from: ciphergoth
↑ comment by Paul Crowley (ciphergoth) · 2009-04-20T10:04:29.455Z · LW(p) · GW(p)
This is indeed a tempting avenue of attack, but we must resist the temptation these other organisations succumb to: hiding in obscure language and trying to sound clever. We should emphasise actually being clever instead. This is one reason I think the dojo metaphor is such a mistake - we must at all costs speak straightforwardly, and be seen to do so.
↑ comment by andrewc · 2009-04-23T07:57:17.536Z · LW(p) · GW(p)
Seems more like a political party in form than a cult per se. Putting aside the distasteful connotations of the word politics, most political parties are (or at least were at their inception) groupings of people who agree on a set of values and a philosophy.
Most cults don't permit the degree of participation from peripheral semi-lurkers who only fractionally accept the principles that this site does.
Anyway, I voted the post down because these meta-discussions are boring.
Replies from: ciphergoth
↑ comment by Paul Crowley (ciphergoth) · 2009-04-23T08:28:52.137Z · LW(p) · GW(p)
What bores you is obviously way too subjective for me to discuss further, but if you think that good is unlikely to come of this discussion, I'd be interested to know why.
Replies from: andrewc
↑ comment by andrewc · 2009-04-23T11:11:21.673Z · LW(p) · GW(p)
A serious question deserves a serious answer, so here it is, even though, coming from a peripheral semi-lurker, it's probably not relevant to your program. My motivations for coming here are entertainment on the one hand, and trawling for insights and ideas I can use at work on the other.
I'm a rationalist, but not a Rationalist. I cringe at the idea of a self-identified Rationalist movement or organisation in the same way I cringe at Richard Dawkins's 'Bright' movement. I think there is a danger of a sort of philosophical isolationism, where participants forget that rationalism and materialism are alive and well in many scientific professional societies, political organisations, educational institutions and families.
I never said no good would come from the discussion - I sincerely hope you accomplish something worthwhile.
Replies from: ciphergoth
↑ comment by Paul Crowley (ciphergoth) · 2009-04-23T11:41:47.773Z · LW(p) · GW(p)
I agree with you about "Brights".
There are of course plenty of rationalists who aren't here, but I think they would benefit from learning about some of the stuff we take for granted here. If there are other attempts to develop a complete (FSVO) and consistent programme for what rationalism is and how to achieve it, I'd like to know more about them.
comment by Alicorn · 2009-04-19T19:51:49.064Z · LW(p) · GW(p)
I don't know if "spread the word about rationalism" is itself a rational edict, but "tell people about fun sites that may interest them" is an Internet edict. Today I sent a link here to a former professor, for instance.
Replies from: Mulciber
↑ comment by Mulciber · 2009-04-20T05:40:57.640Z · LW(p) · GW(p)
In that case, would it be a good goal to make this site more fun, independent of the focus on rationality? That way people would recommend it to each other more, so the rationality information would be more effective.
Replies from: Dojan
↑ comment by Dojan · 2015-01-04T18:43:04.320Z · LW(p) · GW(p)
I don't think it works that way. Do you know of any example of an existing webpage/event/thing where the people behind it said "it needs to be the same but more fun!", and it actually worked? I find Less Wrong to be fun as it is, and I want it to attract people who are attracted to the actual content, rather than some fun-ness sprinkled on top. (I'm not saying Less Wrong can't improve, or that I'd necessarily want to conserve it exactly the way it is.) The reason Wikipedia, for example, is so wildly successful is that it does what it does really well, and that thing is something people want and need. So no, I don't think that would be a good goal :)
(Aware of/sorry for necroposting)
comment by Mulciber · 2009-04-20T05:39:00.062Z · LW(p) · GW(p)
I think works of fiction are the most effective way of spreading the word (logic and rationality). Personally, if I hadn't been exposed to rationalism through science fiction at an early age, I doubt I'd have ever come to this site.
Replies from: SoullessAutomaton
↑ comment by SoullessAutomaton · 2009-04-20T09:46:31.892Z · LW(p) · GW(p)
Anyone who reads science fiction is probably already going to be reasonably receptive to our ideas; what would really make a difference would be promoting rationality in other genres of fiction.
Replies from: MorgannaLeFey
↑ comment by MorgannaLeFey · 2009-04-20T10:54:07.709Z · LW(p) · GW(p)
I think it's time for rationality to find its way into romance novels. (I'm not just being glib.)
Replies from: Eliezer_Yudkowsky, SoullessAutomaton
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-04-20T17:52:14.053Z · LW(p) · GW(p)
Tell us more!
↑ comment by SoullessAutomaton · 2009-04-20T21:08:51.878Z · LW(p) · GW(p)
I think it's time for rationality to find its way into romance novels. (I'm not just being glib.)
Okay, I'm curious now--what would this be like?
I must confess that romance novels are not something I have significant familiarity with.
comment by pangloss · 2009-04-20T05:33:02.093Z · LW(p) · GW(p)
The word is "rationality", and we should be trying to spread it, because rationality (in ourselves and in others) is useful (to ourselves and others).
The proper way to spread it is to show others how rationality can benefit them, and assist them in their development as rationalists.
Don't think about people as divided into two groups: those who are rationalists and those who are not. Rather, think of them as practicing rationalists or potentially practicing rationalists.
There is not a single strategy for spreading the word; the number of techniques needed to help people to become practicing rationalists is almost as large as the number of people there are.
Just as it would make no sense to bury a duckling in the soil and pour water on it, or to throw bits of bread at a turnip seed, it may make no sense to attempt to intellectually nourish your grandparents by giving them a link to this blog, or to try to awaken the rationalist within your significant other by challenging them to a game of chess.
So, rationality enjoins us to spread the word, but, as with all things, it enjoins us to spread the word rationally. In essence, all preaching should be personalized.
comment by MorgannaLeFey · 2009-04-20T11:03:22.685Z · LW(p) · GW(p)
I'm not entirely clear on what you're hoping to accomplish. Get more people reading this particular blog/forum? Get more people to think rationally in general? Yes to both?
I think you realize that this blog simply will not gain in popular appeal. The posts are too long, too complicated, fairly dry, very academic-sounding, and very connected to a complex series of other posts that require too much research and link following/retracing in order to puzzle out what one does not understand or with which one does not have much experience.
I think it might be more fruitful to focus on ways of introducing rationality in other venues, or sneaking it out there through other means, rather than trying to get more people to come here and read these posts.
Replies from: ciphergoth, stcredzero
↑ comment by Paul Crowley (ciphergoth) · 2009-04-20T11:39:16.228Z · LW(p) · GW(p)
No, it's rationality itself I want to promote; this site is just part of that process.
What I want is for fallacies to be widely recognized as fallacies. For example, I want people to be less scope insensitive. I think that improving the level of debate could improve democracy.
↑ comment by stcredzero · 2009-04-21T17:47:30.810Z · LW(p) · GW(p)
I suspect the purpose of this site is to attain a high S/N ratio. If you try to introduce rationality to other sites, what you inject there is generally diluted. So to attain a high S/N ratio, you need a site devoted to that purpose.
comment by Cameron_Taylor · 2009-04-21T15:20:35.536Z · LW(p) · GW(p)
Should we be trying to spread the word?
To the extent that there is a 'we' that is being 'shoulded', I distance myself.
I find OB and LW valuable resources for expanding my mind and LW is a useful place to find discussions that are slightly less stupid than most that our species can manage. I don't use it to define my identity or as a source of social obligation.
Bayes' rule doesn't include a 'should' function.
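(For reference, the rule in question in its standard form - note that it contains only probabilities, with no normative terms:

P(H|E) = P(E|H) · P(H) / P(E)

It says how strongly to believe a hypothesis H after seeing evidence E, and nothing about what one should do with that belief.)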
comment by gwern · 2009-04-20T15:39:00.169Z · LW(p) · GW(p)
Should we be trying to spread the word?
No. The people who value 'intrinsic rationality', who accept the various premises and ideas that are prerequisites for liking LW/OB stuff - they are mostly being reached already. It may not look like it, but there aren't a whole lot of them.
Vastly many more people value utilitarian rationality, but LW/OB stuff doesn't seem to help very much in that respect, and the good things we do have like the near/far view are either unconvincingly new, or taken from elsewhere. If we were to push LW/OB to such people, they would reject it as being useless, conclude it is a failed program, and never ever come near it again. (This is akin to software releases; you don't want to push a buggy release as 1.0, because people will never ever forget.)
comment by Mulciber · 2009-04-20T07:45:16.065Z · LW(p) · GW(p)
I'm curious about why you asked the second question. It seems obvious that "the word" you're talking about is human rationality, that being the whole focus of this community. So why ask people what the word you're asking about is? Is there something more subtle going on here?
comment by JulianMorrison · 2009-04-19T22:01:56.702Z · LW(p) · GW(p)
I'd rather spread the meme.
Replies from: ciphergoth
↑ comment by Paul Crowley (ciphergoth) · 2009-04-19T22:12:34.755Z · LW(p) · GW(p)
I'm afraid I don't understand the distinction you're drawing; can you help?
Replies from: JulianMorrison
↑ comment by JulianMorrison · 2009-04-19T23:35:53.451Z · LW(p) · GW(p)
I think the vast majority of people out there don't understand rationality as a scalar, but as almost a binary have-or-haven't, and they think everyone has quite enough of it merely by virtue of being human, unless intoxicated, emotionally wound up or mentally ill. Even if they think other people don't have it, they assume they have it themselves. "Am I rational enough" is a thing only crazy people ask themselves.
We need to hack people's most basic assumptions about rationality before they will even be receptive to improvement.
Because they aren't receptive, mere advice will be turned away and overt or unsubtle hacks will cause push-back. So we must hack subtly. A meme, spreading by itself rather than offered as advice, is a subtle hack. "Rationalists should win" might be a spreadable meme.
Replies from: SoullessAutomaton
↑ comment by SoullessAutomaton · 2009-04-19T23:40:08.410Z · LW(p) · GW(p)
I agree, in general.
However, memes have a tendency to mutate and be selected for virulence rather than beneficial payload. In particular, something like "rationalists should win" can easily become something more like "winners are rational" which is... not at all what we want.
I suggest aiming to make people curious about rationalism without trying to attach any useful information about it to compact memes.
Replies from: JulianMorrison
↑ comment by JulianMorrison · 2009-04-19T23:51:32.121Z · LW(p) · GW(p)
"If you're so smart, why ain't you rich" is almost a rationalist meme, and it survives intact. It's also a huge problem because it equates near-immutable IQ and winning, and then uses the obvious wrongness of this equation argue for egalitarian idiocracy. But "If you're so rational, why ain't you rich" is sneaky-good and similar enough to hitch a ride. It asks: maybe you aren't rational enough? And suddenly a scale is introduced.
Replies from: pjeby
↑ comment by pjeby · 2009-04-20T04:42:42.251Z · LW(p) · GW(p)
But "If you're so rational, why ain't you rich" is sneaky-good and similar enough to hitch a ride. It asks: maybe you aren't rational enough? And suddenly a scale is introduced.
An interesting data point: those who are rich (powerful, successful with the appropriate sex, etc.) are usually those who are willing to accept unpleasant truths regarding what is required of them.
It is generally not necessary for such people to actually discover or work out those truths, since most of them are readily apparent, available in books or other educational material, and of course learnable via "hard knocks".
So, the rationality that "wins" the most (in bang-for-the-buck terms) lies not so much in being a perfect Bayesian or smart reasoner as in the willingness to accept potentially-unpleasant truths, including those that violate your most cherished ideals and preferences about the way the world should be.
(And unfortunately, the people who are most attracted by the idea of being right, are usually also the people least willing to admit they might be wrong.)
Replies from: JulianMorrison, Vladimir_Golovin
↑ comment by JulianMorrison · 2009-04-20T07:26:32.981Z · LW(p) · GW(p)
I doubt that's all the winning that's possible. They just leaped hurdle number one, non-delusion.
Replies from: pjeby
↑ comment by pjeby · 2009-04-20T15:16:05.195Z · LW(p) · GW(p)
I doubt that's all the winning that's possible. They just leaped hurdle number one, non-delusion.
I'm just saying that leaping that one hurdle is sufficient for the vast majority of people to take huge steps forward in their results. Outside of attempts to advance science or technology, there are very few things in life that can't be had with only that much "rationality".
↑ comment by Vladimir_Golovin · 2009-04-20T08:28:33.507Z · LW(p) · GW(p)
So, the rationality that "wins" the most (in bang-for-the-buck terms) lies not so much in being a perfect Bayesian or smart reasoner as in the willingness to accept potentially-unpleasant truths, including those that violate your most cherished ideals and preferences about the way the world should be.
This is perfectly in line with the definition of epistemic rationality, that is, building an accurate map of reality regardless of the pleasantness of the 'reality landscape' that needs to be mapped.
A map that reflects some features of reality and doesn't reflect others based on their pleasantness to the mapper is not accurate.
Replies from: pjeby
↑ comment by pjeby · 2009-04-20T15:20:47.551Z · LW(p) · GW(p)
This is perfectly in line with the definition of epistemic rationality, that is, building an accurate map of reality regardless of the pleasantness of the 'reality landscape' that needs to be mapped.
That may well be, but in my experience people whose ideal is seeking for "Truth" often have a tendency to reject truths that don't match their other ideals. Or, on the flip side, they acknowledge the truths but become bitter and cynical because actually acting upon those truths would violate their other ideals.
In other words, merely knowing the truth is not enough. It is accepting the truth -- and acting on it -- that is required.
(Subject, of course, to the usual caveat that instrumental rationalists do not require "the" truth, only usable models. Winning rationalists use whatever models produce results, no matter how ludicrous they sound or obviously "untruthful" they are.)
Replies from: Eliezer_Yudkowsky
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-04-20T17:54:35.113Z · LW(p) · GW(p)
It's not clear to me whether you mean that accepting models that "produce results" means you'll arrive at an actually true model, or that you think winners are willing to use obviously false approximations, or that you think winners believe falsely in order to win.
Replies from: pjeby
↑ comment by pjeby · 2009-04-20T19:32:34.552Z · LW(p) · GW(p)
It's not clear to me whether you mean that accepting models that "produce results" means you'll arrive at an actually true model, or that you think winners are willing to use obviously false approximations, or that you think winners believe falsely in order to win.
The first, I don't really care about. Maybe the winner will get something "more true", or maybe they'll be talking phlogiston but still be able to light or put out fires and predict what will happen well enough under most circumstances.
The second is mainly what I mean -- plenty of self-help techniques work, despite having ludicrous or awful theories of why/how they work. I see no point in people waiting for the theories to get better before they make progress.
As for the third, it depends on how you look at it. I think it is more accurate to say that winners are able to suspend disbelief, separating truth from usefulness.
But it's important to understand what "suspending disbelief" actually means here. As I was explaining to a class yesterday, the difference between a confident state and a non-confident state is the number of simultaneous mental processes involved. (Subject to the disclaimer that everything I'm about to say here is not a "true" model, just a useful one!)
In a non-confident state, you run two processes: one to generate behavior, and the other one to limit it: self-critique, skepticism, analysis, etc. And it doesn't matter what the target of that "critique process" is... the theory, the person teaching it to you, your own ability to learn, what other people are thinking of you while you do it, whatever. Makes no difference at all what the content of that critique process is, just that you have one.
Confidence, on the other hand, is just running the behavior-generating process. It's literally "suspension of disbelief", regardless of what the disbelief is targeted at. This is why so many self-help books urge suspension of disbelief while reading and trying the techniques they offer: it is in fact essential to being able to carry out any processes that make use of intuition or unconscious learning and computation.
(Because the conscious process ends up diverting needed resources or distracting one from properly attending to necessary elements of the ongoing experience.)
It's also pretty essential to behaving in a confident way -- when you're confident, you're not simultaneously generating behavior and critiquing; you fully commit to one or the other at any given moment.
Anyway, I think it is a confusion to call this "believing falsely" - it is the absence of something, not the presence of something: i.e., it is simply the mental hygiene of refraining from engaging in mental processes that would interfere with carrying out a desired course of action. Intentionally believing falsely doesn't make any sense, but refraining from interfering with yourself "acting as if" a model is true, is an entirely different ball game.
The real purpose of the made-up theories found in self-help books is to give people a reason to let go of their disbeliefs and doubts: "I can't do this", "I'm no good at this", "This stuff never works," etc. If you can convince somebody to drop those other thoughts long enough to try something, you can get them to succeed. And as far as I have observed, this exact same principle is being taught by the self-help gurus, the marketing wizards, and even the pickup people.
And it's probably a big part of why people think that being "too rational" is a success hazard... because it is, if you can't let go of it when you're trying to learn or do things that require unconscious competence.
comment by Annoyance · 2009-04-20T21:38:51.516Z · LW(p) · GW(p)
Should we be trying to spread the word? If so, what is the word, and how should we be trying to spread it?
Why are you asking whether the word should be spread before determining what it is?
If you're already convinced that you have a message potentially worth spreading, at least have the good grace to admit so -- and to know what it is.
Replies from: Curiouskid
↑ comment by Curiouskid · 2011-11-24T03:41:13.177Z · LW(p) · GW(p)
Another way of putting it is: "how do I compress LW into a soundbite?"