Leveling Up in Rationality: A Personal Journey
post by lukeprog · 2012-01-17T11:02:55.284Z · LW · GW · Legacy · 59 comments
See also: Reflections on rationality a year out
My favorite part of Lord of the Rings was skipped in both film adaptations. It occurs when our four hobbit heroes (Sam, Frodo, Merry and Pippin) return to the Shire and learn it has been taken over by a gang of ruffians. Merry assumes Gandalf will help them free their home, but Gandalf declines:
I am not coming to the Shire. You must settle its affairs yourselves; that is what you have been trained for... My dear friends, you will need no help. You are grown up now. Grown indeed very high...
As it turns out, the hobbits have acquired many powers along their journey — powers they use to lead a resistance and free the Shire.
That is how I felt when I flew home for the holidays this December. Minnesota wasn't ruled by ruffians, but the familiar faces and places reminded me of the person I had been before I moved away, just a few years ago.
And I'm just so much more powerful than I used to be.
And in my case, at least, many of my newfound powers seem to come from having seriously leveled up in rationality.
Power 0: Curiosity
I was always "curious," by which I mean I felt like I wanted to know things. I read lots of books and asked lots of questions. But I didn't really want to know the truth, because I didn't care enough about the truth to study, say, probability theory and the cognitive science of how we deceive ourselves. I just studied different Christian theologies — and, when I was really daring, different supernatural religions — and told myself that was what honest truth-seeking looked like.
It took 20 years for reality to pierce my comfortable, carefully cultivated bubble of Christian indoctrination. But when it finally popped, I realized I had (mostly) wasted my life thus far, and I was angry. Now I studied things not just for the pleasure of discovery and the gratifying feeling of caring about truth, but because I really wanted an accurate model of the world so I wouldn't do stupid things like waste two decades of life.
And it was this curiosity, more than anything else, that led to everything else. So long as I burned for reality, I was bound to level up.
Power 1: Belief Propagation
One factor that helped religion cling to me for so long was my ability to compartmentalize, to shield certain parts of my beliefs from attack, to apply different standards to different beliefs like the scientist outside the laboratory. When genuine curiosity tore down those walls, it didn't take long for the implications of my atheism to propagate. I noticed that contra-causal free will made no sense for the same reasons God made no sense. I noticed that whatever value existed in the universe was made of atoms. I assumed the basics of transhumanism without knowing there was a thing called "transhumanism." I noticed that minds didn't need to be made of meat, and that machines could be made more moral than humans. (I called them "artificial superbrains" at the time.) I noticed that scientific progress could actually be bad, because it's easier to destroy the world than to protect it. I also noticed we should therefore "encourage scientific research that saves and protects lives, and discourage scientific research that may destroy us" — and this was before I had read about existential risk and "differential technological development."
Somehow, I didn't notice that naturalism + scientific progress also implied intelligence explosion. I had to read that one. But when I did, it set off another round of rapid belief updates. I noticed that the entire world could be lost, that moral theory was an urgent engineering problem, that technological utopia is actually possible (however unlikely), and more.
The power of belief propagation gives me clarity of thought and coherence of action. My actions are now less likely to be informed by multiple incompatible beliefs, though this still occurs sometimes due to cached thoughts.
Power 2: Scholarship
I was always one to look things up, but before my deconversion my scholarship heuristic seems to have been "Find something that shares most of my assumptions and tells me roughly what I want to hear, filled with lots of evidence to reassure me of my opinion." That's not what I thought I was doing at the time, but looking back at my reading choices, that's what it looks like I was doing.
After being taken by genuine curiosity, my heuristic became something more like "Check what the mainstream scientific consensus is on the subject, along with the major alternative views and most common criticisms." Later, I added qualifications like "But watch out for signs that an entire field of inquiry is fundamentally unsound."
The power of looking shit up proved to have enormous practical value. How could I make Common Sense Atheism popular, quickly? I studied how to build blog traffic, applied the major lessons, and within 6 months I had one of the most popular atheism blogs on the internet. How could I improve my success with women? I skim-read dozens of books on the subject, filtered out the best advice, applied it (after much trepidation), and eventually had enough success that I didn't need to worry about it anymore. What are values, and how do they work? My search led me from philosophy to affective neuroscience and finally to neuroeconomics, where I hit the jackpot and wrote A Crash Course in the Neuroscience of Human Motivation. How could I be happier? I studied the science of happiness, applied its lessons, and went from occasionally suicidal to stably happy. How could I make the Singularity Institute more effective? I studied non-profit management and fundraising, and am currently (with lots of help) doing quite a lot to make the organization more efficient and credible.
My most useful scholarship win had to do with beating akrasia. Eliezer wrote a post about procrastination that drew from personal anecdote but not a single experiment. This prompted me to write my first post, which suggested he ought to have done a bit of research on procrastination, so he could stand on the shoulders of giants. A simple Google Scholar search on "procrastination" turned up a recent "meta-analytic and theoretical review" of the field as the 8th result, which pointed me to the resources I used to write How to Beat Procrastination. Mastering that post's algorithm for beating akrasia might be the most useful thing I've ever done, since it empowers everything else I try to do.
Power 3: Acting on Ideas
Another lesson from my religious deconversion was that abstract ideas have consequences. Because of my belief in the supernatural, I had spent 20 years (1) studying theology instead of math and science, (2) avoiding sexual relationships, and (3) training myself in fantasy-world "skills" like prayer and "sensing the Holy Spirit." If I wanted to benefit from having a more accurate model of the world as much as I had been harmed by having a false model, I'd need to actually act in response to the most probable models of the world I could construct.
Thus, when I realized I didn't like the Minnesota cold and could be happy without seeing my friends and family that often, I threw all my belongings in my car and moved to California. When I came to take intelligence explosion seriously, I quit my job in L.A., moved to Berkeley, interned with the Singularity Institute, worked hard, got hired as a researcher, and was later appointed Executive Director.
Winning with Rationality
These are just a few of my rationality-powers. Yes, I could have gotten these powers another way, but in my case they seemed to flow largely from that first virtue of rationality: genuine curiosity. Yes, I've compressed my story and made it sound less messy than it really was, but I do believe I've been gaining in rationalist power — the power of agency, systematized winning — and that my life is much better as a result. And yes, most people won't get these results, due to things like akrasia, but maybe if we figure out how to teach the unteachable, those chains won't hold us anymore.
What does a Level 60 rationalist look like? Maybe Eliezer Yudkowsky + Tim Ferriss? That sounds like a worthy goal! A few dozen people that powerful might be able to, like, save the world or something.
59 comments
comment by Kaj_Sotala · 2012-01-17T14:01:50.840Z · LW(p) · GW(p)
I liked Can the Chain Still Hold You? because the baboon example felt inspiring. But two essentially contentless "yay rationality" posts in a row is overdoing it, and starting to give an "overzealous recent convert caught in a happy death spiral" vibe.
↑ comment by bryjnar · 2012-01-17T15:28:35.774Z · LW(p) · GW(p)
I agree: this post felt like a self-congratulatory collection of applause lights. (I wonder if the habit of linking to previous posts every other word is an indicator of this. Most people aren't going to follow those links; they're just going to think, "Ah yes, a link to a Sequence post. That must be an accepted thing!")
I also think there's a worrying tendency towards ideology here. Luke suggests that "levelling up" in rationality led him to a bunch of beliefs, which are, coincidentally, fairly widely accepted views around here. Cue a round of back-slapping as we all congratulate ourselves on how rational we are.
But rationality doesn't necessarily lead you anywhere: the evidence should do that. And if the evidence starts pointing somewhere else, you should move. And so I'm a bit wary of the tendency to draw too close a link between any particular beliefs and rationality. You never want to be in the situation where you're trying to persuade someone of your views and you find yourself saying "But it's the rational thing to believe!" instead of presenting the evidence.
Also: hints of the No True Scotsman fallacy.
When genuine curiosity tore down those walls, it didn't take long for the implications of my atheism to propagate.
Woe betide you who don't come to the same conclusions as Luke: your curiosity clearly isn't genuine!
Now this may all sound a bit harsh, but frankly I really wish Luke would stop writing posts like this and start doing some hard-headed thinking about some actual problems.
↑ comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2012-01-17T15:47:25.005Z · LW(p) · GW(p)
I do agree with some of your points, especially with the fact that this post is annoyingly self-congratulating in places (though I know I have a bias to find that annoying, and so I don't necessarily think my annoyance means much). However, I don't think this post is content-less... in fact, content is a lot of what you're disagreeing with, i.e. that specific Less Wrong beliefs are associated with being rational.
↑ comment by Jonii · 2012-01-17T20:51:56.709Z · LW(p) · GW(p)
I do think it is good to have some inspirational posts here that don't rely that much on actual argumentation but rather paint an example picture of where you could be when using rationality, what rationality could look like. There are dangers to that, but still, I like these.
↑ comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2012-01-17T14:51:50.734Z · LW(p) · GW(p)
I don't necessarily think that this post was written in the same spirit as the previous one. 'Can the Chain Still Hold You' was one of lukeprog's more abstract, ideology-related posts, whereas this one, though it may be 'yay, rationality', is at least very specific in what it's yay-ing about.
↑ comment by ChrisHallquist · 2012-01-17T19:55:42.331Z · LW(p) · GW(p)
I disagree. The previous post spoke in parables about animals with questionable applicability to humans; this one said "here is what I did to pop out of my Evangelical bubble and make my life better."
↑ comment by lukeprog · 2012-01-17T16:28:15.681Z · LW(p) · GW(p)
I'm usually confused by these kinds of comments. Reflections on rationality a year out had even less content and was less well-written and less detailed, but was massively upvoted. Kevin hit the nail on the head with his comment on my Existential Risk post:
I'd like to point out some lukeprog fatigue here, if anyone else wrote this article it would have way more points by now.
Also, this is a personal contribution to the long-running "What good is rationality?" discussion on LessWrong.
↑ comment by Kaj_Sotala · 2012-01-17T17:13:12.627Z · LW(p) · GW(p)
Well, yes, there is lukeprog fatigue, but not in the sense that you probably mean it. One, or even a couple, of such posts from the same person are fine. It's good to have information about how rationality has impacted somebody, and it's motivational as well. But when the same person keeps posting about the same things, over and over again, it ceases to have motivational value. And while it's good to summarize old material, clarify it or make it sexier (your Existential Risk post was great, in those respects), simply linking to old stuff or restating it provides little of value.
- This is your third "yay rationality" post within a relatively short time: it was preceded by Can the Chain Still Hold You and What Curiosity Looks Like. So the motivational impact is rapidly hitting zero.
- "Power 1", is basically a recap of What Curiosity Looks Like (which by itself is less than two weeks old!), plus it explains things about your Christian background that you've already told us about.
- "Power 2" is a recap of The Neglected Value of Scholarship, as well as a plug for some of your later posts.
- "Power 3" recaps parts of your personal history that most people here are already perfectly aware of.
You're right in that a lot of this material would be more appreciated if it was coming from somebody else, but it's not because we've started to take it for granted that you're producing quality material. It's because coming from somebody else, it would provide an independent datapoint about this stuff being useful for someone. You restating the ways in which this has been useful to you only tells us that you haven't changed your mind about this being useful to you.
(And I second the "don't take this personally" bit - I still upvote most of your posts, and I think you're one of the best posters on the site. It's just this particular series of posts that doesn't thrill me.)
↑ comment by ChrisHallquist · 2012-01-17T19:58:57.159Z · LW(p) · GW(p)
"Power 2" is a recap of The Neglected Value of Scholarship, as well as a plug for some of your later posts.
Not really. The previous post focused on the example of William Lane Craig, who is just an awful example for rationalists to emulate. This section is more "scholarship allowed me to do X, Y, and Z to make my own life better," which is much more helpful.
↑ comment by malthrin · 2012-01-17T16:36:53.683Z · LW(p) · GW(p)
You're harder to relate to now that you've made progress on problems the rest of us are still struggling with. Don't take it personally.
↑ comment by lukeprog · 2012-01-17T16:42:01.403Z · LW(p) · GW(p)
Yeah. I've gotten that comment before. No offense taken. I haven't devoted enough cycles to this problem yet. If suggestions come to mind, feel free to share them.
↑ comment by katydee · 2012-01-17T18:36:50.613Z · LW(p) · GW(p)
I've worked out some ways to avoid certain variants of this problem, but most of them really boil down to "trick people with framing," which isn't really desirable-- both because it's at least a little deceptive and because it generally minimizes legitimate progress and serves mostly to make the other party feel better about their existing state.
↑ comment by erratio · 2012-01-17T17:38:37.837Z · LW(p) · GW(p)
The tones of the two articles are very different, and that affects how we perceive the (lack of) content.
I think part of it is that you've broken the rule about making high-status claims in public (or to put it another way, you've broken the show don't tell rule). We all already know that you're intelligent, curious, persuasive, a good researcher, etc etc. because we've read your highly impressive posts where you've compressed ridiculous amounts of research into a nice readable form backed up with a million references. But now you've made a post that's all about how great rationality has been for you, and a lot of it involves rehashing how great you are. You didn't just improve Common Sense Atheism's traffic, you made it one of the most popular atheism blogs on the internet. You didn't just start working on x-risk, you were appointed Executive Director of SingInst. You haven't just made progress against akrasia, you've mastered the algorithm. And so on.
Compare to the other post, where we didn't already have overwhelming evidence of awesomeness and the tone is much more humble.
I also have a strong dislike of linkspam without supporting content and particularly disliked most of the linking in the last paragraph, but that's probably more of a personal thing.
↑ comment by ChrisHallquist · 2012-01-17T20:05:20.652Z · LW(p) · GW(p)
You didn't just improve Common Sense Atheism's traffic, you made it one of the most popular atheism blogs on the internet. You didn't just start working on x-risk, you were appointed Executive Director of SingInst. You haven't just made progress against akrasia, you've mastered the algorithm.
But these things are true, at least the first two are. And knowing what Luke feels helped him in achieving these things is very good to know. Previously, I hadn't known Luke did a ton of research in driving traffic to make Common Sense Atheism what it was, and I'm glad to know that.
↑ comment by erratio · 2012-01-17T21:46:51.332Z · LW(p) · GW(p)
But these things are true
The visceral reaction to a high-status claim has nothing to do with truth values.
Previously, I hadn't known Luke did a ton of research in driving traffic to make Common Sense Atheism what it was
Same here, but that doesn't detract from any tone issues.
↑ comment by [deleted] · 2012-01-17T17:09:29.173Z · LW(p) · GW(p)
Kevin is correct, but there may also be an element of "What good is rationality?" fatigue involved. This discussion has gone on for quite a while but it's mostly been dominated by applause light-ish anecdotes rather than deep theories. I suspect people are tiring of this kind of post, which would explain why your post has fewer upvotes than previous ones on the same topic.
↑ comment by lukeprog · 2012-01-17T17:36:54.905Z · LW(p) · GW(p)
That may also be the case. In my case, I used my personal reflections on the value of rationality as a first step in working towards a deep theory of why rationality helps sometimes and not others, so that I might take a crack at the hard problem of how to create superheroes.
↑ comment by Ezekiel · 2012-01-17T14:39:58.014Z · LW(p) · GW(p)
I wouldn't call Can the Chain Still Hold You? a "Yay rationality" post; more just generally inspirational. But yeah, I'm with you.
Fittingly, this website provides several examples of how even when talking about rationality, we (humans) still have a tendency to lapse. One of the most annoying: most of the metaethics sequence seems to be designed to make the reader feel good about Yudkowsky's metaethical position, rather than argue for it and/or explain it.
↑ comment by Jonathan_Graehl · 2012-01-17T21:07:12.056Z · LW(p) · GW(p)
(acknowledging that Kaj only said his posting exudes a vibe, not that the vibe reveals his actual state)
I don't have any reason to suspect lukeprog of overconfidence or overexuberance. He really is more successful and happy than he'd have been without thinking about how to optimize both, and then acting on those thoughts. It's been a long time now; illusions would have crashed.
If you're forgiving enough of yourself, it doesn't grate as much to hear someone congratulating themselves. It's annoying when someone wastes time by falsely signaling their great success via some questionable method, but I've noticed that when I feel under-appreciated by others, I can become excessively skeptical about others' bragging. (apologies if the psychoanalysis isn't relevant to you; it definitely is to me).
The antidote to excessive cheerleading is more focused, concrete advice toward the 5-second end of the spectrum. If you're encouraging luke to mix it up, I'd agree, in that it's probably best for all of us, whether or not we're posting. Too much thinking for too long at a high level of abstraction can become self-rewarding and divorced from reality.
↑ comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2012-01-17T21:09:12.028Z · LW(p) · GW(p)
I've noticed that when I feel under-appreciated by others, I can become excessively skeptical about others' bragging.
Incredibly true, incredibly irritating tidbit about human nature. It happens to me too, and makes me feel like a huge asshole when I catch myself.
↑ comment by FiftyTwo · 2012-01-18T00:15:41.149Z · LW(p) · GW(p)
I like this post personally, and as well as being enjoyable to read I think it can serve two practical purposes:
It provides a useful starting point for people entering the community who aren't already aware of the ideas discussed, and points them towards further reading. It's easy to forget how common the question "what's the point of being rational" really is. Providing evidence that we can offer something more than philosophical masturbation is a good thing for attracting people into the community.
It's also nice to be reminded occasionally that this all has a point and a purpose, particularly when one loses focus or hope. Personally, it's reminded me of what I've gained and what more I have to do.
comment by Louie · 2012-01-18T22:38:17.555Z · LW(p) · GW(p)
I'm concerned with the overuse of the term "applause light" here.
An applause light is not as simple as "any statement that pleases an in-group". The way I read it, a charge of applause lights requires all of the following to hold:
1) There are no supporting details to provide the statement with any substance.
2) The statement is a semantic stopsign.
3) The statement exists purely to curry favor with an in-group.
4) No policy recommendations follow from that statement.
I don't see a bunch of applause lights when I read this post. I see a post overflowing with supporting details, policy recommendations, and the opposite of semantic stopsigns -- Luke actually bent over backwards and went to the trouble of linking to as many useful posts on the topic as he could find. By doing so, he's giving the curious reader a number of pointers to what others have said about the subject he's discussing -- so that they can go learn more if they're actually curious.
Really, what more could he have done? How was he supposed to discuss the massive utility he's gained from rationality without mentioning rationality? To make his post shorter, Luke had to use several terms that most people around here feel good about. Yay for Luke! He saved me from having to read a longer, less information-dense post by writing it this way. I understand the sanity benefits of guarding yourself against blatant applause lights, but at the same time, it would be rather perverse of me to automatically feel unhappy in response to Luke mentioning something that makes me happy.
It's not an affective death spiral for me to feel happy when someone tells me an inspiring life-success story that involves terms that I happen to have a positive affect for. It's having a reaction that fits the facts. I'm happy Luke is having a good life. It's relevant for him to tell me about it here on Less Wrong because rationality played a big part in his success. And I'm even more overjoyed and grateful to Luke that he's leaving a trail of sign-posts behind him, pointing the way forward as he levels up in rationality. Now is the time for him to be documenting this... while it's still fresh... so that one day when he forgets how he got to where he is, there will still be an overly detailed record to point people to.
↑ comment by lsparrish · 2012-01-19T02:22:55.156Z · LW(p) · GW(p)
I agree. Whatever the reason for my occasionally being annoyed or uncomfortable with lukeprog's writing, it's probably not that it is Applause Lights. It may even be that he is following Best Practices and I should just adjust to it.
On the other hand, I think I get the feeling that when someone links to something external, they are trying to activate a cognitive shortcut. It's like being fast-talked by a car salesman: I'm being given a reference instead of a whole concept. I'm afraid that I will accept something without actually forming a complete mental model of it. I get the same sensation when someone uses footnote references to something that is unexplained or not apparent.
That's the problem with scholarship -- it can be tricky to recreate an idea in the mind of another, because the other mind needs time to adjust to whatever you adjusted to. So if you dump something large and complex on someone it can end up seeming like an appeal to authority, even when there are good reasons -- because a good reason usually needs time and deliberate attention to be understood.
Also, one has to keep in mind that having dependencies on many sources can make something less persuasive -- say you have five sources that independently seem 90% persuasive -- your result now only seems 59% persuasive.
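To spell out that arithmetic (a minimal sketch, assuming the argument needs all five sources to hold, and treating each "90% persuasive" as an independent probability of 0.9):

```latex
% Conjunction of five independent sources, each credible with probability 0.9:
0.9^5 = 0.59049 \approx 59\%
```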
On an unrelated note, it is kind of weird to me when people use lukeprog's first name instead of his complete username, because it is also my first name (and that of a couple of other LWers). This may just be because Luke is a kind of uncommon name, and I have not previously had to get used to it referring to someone else.
↑ comment by Solvent · 2012-01-19T10:16:00.197Z · LW(p) · GW(p)
Apparently, Luke is the 41st most common baby name. I know 5 Lukes, according to Facebook. Are you sure it's that uncommon?
↑ comment by lsparrish · 2012-01-19T14:57:08.694Z · LW(p) · GW(p)
Yes, I'd say there being 40 more common names in existence is enough to make it "kind of" uncommon. Certainly enough to explain why someone with a fairly small social group does not see it overlapping many times. I don't contest that in a large group like LW (or a moderately large Facebook network) you would expect to see multiple Lukes, indeed that is the point of using usernames.
↑ comment by Louie · 2012-01-19T07:55:15.766Z · LW(p) · GW(p)
I know lukeprog personally, but I suppose I should call him lukeprog on LW for other people's benefit. Thanks for the reminder.
↑ comment by Kevin · 2012-01-19T09:00:15.196Z · LW(p) · GW(p)
I know lukeprog personally and I call him lukeprog.
comment by Risto_Saarelma · 2012-01-17T14:33:02.877Z · LW(p) · GW(p)
I get a scummy vibe off Tim Ferriss and don't really feel like buying self-help that is being sold by namedropping him. Trying to sell a research program combining cutting-edge science and advanced philosophy by namedropping Tim Ferriss will make me walk away at a brisk pace.
↑ comment by lukeprog · 2012-01-17T16:22:27.861Z · LW(p) · GW(p)
What Tim Ferriss has nailed is goal-directed behavior. He's a master of instrumental rationality, in many ways.
↑ comment by knb · 2012-01-18T23:42:26.794Z · LW(p) · GW(p)
I'll chime in to support Tim Ferriss. I read the 4-Hour Workweek in college and started a few businesses based on the model he described. One of them worked fairly well and is now my primary source of (largely passive) income. I've also had excellent results from the 4-Hour Body.
More generally, his writings on "lifestyle design" (basically applied Epicureanism) are a great if not really mind-blowing source of insight. He is a fantastic salesman, but being good at sales is actually not a bad or scummy thing, in my opinion. He has helped a lot of people (like me), and I think he deserves credit for that.
↑ comment by badger · 2012-01-17T17:41:28.638Z · LW(p) · GW(p)
Ferriss strikes me as a master instrumental rationalist, but not necessarily a great purveyor of advice. Everything he does seems well-optimized to enhance his own reputation. The main purpose of his advice is to make him look like a guru rather than to level up his readers. Ferriss is a good example (substituting in your own values, unless you also want to be seen as a badass guru), if not a great teacher.
↑ comment by Desrtopa · 2012-01-17T18:08:39.717Z · LW(p) · GW(p)
I think if he were optimizing his own reputation really well, we wouldn't be having this conversation in the first place.
My own exposure to Tim Ferriss has been very limited, but while I don't know if I would go so far as to say that he gives me a "scummy" vibe, I get the feeling that I ought to be treating him with more than a little suspicion.
Maybe it's impossible to maximize popularity in our population while still appealing to people who're skeptical or inclined to critical thinking. I'm reminded of this experiment, where the winning estimate is one that assumed a very low level of recursive thinking in the average participant. Trying to account for "smarter" participants would only have resulted in a less accurate answer. But Tim Ferriss's reputation is not so glowing that I expect that a person with the same resources at their disposal couldn't do better.
↑ comment by gwern · 2012-01-17T19:46:50.351Z · LW(p) · GW(p)
I think part of the problem is that he's openly manipulative and exploitative, which regardless of whether it works, is going to put people off - if only because they don't want people to think they are like Ferriss too. (Though they still pay close attention; it's all very Hansonian.)
For example, I have something of a standing invitation from an assistant to write a post for the Ferriss blog about brain training and dual n-back; no compensation, of course, since the traffic is supposed to be worth it to me. And it probably is, because it's a popular blog - I suspect just a link in the footer to my DNB FAQ and other pages would result in a traffic spike greater than I've ever gotten from Hacker News or Reddit or LessWrong. But nevertheless, I've found it hard to motivate myself to write such a post and haven't written it yet.
↑ comment by ChrisHallquist · 2012-01-17T20:03:00.611Z · LW(p) · GW(p)
I have mixed feelings about Tim Ferriss. His books have plenty of legitimate advice, but he tends to understate the difficulty of doing the things he talks about, because "X made easy" is better for selling books than "how to do X effectively if you're willing to put a ton of work into it for long-term payoffs."
↑ comment by Ezekiel · 2012-01-17T14:40:55.936Z · LW(p) · GW(p)
What do you mean by "scummy"?
↑ comment by Risto_Saarelma · 2012-01-17T14:46:11.508Z · LW(p) · GW(p)
Like he's the SEO-age equivalent of the used-car-salesman cultural archetype.
comment by Aleksei_Riikonen · 2012-01-17T16:28:11.591Z · LW(p) · GW(p)
Hmm, I for one don't share the negative reactions that several other commenters seem to feel now. I felt very glad upon reading this "leveling up" post.
I was especially thinking that this is a very cool first LW article for people to bump into (and therefore shared this on some social networks). In this vein, I very much like the criticized-by-some feature that every other word is a link to a previous article. It's useful for those new people who might be inspired to check this stuff out in more detail.
↑ comment by Dustin · 2012-01-17T17:32:05.684Z · LW(p) · GW(p)
I, too, like this post for its value as something to point others to when trying to explain the value of rationality.
This has been discussed before, but linking to the Sequences and saying "read that" always feels like a dodge to me, so I find real value in this post.
It seems like part of what lukeprog has to do is balance between the regular readers of this site and new(ish) readers of this site.
↑ comment by robertzk (Technoguyrob) · 2012-01-18T12:49:10.707Z · LW(p) · GW(p)
What were the reactions of your friends?
comment by ChrisHallquist · 2012-01-17T19:12:37.025Z · LW(p) · GW(p)
Okay, after criticizing Luke's last post, I just want to say this (particularly the scholarship section) is a totally awesome post that makes me want to get my butt in gear on a whole bunch of different things. I've just added "look into science of happiness" and "study what Luke wrote on how to beat procrastination" to my to-do list.
But a question: when you're researching a question you know fuck-all about, how do you filter out all the crap? I almost feel silly asking this, because I've successfully done this with some subjects, but I've also sometimes had trouble doing this.
In particular, when I had my book (soon to become "my first book," if all goes well) coming out, some of the main books I got on book promotion seemed geared to promoting the author's book PR business. I did get in touch with the author, and got references from former clients of his, who persuaded me his services were a waste of money. Problem was, I had invested enough time going down that blind alley that I never got together a coherent plan for promoting my book. What should I have done differently in my "research book promotion" strategy?
comment by ChrisHallquist · 2012-01-17T20:44:07.467Z · LW(p) · GW(p)
Oh, and by the way: all the links? Very helpful. Even if 90% of the people here have seen that stuff already, it's helpful for the 10% who weren't here or weren't paying attention when that particular post was written.
comment by Mark_Eichenlaub · 2012-01-18T09:14:42.058Z · LW(p) · GW(p)
I am curious: when someone says they are happy, how do you judge the credibility of the claim?
There are certainly a lot of reasons to trust Luke's judgment. His other claims are verifiable, and given the nature of his message and the community he's delivering it to, he probably feels a strong desire both to tell the truth and to understand the truth about himself.
Nonetheless, I suspect there are far more people who claim to be happy than who really are, essentially due to belief in belief. What are some tests? For example, are there people known to have high emotional intelligence who know Luke well and think he exhibits high happiness levels?
I hope this doesn't come off as a personal questioning of Luke in particular. It seems like a difficult problem in general, but nonetheless an important one if I want to study what people have to say about happiness.
comment by [deleted] · 2012-01-17T17:53:34.101Z · LW(p) · GW(p)
Luke, I have a question related to your recent posts. Do you think it's possible to be good or learned or both in philosophy? If so, in what does this consist?
↑ comment by lukeprog · 2012-01-17T18:42:52.094Z · LW(p) · GW(p)
Yes, I think it's possible to be good and learned in philosophy. "Learned" (LURN-EDD) presumably means something like "knows about lots of philosophers and their arguments." "Good" presumably means something like "consistently gets correct or improved answers on tough philosophical problems." Being a good philosopher is more important than being a learned philosopher, but most universities focus on producing philosophers that are learned but not good.
↑ comment by katydee · 2012-01-18T21:54:47.345Z · LW(p) · GW(p)
Note also that being learned is sufficient for appearing good to most people, but not for actually making improved decisions except in a few edge cases. Overall, being good is less impressive but more effective. The obvious implication is that most philosophy education is signalling.
↑ comment by [deleted] · 2012-01-18T22:03:11.011Z · LW(p) · GW(p)
That may well be true, but it's worth pointing out that your conclusion isn't actually implied by your premises. If you want to say that your conclusion is evidenced by your premises, we should first go about trying to figure out if Luke is right about academic philosophy, and if you're right about learning being higher-status than quality. These claims are, again, plausible, but at the moment pretty much conjectures.
↑ comment by [deleted] · 2012-01-18T22:18:13.033Z · LW(p) · GW(p)
Hmm, I'm getting some downvotes here and I'm having a hard time interpreting them. Is what I've said false about the logic of katydee's claim?
↑ comment by katydee · 2012-01-19T03:33:50.287Z · LW(p) · GW(p)
I'm implicitly agreeing with luke's post (thanks to my own experiences with academic philosophy) and taking parts of it as premises. A pseudo-formalized version of the claim in my post might go something like:
1. (me) Being learned is sufficient for appearing good but not for making better decisions
2. (lukeprog) Being good improves your decision-making/ability to resolve philosophical issues
3. (lukeprog) Most universities focus on producing philosophers that are learned but not good
4. (general definition) Something "is signalling" (is primarily for signalling) when it is oriented towards improving appearances rather than actual abilities/qualities
5. (from 3, implied step) Most philosophy programs focus on producing people who appear good rather than who are good
6. (from 1, 4, 5) Most philosophy programs are signalling
I upvoted you, since I didn't really think it was an unwarranted remark, but I suspect that others thought your criticism was low-level, seeing it as obvious that I was basing some of what I said on Luke's post. Also, keep in mind that I don't actually need to prove those premises to show that my conclusion follows from them or is implied by them; an argument can be valid without having true premises.
↑ comment by [deleted] · 2012-01-19T16:55:21.113Z · LW(p) · GW(p)
Also, keep in mind that I don't actually need to prove those premises to show that my conclusion follows from them or is implied by them; an argument can be valid without having true premises.
Right, but my point was that your conclusion doesn't follow from your premises (though it is evidenced by them). The reason is that 5 does not follow from 1, 2, and 3, and so the argument is invalid. It could be, for instance, that philosophy programs focus on producing learned rather than good philosophers because they are incapable of producing good philosophers over and above learned ones (suppose we grant my premise that there is no such thing as a good philosopher, for example).
I'm not actually contesting the truth of your premises or your conclusion.
↑ comment by [deleted] · 2012-01-17T20:33:18.545Z · LW(p) · GW(p)
I guess I'm torn between, on the one hand, the impression that you're exactly right and that I find myself saying "this is good/bad" about works of philosophy and philosophers all the time. On the other hand, Socrates knew basically nothing by modern standards, provided no real answers to any tough questions, and argued terribly. He himself said that there was no method, no body of knowledge, and no possible skill in philosophy. And yet one would be hard pressed to argue that there has been a greater or more important philosopher in our history.
If you have any way to reconcile these, or refute one or the other opinion, I would be most appreciative. It seems to me that a condition on the truth of your recent posts is that there is something like being good at philosophy, so I wonder if you see Socrates as a challenge to that.