[SEQ RERUN] My Wild and Reckless Youth

post by MinibearRex · 2011-08-07T05:58:34.499Z · LW · GW · Legacy · 9 comments

Today's post, My Wild and Reckless Youth, was originally published on 30 August 2007. A summary (taken from the LW wiki):

Traditional rationality (without Bayes' Theorem) allows you to formulate hypotheses without a reason to prefer them to the status quo, as long as they are falsifiable. Even following all the rules of traditional rationality, you can waste a lot of time. It takes a lot of rationality to avoid making mistakes; a moderate level of rationality will just lead you to make new and different mistakes.


Discuss the post here (rather than in the comments to the original post).

This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Say Not "Complexity", and you can use the sequence_reruns tag or RSS feed to follow the rest of the series.

Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.

9 comments

Comments sorted by top scores.

comment by [deleted] · 2011-08-07T06:59:55.748Z · LW(p) · GW(p)

I was young, and a mere Traditional Rationalist who knew not the teachings of Tversky and Kahneman.

He means Judgment under Uncertainty: Heuristics and Biases, almost certainly. I think at one point there was a reading group around it, but I don't know whatever happened to it.

Today, one of the chief pieces of advice I give to aspiring young rationalists is "Do not attempt long chains of reasoning or complicated plans."

Advice more or less completely ignored by everyone, including EY himself.

To a Bayesian, on the other hand, if a hypothesis does not today have a favorable likelihood ratio over "I don't know", it raises the question of why you today believe anything more complicated than "I don't know". But I knew not the Way of Bayes, so I was not thinking about likelihood ratios or focusing probability density.

I want to point out that thinking about likelihood ratios or focusing probability density is independent of any knowledge of Bayes' theorem. I'd be surprised if any calculation actually occurred.
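To make "favorable likelihood ratio" concrete, here is a minimal sketch (my own illustration, not anything from the post or the comment; all numbers are made up): in odds form, Bayes' theorem says the posterior odds are just the prior odds times the likelihood ratio, so a hypothesis only gains ground on "I don't know" when the evidence is more probable under it than under the alternative.

```python
# A toy Bayesian update in odds form: posterior odds = prior odds * likelihood ratio.
# All numbers are hypothetical, purely for illustration.

def posterior_odds(prior_odds: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """Update prior odds on hypothesis H after observing evidence E."""
    likelihood_ratio = p_e_given_h / p_e_given_not_h
    return prior_odds * likelihood_ratio

# H starts at 1:4 odds (P(H) = 0.2); E is twice as likely under H as under not-H.
odds = posterior_odds(prior_odds=0.25, p_e_given_h=0.8, p_e_given_not_h=0.4)
print(odds)               # 0.5, i.e. 1:2 odds
print(odds / (1 + odds))  # P(H | E) = 1/3
```

A likelihood ratio of exactly 1 leaves the odds untouched, which is what the quoted passage means by a hypothesis lacking a favorable likelihood ratio over "I don't know".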

When I think about how my younger self very carefully followed the rules of Traditional Rationality in the course of getting the answer wrong, it sheds light on the question of why people who call themselves "rationalists" do not rule the world. You need one whole hell of a lot of rationality before it does anything but lead you into new and interesting mistakes.

I don't understand this argument. What does calling yourself a rationalist have to do with not ruling the world? Traditional rationality has all sorts of specialized counter-memes against political activity. Hell, even LW has counter-memes against entering politics. Isn't it plausible that traditional rationalists eschew politics for these reasons, rather than because of EY's thesis, which always seems to be along the lines of "Traditional Rationality is good for nothing, or perhaps less than nothing"?

Traditional Rationality is taught as an art, rather than a science; you read the biography of famous physicists describing the lessons life taught them, and you try to do what they tell you to do. But you haven't lived their lives, and half of what they're trying to describe is an instinct that has been trained into them.

The crushing irony of reading an autobiographical anecdote of EY's in an essay that implicitly says "learn this intuition I stumbled across" is almost too great to bear. Certainly LW-style rationality is no more a science than traditional rationality is.

The way Traditional Rationality is designed, it would have been acceptable for me to spend 30 years on my silly idea, so long as I succeeded in falsifying it eventually, and was honest with myself about what my theory predicted, and accepted the disproof when it arrived, et cetera.

I don't think there's enough data presented here to actually support this claim, depending upon what EY means by "acceptable." Would the community support him financially if he spent 30 years trying to demonstrate quantum consciousness, without producing something intrinsically valuable along the way? I don't think so; even Penrose had to produce sound mathematics to back up his crackpot-ness.

Traditional Rationalists can agree to disagree.

The LW advance in this area seems to consist entirely of agreeing to disagree after quoting Aumann's theorem and continuing the argument well past the point of diminishing returns. In that respect, Traditional Rationalists win, merely because they don't have to put up with as much bullshit from, e.g., religious nutjobs playing at rationality.

Maybe that will be enough to cross the stratospherically high threshold required for a discipline that lets you actually get it right, instead of just constraining you into interesting new mistakes.

If modern LW is any indication, then it probably wasn't enough. Everyone talks about Bayes, but few people do any actual math. EY wrote a whole sequence on quantum physics, writing the Schrödinger operator exactly once. If math is what will save us from making interesting new mistakes, we clearly aren't doing enough of it.

Replies from: Incorrect, jsalvatier
comment by Incorrect · 2011-08-07T19:39:49.621Z · LW(p) · GW(p)

Today, one of the chief pieces of advice I give to aspiring young rationalists is "Do not attempt long chains of reasoning or complicated plans."

Advice more or less completely ignored by everyone, including EY himself.

An alternative interpretation is that we should break up long chains of reasoning into individually analyzed lemmas and break up complicated plans into subgoals.
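There's a simple quantitative reason to favor that interpretation (a toy sketch of my own, with made-up numbers): if the steps of a chain are even slightly unreliable and roughly independent, the reliability of the whole chain decays geometrically with its length, so isolating and separately verifying each lemma is what keeps long arguments usable.

```python
# A toy illustration: if each step of an argument is independently 95% reliable,
# the probability that the whole chain holds decays geometrically with length.

def chain_reliability(per_step: float, steps: int) -> float:
    """Probability that every step holds, assuming independent step failures."""
    return per_step ** steps

for n in (1, 5, 10, 20):
    print(f"{n:>2} steps: {chain_reliability(0.95, n):.2f}")
#  1 steps: 0.95
#  5 steps: 0.77
# 10 steps: 0.60
# 20 steps: 0.36
```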

Hell, even LW has counter-memes against entering politics.

Avoiding discussing politics directly is not the same as not personally entering politics.

Traditional Rationalists can agree to disagree.

The LW advance in this area seems to consist entirely of agreeing to disagree after quoting Aumann's theorem and continuing the argument well past the point of diminishing returns

It's good advice, but only if both parties are truly following it, which is an admittedly implausible prospect.

If math is what will save us from making interesting new mistakes, we clearly aren't doing enough of it.

What about requiring all new users to solve varying numbers of Project Euler problems to comment, vote, post top level, have cool neon color names, etc.? Alternatively or conjunctively, breaking up the site into "fuzzy self help" and "1337 Bayes mathhacker" sections might help.

Replies from: Desrtopa, None, None
comment by Desrtopa · 2011-08-08T15:35:30.938Z · LW(p) · GW(p)

What about requiring all new users to solve varying numbers of Project Euler problems to comment, vote, post top level, have cool neon color names, etc.? Alternatively or conjunctively, breaking up the site into "fuzzy self help" and "1337 Bayes mathhacker" sections might help.

Even assuming that this only filters out people whose contributions are unhelpful and provides useful exercise to those whose contributions are helpful, it still sounds like too much inconvenience.

It can certainly be helpful to apply actual math to a question rather than relying on vague intuitions, but if you don't ensure that the math corresponds to reality, then calculations provide only an illusion of helpfulness, and illusory helpfulness is worse than transparent unhelpfulness.

I'd much prefer a system incentivizing actual empiricism ("I will go out and test this with reliable methodology") rather than math with uncertain applicability to the real world.

comment by [deleted] · 2011-08-07T20:06:37.197Z · LW(p) · GW(p)

Today, one of the chief pieces of advice I give to aspiring young rationalists is "Do not attempt long chains of reasoning or complicated plans."

Advice more or less completely ignored by everyone, including EY himself.

An alternative interpretation is that we should break up long chains of reasoning into individually analyzed lemmas and break up complicated plans into subgoals.

It would be overwhelmingly excellent if people did that.

Hell, even LW has counter-memes against entering politics.

Avoiding discussing politics directly is not the same as not personally entering politics.

True, I should have said "engaging in" or similar.

If math is what will save us from making interesting new mistakes, we clearly aren't doing enough of it.

What about requiring all new users to solve varying numbers of Project Euler problems to comment, vote, post top level, have cool neon color names, etc.? Alternatively or conjunctively, breaking up the site into "fuzzy self help" and "1337 Bayes mathhacker" sections might help.

I don't have any data on these sorts of incentive programs yet.

I disagree that breaking up the site into multiple walled gardens would be helpful, under the principle that there are few enough of us as it is without fragmenting ourselves further.

comment by [deleted] · 2011-08-09T18:34:33.289Z · LW(p) · GW(p)

Because I have nowhere better to post this:

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

Today (9/8/11) #lesswrong IRC user puritan (User:Incorrect)
earned ten paper-machine points
by successfully executing a CSRF attack against LW.
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.10 (GNU/Linux)

iQEcBAEBAgAGBQJOQXzUAAoJEDjQ6lJtxNBEe64H/Anw8YaXpaA6wte2nfqv277G
t9eLlSA95+vdew/4UjGGQ8+LBn456g/JRyn6SiTdKyd1sZGeoE/C4H6JMZRqfQD4
nqTJmUqEreF3YjVvYTrthAhsw9zUhAuxx0yCTmzyaaZucGUxJ+Rd4EAM5qVyT2Gx
RBXukG2QQw7BGCJpWX1qPdZPY25hRB3FIbey0Nb43gOBAylx+JNrkWsZbz2Woeol
AH9l0P6SeljAyfz3q9JLItuGf3jUYf0Jq/7SMJTs8D7nmP9SZdAw3yQB7hYCpk1c
4oUzNMUf/n7S9kgyPZ/cYEsFLqrl5Vvq2GSr3Uap+aqAuvlZ3tmrezqaRi9Kad0=
=mOoY
-----END PGP SIGNATURE-----

Public key is on my wiki userpage.

comment by jsalvatier · 2011-08-08T14:45:24.693Z · LW(p) · GW(p)

I think 'do not rule the world' meant something like 'are not highly influential in the world, being CEOs, influential politicians, directors of large scientific projects, etc.'

comment by Unnamed · 2011-08-07T17:48:13.805Z · LW(p) · GW(p)

The way Traditional Rationality is designed, it would have been acceptable for me to spend 30 years on my silly idea, so long as I succeeded in falsifying it eventually, and was honest with myself about what my theory predicted, and accepted the disproof when it arrived, et cetera. This is enough to let the Ratchet of Science click forward, but it's a little harsh on the people who waste 30 years of their lives.

I think this is a case where Eliezer's nontraditional career path caused him to miss out on some of the traditional guidance that young researchers get. If a graduate student tells their adviser that they want to work on some far-fetched research project like quantum neurology, the adviser will have some questions for their student like "What's the first step that you can take to conduct research on this?", "How likely is this to pan out?", "What publications can you get out of this?", and "Will this help you get a job?" Most young researchers have a mentor who is trying to help them get started on a successful career, and the mentor will steer them away from unproductive projects which don't leave them with good answers to these questions.

This careerism has its downsides, but it sets a higher standard than mere falsifiability, which helps keep young researchers from wasting their careers pursuing some silly idea. You have to get tenure before you can do that. (The exception is when the whole field has already embraced the silly idea enough to publish articles about it in top journals and allow researchers to make a career out of it.)

Replies from: Desrtopa
comment by Desrtopa · 2011-08-08T15:21:32.299Z · LW(p) · GW(p)

This approach has tremendous downsides, because so many researchers are encouraged to focus on projects where publishing is easy rather than useful, and a majority of publications are hardly interesting or useful to anyone.

comment by moridinamael · 2011-08-08T03:05:28.343Z · LW(p) · GW(p)

I happen to be in the middle of Zen and the Art of Motorcycle Maintenance right now and I'm amused that this post popped up. It seems almost to be aimed directly at Pirsig, whose primary problem seems (so far) to be that his use of traditional rationality to critique traditional rationality leads to the breaking of his mind. I find myself saying to the book, "Dissolve the question," each time Pirsig reaches a dilemma or ponders a definition, but instead he builds towering recursive castles of thought (often grounded in nothing more than intuition) that would be heavily downvoted if posted here.

That came off as more negative than I had intended, and yet I still mean it.