No Really, Why Aren't Rationalists Winning?

post by Sailor Vulcan · 2018-11-04T18:11:15.586Z · LW · GW · 90 comments

Reply to Extreme Rationality: It's Not That Great [LW · GW], Extreme Rationality: It Could Be Great [LW · GW], The Craft and the Community [LW · GW], and Why Don't Rationalists Win? [LW · GW]

I’m going to say something which might be extremely obvious in hindsight:

If LessWrong had originally been targeted at and introduced to an audience of competent business people and self-improvement health buffs instead of an audience of STEM specialists and Harry Potter fans, things would have been drastically different. Rationalists would be winning.

Right now, rationalists aren’t winning. Rationality helps us choose which charities to donate to, and as Scott Alexander pointed out in 2009 [LW · GW] it gives clarity of mind benefits. However, as he also pointed out in the same article, rationality doesn't seem to be helping us win in individual career or interpersonal/social areas of life.

It’s been nearly ten years since then, and I have yet to see any sign that this fact has changed. I considered the possibility that I just hadn’t heard about other rationalists’ practical success due to having not become a rationalist until around 2015, or simply because no one was talking about their success. Then I realized that was silly. If rationalists had started winning, at least one person would have posted about it here on lesswrong.com. I recently spoke to Scott Alexander, and he said he still agreed with everything he said in his article.

So rationalists aren’t winning. Why not? The Bayesian Conspiracy podcast (if I recall correctly) proposed the following explanation in one of their episodes: that rationality can only help us improve a limited amount relative to where we started out. They predicted that people who started out at a lower level of life success/cognitive functioning/talent cannot outperform non-rationalists who started out at a sufficiently high level.

This argument is fundamentally a cop-out. When others win in places where we fail, it makes sense to ask, “How? What knowledge, skills, qualities or experience do they have which we don't? And how might we obtain the same knowledge, skills, qualities or experience?” To say that others are simply more innately talented than we are, and leave it at that, doesn't explain the mechanism behind their hypothesized greater rate of improvement after learning rationality. It tells us why but not how. And if there was such a mechanism, could we not replicate it so we could improve more anyway?

So why aren't we winning? What’s the actual mechanism behind our failure?

It's because we lack some of the skills we need to win - not because we don't want to win, and not because we're lazy.

Rationalists are very good at epistemic rationality. But there's this thing that we've been referring to as "instrumental rationality" which we're not so good at. I wouldn’t say it’s just one thing, though. Instrumental rationality seems like many different arts that we're lumping together.

It's more than that, though. We're not just lumping together many different arts of rationality. As anyone who's read the sequence A Human’s Guide to Words [? · GW] would know, categorization and labeling are not neutral actions for a human. By classifying all rationality as one of two types, epistemic or instrumental, we limit our thinking about rationality. As a result of this classification, we fail to acknowledge the true shape of rationality’s similarity cluster [LW · GW].

The cluster’s true shape is that of instrumental rationality: it is the art of winning, a.k.a. achieving your values. All rationality is instrumental, and epistemic rationality is merely one example of it. The art of epistemic rationality is how you achieve the value of truth. Up until now, "instrumental rationality" has been a catch-all term we've been using for the arts of winning at every other value.

While achieving the value of truth is extremely useful for achieving every other value, truth is still only one value among many. The skills needed to achieve other values are not the same as the skills needed to achieve the value of truth. That is to say, epistemic rationality includes the skill sets that are useful for obtaining truth and “instrumental rationality” includes all other skill sets.

Truth is a precious and valuable thing. It's just not enough by itself to win in other areas of life.

That might seem obvious at face value. However, I'm not sure we understand that on a gut level.

I have the impression that many of us assume that so long as we have enough truth, everything else will simply fall into place - that we’ll do everything else right automatically without needing to really develop or practice any other skills.

Perhaps that would be the case with enough computing power. An artificial superintelligence could perhaps play baseball extremely well with the following method:

1. Use math to calculate where the particles in the bat, the ball, the air, and all the players are moving.

2. Predict which particles have to be moved to and from what positions in order to cause a chain reaction that results in the goal state. In this case, the goal state would be a particle configuration that humans would identify as a won game of baseball.

3. Move the key particles to the key positions. If you fail to reach the goal state, adjust your priors accordingly and repeat the process.

An artificial superintelligence could perhaps navigate relationships, or discover important scientific truths, or really anything else, all by this same method, provided that it had enough time and computing power to do so.

But humans are not artificial superintelligences. Our brains compress information into caches for easier storage. We will not succeed at life just by understanding particle physics, no matter how much reductionism we do. As humans, our beliefs are organized into categorical levels. Even if we know that reality itself is really all just one level, our brains don't have the space to contain enough particle-level knowledge to succeed at life (assuming that particles really are the base level, but we’ll leave that aside for now). We need that knowledge compressed into different categorical levels or we can't use it.

This includes procedural knowledge like "how many particles need to be moved to and from what positions to win a game of baseball". If our brains were big enough to be capable of knowing that, then all we would need to do to win is to obtain that knowledge and then output the correct choice.

For an artificial superintelligence, once it has enough relevant knowledge, it would have all that it needs to make optimal decisions according to its values.

For a human, given the limits of human brains, having enough relevant knowledge isn't the only thing needed to make better decisions. Having more knowledge can be extremely useful for achieving one's other goals besides just knowledge for knowledge’s sake, but only if one has the motivation, skills and experience to leverage that knowledge.

Current rationalists are really good at obtaining knowledge, at least when we manage to apply ourselves. But we're failing to leverage that knowledge. For instance, we ought to be dominating prediction markets and stock markets and producing a disproportionately high number of superforecasters, to the point where other people notice and take an interest in how we managed to achieve such a thing.

In fact, betting in prediction markets and stock markets provides an external criterion for measuring epistemic rationality - just as martial arts success can be measured by the external criterion of hitting your opponent.
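To make "external criterion" concrete, here is a minimal sketch (my own illustration, not something from the original post) of one standard way to score probabilistic forecasts against outcomes, the Brier score, where a lower score means better-calibrated predictions:

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between stated probabilities and what actually happened.

    forecasts: probabilities in [0, 1] that each event occurs
    outcomes:  0/1 results for the same events
    """
    assert len(forecasts) == len(outcomes)
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical example: two forecasters scored on the same four events,
# three of which actually happened.
events = [1, 1, 0, 1]
forecaster_a = [0.8, 0.7, 0.2, 0.9]   # hedged but well calibrated
forecaster_b = [1.0, 1.0, 0.0, 0.5]   # confident, and badly wrong once

print(brier_score(forecaster_a, events))  # 0.045
print(brier_score(forecaster_b, events))  # 0.0625 (worse, despite three perfect calls)
```

A prediction market effectively runs this kind of scoring for you: your profit or loss is external feedback on how well your probabilities matched reality.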

So why haven't we been dominating prediction and stock markets? Why aren't we dominating them right now?

In my own case, I'm still an undergraduate college student living largely off of my parents' income. I can't afford to bet on things since I don't have enough money of my own for it, and my income is highly irregular and hard to predict so it’s difficult to budget things. I would need to explain the expense to my mother if I started betting. If I did have more money of my own, though, I definitely would be spending some of it on this. Do a lot of other people here have such extenuating circumstances? Somehow that would feel like too much of a coincidence.

It's more likely to be because many of us haven't learned the instrumental skills needed to get ourselves to go out and bet. Such skills might include time management to set aside time to go bet, or interpersonal/communication skills to make sure the terms of the bets are clear and that we're only betting against those who will abide by the terms once they're set.

Prediction markets and stock markets aren't the only opportunity that rationalists are failing to take advantage of. For example, our community almost entirely neglects public relations, despite its potential as a way to significantly increase staff and funds for the causes we care about by raising the sanity waterline. We need better interpersonal/communication skills for interacting with the general public, and we need to learn to be more pragmatic so we will actually be able to get ourselves to do that instead of succumbing to an irrational deep-seated fear of appearing cultish.

Competent business people and self-improvement health buffs do have those skills. We don’t. That’s why we’re not winning.

In short, we need arts of rationality for the pursuit of values beyond mere truth. One of my friends who has read the Sequences has been spending years working on beginning to map out those other arts, and he recently presented his work to me. It's really interesting. I hope you find it useful.

(Note: Said friend will be introducing himself on here and writing a sequence about his work later. When he does I will add the links here.)

90 comments

Comments sorted by top scores.

comment by romeostevensit · 2018-11-04T19:19:25.708Z · LW(p) · GW(p)

Humans who won typically just choose harder goals and don't spend a lot of time patting themselves on the back online. Fwiw, superforecasters were disproportionately SSC readers; I interviewed four of them. Also, LW, like most self-help communities, attracts the walking wounded. See the mental health incidence in the SSC survey. Going from well below average in several metrics to slightly above doesn't look impressive from the outside but is very large from the inside.

comment by Scott Alexander (Yvain) · 2018-11-06T20:29:54.019Z · LW(p) · GW(p)

I support the opposite perspective - it was wrong to ever focus on individual winning and we should drop the slogan.

"Rationalists should win" was originally a suggestion for how to think about decision theory; if one agent predictably ends up with more utility than another, its choice is more "rational".

But this got caught up in excitement around "instrumental rationality" - the idea that the "epistemic rationality" skills of figuring out what was true were only the handmaiden to a much more exciting skill of succeeding in the world. The community redirected itself to figuring out how to succeed in the world, ie became a self-help group.

I understand the logic. If you are good at knowing what is true, then you can be good at knowing what is true about the best thing to do in a certain situation, which means you can be more successful than other people. I can't deny this makes sense. I can just point out that it doesn't resemble reality. Donald Trump continues to be more successful than every cognitive scientist and psychologist in the world combined, and this sort of thing seems to happen so consistently that I can no longer dismiss it as a fluke. I think it's possible (and important) to analyze this phenomenon and see what's going on. But the point is that this will involve analyzing a phenomenon - ie truth-seeking, ie epistemic rationality, ie the thing we're good at and which is our comparative advantage - and not winning immediately.

Remember the history of medicine, which started with wise women unreflectingly using traditional herbs to cure conditions. Some very smart people like Hippocrates came up with reasonable proposals for better ideas, and it turned out they were much worse than the wise women. After a lot of foundational work they eventually became better than the wise women, but it took two thousand years, and a lot of people died in the meantime. I'm not sure you can short-circuit the "spend two thousand years flailing around and being terrible" step. It doesn't seem like this community has.

And I'm worried about the effects of trying. People in the community are pushing a thousand different kinds of woo now, in exactly the way "Schools Proliferating Without Evidence" condemned. This is not the fault of their individual irrationality. My guess is that pushing woo is an almost inevitable consequence of taking self-help seriously. There are lots of things that sound like they should work, and that probably work for certain individual people, and it's almost impossible to get the funding or rigor or sample size that you would need to study it at any reasonable level. I know a bunch of people who say that learning about chakras has done really interesting and beneficial things for them. I don't want to say with certainty that they aren't right - some of the chakras have a suspicious correspondence to certain glands or bundles of nerves in the body, and for all I know maybe it's just a very strange way of understanding and visualizing those nerves' behavior. But there's a big difference between me saying "for all I know maybe..." and a community where people are going around saying "do chakras! they really work!" But if you want to be a self-help community, you don't have a lot of other options.

I think my complaint is: once you become a self-help community, you start developing the sorts of epistemic norms that help you be a self-help community, and you start attracting the sort of people who are attracted to self-help communities. And then, if ten years later, someone says "Hey, are we sure we shouldn't go back to being pure truth-seekers?", it's going to be a very different community that discusses the answer to that question.

We were doing very well before, and could continue to do very well, as a community about epistemic truth-seeking mixed with a little practical strategy. All of these great ideas like effective altruism or friendly AI that the community has contributed to, are all things that people got by thinking about, by trying to understand the world and avoid bias. I don't think the rationalist community's contribution to EA has been the production of unusually effective people to man its organizations (EA should focus on "winning" to be more effective, but no moreso than any other movement or corporation, and they should try to go about it in the same way). I think rationality's contribution has been helping carve out the philosophy and convince people that it was true, after which those people manned its organizations at a usual level of effectiveness. Maybe rationality also helped develop a practical path forward for those organizations, which is fine and a more limited and relevant domain than "self-help".

Replies from: ChristianKl, SaidAchmiz, Vaniver, elityre, ChristianKl, Elo
comment by ChristianKl · 2018-11-07T08:40:01.223Z · LW(p) · GW(p)

There's a lot in the word "woo".

One of my favorite examples is Roy Baumeister's book Willpower, which he published in 2011. He's a professor who, two years later, received the highest award given by the Association for Psychological Science, the William James Fellow Award.

The book builds on a bunch of non-replicable science and goes on to recommend that people eat sugar to improve their willpower, in a way that maps well onto what Feynman describes as Cargo Cult science. We know the bad effects of sugar on the human body.

Here we have a distinguished psychologist who, in this decade, wrote a book that does the equivalent of recommending bloodletting. That's not a community with high epistemic norms.

You, Scott, recently wrote a post where you were surprised that neuroscience as a field messes up a question such as neurogenesis. Given the track record of the community, that should be no surprise, as they are largely doing the thing Feynman called Cargo Cult Science. They even publish papers that constantly claim they can predict things better than is theoretically possible.

Everybody tries to succeed at life. It feels to me like "don't do self-help because it might lead you to believe wrong things" is like "don't reroute the trolley car because rerouting makes you kill people". Taking self-help seriously will lead to exposure to the nontrivial effects that various self-help paradigms produce.

Is the point of the analogy you are trying to make that we should be less like Hippocrates and more like the wise ladies? That we should ignore all pursuit of health?

There are a lot of things that produce interesting effects, and if the only interesting effect you have experienced is playing with chakras, and you as a result recommend chakras, I'm not sure that exposure to self-help is the main issue.

Inside our community, I don't focus on spreading concepts merely because they produce interesting effects, but on those self-help things like Focusing or Internal Double Crux that provide insight or produce results in addition to producing interesting effects.

Replies from: agefree
comment by agefree · 2021-11-17T16:34:53.578Z · LW(p) · GW(p)

Is the point of the analogy you are trying to make that we should be less like Hippocrates and more like the wise ladies? That we should ignore all pursuit of health?

Is the better argument not that the wise ladies were onto something? Traditional medicines are a mixed bag, but some herbal remedies are truly effective and have since been integrated into scientific medicinal practices. Rather than inventing his own theoretical framework, Hippocrates would have been better-served by investigating the existing herbal practices and trying to identify the truly-effective from the placebo. Trial-and-error is a form of empiricism, after all - and this seems to be how cultural knowledge like herbal medicine came to be.

comment by Said Achmiz (SaidAchmiz) · 2018-11-06T22:49:55.505Z · LW(p) · GW(p)

I have… issues with this comment; it is not without flaws. That said, Scott, I want to focus on a point which you make and with which I don’t so much disagree as think that you don’t take it far enough.

I will be the first to agree that becoming a self-help community was, for Less Wrong, a terrible, terrible mistake. But I would go further.

I would say that becoming a community, at all, was a mistake. There was never a need for it. Despite even Eliezer’s approval and encouragement, it was a bad idea—because it predictably led to all the problems, and the problem-generating dynamics, which we have seen and which we continue to see.

It always was, and remains, a superior strategy, to be a, shall we say, “project group”—a collective of individuals, who do not constitute a community, who do not fulfill each other’s needs for companionship, friendship, etc., who do not provide a “primary social circle” to its members, but who are united by their collective interest in, and commitment to, a certain pursuit. A club, in other words.

In short, “bonding” was at least as bad an idea as I initially suspected [LW(p) · GW(p)]. Probably much worse.

Replies from: cousin_it, Davidmanheim, Kevin92, linkhyrule5
comment by cousin_it · 2018-11-07T11:00:46.829Z · LW(p) · GW(p)

Very much agree. Some people want the well-being benefits of belonging to a substitute church, and will get these benefits somewhere anyway, but I think productive projects should avoid that association. (And accept the risk of fizzling out, like IAFF and Arbital did when trying to grow independently from LW.) Here's hoping that Abram, Paul and Rohin with their daily posts can make LW a more project-focused place.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2018-11-07T13:38:13.831Z · LW(p) · GW(p)

I’ve never even heard of IAFF! What is that?

Replies from: Benito, cousin_it
comment by Ben Pace (Benito) · 2018-11-07T14:01:47.186Z · LW(p) · GW(p)

Edit: oops, cousin_it beat me to it.

The "Intelligent Agent Foundations Forum" at https://agentfoundations.org/.

It was a platform MIRI built for discussing their research, that required an invite to post/comment. There's lots of really interesting stuff there - I remember enjoying reading Jessica Taylor's many posts summarising intuitions behind different research agendas.

It was a bit hard and confusing to use, and noticing that it seemed like we might be able to do better was one of the things that caused us to come up with building the AI Alignment Forum.

As the new forum is a space for discussion of all alignment research, and all of the old IAFF stuff is subset of that, we (with MIRI's blessing) imported all the old content. At some point we'll set all the old links to redirect to the AI Alignment Forum too.

comment by cousin_it · 2018-11-07T13:55:45.553Z · LW(p) · GW(p)

agentfoundations.org - lots of good stuff there, but most of it gets very few responses. The recently launched alignmentforum.org is an attempt to do the same thing, but with crossposting to LW.

comment by Davidmanheim · 2018-11-07T12:43:00.882Z · LW(p) · GW(p)

Very much disagree - but this is as someone not in the middle of the Bay area, where the main part of this is happening. Still, I don't think rationality works without some community.

First, I don't think that the alternative communities that people engage with are epistemically healthy enough to allow people to do what they need to reinforce good norms for themselves.

Second, I don't think that epistemic rationality is something that a non-community can do a good job with, because if everyone is going it alone there is far too little of the personal reinforcement and positive vibes that people need to stick with it.

Replies from: TAG
comment by TAG · 2019-05-22T10:12:24.311Z · LW(p) · GW(p)

Are you saying that epistemic rationality didn't exist before the LW community, or that (for instance) academia is an adequate community?

Replies from: Davidmanheim
comment by Davidmanheim · 2019-05-22T11:20:43.998Z · LW(p) · GW(p)

Academia in general is certainly not an adequate community from an epistemic standards point of view, and while small pockets are relatively healthy, none are great. And yes, the various threads of epistemic rationality certainly predated LessWrong, and there were people and even small groups that noted the theoretical importance of pursuing it, but I don't think there were places that actively advocated that members follow those epistemic standards.

To get back to the main point: while I don't think it is necessary for the community to "fulfill each other's needs for companionship, friendship, etc," I also don't think there is a good way to reinforce norms without something at least as strongly affiliated as a club. There is a fine line between club and community, and I understand why people feel there are dangers of going too far, but before LW, few groups seem to have gone nearly far enough in building even a project group with those norms.

Replies from: TAG
comment by TAG · 2019-05-22T11:58:38.383Z · LW(p) · GW(p)

Academia in general is certainly not an adequate community from an epistemic standards point of view,

By whose epistemic standards? And what's the evidence for the claim?

Replies from: Davidmanheim
comment by Davidmanheim · 2019-05-22T16:40:48.150Z · LW(p) · GW(p)

Mine, and my experience working in academia. But (with the very unusual exceptions of FHI, GMU's economics department, and possibly the new center at Georgetown) I don't think you'd find much disagreement among LWers who interact with academics that academia sometimes fails to do even the obvious, level-one intelligent character things to enable them to achieve their goals.

Replies from: rossry, TAG
comment by rossry · 2019-05-22T23:09:55.839Z · LW(p) · GW(p)

I think your comment is unnecessarily hedged -- do you think that you'd find much disagreement among LWers who interact with FHI/GMU-Econ over whether people there sometimes (vs never) fail to do level-one things?

I think I understand the connotation of your statement, but it'd be easier to understand if you strengthened "sometimes" to a stronger statement about academia's inadequacy. Certainly the rationality community also sometimes fails to do even the obvious, level-one intelligent character things to enable them to achieve their goals -- what is the actual claim that distinguishes the communities?

Replies from: Davidmanheim
comment by Davidmanheim · 2019-05-23T06:33:50.135Z · LW(p) · GW(p)

That's a very good point, I was definitely unclear.

I think that the critical difference is that in epistemically healthy communities, when such a failure is pointed out, some effort is spent on identifying and fixing the problem, instead of pointedly ignoring it despite efforts to solve it, or spending time actively defending the inadequate status quo against even Pareto-improving changes.

comment by TAG · 2019-05-22T17:12:52.793Z · LW(p) · GW(p)

Oh, I see, your complaint is about instrumental rationality. Well, naturally they're bad at that. Most people are. You don't get good at doing things by studying rationality in the abstract. EY couldn't succeed in spending $300k of free money on producing software to his exact specifications.

I was thinking more of epistemic rationality, having given up on instrumental rationality.

Replies from: Davidmanheim
comment by Davidmanheim · 2019-05-23T06:30:39.333Z · LW(p) · GW(p)

I don't think they get epistemic rationality anywhere near correct either. As a clear and simple example, there are academics currently vigorously defending their right not to pre-register empirical studies.

Replies from: ChristianKl
comment by ChristianKl · 2019-05-23T10:07:11.872Z · LW(p) · GW(p)

And even among those who do preregister, nobody puts down their credence for the likelihood that there's an effect.

comment by Lavender (Kevin92) · 2018-11-09T21:23:13.241Z · LW(p) · GW(p)
Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2018-11-09T21:46:42.621Z · LW(p) · GW(p)

But caring about rationality, as well as curiosity and skepticism, is a pretty big part of who I am and I want to have a group of people in my life who are okay with that. I want to have people I can be rational around without them being rude or condescending towards me.

This is a fine desire to have. I share it.

And people who have some interest in rationality, any interest at all, are the only people I really feel fully safe around for this reason.

And herein lies your problem.

You have, I surmise, encountered many terrible people in your life. This sucks. The solution to this is simple, and it’s one I have advocated in the past and continue to advocate:

Be friends with people who are awesome. Avoid people who suck.

Let me assure you, in the strongest terms, that “rationalists” are not the only people in the world who are awesome. I, for one, have a wonderful group of friends, who are “interested in rationality” in, perhaps, the most tangential way; and some, not at all. My friends are never “rude or condescending” to me; I can be, and am, as “rational” around them as I wish; and the idea that my close friends would not be ok with curiosity and skepticism is inconceivable.

It is even perfectly fine and well if you select your personal friends on the basis of some combination of intelligence, curiosity, skepticism, “rationality”, etc. But this is not at all the same thing as making a “rationality community” out of Less Wrong & co—not even close.

Finally:

For example, I want people I can be around and acknowledge that high rents are caused by housing shortages instead of “tech-bros”. I want it to be safe to say that without being accused of mansplaining.

For God’s sake, get out of the Bay Area. Seriously. Things are not like this in the rest of the world.

Replies from: Kevin92
comment by Lavender (Kevin92) · 2018-11-10T19:58:42.790Z · LW(p) · GW(p)
Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2018-11-10T20:33:41.927Z · LW(p) · GW(p)

Alright. Well, generalize my advice, then: leave the social circles where that sort of thing is at all commonplace. If that requires physical moving cities, do that. (For example, I live in New York City. I don’t recall ever hearing the term “tech-bro” used in a real conversation, except perhaps as an ironic mention… maybe not even that.)

comment by linkhyrule5 · 2019-10-17T02:08:39.535Z · LW(p) · GW(p)

The thing is -- and here I disagree with your initial comment thread as well -- peer pressure is useful. It is spectacularly useful and spectacularly powerful.

How can I make myself a more X person, for almost any value of X, even values that we would assume entirely inherent or immutable? Find a crowd of X people that are trying to be more X, shove myself in the middle, and stay there. If I want to be a better rationalist, I want friends that are better rationalists than me. If I want to be a better forecaster, I want friends that are better forecasters than me. If I want to be a more effective altruist, earn more to give more, learn more about Y academic topic, or any other similar goal, the single most powerful tool in my toolbox -- or at least the most powerful tool that generalizes so easily -- is to make more friends that already have those traits.

Can this go bad places? Of course it can. It's a positive feedback cycle with no brakes save the ones we give it. But...

... well, to use very familiar logic: certainly, it could end the world. But if we could harness and align it, it could save the world, too.

(And 'crowds of humans', while kind of a pain to herd, are still much much easier than AI.)

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2019-10-17T02:38:43.983Z · LW(p) · GW(p)

You’re equivocating between the following:

  1. To become more X, find a crowd of people who are more X.
  2. To become more X, find a crowd of people who are trying to be more X.

Perhaps #1 works. But what is actually happening is #2.

… or at least, that’s what we might charitably hope is happening. But actually instead what often happens is:

  3. To become more X, find a crowd of people who are pretending to try to be more X.

And that definitely doesn’t work.

Replies from: linkhyrule5
comment by linkhyrule5 · 2019-10-17T03:46:00.469Z · LW(p) · GW(p)

Actually, no, I explicitly want both 1 and 2. Merely being more X than me doesn't help me nearly as much as being both more X and also always on the lookout for ways to be even more X, because they can give me pointers and keep up with me when I catch up.

And sure, 3 is indeed what often happens.

... First of all, part of the whole point of all of this is to be able to do things that often fail, and succeed at them anyway; being able to do the difficult is something of prerequisite to doing the impossible.

Secondly, all shounen quips aside, it's actually not that hard to tell when someone is merely pretending to be more X. It's easy enough that random faux-philosophical teenagers can do it, after all :V. The hard part isn't staying away from the affective death spiral, it's trying to find the people who are actually trying among them -- the ones who, almost definitionally, are not talking nearly as much about it, because "slay the Buddha" is actually surprisingly general advice.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2019-10-17T03:57:11.085Z · LW(p) · GW(p)

Actually, no, I explicitly want both 1 and 2. Merely being more X than me doesn't help me nearly as much as being both more X and also always on the lookout for ways to be even more X, because they can give me pointers and keep up with me when I catch up.

What I meant by #2 is “a crowd of people who are trying to be more X, but who, currently, aren’t any more X than you (or indeed very X at all, in the grand scheme of things)”, not that they’re already very X but are trying to be even more X.

EDIT:

Secondly, all shounen quips aside, it's actually not that hard to tell when someone is merely pretending to be more X.

Empirically, it seems rather hard, in fact.

Well, either that, or a whole lot of people seem to have some reason for pretending not to be able to tell…

Replies from: Zack_M_Davis, linkhyrule5
comment by Zack_M_Davis · 2019-10-17T06:13:13.995Z · LW(p) · GW(p)

a whole lot of people seem to have some reason for pretending not to be able to tell ...

Right—they call it the "principle of charity."

comment by linkhyrule5 · 2019-10-18T02:43:58.211Z · LW(p) · GW(p)
What I meant by #2 is “a crowd of people who are trying to be more X, but who, currently, aren’t any more X than you (or indeed very X at all, in the grand scheme of things)”, not that they’re already very X but are trying to be even more X.

Fair. Nevertheless, if the average of the group is around my own level, that's good enough for me if they're also actively trying. (Pretty much by definition of the average, really...)

Empirically, it seems rather hard, in fact.
Well, either that, or a whole lot of people seem to have some reason for pretending not to be able to tell…

... Okay, sorry, two place function. I don't seem to have much trouble distinguishing.

(And yes, you can reasonably ask how I know I'm right, and whether or not I myself am good enough at the relevant Xs to tell, etc etc, but... well, at some point that all turns into wasted motions. Let's just say that I am good enough at distinguishing to arrive at the extremely obvious answers, so I'm fairly confident I'll at least not be easily misled.)


comment by Vaniver · 2018-11-06T22:04:43.823Z · LW(p) · GW(p)
I think it's possible (and important) to analyze this phenomenon and see what's going on. But the point is that this will involve analyzing a phenomenon - ie truth-seeking, ie epistemic rationality, ie the thing we're good at and which is our comparative advantage - and not winning immediately.

I mostly agree with this, but want to point at something that your comment didn't really cover, that "whether to go to the homeopath or the doctor" is a question that I expect epistemic rationality to be helpful for. (This is, in large part, a question that Inadequate Equilibria was pointed towards.) [This is sort of the fundamental question of self-help, once you've separated it into "what advice should I follow?" and "what advice is out there?"]

But this requires that the question of how to evaluate strategies be framed more in terms of "I used my judgment to weigh evidence" and less in terms of "I followed the prestige" or "I compared the lengths of their articulated justifications" or similar things. A layperson in 1820 who is using the latter will wrongly pick the doctors, and a confused layperson in 2000 will wrongly pick homeopathy, but ideally a rationalist would switch from homeopathy to doctors as the actual facts on the ground change.

This doesn't mean a rationalist in 1820 should be satisfied with homeopathy; it should be known to them as a temporary plug to a hole in their map. But that also doesn't mean it's the most interesting or important hole in their map; probably then they'd be most interested in what's up with electricity. [Similarly, today I'm somewhat confused about what's going on with diet, and have some 'reasonable' guesses and some 'woo' guesses, but it's clearly not the most interesting hole in my map.]

And so my sense is a rationalist in 2018 should know what they know, and what they don't, and be scientific about things to the degree that they capture their curiosity (which relates both to 'irregularities in the map' and 'practically useful'). Which is basically how I read your comment, except that you seem more worried about particular things than I am.

comment by Eli Tyre (elityre) · 2019-05-20T17:30:10.608Z · LW(p) · GW(p)
I'm not sure you can short-circuit the "spend two thousand years flailing around and being terrible" step.

It sure seems like you should be able to do better than spending literally two thousand years. There are much better existing methodologies now than there were then.

comment by ChristianKl · 2018-11-07T07:56:44.300Z · LW(p) · GW(p)
Donald Trump continues to be more successful than every cognitive scientist and psychologist in the world combined, and this sort of thing seems to happen so consistently that I can no longer dismiss it as a fluke.

I would be very wary of using him as an example because the public image of him is very much determined by the media.

He did succeed at getting a degree at Wharton Business School.

Peter Thiel, who has actually met him in person, considers him to be exceptional at understanding how the individual people he deals with tick.

Replies from: elityre
comment by Eli Tyre (elityre) · 2019-05-22T00:39:52.283Z · LW(p) · GW(p)
Peter Thiel, who has actually met him in person, considers him to be exceptional at understanding how the individual people he deals with tick.

How do you know this?

Replies from: ChristianKl
comment by ChristianKl · 2019-05-22T07:03:16.323Z · LW(p) · GW(p)

One of the interviews on YouTube. I unfortunately don't have a link right now.

comment by Elo · 2018-11-06T20:40:47.403Z · LW(p) · GW(p)

Get out of the armchair and try the woo before you dismiss it.

You are not smarter than running an experiment of your own.

comment by Davidmanheim · 2018-11-05T12:41:31.172Z · LW(p) · GW(p)

I think you are not looking in the right places, as the groups of rationalists I know are doing incredibly well for themselves - tenure-track positions at major universities, promotions to senior positions in US government agencies, incredibly well paid jobs doing EA-aligned research in machine learning and AI, huge amounts of money being sent to the rationalist-sphere AI risk research agendas that people were routinely dismissing a few years ago, etc.

To evaluate this more dispassionately, however, I'd suggest looking at the people who posted high-karma posts in 2009, and seeing what the posters are doing now. I'll try that here, but I don't know what some of these people are doing now. They seem to be an overall high-achieving group. (But we don't have a baseline.)

https://www.greaterwrong.com/archive/2009 - Page 1: I'm seeing Eliezer (he seems to have done well), Hal Finney (unfortunately deceased, but had he lived a bit longer he would have been a multi-multi millionaire for being an early bitcoin holder / developer), Scott Alexander (I think his blog is doing well enough), Phil Goetz - ?, Anna Salamon (helping run CFAR), "Liron" (?, but he's now running https://relationshiphero.com/ and seems to have done decently as a serial entrepreneur), Wei Dai (a fairly big name in cryptocurrency), cousin_it - ?, CarlShulman (doing a bunch of existential risk work with FHI and other organizations), Alicorn (now a writer and "Immortal bisexual polyamorous superbeing"), HughRistik - ?, Orthonormal (still around, but ?), jimrandomh (James Babcock - ?), AllanCrossman (http://allancrossman.com/ - ?), and Psychohistorian (Eitan Pechenick, Academia)

Replies from: Benito, cousin_it, dkirmani
comment by Ben Pace (Benito) · 2018-11-05T15:02:43.016Z · LW(p) · GW(p)

Jim Babcock is working on the LW team with Oli, Ray and me :-)

My hot take response to OP is that the general question should be how well teams of rationalists are doing at their actual goals, not whether they seem superficially successful on common, easy to measure metrics (i.e. lotsa money and popularity).

To pick one of the top goals, how much better is the world doing on its long-term trajectory due to the work of teams around here? There's many key object-level insights (e.g. logical inductors and other core research [? · GW]), and noticeably more global coordination around superintelligence as x-risk (discussion of Bostrom's book, several full-time and thoughtful funders in the space - OpenPhil, BERI, etc, - highly competent research teams at DeepMind and OpenAI and UC Berkeley, focused tech teams building software for the research community *cough [? · GW]*, a few major conferences, and more). Naturally a bunch of stuff is behind the scenes too.

Perhaps you expected all of this to happen by default, but I've been repeatedly surprised by the magnitude of positive events that have occurred. If I compare this to a few bloggers talking about the problem details just 10 years ago, it seems quite astounding.

(And when I see very surprising events occur that are high in the utility function, I infer agency [LW · GW].)

Agreed that rationality work has not seen much progress, and I'd personally like to move the needle forward on that. I do think that if you found the people responsible for things I listed above, more than 30% would say reading the sequences / going to CFAR was drastically important for them doing anything useful in this direction whatsoever.

I suppose I didn't really respond to the explicit claim of the post, being

rationality doesn't seem to be helping us win in individual career or interpersonal/social areas of life

I do agree that's not the main thing this community is built around, and that we could do better on that if we tried more. But OP did feel a bit like "Huh, Francis Bacon thought he'd figured out a general methodology for understanding the world better, but did his personal relationships get better / did he get rich quick? I don't think so." And then leaving out that he helped build the frickin' Royal Society. It's not true to say the community here hasn't been tremendously successful while working on some very hard problems.

(Naturally, it's not clear this progress is enough, and there's a good chance superintelligence will be an existential catastrophe.)

Replies from: Davidmanheim
comment by Davidmanheim · 2018-11-05T16:59:20.140Z · LW(p) · GW(p)
Agreed that rationality work has not seen much progress, and I'd personally like to move the needle forward on that.

Unfortunately, or perhaps fortunately, the really huge deal problems get all the attention from the really motivated smart people who get convinced by the rational arguments.

Perhaps the way forward on the "improve general rationality" is to try hiring educational experts from outside the rationality community to build curricula and training based on the sequences while getting feedback from CFAR, instead of having CFAR work on building such curricula (which they are no longer doing, AFAICT.)

Replies from: Benito
comment by Ben Pace (Benito) · 2018-11-05T17:43:38.859Z · LW(p) · GW(p)

Yep, certainly much of the founding CFAR team have become busy with other projects in recent years.

I'm not sure I expect hiring people solely based on their educational expertise to work out well. I agree that one of the things CFAR has done expertly, to a much higher level than any other institution I've interacted with (and I did CS at Oxford), is learn how to teach. But while pedagogy is core to the product, the goal is rationality. And I'm not sure someone good at pedagogy but not rationality will be able to make a lot of progress working on rationality without heavy guidance, they can only (do something like) streamline the existing product.

Also: I think you're implying that AI is a really huge deal problem and rationality is less. I'm not sure I agree, I mostly think that AI alignment research has seen an increase in tractability lately. I think that sanity is basically still very rare and super important.

For example, people haven't become more sane re: AGI; it's just that the amount of sanity required to not bounce off the problem entirely has decreased (e.g. your local social incentives push less against recognising the problem now that Superintelligence+Musk caused it to be a bit more mainstream, and also there are some open technical problems to work on and orders of magnitude more funding available). If there's another problem as hard and important as noticing AI alignment research as important, it's not obvious we've gotten much better at noticing it in the past 5 years.

Replies from: Davidmanheim
comment by Davidmanheim · 2018-11-08T09:52:45.103Z · LW(p) · GW(p)
I'm not sure I expect hiring people solely based on their educational expertise to work out well.

Yes, there needs to be some screening other than pedagogy, but money to find the best people can fix lots of problems. And yes, typical teaching at good universities sucks, but that's largely because it optimizes for research. (You'd likely have had better professors as an undergrad if you went to a worse university - or at least that was my experience.)

...they can only (do something like) streamline the existing product.

My thought was that streamlining the existing product and turning it into useable and testably effective modules would be a really huge thing.

Also: I think you're implying that AI is a really huge deal problem and rationality is less.

If that was the implication, I apologize - I view safe AI as only near-impossible, while making actual humans rational is a problem that is fundamentally impossible. But raising the sanity waterline has some low-hanging fruit - not getting most people to CFAR-expert levels, but getting high schools to teach some of the basics in ways that potentially have significant leverage in improving social decision-making in general. (And if the top 1% of people in high schools also take those classes, there might be indirect benefits leading to increasing the number of CFAR-expert-level people in a decade.)

comment by cousin_it · 2018-11-05T21:50:32.931Z · LW(p) · GW(p)

Just to fill in the slot: in 2009 I was living in Moscow and mostly just partying and enjoying life, and in 2018 I'm living in Zurich with my wife and five kids. Was very happy with my life then, and am very happy now. Doing nicely in terms of money, but no big accomplishments if that's what you're asking about. And no, I wouldn't attribute it to LW, just normal life going on.

comment by dkirmani · 2021-12-31T04:41:24.261Z · LW(p) · GW(p)

Psychohistorian (Eitan Pechenick, Academia)

Holy shit. Psychohistorian taught my AP Calc BC class. I am in shock.

Replies from: dkirmani
comment by dkirmani · 2022-01-04T10:58:39.730Z · LW(p) · GW(p)

Update: I messaged Dr. Pechenick on LinkedIn, and I regret to report that he is not in fact Psychohistorian on LessWrong, but Psychohistorian on Twitter. Still, hell of a coincidence.

comment by Viliam · 2018-11-05T23:42:16.711Z · LW(p) · GW(p)
If LessWrong had originally been targeted at and introduced to an audience of competent business people and self-improvement health buffs instead of an audience of STEM specialists and Harry Potter fans, things would have been drastically different. Rationalists would be winning.

This sounds like "the best way to make sure your readers are successful is to write for people who are already successful". It makes sense if you want to brag about how successful your readers are. But if your goal is to improve the world, how much change would that bring? There is already a ton of material for business people and self-help fans; what is yet another website for them going to accomplish? If people are already competent self-improving businessmen, what motive would they have to learn about rationality?

The Bayesian Conspiracy podcast ... proposed ... that rationality can only help us improve a limited amount relative to where we started out. They predicted that people who started out at a lower level of life success/cognitive functioning/talent cannot outperform non-rationalists who started out at a sufficiently high level.

The past matters, because changing things takes time. Obtaining "the same knowledge, skills, qualities or experience" requires time and money. (Money, because while you are chasing the knowledge and experience, you are not maximizing your short-term income.) Sometimes I wonder how life would have been in a parallel universe where LessWrong appeared when I was 15, as opposed to 35. I had a ton of free time while studying at university; I have barely any free time now. I lived with my parents; now I need a daily job to pay my bills. Even putting money into index funds (such simple advice, yet no one from my social group was able to give it to me) would have brought more interest. In this universe I cannot get on average as far as I could have in the parallel one.

So why haven't we been dominating prediction and stock markets? Why aren't we dominating them right now?

Because there are people who already spent years learning all that stuff, and now they do it as a day job; some of them 16 hours a day. Those are the ones you would have to compete against, not the average person.

In my own case, ... I can't afford to bet on things since I don't have enough money of my own for it, and my income is highly irregular and hard to predict so it’s difficult to budget things. ... Do a lot of other people here have such extenuating circumstances? Somehow that would feel like too much of a coincidence.

For me it feels like too much of a coincidence when a person complaining about why others aren't achieving greater success immediately comes up with a good excuse for themselves.

Speaking for myself, my income is nice and regular, but I have kids to feed and care about, and between my daily job and taking care of my kids, I don't have enough time to research things to make bets about, or learn to understand finance like a pro. That is a quite different situation, but still one that makes working miracles difficult. I suppose some people are busy with their scientific careers, etc.

And then, a few people are actually winning a lot. Now, maybe you overestimate the size of the rationalist community. How many people even among those who participate at meetups, are truly serious about this rationality stuff (and how many are there merely for social reasons)? Maybe it's in total only a few dozen people, worldwide. Some of them have good reasons to not be super winning, and some of them are super winning. Not a very surprising outcome.

I believe there is a lot of space for improvement. I believe there is specifically a lot to improve in "making our kind cooperate". But at the end of the day, we are just a handful of people.

And of course I'm looking forward to your friend's articles.

Replies from: TAG
comment by TAG · 2019-05-22T10:24:43.449Z · LW(p) · GW(p)

Because there are people who already spent years learning all that stuff, and now they do it as a day job; some of them 16 hours a day.

That's a simple and plausible point, but it's also rather devastating to the claim that you can get significant value out of learning generic rationality skills.

Replies from: rossry
comment by rossry · 2019-05-22T12:30:33.702Z · LW(p) · GW(p)

I'm confused; it seems like evidence against the claim that you can get arbitrary amounts of value out of learning generic rationality skills, but I don't see it as "devastating" to the claim you can get significant value, unless you're claiming that "spent years learning all that stuff, and now do it as a day job; some of them 16 hours a day" should imply only a less-than-significant improvement. Or am I missing something here?

Replies from: Viliam
comment by Viliam · 2019-06-04T21:06:00.726Z · LW(p) · GW(p)

Yeah, this. It is a mistake -- and I suspect a popular one -- to think that rationality trumps any amount of domain-specific knowledge or resources.

Ceteris paribus, a rational person playing the stock market should have an advantage against an irrational one, with same amount of skills, experience, time spent, etc. Question is, whether this advantage creates a big difference or a rounding error. Another question is whether playing the stock market is actually a winning move: how much is skill, how much is luck, and whether the part that is skill is adequately rewarded... compared to using the same amount of skill somewhere else, and putting your savings into a passively managed index fund.

If you invest your own money, even if you do everything right, your profit will be 1000 times smaller compared with a person who invests 1000× more money equally well. So, even if you make a profit, it may be less than your potential salary somewhere else, because you are producing a multiplier on only a moderate amount of money (unless you started as a millionaire).
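As a rough worked example (the numbers here are mine and purely hypothetical), the same percentage edge scales linearly with the capital it is applied to:

```python
savings = 20_000       # hypothetical personal capital
fund = 20_000_000      # a hypothetical fund managing 1000x more capital
edge = 0.05            # assume the same 5% annual outperformance for both

print(savings * edge)  # 1000.0 per year -- possibly less than the time spent is worth
print(fund * edge)     # 1000000.0 per year -- same skill, 1000x the absolute payoff
```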

On the other hand, if you invest other people's money, it depends on the structure of the market: how much other people's money is there to be invested, and how many people are competing for this opportunity. Maybe there are thousands of wannabe investors competing for the opportunity to manage a few dozen funds. Then, even if the smartest ones make a big profit, their personal reward may be quite small. Because the absolute value of your skill is not relevant here; it is the relative value of employing you versus employing the other guy who would love to take your position; and the other guy is pretty smart, too.

comment by TurnTrout · 2018-11-04T19:35:17.274Z · LW(p) · GW(p)

I broadly agree with your main points. However,

If rationalists had started winning, at least one person would have posted about it here on lesswrong.com.

I did post about this [LW · GW], and the benefits have continued to accrue. Compared to my past self, I perceive myself to be winning dramatically harder on almost all metrics I care about.

comment by stardust · 2018-11-04T21:23:29.660Z · LW(p) · GW(p)

What does winning look like to you? Lots of rationalists have pretty successful careers as programmers, which depending on what they are going for, could be considered winning. Is it that they aren't "winning" by your definition, or theirs?

Can you describe the thing you think rationalists are failing at, tabooing "winning"?

Replies from: Viliam
comment by Viliam · 2018-11-05T22:55:07.333Z · LW(p) · GW(p)

Not the author, but my guess would be this:

On various metrics, there can be differences in quantity, e.g. "a job that pays $10k" vs "a job that pays $20k", and differences in quality, e.g. "a job" vs "early retirement". Merely improving quantity does not make a good story. And perhaps it is foolish, but I imagine "winning" as a qualitative improvement, instead of merely 30% or 300% more of something.

And maybe this is wrong, because a qualitative improvement brings qualitative improvements as a side effect. A change from "$X income" to $Y income" can also mean a change from "worrying about survival" to "not worrying about survival", a change from "cannot afford X" to "bought the X", or even a change from "the future is dark" to "I am going to retire early in 10 years, but as of today, I am not there yet". Maybe we insufficiently emphasize these qualitative changes, because... uhm, illusion of transparency?

comment by GuySrinivasan · 2018-11-04T21:14:35.640Z · LW(p) · GW(p)

I went from borderline nonfunctional to pretty functional. This is not at all obvious even to those who knew me because I had been masking the growing problems really well using just raw intellectual brute force. More "attracts the walking wounded" anecdote.

Further, I kind of expect that Really Winning in the sense you're talking about is far more likely when (a) you get lucky, and/or (b) you're willing to stomp on other people. The first is not increased and the second is actively decreased by LWing (I think and hope).

Also, we have funded, active research into the not-so-covert true goal of original LW.

Can we do better? Yeah, definitely. Is it really so bleak? I don't think so.

comment by Thrasymachus · 2018-11-05T20:35:32.921Z · LW(p) · GW(p)

I don't see the 'why aren't you winning?' critique as that powerful, and I'm someone who tends to be critical of rationality writ large.

High-IQ societies and superforecasters select for demonstrable performance at being smart/epistemically rational. Yet on surveying these groups you see things like, "People generally do better than average by commonsense metrics, some are doing great, but it isn't like everyone is a millionaire". Given that the barrier to entry to the rationalist community is more "sincere interest" than "top X percentile of the population", it would be remarkable if they exhibited even better outcomes as a cohort.

There are also going to be messy causal-inference worries that cut either way. If there is in some sense 'adverse selection' (perhaps as with IQ societies) for rationalists tending to have less aptitude at social communication, greater prevalence of mental illness (or whatever else), then these people enjoying modest to good success in their lives reflects extremely well on the rationalist community. Contrariwise, there's plausible confounding where smart creative people will naturally gravitate to rationality-esque discussion even if this discussion doesn't improve their effectiveness (I think a lot of non-rationalists were around OB/LW in the early days): the cohort of people who 'teach themselves general relativity for fun' may also enjoy much better than average success, but it probably wasn't the relativity which did it.

A deeper worry with respect to rationality is that there may not be anything to be taught. The elements of (say) RQ don't show much of a common factor (unlike IQ), correlate more strongly with IQ than with one another, and improvements in rational thinking show limited domain transfer. So there might not be much of a general factor of (epistemic) rationality, and limited hope for someone to substantially improve themselves in this area.

comment by Donald Hobson (donald-hobson) · 2018-11-04T22:39:53.729Z · LW(p) · GW(p)

Within a narrow field where data is plentiful, learning rationality is much less powerful than learning from piles of data. Imagine three people, A, B and C. A doesn't know any chess or rationality. B has studied game theory, Bayes' theorem, principles of decision theory and all-round rationality, but has never played chess before and has just been told the rules. C has been playing chess for years.

I would expect C to win easily. It's much easier to learn from experience, and to remember your teacher's experience, than it is to deduce good chess strategies from first principles. The only time I would expect B to win is if they were playing Nim, or some other game with a simple winning strategy, where C had an intuition for the strategy but sometimes made mistakes. I would expect B to beat A, however.
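
For concreteness, here is a minimal sketch (in Python, assuming normal-play Nim, where whoever takes the last stone wins) of the kind of "simple winning strategy" B could in principle derive from first principles; the function and example numbers are purely illustrative:

```python
from functools import reduce
from operator import xor

def nim_winning_move(piles):
    """Return (pile_index, new_size) for a winning move in normal-play Nim,
    or None if the position is already lost against perfect play."""
    nim_sum = reduce(xor, piles, 0)
    if nim_sum == 0:
        return None  # every legal move hands the opponent a winning position
    for i, pile in enumerate(piles):
        target = pile ^ nim_sum
        if target < pile:
            return i, target  # shrinking pile i to `target` restores a zero nim-sum
    return None  # unreachable when nim_sum != 0

# Piles of 3, 4 and 5 stones: the first player can force a win.
print(nim_winning_move([3, 4, 5]))  # -> (0, 1): take two stones from the first pile
```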

Rationality is learning to squeeze every last drop of usefulness out of your data, and when data is plentiful this is less effective than just grabbing more data. Financial markets are another data-rich domain. Many hedge fund people already know game theory, and they also have detailed knowledge of financial minutiae. Wannabe rationalists: if you want to be a banker, go ahead. But don't expect to beat the market from rationality alone, any more than you could deduce good chess moves from first principles and beat a grandmaster without ever having played before.

Rationality comes into its own because it applies a small boost to many domains of skill, not a big boost to any one. It also works much better in the absence of piles of data.

The everyday world is roughly inexploitable, and very data-rich. The regions you would expect rationality to do well in are the ones where there isn't a pile of data so large that even a scientist can't ignore it: the Fermi paradox, AGI design, interpretations of quantum mechanics, philosophical zombies, etc.

There is also a cultural element, in that the people who know the most rationality have more important things to do than use it to gain a slight advantage in business. Many of the people here would rather discuss AI alignment, or the Fermi paradox, or black holes, or anything interesting really, than be investment bankers. The people who become skilled rationalists value knowledge for its own sake, and that is what they pursue.

You would also need many data points to gain good evidence, unless rationality were just magic. I am faced with a tricky choice and choose option 1. It's quite good. Would I have chosen option 2 if I hadn't learned rationality? How good was option 2 anyway? It's hard to spot when rationality has helped someone.
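
To put a rough number on "many data points", here is a back-of-the-envelope power calculation (standard normal approximation; the effect sizes are illustrative guesses, not measurements):

```python
from math import ceil

def samples_per_group(effect_size, z_alpha=1.96, z_beta=0.84):
    """Approximate per-group sample size for a two-sided, two-sample test of means
    at 5% significance and 80% power; effect_size is Cohen's d."""
    return ceil(2 * (z_alpha + z_beta) ** 2 / effect_size ** 2)

# A "small boost to many domains" might look like d = 0.2:
print(samples_per_group(0.2))  # ~392 people per group
# A "magic powers"-sized effect of d = 1.0 would need only ~16 per group.
print(samples_per_group(1.0))
```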

In conclusion, the lack of "rationality gave me magic powers" clickbait is not significant evidence that we are doing something wrong. A large randomized controlled trial finding that rationality didn't work would be worrying.

Replies from: elityre, elityre
comment by Eli Tyre (elityre) · 2019-06-22T18:45:36.154Z · LW(p) · GW(p)
The everyday world is roughly inexploitable, and very data-rich. The regions you would expect rationality to do well in are the ones where there isn't a pile of data so large that even a scientist can't ignore it: the Fermi paradox, AGI design, interpretations of quantum mechanics, philosophical zombies, etc.

I think I would add to this, "domains where there is lots of confusing/conflicting data, where you have to filter the signal from the noise". I'm thinking of fields with many competing academic positions, like macroeconomics, or nutrition, or (of highest practical relevance) medicine.

Many of Scott Alexander's posts, for instance, wade into a confusing morass of academic papers and then use principles of good reasoning to figure out, as best he/we can, what's actually going on.

comment by Eli Tyre (elityre) · 2019-06-22T18:40:02.513Z · LW(p) · GW(p)

This is a very important point, and I think it is worthy of being its own, titled, top-level post.

comment by Richard Meadows (richard-meadows-1) · 2018-11-05T02:39:23.576Z · LW(p) · GW(p)
In fact, betting in prediction markets and stock markets provides an external criteria for measuring epistemic rationality [...]
So why haven't we been dominating prediction and stock markets? Why aren't we dominating them right now?

Trying to 'dominate' the stock market is a very bad idea, roughly analogous to your AI baseball example. The generally accepted best approach is to passively accumulate index funds, which I imagine is exactly what many people here are already doing. For individuals, winning is mostly about not-losing, which tends to be invisible; if you succeed, nothing happens.
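
As a toy illustration of how "not-losing" compounds invisibly (the return and cost figures below are made-up round numbers, not a recommendation):

```python
def terminal_wealth(initial, annual_return, annual_cost, years):
    """Compound an initial stake while netting out annual costs (fees, trading drag)."""
    return initial * (1 + annual_return - annual_cost) ** years

# Hypothetical: $10k over 30 years at a 7% gross return.
passive = terminal_wealth(10_000, 0.07, 0.001, 30)  # ~0.1% index-fund expense ratio
active  = terminal_wealth(10_000, 0.07, 0.020, 30)  # ~2% in fees and trading drag
print(f"passive: ${passive:,.0f}, active: ${active:,.0f}")  # roughly $74k vs $43k
```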

Replies from: ioannes_shade
comment by ioannes (ioannes_shade) · 2018-11-07T19:25:29.452Z · LW(p) · GW(p)

cf. Antigravity Investments (investment advisor service for EAs), which recommends a passive index fund approach.

comment by James_Miller · 2018-11-04T18:37:00.522Z · LW(p) · GW(p)

My son is winning. Although only 13, he received a 5 (the highest score) on both the Calculus BC and the Java programming AP exams. He is currently taking a college-level programming course (Data Structures and Algorithms) at Stanford Online High School, and he works with a programming mentor I found through SSC. He reads SSC and has read much of the Sequences. His life goal is to help program a friendly superintelligence. I've been reading SSC, Overcoming Bias, and LessWrong since the beginning.

Replies from: Nebu
comment by Nebu · 2018-11-04T23:36:18.470Z · LW(p) · GW(p)

Yeah, but which way does the arrow of causality point here? Like, was he already a geeky intellectual, and that's why he's both good at calculus/programming and reads SSC/OB/LW? Or was he "pretty average", started reading SSC/OB/LW, and then that made him good at calculus/programming?

Replies from: James_Miller
comment by James_Miller · 2018-11-05T03:19:20.797Z · LW(p) · GW(p)

Yes, genetics + randomness determines most variation in human behavior, but the SSC/LW stuff has helped provide some direction and motivation.

Replies from: megasilverfist
comment by megasilverfist · 2018-11-07T14:07:53.370Z · LW(p) · GW(p)

Have you been using turbocharging training with him?

Replies from: An1lam
comment by NaiveTortoise (An1lam) · 2018-11-07T19:19:32.077Z · LW(p) · GW(p)

Is there an actual description of turbocharging training beyond "deliberate practice but where you think hard about not Goodhart-ing and practicing the wrong thing"?

Replies from: Benito, elityre, megasilverfist
comment by Ben Pace (Benito) · 2018-11-07T19:23:07.672Z · LW(p) · GW(p)

I don't think there's been a write-up of it anywhere.

comment by Eli Tyre (elityre) · 2019-06-22T18:37:43.319Z · LW(p) · GW(p)

Val started (didn't finish) a sequence once, but it looks like he removed the sequence-index from his blog:

In any case, I (who am not Val), would endorse that description.

comment by megasilverfist · 2018-11-08T03:09:05.833Z · LW(p) · GW(p)

It was taught at CfAR during the period I think James attended.

Replies from: James_Miller
comment by James_Miller · 2018-11-12T04:29:22.630Z · LW(p) · GW(p)

What is it? I don't remember turbocharging from CfAR.

Replies from: megasilverfist
comment by megasilverfist · 2018-12-17T04:46:23.131Z · LW(p) · GW(p)

It's one of the things Val taught. I honestly don't remember much of the details, but "deliberate practice where you think hard about not Goodhart-ing and practicing the wrong thing" actually sounds about right.

comment by Ilya Shpitser (ilya-shpitser) · 2018-11-05T15:54:53.585Z · LW(p) · GW(p)

"Rationalists are very good at epistemic rationality."

As people very good at _epistemic_ rationality, I am sure you realize that the relevant comparison is between success after one has been exposed to rationality and the hypothetical success one would have had without that exposure.

Replies from: TAG
comment by TAG · 2018-11-06T12:18:28.295Z · LW(p) · GW(p)

That -- "compared to what?" -- needed saying.

comment by ShardPhoenix · 2018-11-05T09:36:06.381Z · LW(p) · GW(p)

The winningest rationalist I know of is Dominic Cummings, who was the lead strategist behind the Brexit pro-leave movement. While the majority of LWers may not agree with his goals, he did seem to be effective, and he frequently makes references to rationalist concepts (including IIRC some references to the work of Eliezer Yudkowsky) on his blog: https://dominiccummings.com/

Replies from: TAG
comment by TAG · 2018-11-05T13:31:39.618Z · LW(p) · GW(p)

The winningest rationalist I know of is Dominic Cummings,

How much of a difference do you think he made? Was there strong pro-remain sentiment before he got started?

Replies from: ryan_b
comment by ryan_b · 2018-11-05T15:51:40.417Z · LW(p) · GW(p)

I followed his work, and I estimate the difference he made to be very high relative to other individuals working on the issue (on either side). According to his own estimation, his contribution consisted of assembling highly competent people and then minimizing interference from incompetent ones.

Some context: he had previously worked on the campaign to reject the Euro, and so had more experience with the question of "how people in the UK feel about the EU" than most, which is why there was a push to recruit him for a campaign. Their campaign took a series of basic steps, like trying to determine what voters actually thought, which none of the other campaigns did. Then they tested a bunch of different methods of communicating with voters effectively (the other campaigns went with old strategies and did not check whether they worked), and focused on driving voter turnout.

In a nutshell what he did was: determine to solve the actual problem, find other like-minded people, and then set about actually trying to solve the problem using basic tools like measurement and experiment. You can find the list of posts on his blog relevant to the campaign here, but I think the real meat is in #20-22. He does not claim responsibility for success, placing most of the credit with the team and most of the blame with an incompetently run Remain campaign.

Replies from: TAG
comment by TAG · 2018-11-06T12:23:17.742Z · LW(p) · GW(p)

I followed his work, and I estimate the difference he made to be very high relative to other individuals working on the issue (on either side).

On what basis? The polling was close before the referendum, and the result of the referendum was close. I am not seeing evidence that he made something happen that would not have happened anyway. Are you saying that he must have got results because he was using the right methods?

tried to determine what voters actually thought,

How come we still don't know?

Replies from: ryan_b
comment by ryan_b · 2018-11-06T16:14:07.542Z · LW(p) · GW(p)

What he made happen that would not have happened is voters turning out to vote for Brexit at a higher rate.

When the campaign began, polling was not close. Here is one, from a company which Cummings referred to frequently, that showed a 10 point lead for staying in as of August 2015. The rest of that post is here, wherein he discusses the state of things as the campaign was beginning.

Further, I point you to the expected outcomes, which were heavily in favor of the UK remaining. On page 4 here you see the odds Betfair was putting on the question. This is only over the 10 week span of the official campaign immediately before the referendum.
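
For readers unfamiliar with betting markets, decimal odds convert directly into implied probabilities; the quotes below are invented for illustration, not the actual 2016 Betfair figures:

```python
def implied_probability(decimal_odds):
    """Convert decimal (European) odds into the market's implied probability."""
    return 1 / decimal_odds

# Hypothetical quotes: Remain at 1.25, Leave at 4.5.
print(implied_probability(1.25))  # 0.80 -> market "expects" Remain at ~80%
print(implied_probability(4.5))   # ~0.22 -> Leave priced as a clear underdog
# The two sum to slightly more than 1; that small gap is the market's margin.
```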

Using the right methods, their team was able to determine that the actual level of support for leaving was higher than the other campaigns or the media expected. Investigating what the voters thought (via focus groups) helped them identify what people's concerns were, for and against. Then they tested different ways of communicating with voters, such that their communication resonated with leave voters and minimized antagonism of remain voters. As a result, the turnout for leave voters was higher than expected before the campaign.

At the same time, the other campaigns made assumptions both about the real state of opinion and about methods for communicating with voters. These assumptions were wrong, and they did not test them. As a result, turnout for remain voters was mediocre. I'm not sure if this was expected or not; the remain campaign was pretty much business as usual, so I suspect it was.

comment by TedSanders · 2018-11-05T21:08:10.062Z · LW(p) · GW(p)

What does winning look like?

I think I might be a winner. In the past five years I have won thousands of dollars across multiple prediction market contests. I earned a prestigious degree (a PhD in Applied Physics from Stanford) and have held a couple of prestigious, high-paying jobs (first as a management consultant at BCG, and now as an algorithms data scientist at Netflix). I have a fulfilling social life with friends who make me happy. I give tens of thousands to charity. I enjoy posting to Facebook and surfing the internet. I have the means and motivation to keep learning about areas outside my expertise. I floss and exercise and am generally satisfied with my health.

I think I could be considered both a rationalist and a winner.

But I post rarely to LessWrong because my rational perception is that it takes effort but does not provide return. Generally I think my shortcomings are shortcomings of execution rather than of irrationality, and those are the areas I aim to improve. My arena for self-improvement is my workplace and my life, not a website. As a result, stories like mine might be underrepresented in your sampling.

If rationalists were winning, how would we know? What would winning look like?

Replies from: Sailor Vulcan
comment by Sailor Vulcan · 2018-11-17T16:53:14.750Z · LW(p) · GW(p)

In other words, people who win at offline life spend less time on the internet because they're devoting more time to offline life. And since the rationalists are largely an online community rather than an offline one, at least outside of the Bay Area, this results in rationalists dropping out of the conversation when they start winning. That's a surprisingly plausible alternative explanation. I'll have to think about this.

comment by cata · 2018-11-05T04:51:45.430Z · LW(p) · GW(p)

The people I know IRL who identify as rationalists are doing great. Not a lot of people bet on prediction markets since the ones that exist are small and hard to use. Not a lot of people bet on stock markets since making money doing so is a boring full-time job.

I presume that the reason people don't post about how they are "winning" is because it's tactless to write a post about how great you are.

comment by ChristianKl · 2018-11-05T08:02:20.062Z · LW(p) · GW(p)

If there's a hedge fund out there that leverages superforecasting-style reasoning and makes billions with it, I doubt it would be rational for them to openly speak about their secret sauce. It might also be rational for them to currently reinvest all their money if they are getting a great return on it.

Replies from: ryan_b
comment by ryan_b · 2018-11-05T22:15:50.818Z · LW(p) · GW(p)

This is broadly the pitch of Bridgewater, with the caveat that what they are doing is largely not losing billions. As far as I can tell there is no direct relationship between Tetlock's methods and Dalio's, but they seem to have drawn similar conclusions.

Replies from: elityre
comment by Eli Tyre (elityre) · 2019-06-22T18:50:12.128Z · LW(p) · GW(p)

This is relevant to my interests. Do you have a particular source that describes their "pitch"?

comment by Lavrov Andrey (lavrov-andrey) · 2022-09-23T22:56:09.638Z · LW(p) · GW(p)

I'm quite late (the post was made 4 years ago), and I'm also new to LessWrong, so it's entirely possible that other, more experienced members will find flaws in my argument.

That being said, I have a very simple, short and straightforward explanation of why rationalists aren't winning.

Domain-specific knowledge is king.

That's it.

If you are a programmer and your code keeps throwing errors at you, then no matter how many logical fallacies and cognitive biases you can identify and name, posting your code on stackoverflow is going to provide orders of magnitude more benefit.

If you are an entrepreneur and you're trying to start your new business, then no matter how many hours you spend assessing your priors and calibrating your beliefs, it's not going to help you nearly as much as being able to tell a good manager apart from a bad manager.

I'm not saying that learning rationality isn't going to help at all, rather I'm saying that the impact of learning rationality on your chances of success will be many times smaller than the impact of learning domain-specific knowledge.

comment by avturchin · 2018-11-08T11:35:30.120Z · LW(p) · GW(p)

Sometimes winning is evidence of non-rationality. For example, if one plays a lottery and wins a million dollars, it was still irrational to play, as most lotteries have negative total expected utility. The same goes for trying to become very rich: most who try fail.

Imagine the following game: you are put into a bath where you will either a) be dissolved in acid, with 99 percent probability, or b) become a billionaire, with 1 percent probability. Would you agree to play?

I would say that playing the game is very irrational, and any winner was likely not able to correctly calculate the odds. So extreme winning is a signal of some form of irrationality.
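
A toy calculation makes the gap between expected dollars and expected utility explicit; the utility numbers are arbitrary stand-ins, chosen only to show the shape of the argument:

```python
import math

p_win, prize, wealth = 0.01, 1_000_000_000, 50_000

# In raw dollars the gamble looks spectacular:
expected_dollars = p_win * prize          # $10,000,000
print(expected_dollars)

# But with any utility function that treats death as catastrophically bad
# (an arbitrary large negative constant stands in for "dissolved in acid"),
# the 99% branch dominates:
U_DEATH = -1_000_000                      # assumption, not a measured quantity
eu_play = p_win * math.log(wealth + prize) + (1 - p_win) * U_DEATH
eu_pass = math.log(wealth)                # ~10.8: keep your current wealth
print(eu_play, eu_pass)                   # roughly -990,000 vs 10.8
```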

Replies from: TAG
comment by TAG · 2018-11-08T12:05:50.437Z · LW(p) · GW(p)

most lotteries have negative total expected utility.

It's quite possible for a lottery to have positive expected utility for an individual, and this was one of the cases that prompted the development of utility as a concept separate from value.
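
A toy example of the distinction (mine, not the historical case): with a threshold-shaped utility function, a ticket with negative expected dollar value can still maximize expected utility.

```python
def threshold_utility(wealth, threshold=10_000):
    """The agent only cares whether wealth clears the threshold (say, an urgent debt)."""
    return 1.0 if wealth >= threshold else 0.0

p_win, prize, ticket, wealth = 1 / 20_000, 10_000, 1, 100

expected_dollars = p_win * prize - ticket                  # -$0.50: a "bad" bet in money terms
eu_buy = p_win * threshold_utility(wealth - ticket + prize) \
       + (1 - p_win) * threshold_utility(wealth - ticket)  # 0.00005
eu_skip = threshold_utility(wealth)                        # 0.0: staying put can't clear the bar
print(expected_dollars, eu_buy, eu_skip)
```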

comment by Linda Linsefors · 2024-03-17T00:32:41.612Z · LW(p) · GW(p)

(Note: Said friend will be introducing himself on here and writing a sequence about his work later. When he does I will add the links here.)

 

Did you forget to add the links?

comment by MrAKaDeus · 2019-12-28T21:44:42.539Z · LW(p) · GW(p)

It could be that people don't use their rationality skills at their "bottlenecks". You could improve many things, but if they aren't your bottlenecks the result will be negligible. I've seen people train to recognize their biases but not use this for strategic planning, instead just doing "the safe thing" everyone does.

comment by ChristianKl · 2018-11-05T07:53:24.573Z · LW(p) · GW(p)

I don't think it's accurate to say that our rationality techniques are only about pursuing truth. It might be true that the Sequences are mostly about this, but a lot has happened since the Sequences were written.

If you look at the recent CFAR handbook, there are plenty of techniques that are useful for getting things done.

comment by shminux · 2018-11-05T02:48:17.896Z · LW(p) · GW(p)

Humans are only in small part pliable reasoning. Most of what makes us us is genetic, subconscious, and not available to introspection. We have more blind spots than sighted ones, and we actively resist correcting them. LW-style rationality tends to appeal to people who are, on average, at or below the mean in interpersonal skills, so you start with a huge handicap, and learning about biases and how to deal with them only gives you some marginal advantage over those like you, not a magic bullet for achieving your goals. Speaking of goals, humans are confused about what goals and values they have, and a person is better represented not as a single optimizer but as a multitude of competing agents, some of which are not aware of the others' presence, and some of which never bubble up to conscious awareness at all. Those lucky few of us who have one or two rationality-shaped blind spots benefit the most; the rest of us, well, here we are, discussing why we are not winning instead of actually winning.

comment by Raj Thimmiah (raj-thimmiah) · 2020-09-08T07:07:36.162Z · LW(p) · GW(p)

Did your friend ever finish that sequence? I'd still be quite interested in seeing it. After reading Chinese Businessmen: Superstition Doesn't Count, I've become very interested in becoming more instrumental.

comment by David Gross (David_Gross) · 2018-11-04T21:16:39.143Z · LW(p) · GW(p)

If you want to know more about really winning vs. theoretically winning, you might be interested in what Aristotle taught about baseball: https://sniggle.net/TPL/index5.php?entry=03Feb04

comment by Erfeyah · 2018-11-05T23:42:03.895Z · LW(p) · GW(p)

..