Rationality !== Winning

post by Raemon · 2023-07-24T02:53:59.764Z · LW · GW · 51 comments


I think "Rationality is winning [LW · GW]" is a bit of a trap. 

(The original phrase is notably "rationality is systematized winning", which is better, but it tends to slide into the abbreviated form, and neither form is that great IMO)

It was coined to counteract one set of failure modes - there were people who were straw vulcans, who thought rituals-of-logic were important without noticing when they were getting in the way of their real goals. And, also, there were outside critics who'd complain about straw-vulcan-ish actions, and treat that as a knockdown argument against "rationality."

"Rationalist should win" is a countermeme that tells both groups of people "Straw vulcanism is not The Way. If you find yourself overthinking things in counterproductive ways, you are not doing rationality, even if it seems elegant or 'reasonable' in some sense."

It's true that rationalists should win. But I think it's not correspondingly true that "rationality" is the study of winning, full stop. There are lots of ways to win. Sometimes the way you win is by copying what your neighbors are doing, and working hard. There is rationality involved in sifting through the various practices people suggest to you, and figuring out which ones work best. But, the specific skill of "sifting out the good from the bad" isn't always the best approach. It might take years to become good at it, and it's not obvious that those years of getting good at it will pay off.

Rationality is the study (and applied skill) of finding cognitive algorithms that form better beliefs and make better decisions. Sometimes this is the appropriate tool for the job, and sometimes it's not. 

This distinction is particularly important when deciding what feedback loops to focus on while developing rationality. If you set your feedback loops as "am I winning, generally?" (or, "are the students of my rationality curriculum winning, generally?"), well, that's a really noisy feedback loop that's probably swamped by random environmental variables. It's not nothing, but it's really slow.

There's also a problem where, well, a lot of techniques that help with winning-at-life just aren't especially about rationality in particular. If you're running a rationality program with "win-at-life" as your goal, you may find yourself veering in a direction that's not really capitalizing on the things rationality was actually especially good at, and become a generic self-help program. Maybe that's fine, but the result seems to lose something of the original spirit.

The domains where rationality matters are domains where information is scarce, and the common wisdom of the people around you is inadequate.

The coronavirus pandemic was a good example where rationality was relevant: a very major change disrupted society, there was not yet a scientific consensus on the subject, there were reasons to doubt some claims by scientific authorities [LW · GW], and your neighbors were probably slow to react. (I think rationalists did well at navigating the early pandemic, but, alas, also stayed in overly-stressful-lockdown-mode longer than was appropriate [LW · GW], and lacked some other key skills [LW · GW])

Building a startup is a domain where I think rationality is pretty relevant. There is a lot of common wisdom that is relevant. Paul Graham et al. have useful advice. But because you need to outperform a lot of competitors, common wisdom isn't enough. You need to continuously model the world, design products people don't know they want yet, and soak in new information so you can continuously update and iterate. You need to stare into the darkness and admit major mistakes [LW · GW].

Rationality is particularly helpful for solving problems we don't know how to solve [LW · GW].

Rationality is useful for judges and jurors, who must accurately weigh evidence and not get caught up in misleading arguments. (I know a lawyer who tried to explain Bayes theorem to a court, and the judge/jury... didn't believe him)
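
(To make concrete the kind of update Bayes' theorem licenses, here is a toy calculation with made-up numbers, not the lawyer's actual argument: suppose a forensic test always matches the guilty party but also matches 1% of innocent people, and the prior that this particular suspect is guilty is 1-in-1000.)

```typescript
// Bayes' theorem: P(guilty | match) =
//   P(match | guilty) * P(guilty) /
//   [P(match | guilty) * P(guilty) + P(match | innocent) * P(innocent)]
function posterior(
  prior: number,            // P(guilty) before seeing the evidence
  pMatchIfGuilty: number,   // P(match | guilty)
  pMatchIfInnocent: number  // P(match | innocent)
): number {
  const joint = pMatchIfGuilty * prior;
  return joint / (joint + pMatchIfInnocent * (1 - prior));
}

console.log(posterior(0.001, 1.0, 0.01)); // ≈ 0.091
```

A "99%-accurate" match against a low prior leaves the posterior around 9%, nowhere near "beyond reasonable doubt": exactly the sort of correction juries tend to resist.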

Rationality and AI alignment are heavily intertwined on LessWrong, and a major reason for that is that catastrophic misalignment is a problem we won't get to see and iterate on the way we do with the usual accumulation of scientific wisdom.

Eliezer notes a few mistakes he made in the original sequences:

It was a mistake that I didn’t write my two years of blog posts with the intention of helping people do better in their everyday lives. I wrote it with the intention of helping people solve big, difficult, important problems, and I chose impressive-sounding, abstract problems as my examples.

In retrospect, this was the second-largest mistake in my approach. It ties in to the first-largest mistake in my writing, which was that I didn’t realize that the big problem in learning this valuable way of thinking was figuring out how to practice it, not knowing the theory. I didn’t realize that part was the priority; and regarding this I can only say “Oops” and “Duh.”

Yes, sometimes those big issues really are big and really are important; but that doesn’t change the basic truth that to master skills you need to practice them and it’s harder to practice on things that are further away. (Today the Center for Applied Rationality is working on repairing this huge mistake of mine in a more systematic fashion.)

A third huge mistake I made was to focus too much on rational belief, too little on rational action.

I agree these were mistakes, and I think some amount of correction towards rationality-solving-your-personal-life-problems was probably right. Rationality does help with personal life problems. It helps you understand and get unconfused about your goals, understand the perspectives of people you need to interact with, etc. I have definitely gotten value out of CFAR-style workshops. I'd certainly rather have overcorrected in this direction than not at all.

(I also want to be clear, I don't think CFAR was just focused on self-help-y stuff. A lot of it continues to be pretty foundational to my strategic thinking in a variety of domains. I think double-crux [? · GW] ended up being pretty important to navigating disagreement at Lightcone)

But, 

Well, 

It's hard to say this kindly. I think "rationality-for-solving-personal-problems" created a cultural vortex that attracted people with personal problems. And I think this diverted attention away from "use rationality to systematically be faster-than-science [LW · GW] at understanding and solving difficult problems." 

I'm glad those people (including me, to some degree) found/built a home. But I want to recapture some of that original spirit.

I had a bit of a wake-up-call a few years ago when I read How to Measure Anything (see Luke's review [LW · GW]). This is not a book written to create a rewarding community of nerds. It's targeted at professional businessmen who want to improve decisions where thousands (or millions) of dollars are at stake. It teaches Bayes and calibration training and value-of-information in a context that does imply concrete exercises, but is not necessarily for everyone. It presents rationality as a professional tool that is useful for people who want to specialize in some types of problems.

How to Measure Anything notably won't help you (much) with aligning AI. It teaches "good decisionmaking", but doesn't teach "research taste in novel domains". I think there's a concrete branch of rationality training that'd be relevant for novel research, that requires pretty different feedback loops from the "generally be successful at life" style of training. I think some of "research taste rationality" is reasonably alive in academia, but many elements are not, or are not emphasized enough.

I'm interested in cultivating a set of rationality feedback loops that are geared towards training research taste. And as I've mulled this over, I've found it helpful to distance myself a bit from "rationality is systematized winning" as a catchphrase.

 

Appendix: Being Clearer on My Claims

Originally I stopped the post here. But commenter df fd asked [LW(p) · GW(p)]:

I got into Rationality for a purpose. If it is not the best way to get me to that purpose [i.e. not winning] then Rationality should be cast down and the alternative embraced.

And this made me realize I had left my claims here somewhat vague, and a lot of my background models implicit. So here's a breakdown of what I mean and why I care:

...

First, I want to note my definition of rationality here is not new, it's basically how it was described by Eliezer in 2012, and I'm pretty confident it's what he meant when writing most of the sequences. "Eliezer said so" isn't a great argument, but it looks like some people might feel like I'm shifting goalposts and changing definitions, and I claim I am not doing that. From Rationality: Appreciating Cognitive Algorithms [LW · GW]:

The word 'rational' is properly used to talk about cognitive algorithms which systematically promote map-territory correspondences or goal achievement. 

He notes that in the sentence "It's rational to believe the sky is blue", the word rational isn't really doing any useful work. The sky is blue. But by contrast, the sentence "It's (epistemically) rational to believe more in hypotheses that make successful experimental predictions" is saying something specifically about how to generate beliefs, and if you removed "rational" you'd have to fill in more words to replace it.

I think there's something similar going on with "it's rational to [do an object-level-strategy that wins]." If it just wins, just call it winning, not rationality. 

I do agree Eliezer also described rationality as systematized winning, and that ultimately winning was an important arbiter of "were you being rational", and that this required breaking out of some narrower definitions of rationality. I think these are not in as much tension as they look, but they are in at least superficial tension.

With that as background, here are the claims I'm actually making:

...

Second: I'm mostly making an empirical claim as to what seems to happen to individual people (and more noticeably to groups-of-people) if they focus on the slogan "rationality is winning." 

It's hypothetically possible for it to be true that "rationality is systematized winning", and for it to be subtly warping to focus on that fact. The specific failure modes I'm worried about are:

  • The feedback loops are long/slow/noisy, which makes it hard to learn if what you're trying is working.
  • If you set out to systematically win, many people end up pursuing a lot of strategies that are pretty random. And maybe they're good strategies! But bucketing all of them under "rationality" starts to deflate the meaning of the word.
  • People repeatedly ask "but, isn't it rational to believe false things?". And, my answer is "maybe, for some people? I think you should be really wary of doing that, but there's certainly no law of the universe saying it's false." But, this gets particularly bad as a way to orient as a group. The first generation of people who came for the epistemics maybe has a decent judgment on when it's okay to ignore epistemics. The second generation who comes for "systematized winning, including maybe ignoring epistemics?" has less ability to figure out if they're actually winning because they can't reason as clearly.
  • Similarly and more specifically: a lot of things-that-win in some respects are wooy, and while I think there's in fact good stuff in some woo, the first generation of rationalists exploring that woo were rationalists with a solid epistemic foundation. Subsequent generations came more for the woo than for the rationality (See Salvage Epistemology [LW · GW]).
  • In both the previous two bullets, the slogan "rationality is winning" is really fuzzy and makes it harder to discern "okay which stuff here is relevant?". Whereas "rationality is the study of cognitive algorithms that systematically arrive at truth and succeed at your goals" at least somewhat helps.

...

Third: The valley of bad rationality [LW · GW] means that the study of systematized winning is not guaranteed to actually lead to winning, even net over the course of your entire lifetime.

Maybe civilization, or your local culture, just has too many missing pieces for the deliberate study of systematic winning to be net-positive. Or maybe you can make some initial progress, but hit a local optimum, and the only way to further improve is to invest in skills that will take too long to pay off.

...

Fourth: Honestly, while I think LessWrong culture is good at epistemics, addressing motivated cognition, and some similar things... I don't have a strong reason to believe that we are particularly good at systematically winning across domains (except in domains where epistemics are particularly relevant).

So while it might be true that "The True Spirit of Rationality" is systematized winning, and epistemics is merely subservient to that... it's nonetheless true that if you're showing up on LessWrong or in other rationalist spaces, I think you'll be kind of disappointed if you're hoping to learn skills that will help you win at life in a generalized sense.

I do still think "more is possible". And I think there is "alpha" in epistemics, such that if you invest a lot in epistemics you will find a set of tools that the rest of the world is less likely to find. But I don't have a strong belief that this'll pay off substantially for any specific person.

(side note: I think we have maybe specialized reasonably in "help autistic nerds recover their weak spots", which means learning from our practices will help with some initial growth spurt, but then level up)

...

So, fifth: to answer df fd's challenge here:

I got into Rationality for a purpose. If it is not the best way to get me to that purpose [i.e. not winning] then Rationality should be cast down and the alternative embraced.

A lot of my answer here is "sure, that might be fine!" I highly recommend you focus on winning, and use whatever tools are appropriate, which sometimes will be "study/practice cognitive algorithms" shaped and sometimes will have other shapes. 

I do agree there is a meta-level skill of figuring out what tools to use, and I do think that meta-level skill is still pretty central to what I call rationality (which includes "applying cognitive algorithms to make good decisions"). Having that skill is good, all else being equal. But it's not necessarily the case that studying that skill will pay off.

Linguistically, I think it's correct to say "the rational move is the one that resulted in you winning (given your starting resources, including knowledge)", but, "that was the rational move" doesn't necessarily equal "'rationality' as a practice was helpful."

Hope that helps explain where I'm coming from.

51 comments


comment by Going Durden (going-durden) · 2023-07-25T09:11:16.224Z · LW(p) · GW(p)

One thing I don't see explored enough, and which could possibly bridge the gap between Rationality and Winning, is Rationality for Dummies.

The rationalist community is oversaturated with academic nerds, borderline geniuses, actual geniuses, and STEM people whose intellectual level and knowledge base are borderline transhuman.

In order for Rationality and Winning to be reconciled with minimum loss, we need bare-bones, simplified, kindergarten-level Rationality lessons based on the simplest, most relatable real-life examples. We need Rationality for Dummies. We need Explain Methods Like I'm Five, something that would actually work for actual 5-year-olds.

True, Objective Rationality Methods should be applicable whether you are an AI researcher with a PhD, or someone too young/stupid to tie their own shoes. Sufficiently advanced knowledge and IQ can just brute-force winning solutions despite irrationality. It would be more enlightening if we equipped a child/village idiot with simple Methods and judged their successes on this metric alone: lacking intellectual capacity or theoretical knowledge, they would need to achieve winning by step-by-step application of the Methods, rather than by jumps of intuition resulting from unconscious knowledge and cranial processing power.

Only once we have solid Methods of Rationality that we can teach to kids from toddler age, expanding on them until they are Rational Adults, can we say for certain which Rationalist ideas lead to Winning and which do not.

Replies from: benjamin-kost
comment by Benjamin Kost (benjamin-kost) · 2024-08-24T04:51:56.153Z · LW(p) · GW(p)

I just recently realized this place is even here, but simplifying concepts and applying better pedagogical techniques so that people of average intelligence can learn them is one of my main areas of focus. I believe we could do a much better job of both teaching and getting normal people interested in learning, which are two sides of the same coin.

comment by Valentine · 2023-07-24T03:50:15.574Z · LW(p) · GW(p)

I find this refreshing. It rings true. It feels like the kind of North Star we were groping toward in early CFAR but never landed on.

This in particular feels clarifying:

Rationality is the study (and applied skill) of finding cognitive algorithms that form better beliefs and make better decisions. Sometimes this is the appropriate tool for the job, and sometimes it's not. 

I find myself breathing with relief reading this. It has the flavor of a definition that could use some iterations. But as it stands, it strikes me as (a) honoring the spirit of the discipline while (b) not overblowing what it can do or is for. Like part of the clarity is in what the Art isn't for.

Thank you for writing this.

comment by Eli Tyre (elityre) · 2023-07-28T19:42:14.172Z · LW(p) · GW(p)

I feel strongly about this part:

How to Measure Anything notably won't help you (much) with aligning AI. It teaches "good decisionmaking", but doesn't teach "research taste in novel domains". I think there's a concrete branch of rationality training that'd be relevant for novel research, that requires pretty different feedback loops from the "generally be successful at life" style of training. I think some of "research taste rationality" is reasonably alive in academia, but many elements are not, or are not emphasized enough.

I want to keep pushing for people to disambiguate what precisely they mean when they use the word "rationality". It seems to me that there are a bunch of separate projects that plausibly, but not necessarily, overlap, which have been lumped together under a common term, which causes people to overestimate how much they do overlap.

In particular, “effective decision-making in the real world” and “how to make progress on natural philosophy when you don’t have traction on the problem” are much more different than one might think from reading the sequences (which talk about both in the same breath, and under the same label).

Problems where "rational choice under uncertainty" / necessarily, problems where you already have a frame to operate in. If nothing else, you have your decision theory and probability theory frame.

Making progress on research questions about which you are extremely confused is mostly a problem of finding and iterating on a frame for the problem. 

And the project of "raising the sanity waterline" and of "evidence-based self-help", are different still. 

comment by eillasti · 2023-07-27T20:12:49.774Z · LW(p) · GW(p)

True (Scottish) Rationality is winning. Firstly, whom do we call a rational agent in Economics? It is a utility optimiser given constraints. An agent that makes optimal choices and attains max utility possible, that's basically winning. But real life is more complicated, all agents are computationally constrained and "best case" is not only practically unattainable, it is uncomputable, so we cannot even compare against it. So we talk about doing "better than normally expected", "better than others", etc. When I say "winning" I mean achieving ambitious goals.

But achieving ambitious goals in real life is mainly not about calculating the optimal choices! It is mainly about character, integrity, execution, leadership and a lot of other stuff! How come I still claim that Rationality is Winning? What use is knowing what to do if in practice you don't do it? Well, that's the point! An "optimal" strategy that is infeasible is not optimal)

But why focus on rationality at all if other stuff is more important? Because, well, your character, integrity, execution, resources, etc are not under your direct control except via the decisions that you make. You get them by making rational decisions. Making decisions that make you win and achieve your (ambitious) goals is what I call rationality.

"Sometimes the way you win is by copying what your neighbours are doing, and working hard." And in this case behaving rationally is copying what your neighbours do and working hard. Doing anything else is irrational, IMHO. Figuring out whom, when and how to copy is a huge part of rationality! We also call it critical thinking. Knowing how and when to work hard is another one! Why do you exclude one of the most important cognitive algorithms "sifting out the good from the bad" from "the study (and applied skill) of finding cognitive algorithms that form better beliefs and make better decisions"? If you are not good at critical thinking, how do you know that LW is not complete bullshit?

"Developing rationality" as a goal is an awful one because you don't get feedback. Learning doesn't happen without feedback. "Winning" may be a great one IIF you pick fields with strong immediate practical unbiased feedback. For example, try playing poker, data science in an executive position, all kinds of trading (RTB, crypto, systematic, HFT, just taking advantage of the opportunities when they present themselves), doing research just for the purpose of figuring out stuff for yourself because your life & money (universal resource) depends on it (or because it's fun and later your life will depends on it :) ). These are all examples from my life and they worked wonders for me.

I am sorry if I am coming across a little aggressive. I think this is a great post raising a great point. I am just a rude post-USSR trader and I believe that being direct and to the point is the best way to communicate and to show respect)

I never had an opportunity to participate in CFAR workshops and that's a pity. I would be happy to discuss this stuff further because I think both sides have useful stuff to share.
 

Replies from: Raemon
comment by Raemon · 2023-07-27T21:04:24.011Z · LW(p) · GW(p)

This post is primarily targeted towards people trying to develop rationality, either as a personal skill or as an overall field/artform. 

Could you clarify if you disagree with the claims I more explicitly make/don't make in the appendix?

Why do you exclude one of the most important cognitive algorithms "sifting out the good from the bad" from "the study (and applied skill) of finding cognitive algorithms that form better beliefs and make better decisions"? If you are not good at critical thinking, how do you know that LW is not complete bullshit?

fyi I explicitly included this, I just warned that it wouldn't necessarily pay off in time to help

Replies from: eillasti
comment by eillasti · 2023-07-28T11:35:35.793Z · LW(p) · GW(p)

The word 'rational' is properly used to talk about cognitive algorithms which systematically promote map-territory correspondences or goal achievement. 

I disagree with the definition "systematically promote map-territory correspondences" because for me it is "maps all the way down": we never ever perceive the territory directly; we perceive and manipulate the world via models (maps). Finding models that work (that enable goal achievement/winning) is the essence of intelligence. "All models are wrong, some are useful". Even if we get to the actually elemental parts of reality and can essentially equate our most granular map with the territory that is out there, we still mainly won't care in practice about this perfect map because it is going to be computationally intractable. Let's take Newtonian Mechanics and General Relativity for example. We know that General Relativity is "truer" but we don't use it for calculating pendulum dynamics at the Earth's surface; the differences that it models are just irrelevant compared to other more relevant stuff.

Second: I'm mostly making an empirical claim as to what seems to happen to individual people (and more noticeably to groups-of-people) if they focus on the slogan "rationality is winning."

This is the core claim I think!

The feedback loops are long/slow/noisy, which makes it hard to learn if what you're trying is working.

Definitely! If the feedback loops are long, slow and noisy, then learning is long, slow and noisy. That's why I give examples of areas where the feedback loops are short, fast and with very little noise. These are examples that worked for me with astonishing efficiency. I would not be the person I am otherwise. And I've chosen these areas explicitly for this reason.

If you set out to systematically win, many people end up pursuing a lot of strategies that are pretty random. And maybe they're good strategies! But bucketing all of them under "rationality" starts to deflate the meaning of the word.

"pretty random" sounds to me like the exact opposite of rational and winning)

People repeatedly ask "but, isn't it rational to believe false things?"

Here I make an extremely strong claim that it is never rational to believe false things. Personal integrity is the cornerstone of rationality and winning. This is a blogpost-scope topic, so I won't go into it further right here.

Similarly and more specifically: a lot of things-that-win in some respects are wooy, and while I think there's in fact good stuff in some woo, the first generation of rationalists exploring that woo were rationalists with a solid epistemic foundation. Subsequent generations came more for the woo than for the rationality (See Salvage Epistemology [LW · GW]).

"Woo" is stuff that doesn't fit into your clear self-consistent world model. There is a lot of useful stuff out there that you guys ignore! Copenhagen interpretation, humanities, biology, religion, etc... If you don't understand why it makes sense, you don't understand it, fullstop. I believe that mining woo for useful stuff is exactly how you do original research. It worked wonders for me! But integrity goes first! You shouldn't just replace your model with the foreign one or do "model averaging", you should grok what those guys get that you are missing and incorporate it in your model. Integrity and good epistemiology are a must, if you don't have those yet, don't touch woo! This is power aka dark arts, it will corrupt you.  

In both the previous two bullets, the slogan "rationality is winning" is really fuzzy and makes it harder to discern "okay which stuff here is relevant?". Whereas "rationality is the study of cognitive algorithms that systematically arrive at truth and succeed at your goals" at least somewhat helps.

I go for "rationality is cognitive algorithms that systematically arrive at succeeding at your goals". 

Third: The valley of bad rationality [LW · GW] means that the study of systematized winning is not guaranteed to actually lead to winning, even net over the course of your entire lifetime.

In my experience there is a valley of bad X for every theory X. This is what you have to overcome. I agree that many perish in it. But the success of those who pass is well worth it. I think we should add more "here be dragons" and "most of you will perish" and "like seriously, 90% will do worse off by trying this". It's not for everybody; you need to have character.

Fourth: Honestly, while I think LessWrong culture is good at epistemics, addressing motivated cognition, and some similar things... I don't have a strong reason to believe that we are particularly good at systematically winning across domains (except in domains where epistemics are particularly relevant).

I am really sorry to say this, I love LW and I took a lot from it and I deeply respect a lot of people from here, I mean like genius-level, but yep, LW sucks at winning and you are not even good at epistemics in the areas that matter for you the most. Let's do something about it, let's win?)

So, fifth: to answer df fd's challenge here:

I got into Rationality for a purpose. If it is not the best way to get me to that purpose [i.e. not winning] then Rationality should be cast down and the alternative embraced.

A lot of my answer here is "sure, that might be fine!" I highly recommend you focus on winning, and use whatever tools are appropriate, which sometimes will be "study/practice cognitive algorithms" shaped and sometimes will have other shapes.

I do agree there is a meta-level skill of figuring out what tools to use, and I do think that meta-level skill is still pretty central to what I call rationality (which includes "applying cognitive algorithms to make good decisions"). But it's not necessarily the case that studying that skill will pay off.

Linguistically, I think it's correct to say "the rational move is the one that resulted in you winning (given your starting resources, including knowledge)", but, "that was the rational move" doesn't necessarily equal "'rationality' as a practice was helpful."

Hope that helps explain where I'm coming from.

This one I just agree with.

fyi I explicitly included this, I just warned that it wouldn't necessarily pay off in time to help

I see from the 5th point that you explicitly included it, sorry for missing it, I just tend to really get stuck in writing good deliberate replies, so I just explicitly decided to contribute whatever I realistically can. 

I still stand on the position that this one (I call it critical thinking) should come first. It's true that there is no guarantee that it would pay off in time for everybody. But if you miss it, how do you distinguish between woo and rationality? I think you are just doomed in this case. Here be dragons, most of you will perish on the way. 

comment by Vladimir_Nesov · 2023-07-24T10:34:00.434Z · LW(p) · GW(p)

Physics is not particularly about getting to the Moon, there are many relevant activities of very different flavors, most of them not wielding physical law for personal benefit of the practitioner. There's even theory that doesn't have the slightest notion of applicability in the real world, and yet it's legitimately about physics.

comment by Said Achmiz (SaidAchmiz) · 2023-07-24T08:00:36.141Z · LW(p) · GW(p)

Rationality is the study (and applied skill) of finding cognitive algorithms that form better beliefs and make better decisions. Sometimes this is the appropriate tool for the job, and sometimes it’s not.

If “the study (and applied skill) of finding cognitive algorithms that form better beliefs and make better decisions” is not the appropriate tool for the job (whatever “the job” might be), what exactly do you call the skill of (a) realizing this (or, more generally, determining when something is or is not the right tool for a job), (b) finding the right tool for the job, and (c) using that tool to do the job successfully?

There’s one obvious word we can use for this…

Replies from: Raemon
comment by Raemon · 2023-07-24T08:40:13.178Z · LW(p) · GW(p)

I think some versions of this are rationality (if you figured out the right tool for the job via deliberate study/cultivation of cognitive patterns) and some are not (if you did it by blindly copying your neighbors or listening to the high status authority without much/any consideration of alternatives).

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2023-07-24T08:49:37.535Z · LW(p) · GW(p)

How about the skill of determining which of “figur[ing] out the right tool for the job via deliberate study/cultivation of cognitive patterns”, “blindly copying your neighbors”, or “listening to the high status authority without much/any consideration of alternatives” works best (or, better / worse in what cases) for finding / using the right tools for various jobs? What would you call that?

Replies from: Raemon
comment by Raemon · 2023-07-24T08:53:41.996Z · LW(p) · GW(p)

Rationality

Replies from: Raemon
comment by Raemon · 2023-07-24T08:54:25.897Z · LW(p) · GW(p)

This was covered in the post; not sure what point you're making.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2023-07-24T09:10:20.041Z · LW(p) · GW(p)

The point I am making is that your claim—that rationality is sometimes the right tool for the job, and sometimes not—doesn’t follow from your argument, any more than it follows from the observation that, say, the skill of cooking pasta is not, strictly speaking, “rationality”. Figuring out how to cook pasta, or figuring out how to figure out how to cook pasta, or figuring out whether to cook pasta, or figuring out how to determine whether you should cook pasta and how much time you should spend on figuring out how to cook pasta… these things might recognizably be “rationality”, but the skill of cooking pasta is just a skill.

But what do we conclude from that? That—contrary to previous beliefs—sometimes we shouldn’t apply rationality? That the definition of “rationality” should somehow exclude cooking pasta, placing that skill outside of the domain of “rationality”? I mean, this—

Rationality is the study (and applied skill) of finding cognitive algorithms that form better beliefs and make better decisions. Sometimes this is the appropriate tool for the job, and sometimes it’s not.

—does it include cooking pasta or not? (Suppose you say “no”, are we to understand that heretofore the consensus answer among “rationalists” was instead “yes”?) This seems like a silly thing to be asking, but that’s because there’s a motte-and-bailey here: the motte is “cooking pasta is not really ‘rationality’, per se, it’s just a skill”, while the bailey is “we shouldn’t apply rationality to the domain of pasta-cooking, or cooking of any sort, or [some even broader category of activity]”.

To put it another way, if “the study (and applied skill) of finding cognitive algorithms that form better beliefs and make better decisions” is sometimes the wrong tool for the job, but figuring out whether it’s the right tool for the job (or the question one meta level up from that, or from that, etc.) is “rationality”, then evidently “rationality” is not, in fact, “the study (and applied skill) of finding cognitive algorithms that form better beliefs and make better decisions”, but rather something much broader and more fundamental than that. The definition is self-defeating. As is unsurprising; indeed, the idea that “rationality” absolutely should not be defined this narrowly is one of the most important ideas in the Sequences!

Replies from: ryan_b, mr-hire
comment by ryan_b · 2023-07-26T15:44:25.893Z · LW(p) · GW(p)

I do not follow this section:

To put it another way, if “the study (and applied skill) of finding cognitive algorithms that form better beliefs and make better decisions” is sometimes the wrong tool for the job, but figuring out whether it’s the right tool for the job (or the question one meta level up from that, or from that, etc.) is “rationality”, then evidently “rationality” is not, in fact, “the study (and applied skill) of finding cognitive algorithms that form better beliefs and make better decisions”, but rather something much broader and more fundamental than that

Why should this be? Naively, this is covered under "make better decisions", because the method used to solve a problem is surely a decision. More broadly, it feels like we definitely want rationality to have the property that we can determine the limits of the art, using the art; and also that we can expand the limits of the art, using the art. Math has this property, and we don't consider that to not be math but something more fundamental; not even in light of incompleteness theorems.

For the cooking pasta example: it feels like we should be able to rationally consider the time it would take to grok cooking pasta, compare it to the time it would take to just follow a good recipe, and conclude just following the recipe is a better decision. More specifically, we should be able to decide whether investing in improving our beliefs about pasta cooking is better or worse than going with our current beliefs and using a recipe, on a case-by-case basis.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2023-07-26T16:38:33.907Z · LW(p) · GW(p)

I agree with your second paragraph, but I’m not sure how it’s meant to be contradicting what I wrote.

Replies from: ryan_b
comment by ryan_b · 2023-07-26T23:25:29.570Z · LW(p) · GW(p)

It is only contradictory insofar as I wrote it using the beliefs-and-decisions phraseology from Raemon's definition, which isn't much. What I am really interested in is hearing more about your intuitions behind why applying the definition to meta-level questions points away from the usefulness of the definition.

Note that I am not really interested in Raemon's specific definition per se, so if this is a broader intuition and you'd prefer to use other examples to illustrate that would be just fine.

comment by Matt Goldenberg (mr-hire) · 2023-07-26T13:53:08.574Z · LW(p) · GW(p)

I do agree there is a meta-level skill of figuring out what tools to use, and I do think that meta-level skill is still pretty central to what I call rationality (which includes "applying cognitive algorithms to make good decisions"). But it's not necessarily the case that studying that skill will pay off

To me this paragraph covered the point you're making, and further took a stance on the efficacy of such an approach.

Replies from: Raemon
comment by Raemon · 2023-07-26T16:04:30.230Z · LW(p) · GW(p)

Note I added that after Said wrote this comment. (partly as a response to Said's comment)

The place where I feel like I addressed this (though somewhat less clearly) in the original version was:

There is rationality involved in sifting through the various practices people suggest to you, and figuring out which ones work best. But, the specific skill of "sifting out the good from the bad" isn't always the best approach. It might take years to become good at it, and it's not obvious that those years of getting good at it will pay off.

comment by MondSemmel · 2023-07-29T17:06:16.525Z · LW(p) · GW(p)

If the title doesn't mean "A != B", then I don't know what it's supposed to mean instead. Can you edit the actual explanation of the title into the top of this essay?

Replies from: Raemon
comment by Raemon · 2023-07-29T18:31:41.625Z · LW(p) · GW(p)

Are you saying my current explanation (in the comments here) does or doesn't make sense to you? (i.e. if I just edit that explanation into the top, does it solve the problem you're worried about, or does it feel like it needs a better explanation?)

Replies from: MondSemmel
comment by MondSemmel · 2023-07-29T19:12:07.266Z · LW(p) · GW(p)

I scrolled past that comment [LW(p) · GW(p)] because it was in a sub-sub-thread I wasn't immediately interested in, but now I see that it answers my question, yes. If you edit it into the main post, maybe make the explanation a bit shorter?

Meta feedback: I recommend, in the strongest possible terms, not to use obscure jargon in important communications (which includes essay titles), because it can and will be misinterpreted. You get about five words [LW · GW], after all, and the title plus explanation far exceed that.

Personally I've done a bunch of hobbyist programming, and I've seen but never personally used that syntax (does it even exist in Python?). Now consider that most people have zero programming experience (although I wonder about the median programming experience of LW users), and I suspect that if they're familiar with "!=" or "!=="  at all, they're more likely to be familiar with "!=" as the "unequal" sign in mathematics. But that results in a misleading interpretation of the title! I read "A != B", and interpreted it not as "A is not 100% equal to B", but as "A is very different from B".

comment by df fd (df-fd) · 2023-07-24T12:13:25.054Z · LW(p) · GW(p)

Personally, I am strongly against this, 

I got into Rationality for a purpose. If it is not the best way to get me to that purpose [i.e. not winning] then Rationality should be cast down and the alternative embraced.

On the other hand, I suspect we mostly agree, with our disagreement being over the definition of the word "winning".

I could have failed reading comprehension, but I did not see "winning" defined anywhere in the post.

Replies from: Raemon, ryan_b
comment by Raemon · 2023-07-24T19:40:40.505Z · LW(p) · GW(p)

Okay, I've updated I should be a bit more clear on which claims I'm specifically making (and not making) in this post.

First, I want to note my definition of rationality here is not new, it's basically how it was described by Eliezer in 2012, and I'm pretty confident it's what he meant when writing most of the sequences. "Eliezer said so" isn't an argument, but it looks like some people might feel like I'm shifting goalposts and changing definitions, and I claim I am not doing that. From Rationality: Appreciating Cognitive Algorithms [LW · GW]:

The word 'rational' is properly used to talk about cognitive algorithms which systematically promote map-territory correspondences or goal achievement. 

He notes that in the sentence "It's rational to believe the sky is blue", the word rational isn't really doing any useful work. The sky is blue. But by contrast, the sentence "It's (epistemically) rational to believe more in hypotheses that make successful experimental predictions" is saying something specifically about how to generate beliefs, and if you removed "rational" you'd have to fill in more words to replace it.

I think there's something similar going on with "it's rational to [do an object-level-strategy that wins]." If it just wins, just call it winning, not rationality. 

...

Second: I'm mostly making an empirical claim as to what seems to happen to individual people (and more noticeably to groups-of-people) if they focus on the slogan "rationality is winning." 

It's hypothetically possible for it to be true that "rationality is systematized winning", and for it to be subtly warping to focus on that fact. The specific failure modes I'm worried about are:

  • The feedback loops are long/slow/noisy, which makes it hard to learn if what you're trying is working.
  • If you set out to systematically win, many people end up pursuing a lot of strategies that are pretty random. And maybe they're good strategies! But bucketing all of them under "rationality" starts to deflate the meaning of the word. 
  • People repeatedly ask "but, isn't it rational to believe false things?". And, my answer is "maybe, for some people? I think you should be really wary of doing that, but there's certainly no law of the universe saying it's false." But, this gets particularly bad as a way to orient as a group. The first generation of people who came for the epistemics maybe has a decent judgment on when it's okay to ignore epistemics. The second generation who comes for "systematized winning, including maybe ignoring epistemics?" has less ability to figure out if they're actually winning because they can't reason as clearly.
  • Similarly and more specifically: a lot of things-that-win in some respects are wooy, and while I think there's in fact good stuff in some woo, the first generation of rationalists exploring that woo were rationalists with a solid epistemic foundation. Subsequent generations came more for the woo than for the rationality (See Salvage Epistemology [LW · GW]).
  • In both the previous two bullets, the slogan "rationality is winning" is really fuzzy and makes it harder to discern "okay which stuff here is relevant?". Whereas "rationality is the study of cognitive algorithms that systematically arrive at truth and succeed at your goals" at least somewhat helps.

...

Third: The valley of bad rationality [LW · GW] means that the study of systematized winning is not guaranteed to actually lead to winning, even net over the course of your entire lifetime.

Maybe civilization, or your local culture, just has too many missing pieces for the deliberate study of systematic winning to be net-positive. Or maybe you can make some incremental progress, but hit a local optimum, and the only way to further improve is to invest in skills that will take too long to pay off.

...

Fourth: Honestly, while I think LessWrong culture is good at epistemics, addressing motivated cognition, and some similar things... I don't have a strong reason to believe that we are particularly good at systematically winning across domains (except in domains where epistemics are particularly relevant).

So while it might be true that "The True Spirit of Rationality" is systematized winning, and epistemics is merely subservient to that... it's nonetheless true that if you're showing up on LessWrong or in other rationalist spaces, I think you'll be kind of disappointed if you're hoping to learn skills that will help you win at life in a generalized sense.

I do still think "more is possible". And I think there is "alpha" in epistemics, such that if you invest a lot in epistemics you will find a set of tools that the rest of the world is less likely to find. But I don't have a belief that this'll pay off that hard for any specific person.

(side note: I think we have maybe specialized reasonably in "help autistic nerds recover their weak spots", which means learning from our practices will help with some initial growth spurt, but then level up)

...

So, fifth: Regarding your claim here: 

I got into Rationality for a purpose. If it is not the best way to get me to that purpose [i.e. not winning] then Rationality should be cast down and the alternative embraced.

A lot of my answer here is "sure, that might be fine!" I highly recommend you focus on winning, and use whatever tools are appropriate, which sometimes will be "study/practice cognitive algorithms" shaped and sometimes will have other shapes.

I do agree there is a meta-level skill of figuring out what tools to use, and I do think that meta-level skill is still pretty central to what I call rationality (which includes "applying cognitive algorithms to make good decisions"). But it's not necessarily the case that studying that skill will pay off. And it's not necessarily the case that focusing on cultivating that skill as a community will pay off harder than alternatives like "specialize in a particular sub-domain".

Linguistically, I think it's correct to say "the rational move is the one that resulted in you winning (given your starting resources, including knowledge)", but, "that was the rational move" doesn't necessarily equal "'rationality' as a practice was helpful."

Hope that helps explain where I'm coming from.

Replies from: Raemon, None, df-fd
comment by Raemon · 2023-07-25T03:25:24.957Z · LW(p) · GW(p)

I figure this is as good a place as any to flag a jargon-y nuance in the post title.

The post title is "Rationality !== Winning", not "Rationality != Winning". Different programming languages implement this somewhat differently, but typically "!=" means "X not equal to Y" and "!==" means "X not exactly equal to Y" (when there are various edge cases on what exactly counts as 'equal').

I think there is some sense in that Rationality is Winning, but I don't think it's true that it's exactly equal to winning, and the difference has some implications.
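
(A minimal sketch of the distinction in TypeScript; the variable name and value here are purely illustrative:)

```typescript
// In JavaScript/TypeScript, `!=` compares after type coercion,
// while `!==` additionally requires the types to match.
const winning: unknown = "1"; // hypothetical value, chosen only to show coercion

console.log(winning != 1);  // false: "1" coerces to the number 1, so they're loosely equal
console.log(winning !== 1); // true:  a string is never *exactly* equal to a number
```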

Replies from: eillasti
comment by eillasti · 2023-07-28T12:10:35.751Z · LW(p) · GW(p)

Actually, I missed this one. I agree with you. 

I would edit this into the main post.  I am a programmer, but I missed it. 

comment by [deleted] · 2023-07-25T18:33:05.769Z · LW(p) · GW(p)

Raemon, I had a long time to think on this and I wanted to break down a few points. I hope you will respond and help me clarify where I am confused.

By expected value, don't you mean it in the mathematical sense? For example, take a case where you have a slight edge in EV at a casino gambling game. (This happens when the house gives custom rules to high rollers, on roulette with computer assistance, and in blackjack.)

This doesn't mean an individual playing with positive EV will accumulate money until they are banned from playing. They can absolutely have a string of bad luck and go broke.
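
(A toy simulation of that point, with numbers I'm making up for the sketch: a 51/49 even-money bet has a positive EV of +0.02 units per round, yet a small fixed-stake bankroll still goes broke a large fraction of the time.)

```typescript
// Simulate a gambler with a positive edge but a finite bankroll.
function goesBroke(startingBankroll: number, rounds: number): boolean {
  let bankroll = startingBankroll;
  for (let i = 0; i < rounds; i++) {
    bankroll += Math.random() < 0.51 ? 1 : -1; // win with probability 0.51
    if (bankroll <= 0) return true; // ruined despite the positive edge
  }
  return false;
}

const trials = 10_000;
let ruins = 0;
for (let t = 0; t < trials; t++) {
  if (goesBroke(10, 1_000)) ruins++;
}
// With a 10-unit bankroll, ruin happens in roughly two-thirds of trials.
console.log(`ruined in ${((100 * ruins) / trials).toFixed(1)}% of trials`);
```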

Similarly a person using rationality in their life can have bad luck and receive a bad outcome.

Some of the obvious ones are: if cryonics has a 15 percent chance of working, in 85 percent of futures they wasted money on it. The current drugs that extend lifespan in rats and other organisms, which the medical-legal establishment is slow-walking studying in humans, may not work; or they may work but one of the side effects kills an individual rationalist.

With that said there's another issue here.

There are the assumptions behind rationality, and the heuristics and algorithms this particular group tries to use.

Assumptions:

  1. World is causal.
  2. You can compute from past events general patterns that can be reused.
  3. Individual humans, no matter their trappings of authority, must have a mechanism in order to know what they claim.
  4. Knowing more decision-relevant information when making a decision improves your odds; it's not all luck.
  5. Rules not written as criminal law that society wants you to follow may not be to your benefit to obey. Example: "go to college first".
  6. It's just us. Many things by humans are just made up and have no information content whatsoever; they can be ignored. Examples are the idea of "generations" and of course all religion.
  7. Gears-level model. How does A cause B? If there is no connection, it is possible someone is mistaken.
  8. Reason with numbers. It is possible to describe and implement any effective decisionmaking process as numbers and written rules, reasoning in the open. You can always beat a human "going with their gut", assuming sufficient compute.

I have others, but this seems like a start.

Algorithms

  1. Try to apply Bayes' theorem
  2. Prediction markets, expressing opinions as probabilities
  3. What do they claim to know and how do they know it? Specific to humans. This lets you dismiss the advice of whole classes of people, as they have no empirical support or are paid to work against you.

Psychologists with their unvalidated and ineffective "talk therapy", psychiatrists in many cases with their obviously crude methods of manipulating entire classes of receptors and lack of empirical tools to monitor attempts at treatment, real estate agents, stock brokers pushing specific securities, and all religion employees.

Note that I will say each of the above is mostly not helpful, but there are edge cases. Meaning I would trust a psychologist that was an AI system validated against a million patients' outcomes, I would trust a psychiatrist using fMRI or internal brain electrodes, I would trust a real estate agent who is not incentivized for me to make an immediate purchase, I would trust a stock advice system with open source code, and I would trust a religion employee who can show their communication device used to contact a deity or their supernatural powers.

Sorry for the long paragraph, but these are heuristics. A truly rational ASI is going to simulate it all out. We humans can at best check whether someone is misleading us by looking for outright impossibilities.

  1. Is someone we are debating even responding to our arguments? For example, authority figures simply don't engage with questions on cryonics or existential AI risk, or give meaningless platitudes that do not respond to the question asked. Someone doing this is potentially wrong about their opinion.

  2. Is an authority figure with a deeply held belief that may be wrong even updating their belief as evidence becomes available that invalidates it? Does any authority figure at medical research establishments even know 21CM revived a working kidney after cryo recently? Would it alter their opinion if they were told?

If the assumptions are true, and you pick the best algorithm available, you will win relative to other humans in expected value. Rationality is winning.

That doesn't mean that as an individual you can't die of a heart attack despite the correct diet while AI stocks are in a winter, so you never see the financial benefits. (A gears-level model would say A, AI company capital, can lead to B, goods and services from AI, which also feeds back into A, and thus owning shares is a share of infinity.)

Replies from: Raemon
comment by Raemon · 2023-07-25T18:35:34.239Z · LW(p) · GW(p)

I'm not sure I understood the point you're making. 

A point which might be related: I'm not just saying "systematized winning still involves luck of the dice" (i.e. just because it's positive EV doesn't mean you'll win). I'm saying "studying systematized winning might be negative EV (for a given person at a given point in history)".

Illustrative example: an aspiring doctor from the distant past might have looked at a superstitious shaman and thought "man, this guy's arguments make no sense. Shamanism seems obviously irrational". And the aspiring doctor goes to reason about medicine from first principles... and invents leeching/bloodletting. He might have some methods/mindsets that are "an improvement" over the shaman's mindset, but the shaman might have generations of accumulated cultural tips/tricks that tend to work even if his arguments for them are really bad. See Book Review: The Secret Of Our Success [LW · GW], although also the counterpoint Reason isn't magic [LW · GW].

comment by df fd (df-fd) · 2023-07-25T02:31:14.396Z · LW(p) · GW(p)

Yes. This is what I was looking for. It makes way more sense now. I broadly agree with everything said here. Thank you for clarifying.

By the way, I think you should consider rewriting the side note re autistic nerd. I am still a bit confused reading that.

Replies from: Valentine
comment by Valentine · 2023-07-25T14:49:16.621Z · LW(p) · GW(p)

By the way, I think you should consider rewriting the side note re autistic nerd. I am still a bit confused reading that.

FWIW, I found the comment crystal clear.

CFAR's very first workshops [LW · GW] had a section on fashion. LukeProg gave a presentation on why fashion was worth caring about, and then folk were taken to go shopping for upgrades to their wardrobe. Part of the point was to create a visible & tangible upgrade in "awesomeness".

At some point — maybe in those first workshops, I don't quite recall — there was a lot of focus on practicing rejection therapy. Folk were taken out to a place with strangers and given the task of getting rejected for something. This later morphed into Comfort Zone Expansion (CoZE) and, finally, into Comfort Zone Exploration [? · GW]. The point here was to help folk cultivate courage.

By the June 2012 workshop I'd introduced Againstness [LW · GW], which amounted to my martial arts derived reinvention of applied polyvagal theory. Part of my intent at the time was to help people get more into their bodies and to notice that yes, your physiological responses actually very much do matter for your thinking.

Each of these interventions, and many many others, were aimed specifically at helping fill in the autistic blindspots that we kept seeing with people in the social scene of rationalists. We weren't particular about supporting people with autism per se. It was just clear that autistic traits tended to synergize in the community, and that this led to points of systematic incompetence that mattered for thinking about stuff like AI. Things on par with not noticing how "In theory, theory and practice are the same" is a joke.

CFAR was responsible for quite a lot of people moving to the Bay Area. And by around 2016 it was perfectly normal for folk to show up at a CFAR workshop not having read the Sequences. HPMOR was more common — and at the time HPMOR encouraged people toward CFAR more than the Sequences IIRC.

So I think the "smart person self-help" tone ended up defining a lot of rationalist culture at least for Berkeley/SF/etc.

…which in turn I think kind of gave the impression that rationality is smart person self-help.

I think we did meaningfully help a lot of people this way. I got a lot of private feedback on Againstness, for instance, from participants months later saying that it had changed their lives (turning around depression, resolving burnout, etc.). Rejection therapy was a game-changer for some folk. I think these things were mostly net good.

But I'm with Raemon on this: For good rationality, it's super important to move past that paradigm to something deeper. Living a better life is great. But lots of stuff can do that. Not as many places have the vision of rationality [LW · GW].

comment by ryan_b · 2023-07-24T13:51:15.270Z · LW(p) · GW(p)

I did not see "winning" defined anywhere in the post

That's because it isn't; insofar as rationality is systematically winning, it is meant to be true for arbitrary definitions of winning.

comment by Gordon Seidoh Worley (gworley) · 2023-07-24T17:02:29.023Z · LW(p) · GW(p)

A few thoughts on this.

This post reminded me of Eliezer's take against toolbox-style thinking [LW · GW]. In particular, it reminded me of the tension within the rationality community between folks who see rationality as the one thing you need for everything and folks who see it as an instrumentally useful thing to pull out in some circumstances.

The former folks form what we call the Core Rationalists. Rationality might not be literally everything, but it's the main thing, and they take an expansive view on the practice of rationality. If something helps them win, they see it as being part of rationality definitionally because it helps them win. This is also where the not-so-straw Vulcan LARPers hang out.

The latter group we might call the Instrumental Rationalists. They care about rationality to the extent it's useful. This includes lots of normal folks who got interested in rationality because it seemed like a useful tool, but it's not really central to their identity the way it is for Core Rationalists. This is also the group where the Post/Meta-Rationalists hang out, whom you can think of as Core Rationalists who realized they should treat rationality as one of many tools and seek to combine it with other things to have a bigger toolbox to use to help them win.

Disagreements between these two groups show up all the time. They often play out in the comments sections of the forum when someone posts something that really gets at the heart of what rationality is. I'm thinking about posts from @[DEACTIVATED] Duncan Sabien [LW · GW], comments from @Said Achmiz [LW · GW], whatever @Zack_M_Davis [LW · GW]'s latest thing is, and of course some of my own posts and comments.

Perhaps this disagreement will persist because there's not really a resolution to it. The difference between these groups is not object-level rationality, but how they relate to rationality. And both groups can be part of the rationality movement, even if they sometimes piss each other off, because they at least agree on one thing: rationality is really useful.

comment by Christopher King (christopher-king) · 2023-07-24T17:00:48.631Z · LW(p) · GW(p)

My two cents is that rationality is not about being systematically correct; it's about being systematically less wrong. If you know of some method that is systematically less wrong than you, and you're skilled enough to apply it but don't, you're being irrational. There are some things you just can't predict, but when you can predict them, rationality is the art of choosing to do so.

Replies from: Raemon
comment by Raemon · 2023-07-24T19:59:13.328Z · LW(p) · GW(p)

This feels incomplete to me, but it does seem to be getting at something interesting and maybe practical.

Replies from: christopher-king
comment by Christopher King (christopher-king) · 2023-07-24T20:08:58.865Z · LW(p) · GW(p)

Practically, I'm at a similarish place as other LessWrong users, so I usually think about "how can I be even LessWrong than the other users (such as Raemon 😉)". My fellow users are a good approximation to counterfactual versions of me. It's similar to how martial arts practitioners try to get stronger than each other.

(This of course is only subject to mild optimization [? · GW] so I don't get nonsense solutions like "distract Raemon with funny cat videos". It is only an instrumental value which must not be pressed too far. In fact, other people getting more rational is a good thing because it raises the target I should reach!)

comment by Viliam · 2023-08-03T11:04:44.319Z · LW(p) · GW(p)

As I see it, "self-help" is short-term/object-level, and "rationality" is long-term/meta-level.

If you want to improve your life, here and now, there is a lot of good specific advice you can follow.

Rationality is the level beyond that. If you get contradictory self-help advice, how will you choose? If you have already fixed the obvious mistakes, and you follow the standard good advice, what next?

There are two kinds of mistakes you want to avoid. One is focusing on insight porn, and neglecting your real life. The other is following the practical advice that seems good, and then stumbling upon advice that is actually really bad, and following it blindly off the cliff.

So I think the optimal approach would start with the specific good advice, but also keep explaining why.

comment by Eli Tyre (elityre) · 2023-07-28T19:38:31.993Z · LW(p) · GW(p)

Strong agree; I said more or less the same thing a few months ago: A note on "instrumental rationality"

comment by Review Bot · 2024-06-11T01:20:15.207Z · LW(p) · GW(p)

The LessWrong Review [? · GW] runs every year to select the posts that have most stood the test of time. This post is not yet eligible for review, but will be at the end of 2024. The top fifty or so posts are featured prominently on the site throughout the year.

Hopefully, the review is better than karma at judging enduring value. If we have accurate prediction markets on the review results, maybe we can have better incentives on LessWrong today. Will this post make the top fifty?

comment by Ariel G. (ariel-g) · 2023-07-28T08:06:19.759Z · LW(p) · GW(p)

So basically: post hoc ergo propter hoc (the post hoc fallacy).

If winning happened after rationality (in this case, any action you judge to be rational under any definition you prefer), it does not mean it happened because of it.

comment by dr_s · 2023-07-25T07:21:22.562Z · LW(p) · GW(p)

For me, the obvious problem with "rationality is winning" as a soundbite is that the figure of the "winner" in our culture is defined in a frankly toxic way, and using that word obfuscates the huge asymmetries at play. Terminal values are still a thing; rationality is about pursuing one's own goals as well as possible, and that doesn't necessarily mean winning. If you're an oil CEO whose goal is making a buttload of money and screw everything else, and I'm an environmentalist genuinely preoccupied with saving the Earth from climate change, even my most rational approach means climbing a very steep hill, while you can just fund some crank to spread comforting bad science, bribe a few politicians, and call it a day.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2023-07-25T14:03:18.294Z · LW(p) · GW(p)

> Don’t mistake me, and think that I’m talking about the Hollywood Rationality stereotype that rationalists should be selfish or shortsighted. If your utility function has a term in it for others, then win their happiness. If your utility function has a term in it for a million years hence, then win the eon.

“Newcomb’s Problem and Regret of Rationality” [LW · GW]

Replies from: dr_s
comment by dr_s · 2023-07-25T18:06:55.416Z · LW(p) · GW(p)

I'm not arguing that having altruistic goals isn't rational, though. I'm arguing that altruistic (and thus morally constrained) goals are harder to achieve, and thus, all else equal, if it's a Selfish Rationalist vs. an Altruistic Rationalist of equal skill, the former actually wins more often than not.

Replies from: Raemon, SaidAchmiz
comment by Raemon · 2023-07-25T18:22:09.977Z · LW(p) · GW(p)

I think the distinction here isn't intrinsically about altruism, it's about the complexity/difficulty of the thing you're trying to achieve. I do think altruism tends to have more complexity baked into it than selfishness of a corresponding scale, but it depends on the particular altruism/selfishness. Helping one person do an object level thing for a day is easier than going to the moon, even if you're doing the latter for selfish reasons.

Replies from: dr_s
comment by dr_s · 2023-07-27T12:31:16.678Z · LW(p) · GW(p)

Obviously, what I'm talking about applies to goals of comparable grandeur. Big civilizational aims. "I want to help the local poor by working in a soup kitchen" is obviously easier than "I want to annex Ukraine to my empire", even though the former is altruistic and the latter just selfish and cruel aggrandizement.

comment by Said Achmiz (SaidAchmiz) · 2023-07-25T18:15:11.092Z · LW(p) · GW(p)

Fine and well, but that’s got nothing to do with the definition of “winning” or “winner”, semantic concerns, etc. Some goals are harder to achieve than others, that’s all.

Replies from: dr_s
comment by dr_s · 2023-07-27T12:32:01.820Z · LW(p) · GW(p)

The point is that if you say "doing X is winning", then people will immediately drift to "whoever is winning is doing X", which is a fallacy, but you can only see that if you notice all the asterisks that come with the first statement.

comment by Herb Ingram · 2023-07-24T07:34:42.101Z · LW(p) · GW(p)

While I completely agree in the abstract, I think there's a very strong tendency for systems-of-thought, such as the one propagated on this site, to become cult-like. There's a reason people outside the bubble criticize LW for building a cult. They see small signs of it happening, and they also know or feel the general tendency for it, which always exists in such a context and needs to be counteracted.

As you point out, the concrete ways of thinking propagated here aren't necessarily the best for all situations, and it's another very deep can of worms to be able to tell which situations are which. Also, it attracts people (such as myself, to some degree) who enjoy armchair philosophizing without ever actually trying to do anything useful with it. Akrasia is one thing; not even expecting to do anything useful with some knowledge, and pursuing it as a kind of entertainment, is another thing still.

So there are two ways to frame the message. One is saying that "rationality is about winning", which is a definition that's very hard to attack but also vague in its immediate and indisputable consequences for how one should think, and which makes it hard to tell if "one is doing it right".

The other way is to impose some more concrete principles and risk them becoming simplified, ritualized, abused, and distorted to the point where they might do net harm. You pick some meta-level and propose rules for thinking at that level, which people eventually and inevitably propagate and defend with the fervor of religious belief. At that point it becomes impossible to improve the epistemology further.

The meme ("rationality") has to be about something in order to spread and also needs some minimum amount of coherence. "It's about winning" seems to do this job quite well and not too well.

comment by Adrien Sicart · 2023-07-24T10:20:09.898Z · LW(p) · GW(p)

When it comes to rationality, the Black Swan Theory ( https://en.wikipedia.org/wiki/Black_swan_theory ) is an extremely useful test.

A truly rational paradigm should be built with anti-fragility in mind, especially towards Black Swan events that would challenge its axioms.

Replies from: None
comment by [deleted] · 2023-07-24T17:22:19.518Z · LW(p) · GW(p)

A black swan is generally an event we knew was possible but that held only a small fraction of the probability mass.

The flaw here is not actually an issue with rationality (or other forms of decision-making) but with human compute and memory limits.

If your probability distribution for each trading day on a financial market is p=0.51 (up), p=0.48 (down), p=0.01 (black swan), you may simply drop that long-tail term from your decision-making. Considering only the highest-probability terms is an approximation, and arguably still "rational" since you are reasoning on math and evidence, but you will be surprised by the black swan.
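
A minimal sketch of that point, with hypothetical payoffs attached to each outcome (the +1/-1/-60 numbers are assumptions for illustration, only the probabilities come from the example above): truncating the p=0.01 term can flip the sign of the expected value, so the approximation changes the decision itself, not just its precision.

```python
# Hypothetical per-day payoffs for the three outcomes named above.
# The probabilities are from the example; the payoffs are invented.
outcomes = {
    "up":         (0.51, +1.0),
    "down":       (0.48, -1.0),
    "black_swan": (0.01, -60.0),  # rare, but catastrophic
}

def expected_value(dist, min_prob=0.0):
    """Expected payoff, ignoring outcomes whose probability is <= min_prob."""
    return sum(p * payoff for p, payoff in dist.values() if p > min_prob)

print(expected_value(outcomes, min_prob=0.05))  # tail dropped: +0.03, looks like a good bet
print(expected_value(outcomes))                 # full distribution: -0.57, it isn't
```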

This leads naturally into the next logical offshoot. A human meat computer doesn't have the memory or available compute to consider every low-probability long-tail event, but you could build an artificial system that does. That's part of the reason AI is so critically important and directly relevant to rationality.

Now, a true black swan, one we didn't even know was possible? Yeah, you are going to be surprised every time. If aliens start invading from another dimension, you need to be able to rapidly update your assumptions about how the universe works and respond accordingly. That is something rationality adapts to well, versus alternatives like "the word of the government-sanctioned authority on a subject is truth".

This is where being too overconfident hurts. In the event of an ontology-breaking event like the invasion example, if you believe with p=1.0 that the laws of physics as discovered in the 20th century are absolute and complete, then what you are seeing in front of your eyes as you reload your shotgun, alien blood splattered everywhere, can't be real. There has to be some other explanation. This kind of thinking is suboptimal.

Similarly, if you place the same confidence in theories built on decades of high-quality data, careful reasoning, and plenty of mathematical proof as in some random rumor you hear online, you will see nonexistent aliens everywhere. You were not weighting your information inputs by probability.
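
One way to make "weighting inputs by probability" concrete, as a sketch rather than anything the comment itself specifies: a Bayesian update in log-odds form, where each input moves your belief in proportion to its likelihood ratio. All the numbers below are invented for illustration.

```python
import math

def update_log_odds(log_odds, likelihood_ratio):
    """Bayesian update in log-odds form: posterior = prior + log(LR)."""
    return log_odds + math.log(likelihood_ratio)

# Prior: an alien invasion is extremely unlikely (odds of one in a million).
log_odds = math.log(1e-6)

# A random online rumor is barely more likely in invasion-worlds than in
# normal worlds (people post such things either way), so it moves you little.
log_odds = update_log_odds(log_odds, likelihood_ratio=1.05)

# Alien blood on your own shotgun is enormously more likely if the invasion
# is real, so it moves you a lot.
log_odds = update_log_odds(log_odds, likelihood_ratio=1e7)

posterior = 1 / (1 + math.exp(-log_odds))
print(f"posterior probability: {posterior:.3f}")  # ~0.91 with these numbers
```

A weak input barely budges the posterior even though it arrived first; the strong input dominates, which is the whole point of weighting by probability rather than by vividness or recency.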

Replies from: Adrien Sicart
comment by Adrien Sicart · 2023-09-02T16:19:05.583Z · LW(p) · GW(p)

A Black Swan is better formulated as:
- Extreme tail event: its probability cannot be computed in the current paradigm; its weight is p < epsilon.
- Extreme impact if it happens: paradigm revolution.
- Can be rationalized in hindsight, because there were hints. "Most" did not spot the pattern; some may have.

If spotted a priori, one could call it a Dragon King: https://en.wikipedia.org/wiki/Dragon_king_theory

The argument:
"Math + Evidence + Rationality + Limits makes it Rational to drop the Long Tail for Decision Making"
is a prime example of a heuristic that fails into what Taleb calls "Blind Faith in Degenerate MetaProbabilities".

It is likely based on an instance of the "absence of evidence is evidence of absence" fallacy (argumentum ad ignorantiam).

The central argument of anti-fragility is that heuristics which allocate some resources to Black Swan / Dragon King studies and contingency plans are infinitely more rational than "drop the long tail" heuristics.
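
A toy simulation of that claim, with made-up numbers (the payoffs, premium, and probabilities below are assumptions, not Taleb's): a strategy that pays a small per-round premium for tail protection can reliably outlive one whose headline expected value is higher but ignores the tail.

```python
import random

def simulate(hedged, rounds=10_000, bankroll=100.0, seed=0):
    """Toy model: each round the bankroll gains 1 with p=0.51, loses 1 with
    p=0.48, and with p=0.01 a black swan costs 10. The hedged strategy pays
    a 0.02 premium every round and is insured against the black swan.
    Returns the final bankroll; 0.0 means ruin."""
    rng = random.Random(seed)
    for _ in range(rounds):
        r = rng.random()
        if r < 0.01:
            bankroll -= 0.0 if hedged else 10.0  # black swan hits
        elif r < 0.49:
            bankroll -= 1.0                      # ordinary loss
        else:
            bankroll += 1.0                      # ordinary gain
        if hedged:
            bankroll -= 0.02                     # cost of the contingency plan
        if bankroll <= 0:
            return 0.0
    return bankroll

for hedged in (False, True):
    ruins = sum(simulate(hedged, seed=s) == 0.0 for s in range(100))
    print(f"hedged={hedged}: ruined in {ruins}/100 runs")
```

The unhedged strategy looks better if you compare only truncated expected values (+0.03 vs. +0.01 per round), but over repeated rounds the ignored tail dominates the outcome.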

comment by StartAtTheEnd · 2023-07-24T16:16:00.374Z · LW(p) · GW(p)

I think most people are missing the psychological effects of rationalism and irrationalism, and not seeing that this is an equation which depends on itself. Religion is irrational, but it might give you the psychological defense you need to cope with or overcome something. Rationality might harm your sense of wonder, and undermine inherent spiritual beliefs which are wrong but nonetheless helpful. So irrationality might be rational, and rationality might be irrational.

Irrationality is a sort of overfitting to your own life and society. But why should you insist on being correct in a manner so general that you're no longer specialized for life, society, mental well-being, and winning?