Rationality Is Not Systematized Winning

post by namespace (ingres) · 2018-11-11T22:05:19.153Z · LW · GW · 20 comments

This is a link post for http://www.thelastrationalist.com/rationality-is-not-systematized-winning.html

"Rationality is systematized winning" is a slogan that was adopted to patch a bug in human cognition. Namely our endless capacity to delude ourselves about how we did in an attempt to save face. The concept seems to have been absorbed, but I'm skeptical it's translated into more effective action. Certainly it produced many essays on why winning isn't happening. But the fact that we've been publishing essentially the same essay for a decade now implies something fairly fundamental is wrong. This slogan was chosen because it patches the bug, but I fear at the cost of neutering our ability to focus.

20 comments

Comments sorted by top scores.

comment by FeepingCreature · 2018-11-12T00:11:03.783Z · LW(p) · GW(p)

“Look, if I go to college and get my degree, and I go start a traditional family with 4 kids, and I make 120k a year and vote for my favorite political party, and the decades pass and I get old but I'm doing pretty damn well by historical human standards; just by doing everything society would like me to, what use do I have for your 'rationality'? Why should I change any of my actions from the societal default?”

You must have an answer for them. Saying rationality is systematized winning is ridiculous. It ignores that systematized winning is the default; you need to do more than that to be attractive. I think the strongest frame you can use to start really exploring the benefits of rationality is to ask yourself what advantage it has over societal defaults. When you give yourself permission to move away from the "systematized winning" definition, without the fear that you'll tie yourself in knots of paradox, it's then that you can really start to think about the subject concretely.

I mean, isn't the answer to that, as laid out in the Sequences, that Rationality really doesn't have anything to offer them? Tsuyoku Naritai, Something to Protect, etc. - Eliezer made the Sequences because he needed people to be considering the evidence that AI was dangerous and was gonna kill everyone by default: short-term, give money to MIRI; long-term, join up as a researcher. "No one truly searches for the Way until their parents have failed them, their Gods are dead and their tools have shattered in their hands." I think it's fair to say that the majority of people don't have problems of that magnitude of impact in their lives; and in any case, anyone who cared that much would already have gone off to join an EA project. I'm not sure that Eliezer-style rationality needs to struggle for some way to justify its existence when the explicit goal of its existence has already largely been fulfilled. Most people don't have one or two questions in their life that they absolutely, pass-or-die need to get right where the answer is nontrivial. The societal default is a time-tested satisficing path.

When you are struggling to explain why something is true, make sure that it actually is true.

Replies from: taymon-beal, Pattern, DanielFilan
comment by Taymon Beal (taymon-beal) · 2018-11-12T19:58:39.592Z · LW(p) · GW(p)

There's an argument to be made that even if you're not an altruist, that "societal default" only works if the next fifty years play out more-or-less the same way the last fifty years did; if things change radically (e.g., if most jobs are automated away), then following the default path might leave you badly screwed. Of course, people are likely to have differing opinions on how likely that is.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2018-11-12T20:19:03.129Z · LW(p) · GW(p)

Does LW-style rationality give you any major advantage in figuring out what to do as a consequence of major automation, though?

Replies from: habryka4
comment by habryka (habryka4) · 2018-11-12T22:35:13.043Z · LW(p) · GW(p)

I think the meme-set of "expect AI to be a really big deal and put a lot of your effort into steering how AI goes" does do that in expectation.

Replies from: Kaj_Sotala, SaidAchmiz
comment by Kaj_Sotala · 2018-11-13T13:01:16.384Z · LW(p) · GW(p)

I don't feel like my work on AI has given me any particular advantage in figuring out how to deal with automation, especially since the kind of AI we're thinking about is mostly AGI and job-threatening automation is mostly narrow AI. I don't think I have a major advantage in figuring out which jobs seem likely to persist and which ones won't - at least not one that would be a further advantage on top of just reading the existing expert reports on the topic.

I think that the main difference between me and the average expert-report-reading, reasonably smart person is that I'm less confident in the expert opinion telling us anything useful / anybody being able to meaningfully predict any of this, but that just means that I have even less of an idea of what I should do in response to these trends.

Replies from: Raemon
comment by Raemon · 2018-11-13T22:35:53.095Z · LW(p) · GW(p)

I think of LW-style rationality as giving you the set of tools to realize in the first place when the default path available to you is likely to be insufficient, and the impetus to actually do something differently.

I _think_ it should still be a useful skillset for evaluating and acting on this kind of trend, and for various career decisions. I'm not 100% sure it's better than having domain expertise in what-things-are-likely-to-be-automated-last, but I think it at least helps with being calibrated about uncertainty and with making some generally useful, broad strategic decisions.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2018-11-15T03:29:43.253Z · LW(p) · GW(p)

I agree that it's useful in realizing that the default path is likely to be insufficient. I'm not sure that it's particularly useful in helping figure out what to do instead, though. I feel like there have been times when LW rationality has even been a handicap to me, in that it has left me with an understanding of how every available option is somehow inadequate, but failed to suggest anything that would be adequate. The result has been paralysis, when "screw it, I'll just do something" would probably have produced a better result.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2018-11-15T16:32:33.915Z · LW(p) · GW(p)

It seems to me that this is mostly orthogonal to “LW rationality” (at least, the “classic” form of it), and is a matter of mindset. I’ve long (always?) been of the “screw it, I’ll just do something” mindset; I can report that it works quite well and has produced good results; I have never experienced any disconnect between it and, say, anything in the Sequences.

LW rationality has … left me with an understanding of how every available option is somehow inadequate, but failed to suggest anything that would be adequate

Well, there’s something odd about that formulation, isn’t there? You’re treating “adequacy” as a binary property, it seems; but that’s not inherent in anything I recognize as “LW rationality”! Surely the “pure” form of the instrumental imperative to “maximize expected utility” (or something similar in spirit if not in implementation details) doesn’t have any trouble whatsoever with there being multiple options, all of which are somehow less than ideal. Pick whatever’s least bad, and go to it…

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2018-11-16T12:52:15.116Z · LW(p) · GW(p)
Well, there’s something odd about that formulation, isn’t there? You’re treating “adequacy” as a binary property, it seems; but that’s not inherent in anything I recognize as “LW rationality”.

Well, let's use the automation thing as an example.

I know that existing track records for how much career security etc. various jobs offer aren't going to be of much use. I also know that expert predictions on which jobs are going to stay reliable aren't necessarily very reliable either.

So now I know that I shouldn't rely on the previous wisdom on the topic. The average smart person reading the news has probably figured this out too, with all the talk about technological unemployment. I think that LW rationality has given me a slightly better understanding of the limitations of experts, so compared to the average smart person, I know that I probably shouldn't rely too much on the new thinking on the topic, either.

Great. But what should I do instead? LW rationality doesn't really tell me, so in practice - if I go with the "screw it, I'll just do something" mentality, I just fall back into going with the best expert predictions anyway. Going with the "screw it" mentality means that LW rationality doesn't hurt me in this case, but it doesn't particularly benefit me, either. It just makes my predictions less certain, without changing my actions.

Surely the “pure” form of the instrumental imperative to “maximize expected utility” (or something similar in spirit if not in implementation details) doesn’t have any trouble whatsoever with there being multiple options, all of which are somehow less than ideal. Pick whatever’s least bad, and go to it…

Logically, yes. That's what I do these days.

That said, many people need some reasonable-seeming level of confidence before embarking on a project. "I don't think that any of this is going to work, but I'll just do something anyway" tends to be psychologically hard. (Scott has speculated that "very low confidence in anything" is what depression is.)

My anecdotal observation is that there are some people - including myself in the past - who encounter LW, have it hammered in how uncertain they should be about everything, and then this contributes to driving their confidence levels down to the point where they'll be frequently paralyzed when making decisions. All options feel too uncertain to feel worth acting upon and none of them meets whatever minimum threshold is required for the brain to consider something worth even trying, so nothing gets done.

I say that LW sometimes contributes to this, not that it causes it; it doesn't have that effect on everyone. You probably need previous psychological issues, such as a pre-existing level of depression or generally low self-confidence, for this to happen.

Replies from: SaidAchmiz, clone of saturn
comment by Said Achmiz (SaidAchmiz) · 2018-11-16T17:43:11.458Z · LW(p) · GW(p)

I say that LW sometimes contributes to this, not that it causes it; it doesn’t have that effect on everyone. You probably need previous psychological issues, such as a pre-existing level of depression or generally low self-confidence, for this to happen.

Yes, I think I agree with your view on this. (I’d add a caveat that I suspect it’s not quite depression that does it, but something else, which I’m not sure I can name accurately enough to be useful… I will say this: I was severely depressed around the time I came across LessWrong—and let me tell you, LW rationality definitely did not have this effect you describe on me… Anecdotal observation of others, since then, has confirmed my impression.)

The average smart person reading the news has probably figured this out too … LW rationality doesn’t hurt me in this case, but it doesn’t particularly benefit me

I think you might be—from your LW-rationality-influenced vantage point—underestimating how prevalent various cognitive distortions (or, let’s just say it in plain language: stupidity and wrongheadedness) are in even “average smart people”.

Much of the best of what LW has to offer has always been (as one old post here put it) “rationality as non-self-destruction”. The point isn’t necessarily that you’re rational, and therefore, you win; the point is that by default, you lose, in various stupid and avoidable ways; LW-style rationality helps you not do that.

Now, that might not get you all the way to “winning”. You do still need stuff like “if there aren’t any good options, just take the least bad one and go for it, or at any rate do something instead of just sitting around”, which are, to a large degree, common-sense rules (which, while they would certainly be included in any total, idealized version of “rationality principles”, are by no means unique to LessWrong). But without LessWrong, it’s entirely possible that you’d just fail in some dumb way.

My personal experience is that I know a lot of smart people, and what I observe is that intelligence is no barrier to irrationality and nonsensical beliefs/actions. My impression is that there is a correlation, among the smart people I know, between how consistently they can avoid this sort of thing, and how much exposure they’ve had to LessWrong-style rationality (even if secondhand), or similar ideas.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2018-11-17T08:36:22.927Z · LW(p) · GW(p)

I’d add a caveat that I suspect it’s not quite depression that does it, but something else, which I’m not sure I can name accurately enough to be useful…

This sounds right to me. Something in the rough space of depression, but not quite the same thing.

I think you might be—from your LW-rationality-influenced vantage point—underestimating how prevalent various cognitive distortions (or, let’s just say it in plain language: stupidity and wrongheadedness) are in even “average smart people”.

That's certainly possible, and I definitely agree that there are many kinds of wrongheadedness that are common in smart people but seem to be much less common among LW readers.

That said, my impression of "average smart people" mostly comes from the people I've met at university, hobbies, and the like. I don't live in the Bay or near any of the rationalist hubs. So most of the folks I interact with, and am thinking about, aren't active LW readers (though they might have run across the occasional LW article). It's certainly possible that I'm falling victim to some kind of selection bias in my impression of the average smart person, but I doubt that being too influenced by LW rationality is the filter in question.

Much of the best of what LW has to offer has always been (as one old post here put it) “rationality as non-self-destruction”. The point isn’t necessarily that you’re rational, and therefore, you win; the point is that by default, you lose, in various stupid and avoidable ways; LW-style rationality helps you not do that.

Hmm. "Rationalists might not win, but at least they don't lose just because they're shooting themselves in the foot." I like that, and think that I agree.

comment by clone of saturn · 2018-11-16T23:15:18.378Z · LW(p) · GW(p)

I think that LW rationality has given me a slightly better understanding of the limitations of experts, so compared to the average smart person, I know that I probably shouldn't rely too much on the new thinking on the topic, either.

Great. But what should I do instead? LW rationality doesn't really tell me, so in practice - if I go with the "screw it, I'll just do something" mentality, I just fall back into going with the best expert predictions anyway.

It seems like LW rationality would straightforwardly tell you that this means you ought to keep your eggs in multiple different baskets rather than investing everything in the single top expert opinion. (Assuming you're risk-averse, which it sounds like you are.)
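
(To make the multiple-baskets logic concrete, here's a minimal Monte Carlo sketch in Python. The log-wealth utility and the 50/50 double-or-halve bets are illustrative assumptions standing in for risk-averse preferences over uncertain bets, not anything from the thread itself:)

```python
import math
import random

random.seed(0)

def expected_log_utility(allocations, trials=100_000):
    """Monte Carlo estimate of E[log(wealth)], where each unit allocated
    to a basket independently doubles or halves with equal probability.
    (Illustrative toy model; log wealth stands in for risk aversion.)"""
    total = 0.0
    for _ in range(trials):
        wealth = sum(a * (2.0 if random.random() < 0.5 else 0.5)
                     for a in allocations)
        total += math.log(wealth)
    return total / trials

# All eggs in one basket: E[log wealth] is exactly 0 analytically.
print(expected_log_utility([1.0]))        # ~0.00
# The same stake spread over four independent baskets: higher, because
# averaging independent bets trims the downside that log utility punishes.
print(expected_log_utility([0.25] * 4))   # ~0.17
```

Each basket has the same expected return, so a risk-neutral agent would be indifferent; it's the concavity of the utility function that makes diversification the right call.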

comment by Said Achmiz (SaidAchmiz) · 2018-11-12T23:04:33.590Z · LW(p) · GW(p)

Well… but that only applies to a very small subset of people—even relative to all people who are likely ever to be interested in “rationality”!

Edit: To be clear, I actually think the answer to Kaj’s question is “yes, it does”—just not for this reason!

Replies from: habryka4
comment by habryka (habryka4) · 2018-11-12T23:23:15.124Z · LW(p) · GW(p)

True. And I agree with the edit in that it's also useful for other reasons, but the expected inferential distance on that was larger, so I figured I'd better make the easy-to-make point than none at all.

comment by Pattern · 2018-11-12T06:40:56.274Z · LW(p) · GW(p)
I mean, isn't the answer to that, as laid out in the Sequences, that Rationality really doesn't have anything to offer them?

I disagree. Here's an example from the same piece:

people have an odd tendency to be okay with letting single random outcomes decide their success, even when it's unnecessary.
I suspect that if this is common in gaming, it's common in real life too: that people get so invested in singular outcomes because they've staked too much on them.

This is 1) testable and 2) actionable. Are there people who don't need this advice? Perhaps. But could a lot of people use this? I think so. (The first time through I read this as "don't just have one plan - it could fail. What will you do if it doesn't work?", though it's more general than that.)

what use do I have for your 'rationality'? Why should I change any of my actions from the societal default?”

I think letting someone else decide what your victory looks like (to you) is a really bad idea.

comment by DanielFilan · 2018-11-14T07:59:38.553Z · LW(p) · GW(p)

Why should I change any of my actions from the societal default?

If you invest in index funds you'll probably be richer than if you invest in other things. [EDIT: well, this is only true modulo tax concerns, but grokking the EMH is still very relevant to investing] That's advice that you can get from other sources, but that I got from the rationality community, and that would be useful to me even if I wasn't trying to save the world.
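
(For concreteness, a back-of-the-envelope sketch of the fee side of that claim. The 7% gross return, the dollar amounts, and the two fee levels are my own illustrative assumptions, not figures from any particular fund:)

```python
def final_wealth(principal, annual_return, annual_fee, years):
    """Deterministic compound growth with a flat yearly fee drag.
    (All inputs are hypothetical illustrations, not market data.)"""
    return principal * (1 + annual_return - annual_fee) ** years

# $10,000 over 30 years at an assumed 7% gross annual return:
print(round(final_wealth(10_000, 0.07, 0.001, 30)))  # 0.1%-fee index fund: ~74,100
print(round(final_wealth(10_000, 0.07, 0.010, 30)))  # 1%-fee active fund:  ~57,400
```

This only illustrates the fee-drag half of the argument; the EMH half is the observation that active managers don't reliably beat the market by enough to cover those fees.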

A separate point is that I think contact with the rationality community got me to consider whether 'it made sense to get'/'I really wanted' things out of my life that I hadn't previously considered e.g. that I wanted to be an effective altruist and help save the world. I do think that this sort of counts as 'winning', although it's stretching the definition.

comment by Sinal · 2018-11-14T07:36:43.045Z · LW(p) · GW(p)

In one sense, "Rationality" is used to signify that something is part of the community that is associated with EA, Lesswrong, SSC, rationalist tumblr, secular solstice etc. If someone asked me "I have a successful life, what use is Rationality?" in this sense, I would probably reply there might not be any use at all. I happen to like rationalist tumblr and SSC and I like going to the SSC meetups at David Friedman's house, but you may or may not. Whatever floats your boat.

Now, if somebody ACTUALLY asked me "what use is rationality?" then I might say that I think education (not just schooling) is important because it helps us make better decisions, or I might say that scientific progress helps bring about new technologies that save lives and increase wealth. Or maybe, depending on context, I may say something specific like "you're not going to solve your emotional issues with your ex by wishful thinking" or "if you don't understand your code, how are you supposed to debug it?" but only if I actually think they need to improve this region of their map in order to achieve their goals. If someone says, "No wait, what I really meant is, what use is instrumental rationality?" and actually means it, I would just facepalm, because they're essentially asking why they should bother trying to get better at achieving their goals.

My point regarding what rationality actually is: I see no reason to exclude things that aren't part of LessWrong et al. in trying to achieve my goals or in trying to learn. Should there be a community that is focused on giving people the cognitive skills to become more knowledgeable and more formidable? Yes, but we are not the only community focused on that, nor should we be. This is okay.

comment by philh · 2018-11-15T15:29:03.933Z · LW(p) · GW(p)

Saying rationality is systematized winning is ridiculous. It ignores that systematized winning is the default; you need to do more than that to be attractive. I think the strongest frame you can use to start really exploring the benefits of rationality is to ask yourself what advantage it has over societal defaults.

I don't think systematized winning is the default. Some people follow societal defaults and win systematically, but I think that more people follow societal defaults and just do pretty okay.

comment by namespace (ingres) · 2018-11-14T00:23:58.355Z · LW(p) · GW(p)

While writing the about page for the upcoming Whistling Lobsters 2.0 forum, I took a shot at giving a brief history and definition of rationality. The following is the section providing the definition. I think I did an okay job:

The Rationalist Perspective

Rationality is related to but distinct from economics. While they share many ideas and goals, rationality is its own discipline with a different emphasis. It has two major components: instrumental and epistemic rationality. Instrumental means "in the service of"; instrumental rationality is about greater insight in the service of other goals. Epistemic means "related to knowledge", and epistemic rationality focuses on knowing the truth for its own sake. Instrumental rationality might be best described as "regret minimization". Certainly this phrase captures the key points of the rationalist perspective:

  • Rationality cares about opportunity cost, which is its biggest shared trait with economics. Rationality is not skepticism: skeptics only care about not-losing. Rationalists care about winning, which means that the failure to realize full benefits is incorporated into the profit/loss evaluation.

  • A rationalist should never envy someone else just for their choices. Consider Spock, the 'rational' first officer of the USS Enterprise in Star Trek. Often Spock will insist against helpful action because it would "be illogical". The natural question is "Illogical to whom?". No points are awarded for fetishism. If there are real outcomes to consider (perhaps you hold yourself back for some social benefit), that is all well and good. But there is nothing noble in doing things that make you or the world worse off because you've internalized fake rules.

  • Long term thinking. Regret is generally something you start doing after you've had a bit of experience; it's something you need to think about early to avoid. You don't regret wasting your 20s until you're in your 30s. Regret is about your life path, which is utility vs. time. Most economics focuses on one-shot utility maximization scenarios or iterated games, but the real world has just about every kind of game imaginable, and your 'score' is how you perform on all of them. (One standard formalization of regret is sketched below.)
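
(For what it's worth, "regret minimization" has a standard formalization in the online-learning literature that matches the utility-vs.-time framing above. A sketch, where $u_t(a)$ is the utility strategy $a$ would have earned at step $t$, and $a_t$ is the choice actually made:)

$$\mathrm{Regret}_T \;=\; \max_{a}\,\sum_{t=1}^{T} u_t(a) \;-\; \sum_{t=1}^{T} u_t(a_t)$$

In words: how much better the best single strategy in hindsight would have done over the whole path than the sequence of choices actually made. It's inherently a whole-trajectory criterion rather than a one-shot one, which is exactly the contrast with one-shot utility maximization drawn above.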

comment by norswap · 2018-11-14T10:19:11.699Z · LW(p) · GW(p)

I'm reading this, and it seems very reasonable, and then:

Changing our perspective might have significant benefits. Systematized winning is not an actionable definition. Most domains already have field specific knowledge on how to win, and in aggregate these organized practices are called society. The most powerful engine of systematized winning developed thus far is civilization.

So, assume civilization is a set of guidelines that dictate a course of action, just like rationality in fact. How can this beat rationality? If it dictates the correct course of action, rationality will too. And often, rationality can suggest something more effective.

The possible counters are: (a) rationality is hard work, and mostly sticking with civilization is fine; or (b) you're not a good enough rationalist (or don't have good enough information) to beat civilizational guidelines.

But the article does not really suggest those. It says civilization is already winning. Well, it all hinges on the definition of winning. But it's quite clear you can achieve better outcomes through rationality if that's what you care about and are not put off by the extra work (counter (a)).

The counters are interesting but ultimately irrelevant. You can actually rationally arrive at (a): determining that the cost incurred by practicing rationality is more than the benefits accrued. That being said, it's so general a statement that I don't think it can be true for anyone capable of thinking these thoughts. You can also rationally arrive at (b), and in fact, if it's true you should: civilization IS evidence, and it has to be valued accurately. If civilizational guidelines keep trumping your best guesses, the weight of civilizational evidence should increase accordingly.