If rationality is purely winning, there is a minimal shared art
post by whpearson · 2017-01-19T21:39:28.299Z · LW · GW · Legacy · 14 comments
This site's strapline is "refining the art of human rationality". It assumes there is something worth talking about.
However, I've been seeing "rationality as winning" invoked bare of any context. I think the two things are in conflict.
Let me take the old favourite, the twelve virtues of rationality, as representative of the art (although I could probably find similar situations where any proposed tool doesn't win). Can I find contexts in which humans will not win if they adopt these virtues?
1) Curiosity. There are lots of instances in history where curiosity may have killed you, curtailing your winning: wandering into the wrong part of town, trying out the wrong berry. Curiosity can also be unhelpful if most facts are boring or useless. I am happy to be ignorant of many facts about the world (who won various sports games, the colour of people's socks).
2&3&4) Relinquishment/lightness/evenness. You might avoid going to see the evidence, or disbelieve evidence presented by someone you suspect of being a clever charlatan. If you "see" a woman in a box get cut in half, you shouldn't automatically conclude that the magician can actually do magic. There is no magic data-to-evidence converter. See also defying the data.
5) Argument. This is not a useful virtue if you are alone on a desert island. It also might not be worth engaging with people who are trying to waste your time or distract you from something else. See also trolls.
6) Empiricism. This is great if you've got plenty of time. Collecting data always takes time and energy; if you need those to survive, I wouldn't recommend empiricism.
7) Simplicity: Simplicity is predictable. If you are involved in a contest with someone, you may need to choose to be unpredictable in order to win.
8) Humbleness: This is a sign of weakness and may be exploited by your opponents in some social situations.
9 & 10) Perfection & Precision: There is an aphorism for this (the perfect is the enemy of the good). In social games you don't need to be perfect or infinitely precise, just better or faster than your opponent. Anything more is wasted effort.
11) Scholarship: This doesn't make much sense for people on the poverty line.
That said:
All the virtues make sense when the context is trying to figure out something very hard and important with other people who are also trying to figure out the same thing, and you've got a fair amount of time and resources, with no immediate threats to your lives.
Which was the context of the people who originally made the site; but if rationality is supposed to be a general recipe for winning, the art has to be contextualized away into pretty much nothing. What art covers both working on your farm close to starvation and trying to figure out artificial intelligence's impact on the world?
I personally want to get back to that original context. I think that discussions of rationality are far more fruitful between people who share the same contexts and constraints.
14 comments
Comments sorted by top scores.
comment by shev · 2017-01-20T00:26:50.930Z · LW(p) · GW(p)
I think you've subtly misinterpreted each of the virtues (not that I think the twelve-virtue list is special; they're just twelve good aspects of rational thought).
The virtues apply to your mental process for parsing and making predictions about the world. They don't exactly match the real-world usages of these terms.
Consider these in the context of winning a game. Let's talk about a real-world game with social elements, to make it harder, rather than something like chess. How about "Suppose you're a small business owner. How do you beat the competition?"
1) Curiosity: refers to the fact that you should be willing to consider new theories, or theories at all instead of intuition. You're willing to consider, say, that "customers return more often if you make a point to be more polite". The arational business owner might lose because they think they treat people perfectly fine, and don't consider changing their behavior.
2-4) Relinquishment/lightness/evenness refers to letting your beliefs be swayed by the evidence, without personal bias. In your example: seeing a woman appear to be cut in half absolutely does not cause you to think she's actually cut in half. That theory remains highly unlikely. But it does mean that you have to reject theories that don't allow the appearance of that, and go looking for a more likely explanation. (If you inspect the whole system in detail and come up with nothing, maybe she was actually cut in half! But extraordinary claims require extraordinary evidence, so you better ask everyone you know, and leave some other very extreme theories (such as 'it's all CGI') as valid, as well.)
In my example, the rational business-owner acts more polite to see if it helps retain customers, and correctly (read: mathematically or pseudo-mathematically) interprets the results, being convinced only if they are truly convincing, and unconvinced if they are truly not. The arational business owner doesn't check, or does and massages the results to fit what they wanted to see, or ignores the results, or disbelieves the results because they don't match their expectations. And loses.
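(To make "truly convincing" concrete, here is a minimal sketch of the kind of check the rational owner might run - my illustration, not part of the original comment, with invented numbers: compare the plausible return rates before and after the politeness change.)

```python
import random

# Hypothetical counts: returning customers before and after being more polite.
before = {"returned": 120, "total": 400}   # 30% return rate
after  = {"returned": 156, "total": 400}   # 39% return rate

def posterior_sample(data):
    # Sample a plausible return rate from a Beta posterior (uniform prior).
    returned = data["returned"]
    return random.betavariate(returned + 1, data["total"] - returned + 1)

# Monte Carlo estimate of P(politeness improved the return rate | data).
trials = 100_000
wins = sum(posterior_sample(after) > posterior_sample(before) for _ in range(trials))
print(f"P(politeness helped) ~ {wins / trials:.3f}")  # near 1 => truly convincing
```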
5) Argument - if you don't believe that changing your behavior retains customers, and your business partner or employee says they do, do you listen? What if they make a compelling case? The arational owner ignores them, still trusting their own intuition. The rational owner pays attention and is willing to be convinced - or convince them of the opposite, if there's evidence enough to do so. Argument is on the list because it's how two fallible but truth-seeking parties find common truth and check reasoning. Not because arguing is just Generally A Good Idea. It's often not.
6) Empiricism - this is about debating results, not words. It's not about collecting data. Collecting data might be a good play, or it might not. Depends on the situation. But it's still in the scope of rationalism to evaluate whether it is or not.
7) Simplicity - this doesn't mean "pick simple strategies in life". This means "prefer simple explanations over complex ones". If you lower your prices and it's a Monday and you get more sales, you prefer the lower prices explanation over "people buy more on Mondays" because it's simpler - it doesn't assume invisible, weird forces; it makes more sense without a more complex model of the world. But you can always pursue the conclusion further if you need to. It could still be wrong.
8) Humility - refers to being internally willing to be fallible. Not to the social trait of humility. Your rational decision making can be humble even if you come across, socially, as the least humble person anyone knows. The humble business owner realizes they've made a mistake with a new policy and reverses it because not doing so is a worse play. The arational business owner keeps going when the evidence is against them because they still trust their initial calculation when later evidence disagrees.
9-10) Perfectionism/Precision: if it is true that in social games you don't need to be perfect, just better than others, then "perfect play" is maximizing P(your score is better than others), not maximizing E(your score). You can always try to play better, but you have to play the right game. (A toy simulation after this list illustrates the difference.)
And if committing N resources to something gets a good chance of winning, while committing N+1 gets a better chance but has negative effects on your life in other ways (say, your mental health), then it can be the right play to commit only N. Perfect and precise play is about the larger game of your life, not the current game. The best play in the current game might be imperfect and imprecise, and that's fine.
11) Scholarship - certainly it doesn't always make sense when weighed against other things. Until it does. The person on the poverty line who learns more when they have time gains powers the others don't have. It may unlock doors that others can't access. As with everything else, it must be weighed against the other exigencies of their life.
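As a toy illustration of the perfectionism point above (my construction, with made-up numbers, not anything from the original post): a strategy with a lower expected score can still be the one that maximizes the probability of beating an opponent, so the two notions of "perfect play" genuinely come apart.

```python
import random

OPPONENT_SCORE = 60  # suppose the opponent reliably scores 60

def safe_strategy():
    return 55                        # E[score] = 55, but never beats 60

def risky_strategy():
    return random.choice([0, 100])   # E[score] = 50, beats 60 half the time

trials = 100_000
for name, strategy in [("safe", safe_strategy), ("risky", risky_strategy)]:
    scores = [strategy() for _ in range(trials)]
    mean = sum(scores) / trials
    p_win = sum(s > OPPONENT_SCORE for s in scores) / trials
    print(f"{name}: E[score] ~ {mean:.1f}, P(win) ~ {p_win:.2f}")

# The safe strategy maximizes expected score; the risky one maximizes P(win).
# Which one is "perfect play" depends on which game you are actually in.
```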
(Also, by the way, I'm not sure what your title means. Maybe rephrase it?)
Replies from: whpearson, username2
↑ comment by whpearson · 2017-01-21T19:55:24.948Z · LW(p) · GW(p)
I think I agree with most of your examples, so we may be talking past each other a bit.
My point in going through the twelve virtues was to try and show that which strategies are "winning strategies" is contextual. Let's take one example, as I am short of time.
8) Humility - refers to being internally willing to be fallible. Not to the social trait of humility. Your rational decision making can be humble even if you come across, socially, as the least humble person anyone knows. The humble business owner realizes they've made a mistake with a new policy and reverses it because not doing so is a worse play. The arational business owner keeps going when the evidence is against them because they still trust their initial calculation when later evidence disagrees.
But being internally willing to be fallible requires either showing the external signs of fallibility or lying. Suppose a person looking for a contractor asks several contractors of equal ability how long a job will take. If one says, accurately, 8-12 weeks (depending on other things going on and acquiring supplies) and another says 8-10 weeks max, the 8-10-week person might get the job, because they were more arrogant about their abilities/self-perception. (I believe this sort of thing happens all the time in government contracts, with people picking the smallest, unrealistic quote.)
If you are a politician and change your stated beliefs based on evidence, you might be accused of flip-flopping. Changing your beliefs a lot is taken as a sign of ignorance, and no one wants ignorant leaders.
(Also, by the way, I'm not sure what your title means. Maybe rephrase it?)
I'll have a think about it. Thanks for the feedback!
Replies from: shev
↑ comment by shev · 2017-01-21T22:16:56.510Z · LW(p) · GW(p)
Well - I'm still getting the impression that you're misunderstanding the point of the virtues, so I'm not sure I agree that we're talking past each other. The virtues, as I read them, are describing characteristics of rational thought. It is not required that rational thinkers appear to behave rationally to others, or act according to the virtues, at all. Lying very well may be a good, or the best, play in a social situation.
Appearing rational may be a good play. Demonstrating rationality can cause people to trust you and your ability to make good decisions not swayed by whim, bias, or influence. But there are other effective social strategies (politicians, for instance, tend to get by much more on rhetoric than reasoning).
So if you're talking about 'rationality as winning', these virtues are characteristics of the mental program you run to win better. They may or may not correlate with how rational you appear to others. If you're trying to find ways to appear more rational, then certainly, look at the virtues as a list of "things to display". But if you're trying to behave rationally, ignore the display part and focus on how you reason when confronted with optimization problems in your life (in which 'winning' is 'playing optimally', or more optimally than you otherwise would).
They're all sort of intrinsic to good reasoning, too, though in the Yudkowsky post this is heavily concealed under grandiloquent language.
I guess I'd put it like this: consider the simplistic model where human minds consist of:
- a bundle of imperfect computing machinery that makes errors, is swayed by biases and emotional response, etc, and
- a bundle of models for looking at the world that you get to make your decisions according to, of which several seem reasonable and all can change over time
And you're tasked with trying to optimize the way you play in real-world situations. Well, the best players are the ones who optimize their computing machinery, use their machinery to correctly parse and sift through models, and accurately update their models to reflect the way the world appears to (probably) work, because they will over time come to the best determinations of plays in reality and thus, probabilistically, 'win'.
So optimizing your imperfect computing machinery is "perfectionism", "precision", and "scholarship", and the unnamed one which I would dub "playing to win" (letting winning be your metric, not your own reasoning). Correctly managing your models (according to, like, information theory, which should be optimal) is "lightness", "relinquishment", "evenness", "empiricism", and "simplicity". And then, knowing that you don't have all the models, and that your models may yet change, but having to play optimally anyway, you compute that you need to actively seek new models ('curiosity'), play as though your data may still be wrong ('humility'), and let the world challenge your models and give you data that your observations do not ('argument').
And the idea is that these abilities are intrinsic to winning, if this is a good approximation of humans (which I think it is). So they describe virtues of how humans manage this game of imperfect machinery and models, which may only correlate with external behavior.
Replies from: whpearson
↑ comment by whpearson · 2017-01-22T01:24:57.747Z · LW(p) · GW(p)
Aside: The nub of my problem with rationality as winning is this: I think it important that people believe what I say, so I strive for something close to Quakerism. So I might lose short-term on some of the things I care about. I really want a group of people that I can trust to be truth-seeking and also truth-saying. LW had an emphasis on that, and rationalists seem to be slipping away from it with "rationality is about winning".
Throughout evolutionary history we have not seen creatures with better models winning over creatures with worse models. Cheap and stupid sometimes wins, in some contexts, over expensive and smart.
If you lie, you do not get accurate argument, as people will argue with your lies rather than with your truth. So do you tell the truth to get a better model, or do you lie to "win"?
There is a concept called energy returned on energy invested (EROEI). I think the same concept applies to model building: if the cost of building models does not pay off in value, then model building is not winning.
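(A rough formalization of that hypothetical ratio - my notation, nothing standard:)

```latex
\mathrm{VROVI} = \frac{\text{value of the decisions the model improves}}{\text{cost of building and maintaining the model}},
\qquad \text{model building only "wins" when } \mathrm{VROVI} > 1.
```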
For us, the educated, wealthy, and literate, with a vast amount of useful information very easily accessible, the value we can get for the expense paid in doing research makes it sensible to embody some of the virtues (at least until the VROVI drops because the low-hanging fruit has gone). Unless we start to look too weird because we cannot hide it and people stop trusting or relating to us. But for a lot of humanity not connected to the internet, the probability of creating valuable models is low (you get spirits, ancestor worship, etc.), so they can win by not doing too much modelling, doing what they know, and surviving. So are we talking about "human rationality/winning" or "privileged-people rationality/winning"?
I'm sorry I've not had enough time to put into this reply, but I think there is value in keeping the conversation flowing.
Replies from: shev
↑ comment by shev · 2017-01-22T01:53:42.089Z · LW(p) · GW(p)
Well. We should probably distinguish between what rationality is about and what LW/rationalist communities are about. Rationality-the-mental-art is, I think, about "making optimal plays" at whatever you're doing, which leads to winning (I prefer the former because it avoids the problem where you might only win probabilistically, which may mean you never actually win). But the community is definitely not based around "we're each trying to win on our own and maximize our own utility functions" or anything like that. The community is interested in truth seeking and exploring rationality and how to apply it and all of those things.
Evolution doesn't really apply. If some species could choose the way they want to evolve rationally over millions of years I expect they would clobber the competition at any goal they seek to achieve. Evolution is a big probabilistic lottery with no individuals playing it.
If you're trying to win at "achieve X", and lying is the best way to do that, then you can lie. If you're trying to win at "achieve X while remaining morally upright, including not lying", or whatever, then you don't lie. Choosing to lie or not parameterizes the game. In either game, there's a "best way to play", and rationality is the technique for finding it.
Of course it's true that model-building may not be the highest-return activity towards a particular goal. If you're trying to make as much money as possible, you'll probably benefit much more from starting a business and re-investing the profits asap and ignoring rationality entirely. But doing so with rational approaches will still beat doing so without rational approaches. If you don't have any particular goal, or you're just generally trying to learn how to win more at things, or be generally more efficacious, then learning rationality abstractly is a good way to proceed.
"Spending time to learn rationality" certainly isn't the best play towards most goals, but it appears to be a good one if you get high returns from it or if you have many long-term goals you don't know how to work on. (That's my feeling at least. I could be wrong, and someone who's better at finding good strategies will end up doing better than me.)
In summary, "rationality is about winning" means if you're put in situations where you have goals, rational approaches tend to win. Statistically. Like, it might take a long time before rational approaches win. There might not be enough time for it to happen. It's the 'asymptotic behavior'.
An example: if everyone cared a lot about chess, and your goal was to be the best at chess, you could get a lot of the way by playing a lot of chess. But if someone comes along who has also played a lot of games, they might start beating you. So you work to beat them, and they work to beat you, and you start training. Who eventually wins? Of course there are mental faculties, memory capabilities, maybe patience, emotional things at work. But the idea is, you'll become the best player you can become, given enough time (theoretically), via being maximally rational. If there are other techniques that are better than rationality, well, rationality will eventually find them - the whole point is that finding the best techniques is precisely rational. It doesn't mean you will win; there are cosmic forces against you. It means you're optimizing your ability to win.
It's analogous to how, if a religion managed to find actually convincing proof of a divine force in the universe, that force would immediately be the domain of science. There are no observable phenomena that aren't the domain of science. So the only things that can be religious are things you can't possibly prove occur. Equivalently, it's always rational to use the best strategy. So if you found a new strategy, that would become the rationalists' choice too. So the rationalist will do at least as well as you, and if you're not jumping to better strategies when they come along, the rationalist will win. (On average, over all time.)
Replies from: whpearson
↑ comment by whpearson · 2017-01-22T23:37:21.662Z · LW(p) · GW(p)
Well. We should probably distinguish between what rationality is about and what LW/rationalist communities are about.
Rationalists aren't about rationality? Back in 2007 I don't think there was a split. Maybe we need to rename rationalists if "rationality is winning" is entrenched.
LWperson: I'm a rationalist, I really care about AIrisk.
PersonWhohasReadSomeRationalityStuff: So you will lie to get whatever you want, why should I think AIrisk is as important as you say and give you money?
LWPerson: Sigh...
Rationality-the-mental-art is, I think, about "making optimal plays" at whatever you're doing, which leads to winning (I prefer the former because it avoids the problem where you might only win probabilistically, which may mean you never actually win).
I consider every mental or computational action a "play", because it uses energy and can have a material impact on someone's goals. So being more precise in your thinking or modelling is also a 'play', even before you make a play in the actual game.
Evolution doesn't really apply. If some species could choose the way they want to evolve rationally over millions of years I expect they would clobber the competition at any goal they seek to achieve. Evolution is a big probabilistic lottery with no individuals playing it.
I think you missed my point about evolution.
Your version of rationality sounds a lot like fitness in evolution. We don't know what it is, but it is whatever it is that survives (wins). So if we look at evolution, where the goal is survival, lots of creatures manage to survive while not having great modelling capability. This is because modelling is hard and expensive.
Fitness is also not a shared art. Ants telling birds how to be "fit" would not be a productive conversation.
I've run out of time again. I shall try and respond to the rest of your post later.
Replies from: shev
↑ comment by shev · 2017-01-23T01:20:46.684Z · LW(p) · GW(p)
You had written
"I really want a group of people that I can trust to be truth seeking and also truth saying. LW had an emphasis for that and rationalists seem to be slipping away from it with "rationality is about winning"."
And I'm saying that LW is about rationality, and rationality is how you optimally do things, and truth-seeking is a side effect. And the truth-seeking stuff in the rationality community that you like is because "a community about rationality" is naturally compelled to participate in truth-seeking, because it's useful and interesting to rationalists. But truth-seeking isn't inherently what rationality is.
Rationality is conceptually related to fitness. That is, "making optimal plays" should be equivalent to maximizing fitness within one's physical parameters. More rational creatures are going to be more fit than less rational ones, assuming no other tradeoffs.
It's irrelevant that creatures survive without being rational. Evolution is a statistical phenomenon and has nothing to do with it. If they were more rational, they'd survive better. Hence rationality is related to fitness with all physical variables kept the same. If it cost them resources to be more rational, maybe they wouldn't survive better, but that wouldn't be keeping the physical variables the same so it's not interesting to point that out.
If you took any organism on earth and replaced its brain with a perfectly rational circuit that used exactly the same resources, it would, I imagine, clobber other organisms of its type in 'fitness' by so incredibly much that it would dominate its carbon-brained equivalent to the point of extinction in two generations or less.
I didn't know what "shared art" meant in the initial post, and I still don't.
Replies from: whpearson
↑ comment by whpearson · 2017-01-23T09:39:38.098Z · LW(p) · GW(p)
I didn't know what "shared art" meant in the initial post, and I still don't.
So the art of rationality is the set of techniques that we share to help each other "win" in our contexts. The thrust of my argument has been that I think rationality is a two-place word: you need a defined context to be able to talk about what "wins". Why? Results like the no-free-lunch theorems. If you point me at AIXI as optimal, I'll point out that it only says there is no better algorithm over all problems, and that this is consistent with there being lots of other equally bad algorithms.
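(A toy version of the no-free-lunch point - my sketch, not part of the original exchange: enumerate every objective function on a tiny domain, and two different fixed search strategies do exactly as well on average over all of them.)

```python
from itertools import product

# All objective functions f: {0, 1, 2} -> {0, 1}, represented as value tuples.
domain_size = 3
all_functions = list(product([0, 1], repeat=domain_size))  # 8 functions

def best_after_two_queries(f, order):
    # Best value found after querying the first two points in this search order.
    return max(f[x] for x in order[:2])

order_a = [0, 1, 2]  # one fixed search strategy
order_b = [2, 1, 0]  # a different fixed search strategy

avg_a = sum(best_after_two_queries(f, order_a) for f in all_functions) / len(all_functions)
avg_b = sum(best_after_two_queries(f, order_b) for f in all_functions) / len(all_functions)
print(avg_a, avg_b)  # identical averages: over all problems, neither strategy is better
```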
If you took any organism on earth and replaced its brain with a perfectly rational circuit that used exactly the same resources, it would, I imagine, clobber other organisms of its type in 'fitness' by so incredibly much that it would dominate its carbon-brained equivalent to the point of extinction in two generations or less.
This would only be true by definition, and I don't think that is necessarily a mathematically sensible definition (all the problems in the world might have sufficient shared context).
comment by Viliam · 2017-01-20T13:47:12.746Z · LW(p) · GW(p)
All the virtues make sense when the context is trying to figure out something very hard and important with other people who are also trying to figure out the same thing
Indeed, without honestly trying to solve an important problem, knowing about biases can hurt people. (Technically, the problem doesn't have to be important, but with unimportant problems people are even more likely to optimize for things other than solving the problem.)
What art covers working on your farm close to starvation and also trying to figure out artificial intelligence's impact on the world?
That would be the art of non-self-destruction.
comment by Qiaochu_Yuan · 2017-01-20T04:19:17.084Z · LW(p) · GW(p)
I have never thought of the twelve virtues as representative of what I think rationality is, and I've never heard anyone seriously defend them. They were written in a different time and for a different context than the one we're in.
I think the phrase "the art of rationality" points to something coherent and large, and that "winning" or even "systematized winning" is not a good description of it. Winning is at best a diagnostic. I think a better description is something like this: we live in a world we were never intended to inhabit. There are challenges we face now that are totally unlike the challenges our ancestors faced and were selected for facing. The fact that we can do anything about this is pretty amazing, but we can probably do a lot better than what we do by default, by inventing and learning mental motions and patterns of mental motions our ancestors never needed, suited to our new environment (in the same way that parkour consists of physical motions our ancestors never needed, suited to the new environment of the urban jungle).
Replies from: philh
↑ comment by philh · 2017-01-20T11:13:38.220Z · LW(p) · GW(p)
I agree that "winning" isn't a good description, but to me it's more a guiding principle than a diagnostic.
It feels like the kind of advice which is universally applicable but almost entirely useless, like "between consenting adults, there's no right or wrong way to have sex; do whatever works for you". (I feel like there must be a word for this kind of thing. It's maybe not quite a platitude, but I guess it's a subset of platitudes.)
That is, "rationality is about winning" doesn't really help to point someone at how to win; but if you ever go "we're not trying to win here, we're trying to be rational", you've gotten confused somewhere.
comment by [deleted] · 2017-01-19T22:19:39.591Z · LW(p) · GW(p)
Late me take the old favourite
Should be "let", I think?
close the starvation and also
close "to" starvation?
Hm, you bring up a valid point about how the twelve virtues as listed don't really correspond with directly solving a lot of practical problems (e.g., farm problems).
I think that's okay. For the most part, I think about instrumental rationality as applied cognitive science in the CFAR-esque vein, where utilizing research in habits and motivation can have nice flow-through effects / higher applicability across multiple sectors.
TAPs (trigger-action plans), for example, can help people remember to do things, regardless of context.
I think you're advocating for moving more towards a specific problem context so that we can discuss common problems? (So that solutions are more applicable to the group?)
I think that a lot of the LW/rationality value, though, does come from these sorts of higher-level things, where we can apply them to lots of contexts.
Replies from: whpearson