Explicit and tacit rationality
post by lukeprog · 2013-04-09T23:33:29.127Z
Like Eliezer, I "do my best thinking into a keyboard." It starts with a burning itch to figure something out. I collect ideas and arguments and evidence and sources. I arrange them, tweak them, criticize them. I explain it all in my own words so I can understand it better. By then it is nearly something that others would want to read, so I clean it up and publish, say, How to Beat Procrastination. I write essays in the original sense of the word: "attempts."
This time, I'm trying to figure out something we might call "tacit rationality" (c.f. tacit knowledge).
I tried and failed to write a good post about tacit rationality, so I wrote a bad post instead — one that is basically a patchwork of somewhat-related musings on explicit and tacit rationality. Therefore I'm posting this article to LW Discussion. I hope the ensuing discussion ends up leading somewhere with more clarity and usefulness.
Three methods for training rationality
Which of these three options do you think will train rationality (i.e. systematized winning, or "winning-rationality") most effectively?
- Spend one year reading and re-reading The Sequences, studying the math and cognitive science of rationality, and discussing rationality online and at Less Wrong meetups.
- Attend a CFAR workshop, then spend the next year practicing those skills and other rationality habits every week.
- Run a startup or small business for one year.
Option 1 seems to be pretty effective at training people to talk intelligently about rationality (let's call that "talking-rationality"), and it seems to inoculate people against some common philosophical mistakes.
We don't yet have any examples of someone doing Option 2 (the first CFAR workshop was May 2012), but I'd expect Option 2 — if actually executed — to result in more winning-rationality than Option 1, and also a modicum of talking-rationality.
What about Option 3? Unlike Option 2 or especially Option 1, I'd expect it to train almost no ability to talk intelligently about rationality. But I would expect it to result in relatively good winning-rationality, due to its tight feedback loops.
Talking-rationality and winning-rationality can come apart
Oprah Winfrey: "I've come to believe... that the best way to succeed is to discover what you love and then find a way to offer it to others in the form of service, working hard, and also allowing the energy of the universe to lead you."
Oprah isn't known for being a rational thinker. She is a known peddler of pseudoscience, and she attributes her success (in part) to allowing "the energy of the universe" to lead her.
Yet she must be doing something right. Oprah is a true rags-to-riches story. Born in Mississippi to an unwed teenage housemaid, she was so poor she wore dresses made of potato sacks. She was molested by a cousin, an uncle, and a family friend. She became pregnant at age 14.
But in high school she became an honors student, won oratory contests and a beauty pageant, and was hired by a local radio station to report the news. She became the youngest-ever news anchor at Nashville's WLAC-TV, then hosted several shows in Baltimore, then moved to Chicago and within months her own talk show shot from last place to first place in the ratings there. Shortly afterward her show went national. She also produced and starred in several TV shows, was nominated for an Oscar for her role in a Steven Spielberg movie, launched her own TV cable network and her own magazine (the "most successful startup ever in the [magazine] industry" according to Fortune), and became the world's first female black billionaire.
I'd like to suggest that Oprah's climb probably didn't come merely through inborn talent, hard work, and luck. To get from potato sack dresses to the Forbes billionaire list, Oprah had to make thousands of pretty good decisions. She had to make pretty accurate guesses about the likely consequences of various actions she could take. When she was wrong, she had to correct course fairly quickly. In short, she had to be fairly rational, at least in some domains of her life.
Similarly, I know plenty of business managers and entrepreneurs who have a steady track record of good decisions and wise judgments, and yet they are religious, or they commit basic errors in logic and probability when they talk about non-business subjects.
What's going on here? My guess is that successful entrepreneurs and business managers and other people must have pretty good tacit rationality, even if they aren't very proficient with the "rationality" concepts that Less Wrongers tend to discuss on a daily basis. Stated another way, successful businesspeople make fairly rational decisions and judgments, even though they may confabulate rather silly explanations for their success, and even though they don't understand the math or science of rationality well.
LWers can probably outperform Mark Zuckerberg on the CRT and the Berlin Numeracy Test, but Zuckerberg is laughing at them from atop a huge pile of utility.
Explicit and tacit rationality
Patri Friedman, in Self-Improvement or Shiny Distraction: Why Less Wrong is anti-Instrumental Rationality, reminded us that skill acquisition comes from deliberate practice, and reading LW is a "shiny distraction," not deliberate practice. He said a real rationality practice would look more like... well, what Patri describes is basically CFAR, though CFAR didn't exist at the time.
In response, and again long before CFAR existed, Anna Salamon wrote Goals for which Less Wrong does (and doesn't) help. Summary: Some domains provide rich, cheap feedback, so you don't need much LW-style rationality to become successful in those domains. But many of us have goals in domains that don't offer rapid feedback: e.g. whether to buy cryonics, which 40-year investments are safe, which metaethics to endorse. For this kind of thing you need LW-style rationality. (We could also state this as "Domains with rapid feedback train tacit rationality with respect to those domains, but for domains without rapid feedback you've got to do the best you can with LW-style "explicit rationality".)
The good news is that you should be able to combine explicit and tacit rationality. Explicit rationality can help you realize that you should force tight feedback loops into whichever domains you want to succeed in, so that you can develop good intuitions about how to succeed in those domains. (See also: Lean Startup or Lean Nonprofit methods.)
Explicit rationality could also help you realize that the cognitive biases most-discussed in the literature aren't necessarily the ones you should focus on ameliorating, as Aaron Swartz wrote:
Cognitive biases cause people to make choices that are most obviously irrational, but not most importantly irrational... Since cognitive biases are the primary focus of research into rationality, rationality tests mostly measure how good you are at avoiding them... LW readers tend to be fairly good at avoiding cognitive biases... But there's a whole series of much more important irrationalities that LWers suffer from. (Let's call them "practical biases" as opposed to "cognitive biases," even though both are ultimately practical and cognitive.)
...Rationality, properly understood, is in fact a predictor of success. Perhaps if LWers used success as their metric (as opposed to getting better at avoiding obvious mistakes), they might focus on their most important irrationalities (instead of their most obvious ones), which would lead them to be more rational and more successful.
Final scattered thoughts
- If someone is consistently winning, and not just because they have tons of wealth or fame, then maybe you should conclude they have pretty good tacit rationality even if their explicit rationality is terrible.
- The positive effects of tight feedback loops might trump the effects of explicit rationality training.
- Still, I suspect explicit rationality plus tight feedback loops could lead to the best results of all.
- I really hope we can develop a real rationality dojo.
- If you're reading this post, you're probably spending too much time reading Less Wrong, and too little time hacking your motivation system, learning social skills, and learning how to inject tight feedback loops into everything you can.
77 comments
Comments sorted by top scores.
comment by Qiaochu_Yuan · 2013-04-10T00:35:15.055Z
I'd particularly expect many people to have good tacit rationality without having good explicit rationality in domains where success is strongly determined by "people skills." This is the kind of thing I expect LWers to be particularly bad at (being neurotypical helps immensely here) and is not the kind of thing that most people can explain how they do (I think it takes place almost entirely in System 1).
When evaluating the relationship between success and rationality it seems worth keeping in mind survivorship bias. For example, a small number of people can be wildly successful in finance through sheer luck, given the large number of people in finance and the randomness of financial outcomes. Those people don't necessarily have any rationality, explicit or otherwise, but you're more likely to have heard of them than of a random person in finance. But I don't know enough about Oprah to say how much of her being promoted to our collective attention constitutes survivorship bias and how much is genuine evidence of her competence.
One setting where explicit rationality seems instrumentally more useful than tight feedback loops is in determining which tight feedback loops to expose yourself to, e.g. determining whether you should switch from one domain to a very different domain, and if so, which different domain you should switch to. IIRC there are various instances of well-known and respected scientists doing good work in one field and then going on to spout nonsense in another, and this seems like the kind of thing that explicit rationality could help prevent.
Agree that I am spending too much time reading LessWrong, though. I've been quantifying this using RescueTime and the numbers aren't pretty.
↑ comment by atucker · 2013-04-10T02:34:27.103Z
When evaluating the relationship between success and rationality it seems worth keeping in mind survivorship bias.
An interesting case is that Will Smith seems likely to be explicitly rational in a way that other people in entertainment don't talk about -- he'll plan and reflect on various movie-related strategies so that he can get progressively better roles and box office receipts.
For instance, before he started acting in movies, he and his agent thought about what top-grossing movies all had in common, and then he focused on getting roles in those kinds of movies.
http://www.time.com/time/magazine/article/0,9171,1689234,00.html
↑ comment by MrMind · 2013-04-10T09:01:16.293Z
An interesting case is that Will Smith seems likely to be explicitly rational in a way that other people in entertainment don't talk about
In the same vein, I've been impressed by Robert Greene's account of 50 Cent in the book "The 50th Law". If that's really 50's way of thinking, it's brutally rational and impressively strategic.
comment by John_Maxwell (John_Maxwell_IV) · 2013-04-10T03:26:02.461Z
Do you want to get more specific about what you mean by "tight feedback loops"? I spent a few years focusing on startup things, and I don't think "tight feedback loops" are a good characterization. It can take a lot of work to figure out whether a startup idea is viable. That's why it's so valuable to gather advance data when possible (hence the lean startup movement). If you want "tight feedback loops", it seems like trying to master some flash game would offer a much better opportunity.
As far as I can tell, what actual entrepreneurs have that wannabe entrepreneurs don't is the ability to translate their ideas into action. They're bold enough to punch through unendorsed aversions, they're not afraid to make fools of themselves, they don't procrastinate, they actually try stuff out, and they push on without getting easily discouraged. You could think of these skills as multipliers on rationality: if your ability to act on your ideas is 0, it doesn't matter how good your ideas are, and you should focus on improving your ability to act, not improving your ideas. It might help to start distrusting yourself whenever you say "I'll do X" and think "hm... am I really going to do X? What's the first step? When and how am I going to take that step? Why am I not taking it now? If I'm not going to take it now, will I ever take it?" (Relevant.)
BTW, one possible explanation for why some people are able to make good decisions in practice but not in theory could be the near/far thing Robin Hanson likes to bring up.
Lots of people are successful at many things, but that doesn't mean that for any particular person, like Oprah, there will be generalizable insights about success to be gathered from their life. For example, maybe what caused Oprah to skyrocket to billionaire status (instead of being a regular old driven, fairly successful person) was that she came up with a great gimmick. I'm not sure studying her example would provide much insight into how to be successful for non-talk-show people. But if you think it would, there are lots of biographies of famous, successful people you could mine for success insights.
↑ comment by matt · 2013-05-07T07:09:45.359Z
They're bold enough to punch through unendorsed aversions, they're not afraid to make fools of themselves, they don't procrastinate, they actually try stuff out, and they push on without getting easily discouraged.
For what it's worth, I'm a pretty successful entrepreneur and I'd say this more like:
They manage on the whole to punch through many of their unendorsed aversions (at least the big ones that look like they're getting in the way), they're just as afraid to make fools of themselves as you are but they have ways of making themselves act anyway most of the time, they keep their procrastination under control and manage to spend most of their time working, they actually try stuff out, and they have ways to push through their discouragement when it strikes.
(Your version scans better.)
I'm commenting mostly against a characterisation of this stuff being easy for successful entrepreneurs. If you try something entrepreneurial and find that it's hard, that's not very useful information and it doesn't mean that you're not one of the elect and should give up - it's bloody hard for many successful people, but you can keep working on your own systems until they work (if you try to just keep working I think you'll fail - go meta and work on both what's not working to make it work better and on what is working to get more of it).
↑ comment by John_Maxwell (John_Maxwell_IV) · 2013-05-07T07:12:41.470Z
Thanks! Yes, I agree that it's possible to get better at most of those things through deliberate effort, which includes system-building, and it's a good point that people shouldn't be dissuaded just 'cause it doesn't seem to come to them naturally.
↑ comment by NancyLebovitz · 2013-04-10T17:27:57.951Z
Here's something I heard about Oprah which is consistent with the Wikipedia article but not included in it. People had been talking about wanting more positive talk shows, so Oprah decided to have one. This reflects a rationality skill: she explored giving people what they said they wanted instead of being offended that they didn't like what she was already doing.
It's possible that her gimmick was the result of some thought about the question of how to do a positive talk show while keeping it interesting.
↑ comment by John_Maxwell (John_Maxwell_IV) · 2013-04-10T20:48:02.925Z
Seems plausible. "Figure out what people want and give it to them" is a widely repeated success principle for salespeople and entrepreneurs. See Paul Graham on making something people want.
comment by elharo · 2013-04-10T06:50:49.152Z
What's the evidence that rationality leads to winning? I don't think that claim has been demonstrated.
This whole post seems very circular to me. You believe that rationality leads to winning, in fact you seem to believe that rationality is a necessary condition for winning, so when you see someone win, you conclude that therefore they are rational. And by extension whatever they are doing is rational. And if what the winners do doesn't look rational to us at first glance, we look deeper and rationalize as necessary until we can claim they are rational.
This reminds me not a little of classical economists who believe that people are rational consumers, and therefore treat anything consumers do, no matter how ridiculous, as expressing hidden preferences. I.e. they believe that rational consumers maximize their utility, so they contort utility functions such that they are maximized by whatever consumers choose, rather than recognizing the fact that consumers are often irrational. For instance, if consumers are willing to pay $50 for a bottle of bourbon they won't pay $10 for, they assert that consumers are buying a status symbol rather than a bottle of bourbon. You're proposing an equivalent sort of hidden rationality.
But rationality is not winning. It has a specific and reasonable definition. Epistemic rationality is the ability to form correct beliefs about the world. Instrumental rationality is the ability to choose and take reasonable actions toward one's goals.
Whether either or both of these characteristics lead one to "win", or accumulate massive piles of utility, is an empirical question. If demonstrably irrational people nonetheless manage to win, then we have good evidence that rationality is not a necessary condition for winning. Perhaps it is not even correlated with winning. Worse yet, perhaps it is anti-correlated, and irrational people are more likely to win than rational people. Or perhaps not. Maybe there are just a lot more irrational people in the world than rational ones, so more winners are drawn from this larger pool. Perhaps more rational people have a higher probability of winning. But either way, whether rationality leads to winning or losing, and if so by how much, is an empirical question to be answered by measurement and research, not something to be blithely asserted.
↑ comment by MrMind · 2013-04-10T09:16:36.564Z
But either way whether rationality leads to winning is an empirical question to be answered by measurement and research, not something to be blithely asserted.
As with anything, but if you unpack "rationality" as "having the right model of the world", the opposite of "rationality leads to winning" is "the world is governed by an explicitly anti-rational force", that is, "magic". I would assign a very low probability to that: counter-examples like Oprah seem to raise the probability of "people compartmentalize" more than that of "the universe guides her energy".
↑ comment by Viliam_Bur · 2013-04-10T11:04:07.173Z
Also, we should not neglect the base rates. If more than 99% of people on this planet are irrational by LW standards, then we should not be surprised by seeing irrational people among the most successful ones, even if rationality increases the probability of success.
In other words, if you found that (pulling the numbers out of a hat) 99% of all people are irrational, but "only" 90% of millionaires are irrational, that would be evidence that rationality does lead to (an increased probability of) winning.
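For what it's worth, those made-up numbers imply a specific (purely correlational) effect size, which a few lines of arithmetic recover:

```python
# Hypothetical numbers from the paragraph above: 99% of everyone is
# irrational, but "only" 90% of millionaires are. Bayes' rule turns this
# into a relative risk (it says nothing about causation):
#   P(M | R) / P(M | ~R) = [P(R|M) / P(R)] * [P(~R) / P(~R|M)]
p_r = 0.01           # P(rational)
p_r_given_m = 0.10   # P(rational | millionaire)

relative_risk = (p_r_given_m / p_r) * ((1 - p_r) / (1 - p_r_given_m))
print(round(relative_risk, 1))  # -> 11.0
```

Under those numbers, a rational person would be 11 times as likely to be a millionaire as an irrational one.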
Also, in real humans, rationality isn't all-or-nothing. Compare Oprah with an average person from her reference group (before she became famous). Is she really less rational? I doubt it.
↑ comment by gwern · 2013-04-10T16:02:31.380Z
Also, in real humans, rationality isn't all-or-nothing. Compare Oprah with an average person from her reference group (before she became famous). Is she really less rational? I doubt it.
That seems entirely possible. Consider the old chestnut that entrepreneurs are systematically overoptimistic about their chances of success and that startups and similar risks are negative expected value. Rational people may well avoid such risks precisely because they do not pay, but of the group of people irrational enough to try, a few will become billionaires. Voila! Another example: any smart rational kid will look at career odds and payoffs to things like being a musician or a talk show host, and go 'screw that! I'm going to become a doctor or economist!', and so when we look at mega-millionaire musicians like Michael Jackson or billionaire talk show hosts like Oprah... (We are ignoring all the less rational kids who wanted to become an NFL quarterback or a rap star and wind up working at McDonald's.)
Another point I've made in the past is that since marginal utility seems to diminish with wealth, you have to seriously question the rationality of anyone who does not diversify out of whatever made them wealthy and instead goes double or nothing. Did Mark Zuckerberg really make the rational choice in holding onto his Facebook ownership percentage as much as possible, even when he was receiving offers of hundreds of millions? Yes, he's now a billionaire because he held onto it and its worth increased by orders of magnitude, but social networks have often died - as he ought to know, having crushed more than his fair share of social networks under his heel! In retrospect, we know that no one (like Google+) has killed Facebook the way Facebook killed Myspace. But only in retrospect.
Or since using these past examples may not be convincing to people since it's too easy to think "obviously holding onto Facebook was rational, gwern, don't you remember how inevitable it looked back in 2006?" (no, I don't, but I'm not sure how I could convince you otherwise), let's use a more current example... Bitcoin.
At least one LWer currently holds something like >500 bitcoins, which at the current Mt. Gox price could be sold for ~$120,000. His net worth independent of his bitcoins is in the $1-10,000 range as best as I can estimate. I am sure you see where I am going with this: if bitcoin craters, he will lose something like 90% of his current net worth, but if bitcoin gains another order of magnitude, he could become a millionaire.
So here's my question for you, if you think that it's obvious that Oprah must have been rational, and was not merely an irrational risk-seeker who among other things got lucky: right now, without the benefit of hindsight or knowledge of inevitability of Bitcoin's incredibly-obvious-success/obviously-doomed-to-failure, is it rational for him to sell or to keep his bitcoins? Is he more like Zuckerberg, who by holding makes billions; or more like all the failed startup founders who reject lucrative buyouts and wind up with nothing?
It is rational for him to:
[pollid:428]
Suppose he holds, and Bitcoin craters down to the single dollar range or less for an extended time period; do you think people will regard his decision as:
[pollid:429]
Suppose he holds, and Bitcoin gains another order of magnitude (>$1000) for an extended time period; do you think people will regard his decision as:
[pollid:430]
Suppose he sells, and Bitcoin craters down to the single dollar range or less for an extended time period; do you think people will regard his decision as:
[pollid:431]
Suppose he sells, and Bitcoin gains another order of magnitude (>$1000) for an extended time period; do you think people will regard his decision as:
[pollid:432]
↑ comment by Luke_A_Somers · 2013-04-10T18:22:32.747Z
Do I think people will regard his decision, or would I regard his decision? Are these people general population, or LW? How much do they know about his reasoning process?
↑ comment by Viliam_Bur · 2013-04-10T21:39:17.044Z
if you think that it's obvious that Oprah must have been rational
I wrote she is probably more rational than an average person from her reference group (before she became famous); by which I meant: a poor black woman pregnant at age 14. Being overoptimistic does not contradict that.
↑ comment by gwern · 2013-04-10T22:33:32.914Z
Being overoptimistic does not contradict that.
No, but it does put pressure on your claim. You have to be very optimistic or very risk-seeking to ride your risky career all the way up past instant-retirement/fuck-you money levels (a few millions) to the billions point, and not sell out at every point before then to enjoy your gains. What fraction of the general population ever founds a startup or new company or takes an equivalent risk? Her career pushes Oprah way out onto the tail.
Now, maybe the average black pregnant teenager is so irrational in so many ways that their average problems make Oprah on net more rational even though she's lunatically optimistic or risk-seeking (although here we should question how irrational having a kid is, given issues like welfare and local cultures and issues discussed in Promises I Can Keep and marriage gambits and that sort of thing), but it's going to be much harder to establish that about an Oprah-with-lunatic-risk-appetite rather than what we started with, the Oprah-who-is-otherwise-looking-pretty-darn-rational.
↑ comment by NancyLebovitz · 2013-04-12T21:55:32.401Z
Is retiring relatively young a more rational choice than continuing to work at something you like?
↑ comment by gwern · 2013-04-12T22:33:00.588Z
It seems like pretty remarkable luck if the thing you want to do most in the world is also what you're currently being paid to do.
↑ comment by NancyLebovitz · 2013-04-12T22:37:31.172Z
On the other hand, how good are people who retire at finding what they want most to do?
A person who's more rational than average (especially about introspection) might do well to retire, but most people might be rationally concerned that they'd just drift.
↑ comment by gwern · 2013-04-13T01:38:02.688Z
I don't know what population-wide aggregates might look like. At least in Silicon Valley, there apparently are many people who have retired early and have the ability and inclination to express any dissatisfaction online in places where I might read them, but I can't think of any who have said things like "My life has been miserable since I cashed out my millions of dollars of Google shares and I have nothing to do with myself."
Retiring early means you have the money for doing a great many things, and you are still in physical & mental shape to enjoy it; Twain:
“The whole scheme of things is turned wrong end to. Life should begin with age & its privileges and accumulations, & end with youth & its capacity to splendidly enjoy such advantages. As things are now, when in youth a dollar would bring a hundred pleasures, you can’t have it. When you are old, you get it & there is nothing worth buying with it then. It’s an epitome of life. The first half of it consists of the capacity to enjoy without the chance; the last half consists of the chance without the capacity.”
And what factors enabled this early retirement in the first place? A motivated, intelligent person (albeit one with a bad appetite for risk and an inability to cash out) can find plenty of rewarding things to occupy themselves with, like charity or education. Steve Wozniak and Cliff Stoll immediately come to mind, but I'm sure you can name others.
↑ comment by fubarobfusco · 2013-04-13T01:49:51.489Z
There's a selection effect on such wishes, though. Only a small fraction of humans ① survive to such an age and ② retire with "privileges and accumulations"; many who would desire such a goal do not achieve it.
↑ comment by Luke_A_Somers · 2013-04-10T20:45:23.036Z
So, I said he'd be considered rational in all cases except hold/fail. That's because people will take his success as evidence that he knows what he's doing, and if he sells then he's doing what 'everyone else' (i.e. > 99.9% of the world) would do, so even if it doesn't work out that way they'd probably give him some slack.
Also, I think it's rational for him to diversify, but it's not a bad idea for him to maintain significant holdings.
↑ comment by RomeoStevens · 2013-04-10T18:42:07.731Z
Why is buying and selling binary? He should clearly rebalance.
↑ comment by A1987dM (army1987) · 2013-04-14T21:06:05.140Z
It is rational for him to:
Expanding on RomeoStevens' comment... Maths time! Suppose that he has now 10,000 dollars and 500 bitcoins, each bitcoin now costs $100, and that by the end of the year a bitcoin will cost $10 with probability 1/3, $100 with probability 1/3, and $1000 with probability 1/3. Suppose also that his utility function is the logarithm of his net worth in dollars by the end of the year. How many bitcoins should he sell to maximize his expected utility? Hint: the answer isn't close to 0 or to 500. And I don't think that a more realistic model would change it by that much.
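The brute-force search implied by the hint can be sketched like this (using the hypothetical figures above; utility is taken as the log of end-of-year net worth):

```python
import math

# Setup from the comment: $10,000 cash plus 500 bitcoins at $100 each;
# the end-of-year price is $10, $100, or $1000, each with probability 1/3.
def expected_utility(coins_sold):
    cash = 10_000 + 100 * coins_sold
    held = 500 - coins_sold
    return sum(math.log(cash + p * held) for p in (10, 100, 1000)) / 3

# Search over every possible number of coins to sell now.
best = max(range(501), key=expected_utility)
print(best)  # -> 200: sell 200 coins, keep 300
```

As hinted, the optimum is well away from both 0 and 500: sell 200 coins and hold 300.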
↑ comment by gwern · 2013-04-14T21:55:57.046Z
Khoth suggests modeling it as starting with an endowment of $60k and considering the sum of the 3 equally probable outcomes plus or minus the difference between the original price and the closing price, in which case the optimal number of coins to hold seems to be 300:
last $ sort $ map (\x -> (log(60000 - 90*x) + log(60000) + log(60000 + 900*x), x)) [0..500]
(34.11321061509552,300.0)
Of course, your specific payoffs and probabilities imply that one should be buying bitcoins, since in 1/3 of the outcomes the price is unchanged, in 1/3 one loses 90% of the invested money, and in the remaining 1/3 one instead gains 900%...
↑ comment by A1987dM (army1987) · 2013-04-15T08:18:43.739Z
I've fiddled around a bit, and ISTM that so long as the probability distribution of the logarithm of the eventual value of bitcoins is symmetric around the current value (and your utility function is logarithm), you should buy or sell so that half of your current net worth is in dollars and half is in bitcoins.
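That claim can be spot-checked numerically with a crude grid search; the two-point price distribution (multiply by m or by 1/m with equal probability) and the spreads m below are illustrative assumptions:

```python
import math

def expected_log(f, m):
    # Fraction f of net worth held in bitcoin; the price is multiplied by
    # m or by 1/m with equal probability, a distribution whose log is
    # symmetric around the current price.
    return 0.5 * (math.log(1 - f + f * m) + math.log(1 - f + f / m))

# Grid-search the optimal fraction for several (arbitrary) spreads m.
grid = [i / 1000 for i in range(1001)]
optima = {m: max(grid, key=lambda f: expected_log(f, m)) for m in (2, 10, 100)}
print(optima)  # -> {2: 0.5, 10: 0.5, 100: 0.5}
```

The optimum comes out at half-and-half regardless of the spread, consistent with the claim.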
↑ comment by A1987dM (army1987) · 2013-04-13T18:18:37.053Z
I come from the future. Do I try to compensate for hindsight bias, or do I abstain from answering the polls altogether?
↑ comment by gwern · 2013-04-13T18:30:58.427Z
Even after the 'crash', the equivalent figure is still like $50k and so the question remains germane. If you want to answer it, feel free. (The raw poll data includes timestamps, so if anyone thinks that answers after time X are corrupting the results, they can always drop such entries.)
↑ comment by A1987dM (army1987) · 2013-04-13T18:54:58.165Z
Okay. I answered the questions except the first (per RomeoStevens) and the last (I'd expect people to be roughly equally split in that situation).
↑ comment by elharo · 2013-04-10T12:36:40.076Z
Basic statistics question: if we find that 99% of all people are irrational but "only" 90% of millionaires are irrational, is that evidence that rationality leads to (an increased probability of) winning, or only evidence that rationality is correlated with winning? For instance, how do I know that millionaires aren't more rational simply because they can afford to go to CFAR workshops and have more free time to read Less Wrong?
I.e. knowing only that 99% of all people are A but "only" 90% of millionaires are A, how do I adjust my respective probabilities that
- A --> millionaires
- Millionaires --> A
- Unknown factor C causes both A and millionaires
It feels like I ought to assign some additional likelihood to each of these 3 cases, but I'm not sure how to split it up. Maybe the answer is simply, "gather more evidence to attempt to tease out the proper causal relationship".
Replies from: IlyaShpitser, Viliam_Bur↑ comment by IlyaShpitser · 2013-04-15T07:18:16.247Z · LW(p) · GW(p)
This is a causal question, not a statistical question. You answer by implementing the relevant intervention, usually by randomization, or maybe you find a natural experiment, or maybe [lots of other ways people thought of].
You can't in general use observational data (e.g. what you call "evidence") to figure out causal relationships. You need causal assumptions somewhere.
Replies from: Richard_Kennaway↑ comment by Richard_Kennaway · 2013-04-16T15:18:38.414Z · LW(p) · GW(p)
You can't in general use observational data (e.g. what you call "evidence") to figure out causal relationships. You need causal assumptions somewhere.
What do you think of this challenge, to detect causality from nothing but a set of pairs of values of unnamed variables?
Replies from: IlyaShpitser, gwern↑ comment by IlyaShpitser · 2013-04-16T17:30:33.499Z · LW(p) · GW(p)
You can do it with enough causal assumptions (i.e. not "from nothing"). There is a series of magical papers, e.g. this:
http://www.cs.helsinki.fi/u/phoyer/papers/pdf/hoyer2008nips.pdf
which show you can use additive noise assumptions to orient edges.
I have a series of papers:
http://www.auai.org/uai2012/papers/248.pdf
http://arxiv.org/abs/1207.5058
which show you don't even need conditional independences to orient edges. For example if the true dag is this:
1 -> 2 -> 3 -> 4, 1 <- u1 -> 3, 1 <- u2 -> 4,
and we observe p(1, 2, 3, 4) (no conditional independences in this marginal), I can recover the graph exactly with enough data. (The graph would be causal if we assume the underlying true graph is, otherwise it's just a statistical model).
People's intuitions about what's possible in causal discovery aren't very good.
It would be good if statisticians and machine learning / comp. sci. people came together to hash out their differences regarding causal inference.
↑ comment by gwern · 2013-04-16T16:06:19.201Z · LW(p) · GW(p)
Gelman seems skeptical.
Replies from: Richard_Kennaway↑ comment by Richard_Kennaway · 2013-04-16T16:30:17.977Z · LW(p) · GW(p)
I saw that, but I didn't see much substance to his remarks, nor in the comments.
Here is a paper surveying methods of causal analysis for such non-interventional data, and summarising the causal assumptions that they make:
"New methods for separating causes from effects in genomics data"
Alexander Statnikov, Mikael Henaff, Nikita I Lytkin, Constantin F Aliferis
↑ comment by Viliam_Bur · 2013-04-10T14:11:20.753Z · LW(p) · GW(p)
It feels like I ought to assign some additional likelihood to each of these 3 cases, but I'm not sure how to split it up.
Two things:
1) Your prior probabilities. If before getting your evidence you expect that hypothesis H1 is twice as likely as H2, and the new evidence is equally likely under both H1 and H2, you should update so that the new H1 remains twice as likely as H2.
2) Conditional probabilities of the evidence under different hypotheses. Let's suppose that hypothesis H1 predicts a specific evidence E with probability 10%, hypothesis H2 predicts E with probability 30%. After seeing E, the ratio between H1 and H2 should be multiplied by 1:3.
The first part means simply: Before the (fictional) research about rationality among millionaires was made, which probability would you assign to your hypotheses?
The second part means: If we know that 99% of all people are irrational, what would be your expectation about % of irrational millionaires, if you assume that e.g. the first hypothesis "rationality causes millionaires" is true. Would you expect to see 95% or 90% or 80% or 50% or 10% or 1% of irrational millionaires? Make your probability distribution. Now do the same thing for each one of the remaining hypotheses. -- Ta-da, the research is over and we know that the % of irrational millionaires is 90%, not more, not less. How good were the individual hypotheses at predicting this specific outcome?
(I don't mean to imply that doing either of these estimates is easy. It is just the way it should be done.)
Maybe the answer is simply, "gather more evidence
Gathering more evidence is always good (ignoring the costs of gathering the evidence), but sometimes we need to make an estimate based on data we already have.
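The two steps above can be sketched in odds form; the 2:1 prior and the 10%/30% likelihoods are the example numbers from this comment:

```python
# Odds-form Bayesian update: posterior odds = prior odds * likelihood ratio.
prior_odds_h1_h2 = 2.0   # step 1: H1 judged twice as likely as H2 a priori
p_e_given_h1 = 0.10      # step 2: H1 predicts evidence E with probability 10%
p_e_given_h2 = 0.30      #         H2 predicts E with probability 30%

posterior_odds = prior_odds_h1_h2 * (p_e_given_h1 / p_e_given_h2)
print(posterior_odds)  # 2 * (1/3) = 0.667: seeing E shifts the odds toward H2
```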
↑ comment by elharo · 2013-04-10T12:03:20.233Z · LW(p) · GW(p)
It does not follow that the opposite of "rationality leads to winning" is "the world is made of an explicitly anti-rational force". Were I to discover that rationality does not lead to winning, or worse yet that irrationality leads to winning, I would find it much more likely that incorrect beliefs enable people to take actions that lead them to winning than that the world is made of an explicitly irrational force. For example, if people believe they are average or below, they may be less aggressive and settle for less than if, by virtue of Dunning-Kruger, they believe they are exceptional and try for more. I don't know that this is true -- I might not assign it even a 50% probability of being true -- but it's not self-evidently false. The question of whether rationality leads to winning, or which parts of rationality lead to winning, is an empirical question, not a logical question.
Another example: it depends on where you set the bar for winning. Suppose we set the bar at a billion dollars. Rational people, acting to maximize their own utility, may well choose to plug away at a Fortune 500 company for 40 years, put away a nice chunk of change in a 401K, invest in index funds, spend no more than 30% of take-home pay on housing, and retire at 60 with a few million dollars in the bank. But by this standard they haven't "won", even though they maximized their expected utility at low risk. Irrational people may sink their savings into a startup, and just might hit it big and "win". Then again, they may lose it all and die alone in a hole. But the winners will still be composed of irrational people. I.e., playing the lottery is almost as irrational as you can get, and every single lottery winner is irrational.*
Perhaps the mistake here is in looking at winners individually. Dare I say it, could this be selection bias? To figure out whether rationality helps us win, we should not look at the winners and ask what they did to help them win. Rather, we need to get a large sample of rational and irrational people, sum the winnings and losses of each group, and see who comes out ahead per capita. Perhaps the mean rational person finishes life a few million utilons ahead, and the average irrational person finishes life a few million utilons behind, but the people who are a few billion utilons ahead are all irrational. I don't know the answer to this question, but I'd be really curious to find out.
* Actually there are sometimes rational reasons to play the lottery, especially the big Powerball types. It's uncommon, but I do know of one case where a rational investment group played the lottery and won. I know of no rational reasons for playing the smaller daily Pick 3/Pick 4 lotteries.
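The per-capita comparison proposed here can be sketched with a toy simulation; all payoff numbers and probabilities below are invented for illustration:

```python
import random

random.seed(0)

# Toy model: "rational" agents reliably finish a couple million utilons
# ahead; "irrational" agents usually finish slightly behind, but a lucky
# few hit a billion (the lottery winners).
N = 100_000
rational = [2_000_000.0 for _ in range(N)]
irrational = [1_000_000_000.0 if random.random() < 0.001 else -50_000.0
              for _ in range(N)]

mean_rational = sum(rational) / N
mean_irrational = sum(irrational) / N

# Per capita, the rational strategy comes out ahead...
print(mean_rational > mean_irrational)   # True
# ...yet every top outcome belongs to an irrational agent.
print(max(irrational) > max(rational))   # True
```

This reproduces the pattern described above: looking only at the biggest winners makes irrationality look good, while the per-capita comparison favors rationality.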
↑ comment by MrMind · 2013-04-11T10:01:27.932Z · LW(p) · GW(p)
Were I to discover that rationality does not lead to winning, or worse yet that irrationality leads to winning, I would find it much more likely that incorrect beliefs enable people to take actions that lead them to winning than that the world is made of an explicitly irrational force
The only way I could see this working is if there's some force that looks at my model of the world and makes it systematically wrong, no matter how updated it is. That is, only if there's some anti-Bayesian principle at work. But I think there's a difference here in what we understood rationality to be: indeed you write
For example, if people believe they are average or below, they may be less aggressive and settle for less than if, by virtue of Dunning-Kruger, they believe they are exceptional and try for more
Let it be clear that I do not conflate being rational with being cautious, being reasonable, or even having common sense: I intend it to have the pure meaning of "having the correct model of the world". If in some endeavour those who try more (aggressively) achieve more, it means that the probability of success is low but not impossibly low, and it follows that the rational thing is to try more. Those who try less are maybe being prudent, but in so doing they are underestimating their probability of success (or overestimating the probability of failure); that is, they do not have the correct model of the world, and this leads to irrational behaviour.
The second example (hitting the "big idea" or winning the lottery) is a case in which the winning strategy is uncomputable, but by sheer brute force someone will hit it. That's admittedly a case in which winning was not due to rationality, but note that it wasn't due to irrationality either: it was due to the pure luck of finding oneself in the only global optimum.
I'll state my position more precisely: let's treat luck as a resource of some kind (it's a sort of better positioning in a potential space). There are domains in which having the correct model of the world leads to a better chance of winning, and there are other domains in which it is indifferent (an impartial beauty contest, a lottery). But there are never domains in which having the correct model of the world leads to a more probable loss. So rationality always leads to a better or equal probability of winning.
This, as I agreed, is an empirical question, but one which, if settled in the negative, will imply the existence of the aforementioned irrational force.
ETA: thinking about it, entering a lottery is an irrational behaviour that leads to winning, but only for the person who will eventually win. So, in the domain of "the possible bets one can buy", there is an irrational behaviour that leads to winning. But in this case there is an irrational force that promotes anti-Bayesian behaviour: the State (or the casino, etc.).
comment by Shmi (shminux) · 2013-04-10T16:24:08.671Z · LW(p) · GW(p)
I'd love to hear a story (or maybe stories) of someone becoming an LW regular, improving their "rationality skills" and going on to "win" (define and achieve their personal goals), thanks to those skills.
Here I explicitly exclude LW-related goals, such as understanding the Sequences, attending a CFAR workshop or being hired by CFAR, signing up for cryonics, or figuring out how to donate more to GiveWell. Instead, I'd love to hear how people applied what they learned on this site to start a business, make money, improve their love life (hooking up with an LW poly does not count for this purpose), or maybe to take over the world.
Hopefully some of the stories have already been posted here, so links would be appreciated.
Replies from: None, arundelo, lukeprog, Zian↑ comment by [deleted] · 2013-04-11T02:19:37.996Z · LW(p) · GW(p)
When I found LW, I was confused and unambitious; my goal was to survive on as little money as possible (to ironically humiliate the people who say $17/hr is the minimum living wage), and maybe make a few video games or something, and I spent most of my free time on 4chan and arguing about radical politics on the internet.
Since coming to LW, I've used LW-style epistemic rationality to work through a great deal of philosophical confusion and understand a great deal more about those big questions. (This doesn't count, but it should be mentioned.)
More specifically and interestingly: it took explicit LW rationality for me to:
- Think rationally about balancing my resources (time, money) and marginal utility, to great productivity benefit.
- Step up to run the Vancouver LW meetup.
- Make and maintain a few really valuable friends (mostly through the meetup).
- Respond positively to criticism at work, so that I've become much more valuable than I was 6 months ago, in a way that has been recognized and pointed out.
- Achieve lightness and other rationality virtues in exploring design concepts at work, taking design criticism, and not getting caught in dead ends. I explicitly apply much of what I've learned at LW to my work, though I'm unsure how much of that is just how I verbally describe things I'd do anyway.
- Become poly with my wife rationally and in a controlled manner.
- Switch my goals from unambitious to working on the biggest problems I can find, like taking over the world. (This is actually hard.)
- Start using Beeminder, get a smartphone, use Pomodoro, and use Remember the Milk, for large measurable (just look at my Beeminder graphs) improvement in personal project productivity, sex life, exercise, etc.
- Actually put in the hard work and strategic criticism-seeking that it took to get a really good job.
- Take more rational risks and make better small decisions every day.
- Actually ask for and get a cute girl's number today (yay).
Of course, my ambition has scaled way faster than my achievement, so despite the semi-impressive list above, I feel like I'm way behind where I should be.
Replies from: shminux, MugaSofer↑ comment by Shmi (shminux) · 2013-04-11T06:10:06.208Z · LW(p) · GW(p)
Impressive! I think CFAR could use a testimonial like this.
Replies from: None↑ comment by [deleted] · 2013-04-11T19:36:01.385Z · LW(p) · GW(p)
Then again, it may be a lucky draw that finding LW occurred almost exactly at the lowest point in my historical ambition; for 4 or 5 years before that, my goal was to take over the world and dismantle civilization for the good of mankind. It wasn't just an idle "goal", either; I really worked at it. Still, I'd be much more effective at such a project post-LW than I was then.
↑ comment by MugaSofer · 2013-04-12T14:28:51.251Z · LW(p) · GW(p)
Switch my goals from unambitious to working on the biggest problems I can find, like taking over the world. (this is actually hard).
Drat, is it?
Hmm, I'd be interested in your thoughts if this is a serious goal of yours, but OTOH broadcasting this sort of thing is pretty obviously a Bad Idea.
On the other hand, more people could sort of accelerate things. I mean, it would take [REDACTED] [REDACTED] [REDACTED] ... EY could probably [REDACTED] [REDACTED] but he seems busy (with FAI; might be easier with a world, though) ... I'd say LW would be an ideal recruiting ground for help creating a singleton.
ETA: edited for [REDACTED].
Replies from: None↑ comment by [deleted] · 2013-04-13T18:08:19.778Z · LW(p) · GW(p)
OTOH broadcasting this sort of thing is pretty obviously a Bad Idea.
Is it? Did you take me seriously? It's only a bad idea if you take me seriously enough to try to stop me. Especially considering the following sentences:
Also, I meant that the thing that is hard is to switch your goals to the highest value thing available.
I just used "taking over the world" as an ironic example. (also, causing FAI to happen is basically taking over the world, with "world" defined a little bit more broadly than "human society").
Replies from: MugaSofer↑ comment by lukeprog · 2013-04-11T00:43:04.260Z · LW(p) · GW(p)
The Motivation Hacker is one such story, though it focuses on the relevance of the stuff in my procrastination post rather than on the relevance of the rest of LW.
↑ comment by Zian · 2013-04-25T05:19:55.045Z · LW(p) · GW(p)
Is it possible to post anonymously but link it to my account? Some of the things I'd like to say aren't things I want the general public to link directly to me, even though LessWrong played a possibly significant positive role.
Replies from: shminux↑ comment by Shmi (shminux) · 2013-04-25T05:23:55.494Z · LW(p) · GW(p)
Not sure what you are trying to achieve, or what you mean by "link". You can certainly create another account and mention the original one in the profile.
comment by Oligopsony · 2013-04-10T01:56:06.777Z · LW(p) · GW(p)
If Rationality is Winning, or perhaps more explicitly Making The Decisions That Best Accomplish Whatever Your Goals Happen to Be, then Rationality is so large that it swallows everything. Like anything else, spergy LW-style rationality is a small part of this, but it seems to me that anything which one can meaningfully discuss is going to be one such small portion. One could of course discuss Winning In General at a sufficiently high level of abstraction, but then you'd be discussing spergy LW stuff by definition - decision theory, utility, and so on.
If businessfolk are rather rational at running businesses, but no more rational than anyone else about religion, or if people who have become experts on spergy LW stuff are no more winningful about their relationships, &c. &c., this (to my mind) brings into question the degree to which a General Rationality-as-Winningness skill exists. You acknowledge the distinction between explicit and tacit rationality, but do you expect successful entrepreneurs to be relatively more successful in their marital life? When you say you want to teach tacit rationality, do you mean something distinct from Teaching People How To Do Things Good?
Replies from: Jayson_Virissimo, jooyous↑ comment by Jayson_Virissimo · 2013-04-10T02:47:03.652Z · LW(p) · GW(p)
I never did find out if any sizable fraction of Less Wrongers would bite this bullet. That is to say, to affirm the claim that, all else equal, a person with more physical strength is necessarily more rational.
Replies from: Richard_Kennaway, DaFranker↑ comment by Richard_Kennaway · 2013-04-10T08:31:13.057Z · LW(p) · GW(p)
I don't see a bullet. Obviously, other things matter as well as rationality. Rationality, even instrumental rationality, is not defined as winning. Those who speak as if it was are simply wrong, and your example is the obvious refutation of such silliness.
Replies from: MrMind↑ comment by MrMind · 2013-04-10T09:04:52.229Z · LW(p) · GW(p)
Well, the assumption here is that a better knowledge of the world gives you a better chance of achieving your goal, so rationality equals more winning only in strategic domains. I suspect those are the majority in today's environment, but being better looking / stronger / better armed etc. still counts in certain other domains.
Replies from: Richard_Kennaway↑ comment by Richard_Kennaway · 2013-04-10T09:34:48.951Z · LW(p) · GW(p)
Well, the assumption here is that a better knowledge of the world gives you a better chance of achieving your goal, so rationality equals more winning only in strategic domains.
That sentence should end at the comma. Rationality never "equals" more winning. It is, or should be, a cause (among others) of more winning. That is not a relationship that can be called "equals".
Replies from: MrMind↑ comment by MrMind · 2013-04-11T10:07:47.591Z · LW(p) · GW(p)
That is not a relationship that can be called "equals"
It's an incorrect translation of a figure of speech that exists in Italian but apparently not in English: the correct formulation is "rationality never decreases your probability of winning".
Replies from: Richard_Kennaway↑ comment by Richard_Kennaway · 2013-04-11T10:35:06.031Z · LW(p) · GW(p)
I'm curious to know what the literal Italian would be. In English, people do often say "X is Y", "X equals Y", "X is objectively Y" (political historians will recognise that one), X means Y, etc. when X and Y are different things that the speaker is rhetorically asserting to be so closely connected as to be the same thing. For example, the extreme environmental slogan, "a rat is a pig is a dog is a boy". I believe it is a figure of speech better avoided.
Replies from: MrMind↑ comment by MrMind · 2013-04-11T12:34:48.631Z · LW(p) · GW(p)
Well, if you're curious: "essere razionali equivale a vincere maggiormente solo nei domini strategici". 'Equivale' I translated as 'equals', but a more precise meaning would be along the lines of 'implies' or 'leads to'. It's used most often when listing the components of a process: "A equivale a B che equivale a C" usually means "A -> B -> C" rather than "A = B = C".
↑ comment by DaFranker · 2013-04-10T16:00:15.103Z · LW(p) · GW(p)
I think your thought experiment illustrates well that the "Rationality is Winning" meme often doesn't carve the space very well. Here, rationality is using the right tactics or, if only one is available, spending the right amount of time on the right tasks, proportional to how much you value your goals and how achievable they are.
If we resurrect Alice and Bob as hypothetical monovalue agents who only exclusively value deadlifting X, and who have only one method of attempting/training to deadlift X, then the game tree is skewed, Bob wins faster, Alice is screwed and wins slower. Both are fully rational if they spend all available resources on this goal (since it's all these hypothetical agents value), even though Alice spends more resources for longer before achieving the goal.
For more game theory mumbo-jumbo: I view "rationality" more in terms of how you build and navigate the game tree, rather than a post-hoc analysis of who ended up in the best cell of the payoff matrices. Or, to put it differently, rationality is ending up at the best cell of your payoff matrix, regardless of whether someone else just has +5 on all cells of their matrix or has more options or whatever.
So if my understanding is right that you were making a critique of the "Rationality is Winning" meme, I agree that it's a bit misleading and simplistic, but it still is "taking the best course of action with the resources and options available to you, reflectively and recursively including how much you spend figuring out which courses of action are better" -- Expected Winning Within Available Possible Futures.
↑ comment by jooyous · 2013-04-10T07:10:40.931Z · LW(p) · GW(p)
This is a really good point and it is also related to Manfred's comment that I don't personally know how to reconcile with some of the points in the article. On one hand, I would like to have a lot of money because a lot of inconvenient things would suddenly become much easier. On the other hand, I would have to do other inconvenient things, like manage a lot of money. Also, I don't think I would be happy doing Oprah's job, even if it resulted in a lot of money. Basically, I would not mind lots of money but it is not currently a priority. So I don't know if I'm actually winning or not, oops.
Therefore, a poll!
How successful are you? [pollid:426]
From a fame, money or bragging rights perspective, how ambitious are your current goals?[pollid:427]
comment by elharo · 2013-04-10T07:05:34.558Z · LW(p) · GW(p)
You say that, "I know plenty of business managers and entrepreneurs who have a steady track record of good decisions and wise judgments, and yet they are religious, or they commit basic errors in logic and probability when they talk about non-business subjects."
You must know different business managers and entrepreneurs than I do. I can think of few if any business managers and entrepreneurs who have a steady track record of good decisions and wise judgments. There are some common positive characteristics I see in the business managers I know, and another group of common characteristics I see in the entrepreneurs I know (the two groups do not share the same set, I might add), but in neither group are good decisions and wise judgments part of those common characteristics.
I do see a lot of hindsight bias and survivorship bias in both groups, though. Out of a large pool of managers and entrepreneurs, the successful ones inevitably attribute their success to personal characteristics and skill, but it's not at all obvious they aren't just the lucky ones who happened to stumble into a profitable opportunity. One frequent characteristic of successful entrepreneurs is that they have tried many things, and usually failed at more of them than they've succeeded at. If they were both rational and able to apply rationality to their plans, you'd expect them to succeed a lot more often.
Replies from: falenas108↑ comment by falenas108 · 2013-04-10T17:25:10.237Z · LW(p) · GW(p)
One frequent characteristic of successful entrepreneurs is that they have tried many things, and usually failed at more of them than they've succeeded at. If they were both rational and able to apply rationality to their plans, you'd expect them to succeed a lot more often.
How do you know their success rate isn't much higher than that of those who aren't successful, with the base success rate so low that even those who do better still succeed less than 50% of the time?
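A hypothetical set of numbers makes the point concrete; the 5% base rate and the 3x multiplier below are assumptions, not data:

```python
# Even if rational founders triple an assumed 5% baseline success rate,
# most of their ventures still fail.
base_rate = 0.05          # assumed success rate for a typical venture
rational_rate = 0.15      # assumed (tripled) rate for a rational founder

attempts = 10
expected_successes = rational_rate * attempts
expected_failures = (1 - rational_rate) * attempts
print(expected_successes, expected_failures)  # 1.5 successes vs 8.5 failures
```

So a long track record that is mostly failures is fully consistent with the founder being far more effective than the base rate.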
Replies from: Luke_A_Somers↑ comment by Luke_A_Somers · 2013-04-10T18:17:54.826Z · LW(p) · GW(p)
Alternately, what if the business equivalent of rapid prototyping is the optimal strategy? Giving up early enough that you can move on to something else can be the best option.
comment by [deleted] · 2013-04-10T03:47:49.347Z · LW(p) · GW(p)
In short, she had to be fairly rational, at least in some domains of her life.
Not a comment about Oprah. Repeat, not a comment about Oprah. Once more: not a comment about Oprah.
But it is a comment about the idea that rationality leads to success. Deception and violence also lead to success. These problem solvers are systematized winning: IF (application of fraud) THEN (goal met) ELSE (blame others). "Violence isn't the only answer, but it is the final answer." - Jack Donovan. Violence and deception are social skills. Where talking-rationality and winning-rationality come apart, these two are means to winning-rationality.
"When you are not practicing, remember, someone somewhere is practicing, and when you meet him he will win." - Ed Macauley. Practice the means to not be taken in by deception or undone by violence. Of course neither I nor anyone anywhere would advocate deceiving others or being violent, ever, under any circumstances, no matter how rational and life-saving and winning.
Replies from: khafra↑ comment by khafra · 2013-04-10T11:55:15.589Z · LW(p) · GW(p)
Deception and violence also lead to success. These problem solvers are systemized winning.
Martial-Art-Of-Rationality-Wise, this reminds me of people in epistemically vicious arts who say that western boxers couldn't beat them "on the street," because they could just gouge their eyes, bite them, and kick them in the cojones. It turns out that, if a strategy is available to everyone, it gets exploited until it's no longer an overwhelming advantage.
Whether that's because everyone's increased their use of violence and deception, or because they've coordinated to lower the marginal effectiveness of an additional unit of violence, is immaterial. Either way, violence and deception aren't a $20 bill lying on the ground, waiting for someone to pick it up. That wouldn't be a Nash equilibrium.
comment by Bugmaster · 2013-04-10T19:44:54.936Z · LW(p) · GW(p)
Which of these three options do you think will train rationality (i.e. systematized winning, or "winning-rationality") most effectively?
One of these things is not like the others. I can read the Sequences for free, and I can attend a workshop relatively cheaply, but the time and money investments into a startup are quite significant. Most people cannot afford them. Eating cake is not an option for them; they can barely afford bread.
comment by Manfred · 2013-04-10T02:39:06.962Z · LW(p) · GW(p)
One simplification I think you're making that raises some problems is money. Why Oprah? Why not Charles Wuorinen, who makes excellent musical decisions and has many learned skills? Who has access to tight feedback loops as soon as someone else listens to what he writes? Who is really good at what he does? Because Oprah's skills are better for collecting slips of green paper.
Now, one can collect quite a few slips of green paper and still, say, suffer from depression, or just generally be unhappy. Perhaps we could even claim that Wuorinen is happier than Oprah - I wouldn't know, but it doesn't sound outlandish. But you, you are someone trying to save the world, and you have excellent uses for slips of green paper. So maybe you were just focused on the skills related to slips of green paper because of what you (or other world-savers) could do with those skills.
And so I propose a definition of tacit rationality that takes this into account: the skills that would be highly valuable to you.
comment by gothgirl420666 · 2013-04-11T17:26:20.994Z · LW(p) · GW(p)
Explicit rationality can help you realize that you should force tight feedback loops into whichever domains you want to succeed in, so that you can develop good intuitions about how to succeed in those domains.
A realization:
PUA, or at least what seems to me to be the core concept of some schools of PUA, makes a ton of sense when viewed in this light. Trying to pick up a stranger in a bar is probably the tightest feedback loop possible for social skills, and like the OP says, social skills are massively important for success and happiness. Therefore, going out every Friday night and brazenly flirting with as many attractive people as possible seems like an incredibly good way to rapidly improve your life and chances of success. I was always baffled and repelled by the "self-actualization teachings disguised as advice on how to get laid" nature of some PUA, but now it seems really really desirable.
comment by Kaj_Sotala · 2013-04-14T19:01:02.237Z · LW(p) · GW(p)
Another interesting example of the utility of tight feedback loops, this time as applied to education, is extreme apprenticeship. I've been taking one math class built around the XA method, and it has felt considerably more useful and rewarding than ordinary math classes.
Among other things, XA employs bidirectional feedback loops - student-to-teacher and teacher-to-student. Students are given a lot of exercises to do from day one, but the exercises are broken into small chunks so that the students can get a constant sense of making progress, and so that they can clearly articulate the thing that they didn't understand in case they run into trouble. While students can just do the exercises by themselves if they wish, there are also scheduled exercise sessions during which constant help is available. When the exercises do get done, they are checked by the teaching staff and the students are requested to redo them with corrections in case there are major flaws.
Because the exercises are also returned each week, the teaching staff gets constant feedback on the things that the students are having difficulties with, and the content of the lectures can be modified on the fly. In general, lectures are kept to a minimum, and tend to build on content that the students already learned from the exercises rather than introduce entirely new material.
comment by NancyLebovitz · 2013-04-12T22:03:53.513Z · LW(p) · GW(p)
Hypothesis for what tacit rationality might be: glomming onto accurate premises about what actions are likely to achieve one's goals without having a conscious process for how one chooses premises.
comment by atucker · 2013-04-10T02:35:55.651Z · LW(p) · GW(p)
I think it would probably be worth going into a bit more about what delineates tacit rationality from tacit knowledge. Rationality seems to me to apply to things that you can reflect about, and so the concept of things that you can reflect about but can't necessarily articulate seems weird.
For instance, at first it wasn't clear to me that working at a startup would give you any rationality-related skills except insofar as it gives you instrumental rationality skills, which could possibly just be explained as better tacit knowledge -- you know a bajillion more things about the actual details necessary to run a business and make things happen.
There are actually a ton of potential powerups from running a startup that go beyond tacit knowledge, though! Ones that probably even engage reflection!
For instance, a person could learn what it feels like when they're about to be too tired to work for the rest of the day, and learn to stop before then so that they can avoid burnout. This would be a reflective skill (noticing a particular sensation of tiredness), and yet it would be nigh impossible to articulate (can you describe what it feels like to be almost unable to work, well enough that I could detect it in myself?).
comment by roland · 2013-04-11T00:10:32.115Z · LW(p) · GW(p)
I object to the Mark Zuckerberg and Oprah examples:
I would ascribe Oprah's success more to her being a charismatic personality and communicator. And Zuckerberg didn't even make FB himself; he paid some guys to make it. The idea of FB wasn't new (there were Orkut and MySpace); it's just the one that ended up taking off, and that was due to the good implementation: FB was always simple, fast, and snappy. I don't know whether Zuckerberg was the one who drove this point, or whether he just had good people on the technical side who understood what matters. The same can be said of Google: it was just one simple idea, the PageRank algorithm, that's all. They even wanted to sell it to Yahoo for a couple million dollars before it became the success it is today.
If you want good examples of rationality in business, I would vote for Warren Buffett applying value investing. This is really a guy with a consistent track record of beating the market for decades. Another is Ray Dalio. Dalio has published his principles, and if you ever read them you will see that they match very well with what is preached on LW. Both have followed certain principles consistently over decades. Charlie Munger (Buffett's partner) has also written down his thinking; very rational stuff.
comment by TheMatrixDNA · 2013-04-12T06:48:18.106Z · LW(p) · GW(p)
What means "allowing "the energy of the universe" to lead her." ? We can make an analogy. Imagine a fetus inside the womb. The mother's womb is the energy of the universe leading the fetus. Normally, fetus have no rationality and free will, they can't do nothing. But adult humans are different, they can drive their destiny, taking decisions. Since that fetus do nothing he will be successful in relation to its mother's state but, if its mother is poor, ugly, living at non-hospitable location, he will be not successful in relation to its destiny. Oprah must had to be a collaborator when molested and learned survivor bias and maybe got something as reward by the molesters. To her, the molesters were the energy of the Universe and one must be submitted to it, dancing this music, for to be rewarded. Then, " give to people what they want".
But... did Oprah do the right thing? Yes, if we are merely animals; no, if we have a post-death existence, whether as a soul or as something genetically imprinted into our offspring. For the Earth's biosphere is the womb of human beings, and it is a product of chaos due to the fall of our ancestors' non-biological astronomical systems (see the Matrix/DNA models for an explanation). The biosphere's laws are not the laws of the Universe in its ordered state. I think that if I were a fetus with intelligence, knowing that my mother's womb was making me poor and ugly in order to make me a slave, I would fight against these "universal energies" and be born different from my parents and their immediate world, in which I would not be successful. But I would have done right relative to my long-term destiny... and if there is a long-term destiny, Oprah is doing everything wrong...