A Call for Constant Vigilance
post by katydee · 2013-04-03T09:52:06.181Z
Related to: What Do We Mean By "Rationality?"
Rationality has many facets, both relatively simple and quite complex. As a result, it can often be hard to determine what aspects of rationality you should or shouldn't stress.
An extremely basic and abstract model of how rationality works might look a little something like this:
- Collect evidence about your environment from various sources
- Update your model of reality based on evidence collected (optimizing the updating process is more or less what we know as epistemic rationality)
- Act in accordance with what your model of reality indicates is best for achieving your goals (optimizing the actions you take is more or less what we know as instrumental rationality)
- Repeat continually forever
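Concretely, here is a minimal runnable sketch of that loop under invented assumptions (a biased coin as the "environment", a Beta posterior as the "model", and betting on flips as the "goal" -- none of which come from the post):

```python
import random

# A toy instance of the loop above: an agent repeatedly bets on flips
# of a biased coin. It acts on its current model (instrumental step),
# observes the outcome (evidence collection), and updates a Beta
# posterior over the coin's bias (epistemic step), repeating forever
# (here, for 1000 rounds). All specifics are invented for illustration.

TRUE_BIAS = 0.7           # P(heads); unknown to the agent
heads, tails = 1, 1       # Beta(1, 1) prior over the bias
correct, trials = 0, 1000

for _ in range(trials):
    # Act on the model: bet on whichever side currently looks more likely.
    bet = "H" if heads / (heads + tails) >= 0.5 else "T"
    # Collect evidence: observe the actual flip.
    flip = "H" if random.random() < TRUE_BIAS else "T"
    correct += bet == flip
    # Update the model of reality based on the evidence.
    heads += flip == "H"
    tails += flip == "T"

print(f"hit rate: {correct / trials:.2f}")  # converges toward 0.70
```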
29 comments
Comments sorted by top scores.
comment by RomeoStevens · 2013-04-03T18:28:41.846Z
A few others I have spoken with and I have noticed a "level up" effect. That is, you grind away at this stuff, and one day you suddenly notice that you are noticing and applying the lessons much more effortlessly than before. It feels awesome and is worth striving for.
↑ comment by Shmi (shminux) · 2013-04-03T20:33:56.191Z
Yes, it does feel awesome. This discontinuity of the effort -> outcome map ([almost] nothing... nothing... nothing... jump!) is to me an instance of the Hegelian/Marxian quantity->quality conversion, something that jumps out at me again and again in different contexts. I wonder if there is a way to formalize it.
↑ comment by moridinamael · 2013-04-04T01:46:31.335Z
I wish that I understood this post. I am upvoting you in the hopes that you feel obligated to explain further.
↑ comment by Viliam_Bur · 2013-04-04T09:31:05.412Z
My understanding of the "quantity to quality conversion" phrase is that in many situations the relation between some inputs and outputs is not linear. More specifically, there are many situations where the relation seems linear at the beginning, but at some later point the increase in outputs becomes incredibly huge (incredibly = for people who based their models on extrapolating the linear relationship at the beginning). Even more specifically, you can have one input "A" that has an obvious effect on "X", but almost zero effect on "Y" and "Z". Then at some moment, with additional increases of "A", "Y" and "Z" also start growing (which was totally unexpected under the old model).
Specific example: You start playing piano. At the beginning, it feels like it has a simple linear impact on your life. You spend 1 hour playing piano, you get the ability to play one simple song quite well. You spend 2 hours playing piano, you get the ability to play another simple song quite well. Extrapolate this, and you get a model. According to this model, after spending 80000 hours playing piano, you would expect to be able to play 80000 simple songs quite well. -- What happens in reality is that you get the ability to play any simple song well just by looking at the sheet music, the ability to play very complex music, the ability to make money by playing music; you become famous, get a lot of social capital, lots of friends, lots of sex, lots of drugs, etc. (Both non-linear outputs, and outputs not predicted by the original model.)
A similar pattern appears in many different situations, so some people invented a mysterious-sounding phrase to describe it. Now it seems like some law of nature. But maybe it is just a selection effect (some situations develop like this, and we notice "oh, the law of quantity to quality conversion"; other situations don't, and we ignore them).
In other words, "quantity" seems to mean "linear model", "quality" means "model", and the whole phrase decoded means "if you change variables enough, you may notice that the linear model does not reflect reality well (especially in situations where the curve starts growing slowly, and then it grows very fast)".
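As a toy illustration of that decoding (all numbers here are invented for the sketch, not taken from the comment): fit a linear model to early returns on invested hours, then compare it with a compounding curve.

```python
import math

# Invented-for-illustration numbers: suppose the true return on hours
# invested compounds, value(h) = e^(h/20) - 1, which looks nearly
# linear for small h. A linear model fitted to the first 10 hours
# underpredicts later returns by orders of magnitude -- the "quantity
# to quality" surprise is just this linear model breaking down.

def true_value(hours):
    return math.exp(hours / 20) - 1

slope = true_value(10) / 10   # linear model from the first 10 hours

for h in (10, 50, 100, 200):
    print(f"{h:3d} h: linear model {slope * h:8.1f}, actual {true_value(h):8.1f}")
```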
↑ comment by Shmi (shminux) · 2013-04-04T16:33:44.841Z
I was more after some discontinuity than a simple nonlinearity, like a quadratic or even an exponential dependence. And you are right, the selection effect is at work, but it's not a negative in this case. We want to select similar phenomena and find a common model for them, in order to be able to classify new phenomena as potentially leading to the same effects.
For example, if you look at some new hypothetical government policy which legislates indexing the minimum savings account rate to, say, inflation, you should be able to tell whether, after a sizable chunk of people shift their savings to this guaranteed investment, the inflation rate will suddenly skyrocket (this has happened before in some countries).
Or if you connect billions of computers together, whether it will give rise to a hive mind which takes over the world (it has not happened, despite some dire predictions, mostly in fictional scenarios).
Another example: if you are trying to "level up", what factors would hasten this process, so that you don't have to spend 10k hours mastering something, but only, say, 1000?
If you pay attention to this leveling effect happening in various disparate areas, you might get your clues from something like stellar formation, where increasing metallicity significantly decreases the mass required for a star to form (a dust cloud "leveling up").
Classifying, modeling and constructing successful predictions for this "quantity to quality conversion" would be a great example of useful applied philosophy.
↑ comment by TheOtherDave · 2013-04-04T17:15:08.870Z
There are (at least) two different things going on here that I think it's valuable to separate.
One is, as you say, the general category of systems whose growth rate, expressed in delivered value, "skyrockets" in some fashion (positive or negative) at an inflection point unexpected given our current model. I don't know if that's actually a useful reference class for analysis (that is, I don't know if an analysis of the causes of, say, runaway inflation will increase our understanding of the causes of, say, a runaway greenhouse effect), any more than the class of systems with linear growth rates is, but I'll certainly agree that our ability to not be surprised by such systems when we encounter them is improved by encountering other such systems (that is, studying runaway inflation may teach me not to simply assume that the greenhouse effect is linear).
The other has to do with perceptual thresholds and just-noticeable differences. I may experience a subjective "quantity to quality" transition just because a threshold is crossed that makes me pay attention, even if there's no significant inflection point in the growth curve of delivered value.
↑ comment by Shmi (shminux) · 2013-04-04T17:25:17.498Z
I don't know if that's actually a useful reference class for analysis
I don't know, either, but I feel that some research in this direction would be justified, given the potential payoff.
The other has to do with perceptual thresholds and just-noticeable differences.
This might, in fact, be one of the models: the metric being observed hides the "true growth curve". So a useful analysis, assuming it generalizes, would point to a more sensitive metric.
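A minimal sketch of that model, with every number invented: an underlying skill that grows smoothly, observed only through success or failure at a task of fixed difficulty, shows exactly the nothing... nothing... jump! pattern; logging the skill itself would be the more sensitive metric.

```python
# Invented numbers throughout: the underlying skill grows smoothly,
# but the observable metric (success on a task of fixed difficulty)
# is a threshold on it, so the observer sees a discontinuity where
# none exists in the underlying curve.

def true_skill(week):
    return 1.05 ** week      # smooth compounding growth

THRESHOLD = 10.0             # difficulty of the task being watched

for week in range(0, 61, 10):
    outcome = "succeeds" if true_skill(week) >= THRESHOLD else "fails"
    print(f"week {week:2d}: skill {true_skill(week):5.2f} -> {outcome}")
```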
↑ comment by satt · 2013-04-04T14:14:05.158Z
Phase transitions?
↑ comment by Shmi (shminux) · 2013-04-04T15:08:12.389Z
Right, it works for a bunch of specific instances of this phenomenon, but how do you construct a model which describes both phase transitions and human learning (and a host of other similar effects in totally dissimilar substrates)?
↑ comment by katydee · 2013-04-03T18:35:38.575Z
Very interesting. What skills or practices have you noticed this "level up" associated with in particular?
↑ comment by RomeoStevens · 2013-04-03T21:10:02.136Z
Planning fallacy.
Being more automatically strategic.
Not falling for mysterious answers.
More consciously noticing the difference between positive/normative statements, or when they are mixed up.
More consciously noticing connotation.
Noticing yourself rationalizing.
There might be others that I'm not recalling; availability bias. :p
comment by someonewrongonthenet · 2013-04-04T22:25:10.449Z
You might be offended, angry, hurt, or otherwise emotionally compromised. Similarly, you might be sleepy, inebriated, hungry, or otherwise physically compromised. You might be overconfident in your ability to handle a certain type of problem or situation, and hence not bother to think of other ways that might work better.[1]
This is in principle good advice, but I'd like to add a note of caution here - I feel that most "rationalists" actually follow it too closely, and end up losing (and rationalists should win).
Evolutionary processes have produced a brain which has different specialized modules for dealing with different situations, and the "purpose" of these modules is more in line with instrumental rationality than with epistemic rationality. Consequently, a good epistemic rationalist must often suppress the contribution of many of these modules (overconfidence, emotion, etc.).
The instrumental rationalist, on the other hand, had better pay close attention to emotions and overconfidence. Don't forget Egan's law - given human cognitive limitations, someone who applies sound epistemic rationality to full effect is not going to behave too differently from the highly successful person next to them who does not care about epistemic rationality at all. In other words, subtracting within reasonable bounds the effects of luck and privilege, anyone you'd gladly trade most aspects of your life with is a superior instrumental rationalist, regardless of intelligence or learning.
Although I do think that, overall, instrumental rationality improves when epistemic rationality improves, I think that some of the tensions between them have the unfortunate result of making strong epistemic rationalists err in systematic ways when it comes to instrumental rationality.
What does this mean practically? It means you have emotions for a reason. The parts of your brain which generate emotion are the ones which are calibrated for social behavior. If you feel yourself getting angry, it is likely that the behaviors which anger produces (confronting the aggressor) will in fact produce a positive result. Similarly, if you are sad, sad behaviors (crying, seeking support or temporarily withdrawing from the social scene, depending on the situation) will likely produce a positive result.
The same goes for cognitive biases. The fundamental attribution error produces positive results because it's better to assume that actions are innate to people rather than a result of random circumstances, since the latter don't hold any predictive value. The action resulting from overconfidence bias (risk-taking) produces positive results as well. I can't even think of any biases that don't follow this pattern.
Behaviorally speaking, an instrumental rationalist should not correct a bias unless they have understood the reason the bias evolved and have adjusted the other variables accordingly. For example, if you are epistemically well calibrated in confidence, take care not to let that translate into instrumental underconfidence. I think the notion that the portions of your psyche which are useful when it comes to logic, reason, epistemic rationality, etc. will understand enough and react quickly enough to match the performance of systems which are specialized for this purpose is a bit misguided, and it is extremely important to let the appropriate systems guide behavior when it comes to instrumental rationality.
Caveat - Of course, your brain is designed to make viable offspring in the ancestral environment. 1) The environment has changed and 2) your goal isn't necessarily to have offspring. But still - there is a good deal of overlap between the two utility functions.
↑ comment by A1987dM (army1987) · 2013-04-05T12:13:28.634Z
subtracting within reasonable bounds the effects of luck and privilege
That sounds like an overwhelming exception to me.
↑ comment by someonewrongonthenet · 2013-04-05T19:56:59.990Z
Yes, it is an overwhelming exception. In the real world these differences always exist, and you'll have to use your intuition to correct for them.
I'm trying to construct the least convenient possible world: two randomly selected people are pulled from a crowd and given the same, luckless task, and one does better. Existing differences in brain biology, priors, and previously gained knowledge still apply, while differences in resources and non-brain-related biology should be factored out. In these unnatural conditions, when it comes to that specific task, the one who did better is by definition a superior instrumental rationalist.
↑ comment by A1987dM (army1987) · 2013-04-06T19:45:11.090Z
Agreed, though actually I would call a world where, if people who chew gum get more throat abscesses, one could reliably conclude that refraining from chewing gum is the right choice to prevent throat abscesses, a more convenient world than ours.
↑ comment by Viliam_Bur · 2013-04-05T09:28:45.233Z
given human cognitive limitations, someone who applies sound epistemic rationality to full effect is not going to behave too differently from the highly successful person next to them who does not care about epistemic rationality at all
If it increases the probability of winning like that highly successful irrational person, it's still worth doing. I mean, if an irrational person has a 20% chance of becoming highly successful, and a rationality training could increase it to 40%, then I would prefer to take that rationality training, even if the rewards for the "winners" in both categories are the same.
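(To spell out the arithmetic, with R and C as labels introduced here for illustration: if the reward for becoming highly successful is R and the training costs C, expected value rises from 0.2 × R without training to 0.4 × R - C with it, so the training is worth taking whenever C < 0.2 × R, even though the winners' reward R is identical in both cases.)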
But yes, we should remember that we run on human hardware, so that we don't consistently overestimate the benefits of learning some rationality. Ideas which would work great for a self-improving AI may have a less impressive impact on sapient apes.
↑ comment by someonewrongonthenet · 2013-04-05T19:47:55.715Z
If it increases the probability of winning like that highly successful irrational person, it's still worth doing. I mean, if an irrational person has a 20% chance of becoming highly successful, and a rationality training could increase it to 40%, then I would prefer to take that rationality training, even if the rewards for the "winners" in both categories are the same.
The idea here is that even if "rationality training" (or even general intelligence) gives people an overall advantage, there is a possibility that there are systematic disadvantages in some areas which arise when a person repeatedly uses reason to override emotion and instinct.
Relying on reason and suppressing emotion and instinct is a cultural value, especially for people who call themselves "rationalists". We need to be aware of the pitfalls of doing that too much, because instrumentally speaking instinct and emotion do play a part in "computing" rational behavior.
comment by Shmi (shminux) · 2013-04-03T22:26:29.370Z
How has your own advice been working for you? Any examples would be great.
↑ comment by katydee · 2013-04-04T09:55:36.622Z
Can you be more specific as to what you mean? This question seems confused to me, but the fact that it's being upvoted means that others likely have similar questions, so I'd like to know as much as possible about what you're asking me before answering.
↑ comment by Shmi (shminux) · 2013-04-04T16:11:33.998Z
Presumably, you have noticed some of the issues you describe in your own behavior, not just in others (unless you are far more rational than everyone else). For example, you might have caught yourself "looking for new tricks", or forgetting to "repeat continually forever," or noticing only in retrospect that you were "emotionally compromised" in a certain situation, or some other pitfall you describe in your post. After realizing what happened, you (presumably) did what you preached: "practice and changing your mindset", and found that it worked for you personally after a while. For example, you may have noticed that your training paid off and you behaved much more rationally in a situation similar to one where you had previously lost your cool completely.
So, I asked you to share some examples where what you advocate actually worked for you.
↑ comment by katydee · 2013-04-05T10:31:51.893Z
Okay, I'll take a stab at answering. I'm kind of loath to do this because one of the main points of this post is that specific techniques are overemphasized and I think specific examples won't help with this, but perhaps a more expansive description on my part can avoid that pitfall.
In 2010, I read Patri Friedman's Self-Improvement or Shiny Distraction, which I consider to be an essentially correct indictment of things around here, or at least things around here circa 2010. This is the post that sort of jolted me out of complacency with regards to my own training.
In my experience with the martial arts, I consistently apply things that I've drilled a lot (to the point where it takes conscious effort to not do some things-- I was once called up to be a dummy for someone demonstrating a certain type of deceptive fencing attack and found it very difficult to not parry the attack, deception or no, since I had drilled the parry to that particular deception so often), I inconsistently apply things that I've drilled only a little, and I don't apply things that I haven't drilled at all.
Rationality is, in my experience, very much the same (others have noticed this too). I consistently apply thought patterns and principles that I've invested serious time and effort into drilling, I occasionally apply thought patterns and principles that I've thought about a fair amount but haven't put really serious effort into, and I don't apply thought patterns or principles that I've heard of but not really thought about. I'm actually rather embarrassed that I didn't notice this until reading Patri's post in 2010, but so it goes.
One example of a specific rationality skill that I have invested time and effort into drilling is keeping my identity small. I read a lot and I read fast, and hence when I was first linked to a Paul Graham essay I read all of his essays in one sitting. Keep Your Identity Small stuck with me the most, but for a while it was something I sort of believed in but hadn't applied. Here's some evidence of me not having applied it-- note the date.
However, at one point in early 2011 I noticed myself feeling personally insulted when someone was making fun of a group that I used to belong to, and more importantly I noticed that that was something that I wasn't supposed to do anymore. How could this be?
Well, quite frankly, it was because despite a high degree of theoretical knowledge about rationality, I lacked the practice hours required to be good at it. Unfortunately, most rationality skills are rare enough that knowing a little bit beyond a password-guessing level makes you seem very advanced relative to others. But rationality, except in certain competitive situations, isn't about being better than others; it's about being the best you can be.
So to make a long story short, I devised methods and put in the practice hours and got better, and now I actually know a few things instead of sort of knowing a few things. I winced at how low-level I used to be when I read that post from 2010, but all in all that's probably a good sign. After all, if I didn't think my old writing was silly and confused, wouldn't that indicate that I hadn't been progressing since then? Three years of progression should yield noticeably different results.
↑ comment by keefe · 2013-04-25T14:26:59.968Z
I spent a fair amount of time in martial arts and have a similar attitude toward the generalization of kata/forms. This idea is behind my consistent emphasis on the benefits of coding (particularly TDD) for this community: it builds thought patterns that are useful for tasks that computers typically perform better.
comment by A1987dM (army1987) · 2013-04-03T16:46:19.223Z
“Collect evidence about your environment from various sources; update your model of reality based on evidence collected; act in accordance with what your model of reality indicates is best for achieving your goals; repeat continually forever” would be a great candidate for The One Sentence.
↑ comment by katydee · 2013-04-03T20:34:57.672Z
Having the right goals is somewhat separate from (my view of) rationality in that rationality is a set of methods oriented towards achieving one's goals and can be applied to any sort of goal, right or wrong as that goal may be. While "selecting the right goals" can itself be a goal that you can use rationality to help with, in principle the methods of rationality can be applied to assist you in any goal.
One might (rightly) point out that applying the methods of rationality to goals that are not desirable may be hazardous for you or for those around you, but this is true for nearly any tool. Increasing one's ability to influence the world will always carry a risk of you influencing the world in a negative direction. Luckily, rationality can be used to help verify that what you're doing is likely to have positive consequences-- it is hence one of very few tools that can actually help the user use it better!