Compartmentalization in epistemic and instrumental rationality
post by AnnaSalamon · 2010-09-17T07:02:19.041Z · LW · GW · Legacy · 123 comments
Related to: Humans are not automatically strategic, The mystery of the haunted rationalist, Striving to accept, Taking ideas seriously
I argue that many techniques for epistemic rationality, as taught on LW, amount to techniques for reducing compartmentalization. I argue further that when these same techniques are extended to a larger portion of the mind, they boost instrumental, as well as epistemic, rationality.
Imagine trying to design an intelligent mind.
One problem you’d face is designing its goal.
Every time you designed a goal-indicator, the mind would increase action patterns that hit that indicator[1]. Amongst these reinforced actions would be “wireheading patterns” that fooled the indicator but did not hit your intended goal. For example, if your creature gains reward from internal indicators of status, it will act to increase those indicators -- including by such methods as surrounding itself with people who agree with it, or convincing itself that it has understood important matters others have missed. It would be hard-wired to act as though “believing makes it so”.
A second problem you’d face is propagating evidence. Whenever your creature encounters some new evidence E, you’ll want it to update its model of “events like E”. But how do you tell which events are “like E”? The soup of hypotheses, intuition-fragments, and other pieces of world-model is too large, and the creature’s processing too limited, to update each belief after each piece of evidence. Even absent wireheading-driven tendencies to keep rewarding beliefs isolated from threatening evidence, you’ll probably have trouble with accidental compartmentalization (where the creature doesn’t update relevant beliefs simply because your heuristics for what to update were imperfect).
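To make the first design problem concrete, here is a minimal, invented sketch (the actions, numbers, and names are all hypothetical, not anything from the post): a toy agent whose action propensities are reinforced only by an internal indicator will, over time, come to favor the action that inflates the indicator rather than the one that advances the designer's intended goal.

```python
import random

# Hypothetical toy example: each "action" changes the world and the agent's
# internal goal-indicator.  "flatter_self" raises the indicator without
# advancing the intended goal -- a wireheading pattern.
ACTIONS = {
    # action: (change to intended goal, change to internal indicator)
    "do_real_work":  (1.0, 0.6),
    "flatter_self":  (0.0, 1.0),   # fools the indicator, hits no real goal
    "idle":          (0.0, 0.0),
}

weights = {a: 1.0 for a in ACTIONS}   # learned action propensities
true_goal_progress = 0.0
learning_rate = 0.5

def pick_action():
    """Sample an action in proportion to its learned propensity."""
    total = sum(weights.values())
    r = random.uniform(0, total)
    for action, w in weights.items():
        r -= w
        if r <= 0:
            return action
    return action

random.seed(0)
for step in range(2000):
    action = pick_action()
    goal_delta, indicator_delta = ACTIONS[action]
    true_goal_progress += goal_delta
    # Reinforcement uses only the internal indicator, not the designer's goal:
    weights[action] += learning_rate * indicator_delta

print("learned propensities:", {a: round(w, 1) for a, w in weights.items()})
print("true goal progress:", true_goal_progress)
# "flatter_self" ends up with the largest propensity, because the indicator --
# not the intended goal -- is what gets reinforced.
```

The only point of the toy model is that reinforcement ever sees the indicator, never the intended goal, so any action that cheaply inflates the indicator gets strengthened.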
Evolution, AFAICT, faced just these problems. The result is a familiar set of rationality gaps:
I. Accidental compartmentalization
a. Belief compartmentalization: We often fail to propagate changes to our abstract beliefs (and we often make predictions using un-updated, specialized components of our soup of world-model). Thus, learning modus tollens in the abstract doesn’t automatically change your answer to the Wason card test. Learning about conservation of energy doesn’t automatically change your fear when a bowling ball is hurtling toward you. Understanding that there aren’t ghosts doesn’t automatically change your anticipations in a haunted house. (See Will's excellent post Taking ideas seriously for further discussion.)
b. Goal compartmentalization: We often fail to propagate information about what “losing weight”, “being a skilled thinker”, or other goals would concretely do for us. We also fail to propagate information about what specific actions could further these goals. Thus (absent the concrete visualizations recommended in many self-help books) our goals fail to pull our behavior, because although we verbally know the consequences of our actions, we don’t visualize those consequences on the “near-mode” level that prompts emotions and actions.
c. Failure to flush garbage: We often continue to work toward a subgoal that no longer serves our actual goal (creating what Eliezer calls a lost purpose). Similarly, we often continue to discuss, and care about, concepts that have lost all their moorings in anticipated sense-experience.
II. Reinforced compartmentalization:
Type 1: Distorted reward signals. If X is a reinforced goal-indicator (“I have status”; “my mother approves of me”[2]), thinking patterns that bias us toward X will be reinforced. We will learn to compartmentalize away anti-X information.
The problem is not just conscious wishful thinking; it is a sphexish, half-alien mind that distorts your beliefs by reinforcing motives, angles of approach or analysis, choices of reading material or discussion partners, etc., so as to bias you toward X, and to compartmentalize away anti-X information.
Impairment to epistemic rationality:
- “[complex reasoning]... and so my past views are correct!” (if I value “having accurate views”, and so I’m reinforced for believing my views accurate)
- “... and so my latest original theory is important and worth focusing my career on!” (if I value “doing high-quality research”)
- “... and so the optimal way to contribute to the world, is for me to continue in exactly my present career...” (if I value both my present career and “being a utilitarian”)
- “... and so my friends’ politics is correct.” (if I value both “telling the truth” and “being liked by my friends”)
Impairment to instrumental rationality:
- “... and so the two-fingered typing method I’ve used all my life is effective, and isn’t worth changing” (if I value “using effective methods” and/or avoiding difficulty)
- “... and so the argument was all his fault, and I was blameless” (if I value “treating my friends ethically”)
- “... and so it’s because they’re rotten people that they don’t like me, and there’s nothing I might want to change in my social habits.”
- “... and so I don’t care about dating anyhow, and I have no reason to risk approaching someone.”
Type 2: “Ugh fields”, or “no thought zones”. If we have a large amount of anti-X information cluttering up our brains, we may avoid thinking about X at all, since considering X tends to reduce compartmentalization and send us pain signals. Sometimes, this involves not-acting in entire domains of our lives, lest we be reminded of X.
Impairment to epistemic rationality:
- We find ourselves just not-thinking about our belief’s real weak points, until we’re worse at such thinking than an unbiased child.
- If we notice inconvenient possibilities, we just somehow don’t get around to following them up;
- If a subject is unusually difficult and confusing, we may either avoid thinking about it at all, or rush rapidly to a fake “solution”. (And the more pain we feel around not understanding it, e.g. because the subject is important to us, the more we avoid thoughts that would make our non-knowledge salient.)
Impairment to instrumental rationality:
- Many of us avoid learning new skills (e.g., taking a dance class, or practicing social banter), because practicing them reminds us of our non-competence, and sends pain signals.
- The longer we’ve avoided paying a bill, starting a piece of writing, cleaning out the garage, etc., the harder it may be to think about the task at all (if we feel pain about having avoided it);
- The more we care about our performance on a high-risk task, the harder it may be to start working on it (so that the highest value tasks, with the most uncertain outcomes, are those we leave to the last minute despite the expected impact of such procrastination);
- We may avoid making plans for death, disease, break-up, unemployment, or other unpleasant contingencies.
Type 3: Wireheading patterns that fill our lives, and prevent other thoughts and actions. [3]
Impairment to epistemic rationality:
- We often spend our thinking time rehearsing reasons why our beliefs are correct, or why our theories are interesting, instead of thinking new thoughts.
Impairment to instrumental rationality:
- We often take actions to signal to ourselves that we have particular goals, instead of acting to achieve those goals. For example, we may go through the motions of studying or working, and feel good about our diligence, while paying little attention to the results.
- We often take actions to signal to ourselves that we already have particular skills, instead of acting to acquire those skills. For example, we may prefer to play games against folks we often beat, request critiques from those likely to praise our abilities, rehearse yet more projects in our domains of existing strength, etc.
Strategies for reducing compartmentalization:
A huge portion of both Less Wrong and the self-help and business literatures amounts to techniques for integrating your thoughts -- for bringing your whole mind, with all your intelligence and energy, to bear on your problems. Many fall into the following categories, each of which boosts both epistemic and instrumental rationality:
1. Something to protect (or, as Napoleon Hill has it, definite major purpose[4]): Find an external goal that you care deeply about. Visualize the goal; remind yourself of what it can do for you; integrate the desire across your mind. Then, use your desire to achieve this goal, and your knowledge that actual inquiry and effective actions can help you achieve it, to reduce wireheading temptations.
2. Translate evidence, and goals, into terms that are easy to understand. It’s more painful to remember “Aunt Jane is dead” than “Aunt Jane passed away” because more of your brain understands the first sentence. Therefore use simple, concrete terms, whether you’re saying “Aunt Jane is dead” or “Damn, I don’t know calculus” or “Light bends when it hits water” or “I will earn a million dollars”. Work to update your whole web of beliefs and goals.
3. Reduce the emotional gradients that fuel wireheading. Leave yourself lines of retreat. Recite the litanies of Gendlin and Tarski; visualize their meaning, concretely, for the task or ugh field bending your thoughts. Think through the painful information; notice the expected update, so that you need not fear further thought. On your to-do list, write concrete "next actions", rather than vague goals with no clear steps, to make the list less scary.
4. Be aware of common patterns of wireheading or compartmentalization, such as failure to acknowledge sunk costs. Build habits, and perhaps identity, around correcting these patterns.
I suspect that if we follow up on these parallels, and learn strategies for decompartmentalizing not only our far-mode beliefs, but also our near-mode beliefs, our models of ourselves, our curiosity, and our near- and far-mode goals and emotions, we can create a more powerful rationality -- a rationality for the whole mind.
[1] Assuming it's a reinforcement learner, temporal difference learner, perceptual control system, or similar.
[2] We receive reward/pain not only from "primitive reinforcers" such as smiles, sugar, warmth, and the like, but also from many long-term predictors of those reinforcers (or predictors of predictors of those reinforcers, or...), such as one's LW karma score, one's number theory prowess, or a specific person's esteem. We probably wish to regard some of these learned reinforcers as part of our real preferences.
[3] Arguably, wireheading gives us fewer long-term reward signals than we would achieve from its absence. Why does it persist, then? I would guess that the answer is not so much hyperbolic discounting (although this does play a role) as local hill-climbing behavior; the simple, parallel systems that fuel most of our learning can't see how to get from "avoid thinking about my bill" to "genuinely relax, after paying my bill". You, though, can see such paths -- and if you search for such improvements and visualize the rewards, it may be easier to reduce wireheading.
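As a hedged illustration of the local hill-climbing point in [3] (the states and reward numbers are invented for the example): a learner that only compares adjacent states never crosses the unpleasant dip between "avoid thinking about my bill" and "relaxed, bill paid", while even a short lookahead shows the full path is worth taking.

```python
# Hypothetical sketch: a greedy, one-step learner stays at the local optimum
# ("avoid bill"), while an agent that can look a few steps ahead sees the
# higher reward waiting on the far side of the dip.
states = ["avoid bill", "think about bill", "pay bill", "relaxed, bill paid"]
reward = [0.0, -1.0, -0.5, 5.0]   # immediate reward of being in each state

def greedy_step(i):
    """Move right only if the very next state feels better (local hill-climbing)."""
    if i + 1 < len(states) and reward[i + 1] > reward[i]:
        return i + 1
    return i

def lookahead_value(i, horizon):
    """Total reward of walking right for up to `horizon` steps from state i."""
    return sum(reward[i:i + horizon + 1])

i_greedy = 0
for _ in range(5):
    i_greedy = greedy_step(i_greedy)

print("greedy learner ends in:", states[i_greedy])          # stays at "avoid bill"
print("value of staying put:", lookahead_value(0, 0))       # 0.0
print("value of walking through the dip:", lookahead_value(0, 3))  # 0.0 - 1.0 - 0.5 + 5.0 = 3.5
# The greedy rule never discovers the better end state, because every path to it
# begins with a step that feels worse.
```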
[4] I'm not recommending Napoleon Hill. But even this unusually LW-unfriendly self-help book seems to get most points right, at least in the linked summary. You might try reading the summary as an exercise in recognizing mostly-accurate statements when expressed in the enemy's vocabulary.
123 comments
Comments sorted by top scores.
comment by AnnaSalamon · 2010-09-17T17:53:42.028Z · LW(p) · GW(p)
rwallace, as mentioned by whpearson, notes possible risks from de-compartmentalization:
Human thought is by default compartmentalized for the same good reason warships are compartmentalized: it limits the spread of damage.... We should think long and hard before we throw away safety mechanisms, and compartmentalization is one of the most important ones.
I agree that if you suddenly let reason into a landscape of locally optimized beliefs and actions, you may see significant downsides. And I agree that de-compartmentalization, in particular, can be risky. Someone who believes in heaven and hell but doesn’t consider that belief much will act fairly normally; someone who believes in heaven and hell and actually thinks about expected consequences might have fear of hell govern all their actions.
Still, it seems to me that it is within the reach of most LW-ers to skip these downsides. The key is simple: the downsides from de-compartmentalization stem from allowing a putative fact to overwrite other knowledge (e.g., letting one’s religious beliefs overwrite knowledge about how to successfully reason in biology, or letting a simplified ev. psych overwrite one’s experiences of what dating behaviors work). The solution is thus to be really damn careful not to let new claims overwrite old data.
That is: Listen to everything you know, including implicit, near-mode beliefs and desires. Be careful not to block contrary intuitions from view. Be careful not to decide ahead of time that your verbal/symbolic beliefs are accurate and your near-mode mistaken, but to instead hug the query, ask where your intuitions are coming from, and keep your feelings and intuitions in view whether or not you know their source. Don’t let ideology or theory overwrite experience. And keep complex models (“evidence A points to X, while B would seem surprising if X were true...”), rather than rounding your evidence to its approximate conclusion.
Also, especially while you’re building these skills, ask yourself what most people would do in this circumstance, or what you would do with more compartmentalization. And then, if it seems like a better bet, do that thing. Eliezer discusses this as remembering that it all adds up to normality.
Someone should really write a top-level post about relevant safety skills. Phil’s was good; more would be better. Safety skills are important not only for reducing downsides, but also for allowing people to be less afraid, and so more able to acquire (the huge benefits of) rationality.
↑ comment by orthonormal · 2010-09-19T01:03:52.461Z · LW(p) · GW(p)
This reminds me:
When I finally realized that I was mistaken about theism, I did one thing which I'm glad of– I decided to keep my system of ethics until I had what I saw as really good reasons to change bits of it. (This kept the nihilist period I inevitably passed through from doing too much damage to me and the people I cared about, and of course in time I realized that it was enough that I cared about these things, that the universe wasn't requiring me to act like a nihilist.)
Eventually, I did change some of my major ethical beliefs, but they were the ones that genuinely rested on false metaphysics, and not the ones that were truly a part of me.
↑ comment by pjeby · 2010-09-17T18:05:21.722Z · LW(p) · GW(p)
The key is simple: the downsides from de-compartmentalization stem from allowing a putative fact to overwrite other knowledge (e.g., letting one’s religious beliefs overwrite knowledge about how to successfully reason in biology, or letting a simplified ev. psych overwrite one's experiences of what dating behaviors work). So, the solution is to be really damn careful not to let new claims overwrite old data.
This is leaving out the danger that realistic assessments of your ability can be hazardous to your ability to actually perform. People who over-estimate their ability accomplish more than people who realistically estimate it, and Richard Wiseman's luck research shows that believing you're lucky will actually make it so.
I think instrumental rationalists should perhaps follow a modified Tarski litany, "If I live in a universe where believing X gets me Y, and I wish Y, then I wish to believe X". ;-)
Actually, more precisely: "If I live in a universe where anticipating X gets me Y, and I wish Y, then I wish to anticipate X, even if X will not really occur". I can far/symbolically "believe" that life is meaningless and I could be killed at any moment, but if I want to function in life, I'd darn well better not be emotionally anticipating that my life is meaningless now or that I'm actually about to be killed by random chance.
(Edit to add a practical example: a golfer envisions and attempts to anticipate every shot as if it were going to be a hole-in-one, even though most of them will not be... but in the process, achieves a better result than if s/he anticipated performing an average shot. Here, X is the perfect shot, and Y is the improved shot resulting from the visualization. The compartmentalization that must occur for this to work is that the "far" mind must not be allowed to break the golfer's concentration by pointing out that the envisioned shot is a lie, and that one should therefore not be feeling the associated feelings.)
↑ comment by AnnaSalamon · 2010-09-17T18:10:34.350Z · LW(p) · GW(p)
I think instrumental rationalists should perhaps follow a modified Tarski litany, "If I live in a universe where believing X gets me Y, and I wish Y, then I wish to believe X". ;-)
Maybe. The main counter-argument concerns the side-effects of self-deception. Perhaps believing X will locally help me achieve Y, but perhaps the walls I put up in my mind to maintain my belief in X, in the face of all the not-X data that I am also needing to navigate, will weaken my ability to think, care, and act with my whole mind.
↑ comment by Will_Newsome · 2010-09-18T11:42:04.980Z · LW(p) · GW(p)
You can believe a falsity for sake of utility while alieving a truth for sake of sanity. Deep down you know you're not the best golfer, but there's no reason to critically analyze your delusions if believing so's been shown time and time again to make you a better golfer. The problems occur when your occupation is 'FAI programmer' or 'neurosurgeon' instead of 'golfer'. But most of us aren't FAI programmers or neurosurgeons, we just want to actually turn in our research papers on time.
It's not even really that dangerous, as rationalists can reasonably expect their future selves to update on evidence that their past-inherited beliefs aren't getting them utility (aren't true): by this theory, passive avoidance of rationality is epistemically safer than active doublethink (which might not even be possible, as Eliezer points out). If something forces you to really pay attention to your false belief then the active process of introspection will lead to it being destroyed by the truth.
Added: You know, now that I think about it more, the real distinction in question isn't aliefs and beliefs but instead beliefs and beliefs in beliefs; at least that's how it works when I introspect. I'm not sure if studies show that performance is increased by belief in belief or if the effect is limited to 'real' belief. Therefore my whole first paragraph above might be off-base; anyone know the literature? I just have the secondhand CliffsNote pop-psy version. At any rate the second paragraph still seems reasonably clever... which is a bad sign.
Double added: Mike Blume's post indicates my first paragraph may not have been off the mark. Belief in belief seems sufficient for performance enhancement. Actually, as far as I can tell, Blume's post really just kinda wins the debate. Also see JamesAndrix's comment.
↑ comment by pjeby · 2010-09-17T19:36:26.128Z · LW(p) · GW(p)
Maybe. The main counter-argument concerns the side-effects of self-deception. Perhaps believing X will locally help me achieve Y, but perhaps the walls I put up in my mind to maintain my belief in X, in the face of all the not-X data that I am also needing to navigate, will weaken my ability to think, care, and act with my whole mind.
Honestly, this sounds to me like compartmentalization to protect the belief that non-compartmentalism is useful, especially since the empirical evidence (both scientific experimentation and simple observation) is overwhelmingly in favor of instrumental advantages to the over-optimistic.
In any case, anticipating an experience has no truth value. I can anticipate having lunch now, for example; is that true or untrue? What if I have something different for lunch than I currently anticipate? Have I weakened my ability to think/care/act with my whole mind?
Also, if we are really talking about the whole mind, then one must consider the "near" mind as well as the "far" one... and they tend to be in resource competition for instrumental goals. To the extent that you think in a purely symbolic way about your goals, you weaken your motivation to actually do anything about them.
What I'm saying is, decompartmentalization of the "far" mind is all well and good, as is having consistency within the "near" mind, and in general, correlation of the near and far minds' contents. But there are types of epistemic beliefs that we have scads of scientific evidence to show are empirically dangerous to one's instrumental output, and should therefore be kept out of "near" anticipation.
↑ comment by Jonathan_Graehl · 2010-09-18T03:32:11.970Z · LW(p) · GW(p)
The level of mental unity (I prefer this to "decompartmentalization") that makes it impossible to focus productively on a learnable physical/computational performance task, is fortunately impossible to achieve, or at least easy to temporarily drop.
↑ comment by AnnaSalamon · 2010-09-17T21:52:49.141Z · LW(p) · GW(p)
(Edit to add a practical example: a golfer envisions and attempts to anticipate every shot as if it were going to be a hole-in-one, even though most of them will not be... but in the process, achieves a better result than if s/he anticipated performing an average shot. Here, X is the perfect shot, and Y is the improved shot resulting from the visualization. The compartmentalization that must occur for this to work is that the "far" mind must not be allowed to break the golfer's concentration by pointing out that the envisioned shot is a lie, and that one should therefore not be feeling the associated feelings.)
It seems to me there are two categories of mental events that you are calling anticipations. One category is predictions (which can be true or false, and honest or self-deceptive); the other is declarations, or goals (which have no truth-values). To have a near-mode declaration that you will hit a hole-in-one, and to visualize it and aim toward it with every fiber of your being, is not at all the same thing as near-mode predicting that you will hit a hole-in-one (and so being shocked if you don't, betting piles of money on the outcome, etc.). But you've done more experiments here than I have; do you think the distinction between "prediction" and "declaration/aim" exists only in far mode?
↑ comment by pjeby · 2010-09-18T02:19:40.051Z · LW(p) · GW(p)
not at all the same thing as near-mode predicting that you will hit a hole-in-one (and so being shocked if you don't, betting piles of money on the outcome, etc.).
To be clear, one is compartmentalizing - deliberately separating the anticipation of "this is what I'm going to feel in a moment when I hit that hole-in-one" from the kind of anticipation that would let you place a bet on it.
This example is one of only many where compartmentalizing your epistemic knowledge from your instrumental experience is a damn good idea, because it would otherwise interfere with your ability to perform.
do you think the distinction between "prediction" and "declaration/aim" exists only in far mode?
What I'm saying is that decompartmentalization is dangerous to many instrumental goals, since epistemic knowledge of uncertainty can rob you of necessary clarity during the preparation and execution of your actual action and performance.
To perform confidently and with motivation, it is often necessary to think and feel "as if" certain things were true, which may in fact not be true.
Note, though, that with respect to the declaration/prediction divide you propose, Wiseman's luck research doesn't say anything about people declaring intentions to be lucky, AFAICT, only anticipating being lucky. This expectation seems to prime unconscious perceptual filters as well as automatic motivations that do not occur when people do not expect to be lucky.
I suspect that one reason this works well for vague expectations such as "luck" is that the expectation can be confirmed by many possible outcomes, and is so more self-sustaining than more-specific beliefs would be.
We can also consider Dweck and Seligman's mindset and optimism research under the same umbrella: the "growth" mindset anticipates only that the learner will improve with effort over time, and the optimist merely anticipates that setbacks are not permanent, personal, or pervasive.
In all cases, AFAICT, these are actual beliefs held by the parties under study, not "declarations". (I would guess the same also applies to the medical benefits of believing in a personally-caring deity.)
↑ comment by Will_Newsome · 2010-09-21T20:11:23.718Z · LW(p) · GW(p)
What I'm saying is that decompartmentalization is dangerous to many instrumental goals, since epistemic knowledge of uncertainty can rob you of necessary clarity during the preparation and execution of your actual action and performance.
Compartmentalization only seems necessary when actually doing things; actually hitting golf balls or acting in a play or whatever. But during down time epistemic rationality does not seem to be harmed. Saying 'optimists' indicates that optimism is a near-constantly activated trait, which does sound like it would harm epistemic rationality. Perhaps realists could do as well as or better than optimists if they learned to emulate optimists only when actually doing things like golfing or acting, but switching to 'realist' mode as much as possible to ensure that the decompartmentalization algorithms are running at max capacity. This seems like plausible human behavior; at any rate, if realism as a trait doesn't allow one to periodically be optimistic when necessary, then I worry that optimism as a trait wouldn't allow one to periodically be realistic when necessary. The latter sounds more harmful, but I optimistically expect that such tradeoffs aren't necessary.
↑ comment by pjeby · 2010-09-21T23:53:52.404Z · LW(p) · GW(p)
Saying 'optimists' indicates that optimism is a near-constantly activated trait, which does sound like it would harm epistemic rationality. Perhaps realists could do as well as or better than optimists if they learned to emulate optimists only when actually doing things like golfing or acting,
I rather doubt that, since one of the big differences between the optimists and pessimists is the motivation to practice and improve, which needs to be active a lot more of the time than just while "doing something".
If the choice is between, say, reading LessWrong and doing something difficult, my guess is the optimist will be more likely to work on the difficult thing, while the purely epistemic rationalist will get busy finding a way to justify reading LessWrong as being on task. ;-)
Don't get me wrong, I never said I liked this characteristic of evolved brains. But it's better not to fool ourselves about whether it's better not to fool ourselves. ;-)
↑ comment by JGWeissman · 2010-09-17T22:12:57.020Z · LW(p) · GW(p)
a golfer envisions and attempts to anticipate every shot as if it were going to be a hole-in-one, even though most of them will not be... but in the process, achieves a better result than if s/he anticipated performing an average shot.
Suppose there is a lake between the tee and the hole, too big for the golfer to hit the ball all the way across. Should he envision/anticipate a hole in one, and waste his first stroke hitting the ball into the water, or should he acknowledge that this hole will take multiple strokes, and hit the ball around the lake?
↑ comment by pjeby · 2010-09-18T02:02:48.640Z · LW(p) · GW(p)
Suppose there is a lake between the tee and the hole, too big for the golfer to hit the ball all the way across. Should he envision/anticipate a hole in one, and waste his first stroke hitting the ball into the water, or should he acknowledge that this hole will take multiple strokes, and hit the ball around the lake?
Whatever will produce the better result.
Remember that the instrumental litany I proposed is, "If believing X will get me Y and I wish Y, then I wish to believe X." If believing I'll get a hole in one won't get me a good golf score, and I want to get a good score, then I wouldn't want to believe it.
↑ comment by wedrifid · 2010-09-18T03:31:58.172Z · LW(p) · GW(p)
Suppose there is a lake between the tee and the hole, too big for the golfer to hit the ball all the way across. Should he envision/anticipate a hole in one, and waste his first stroke hitting the ball into the water, or should he acknowledge that this hole will take multiple strokes, and hit the ball around the lake?
Depends. Do you want to win or do you want to get the girl?
↑ comment by Richard_Kennaway · 2010-09-19T22:39:24.701Z · LW(p) · GW(p)
Edit to add a practical example: a golfer envisions and attempts to anticipate every shot as if it were going to be a hole-in-one, even though most of them will not be... but in the process, achieves a better result than if s/he anticipated performing an average shot.
Really? That is, is that what the top golfers report doing, that the mediocre ones don't?
If so, I am surprised. Aiming at a target does not mean believing I'm going to hit it. Aiming at a target means aiming at a target.
↑ comment by pjeby · 2010-09-20T15:00:46.374Z · LW(p) · GW(p)
Really? That is, is that what the top golfers report doing, that the mediocre ones don't?
My understanding is that top golfers do indeed pre-visualize every strike, though I doubt they visualize or expect holes-in-one. AFAIK, however, they do visualize something better than what they can reasonably expect to get, and performance always lags the visualization to some degree.
Aiming at a target does not mean believing I'm going to hit it.
What I'm saying is that if you really aim at it, this is functionally equivalent to believing, in that you are performing the same mental prerequisites: i.e., forming a mental image which you are not designating false, and acting as if it is true. That is more or less what "belief" is, at the "near" level of thinking.
To try to be more precise: the "acting as if" here is not acting in anticipation of hitting the target, but acting so as to bring it about - the purpose of envisioning the result (not just the action) is to call on the near system's memory of previous successful shots in order to bring about the physical states (reference levels) that brought about the previous successes.
IOW, the belief anticipation here isn't "I'm going to make this shot, so I should bet a lot of money", it's, "I'm going to have made this shot, therefore I need to stand in thus-and-such way and use these muscles like so while breathing like this" and "I'm going to make this shot, therefore I can be relaxed and not tense up and ruin it by being uncertain".
↑ comment by Richard_Kennaway · 2010-09-20T16:14:12.912Z · LW(p) · GW(p)
It looks like a stretch to me, to call this a belief.
I've no experience of high-level golf, but I did at one time shoot on the county small-bore pistol team (before the law changed and the guns went away, but that's even more of a mind-killing topic than politics in general). When I aim at a target with the intention of hitting it, belief that I will or won't doesn't come into the picture. Thinking about what is going to happen is just a distraction.
A month ago I made the longest cycle ride I have ever done. I didn't visualise myself as having completed the ride or anything of that sort. I simply did the work.
Whatever wins, wins, of course, but I find either of the following more likely accounts of what this exercise of "belief" really is:
(1) What it feels like to single-mindedly pursue a goal.
(2) A technique to keep the mind harmlessly occupied and out of the way while the real work happens -- what a coach might tell people to do, to produce that result.
In terms of control theory, a reference signal -- a goal -- is not an imagined perception. It is simply a reference signal.
↑ comment by pjeby · 2010-09-20T20:15:30.386Z · LW(p) · GW(p)
It looks like a stretch to me, to call this a belief.
At which point, we're arguing definitions, because AFAICT the rest of your comment is not arguing that the process consists of something other than "forming a mental image which you are not designating false, and acting as if it is true." You seem to merely be arguing that this process should not be called "belief".
What is relevant, however, is that this is a process of compartmentalizing one's thinking, so as to ignore various facts about the situation. Whether you call this a belief or not isn't relevant to the main point: decompartmentalization can be hazardous to performance.
As far as I can tell, you are not actually disputing that claim. ;-)
↑ comment by Richard_Kennaway · 2010-09-20T20:36:16.357Z · LW(p) · GW(p)
You can't call black white and then say that to dispute that is to merely talk about definitions. "Acting as if one believes", if it means anything at all, must mean doing the same acts one would do if one believed. But you explicitly excluded betting on the outcome, a paradigmatic test of belief on LW.
Aiming at a target is not acting as if one were sure to hit the target. Visualising hitting the target is not acting as if one believes one will. These are different things, whatever they are called.
↑ comment by pjeby · 2010-09-20T20:44:45.482Z · LW(p) · GW(p)
You can't call black white and then say that to dispute that is to merely talk about definitions.
Even if you call it "froobling", it doesn't change my point in any way, so I don't see the relevance of your reply... which is still not disputing my point about compartmentalization.
↑ comment by JenniferRM · 2010-09-19T20:50:54.187Z · LW(p) · GW(p)
a golfer envisions and attempts to anticipate every shot as if it were going to be a hole-in-one, even though most of them will not be... but in the process, achieves a better result than if s/he anticipated performing an average shot... The compartmentalization that must occur for this to work is that the "far" mind must not be allowed to break the golfer's concentration by pointing out that the envisioned shot is a lie, and that one should therefore not be feeling the associated feelings.
I think maybe the problem is that different neurological processes are being taken as the primary prototype of "compartmentalization" by Anna and yourself.
Performance enhancing direction of one's attention so as not to be distracted in the N minutes prior to a critical performance seems much different to me than the way the same person might calculatingly speculate about their own performance three days in advance while placing a side bet on themselves.
Volitional control over the contents of one's working memory, with a thoughtful eye to the harmonization of your performance, your moment-to-moment mindstates, and your long-term mind-structures (like skills and declarative knowledge and such), seems like something that would help the golfer in both cases. In both cases there is some element of explicit calculating prediction (about the value of the bet or the golfing technique) that could be wrong, but whose rightness is likely to correlate with success in either the bet or the technique.
Part of the trick here seems to be that both the pro- and the anti-compartmentalization advice are abstract enough that both describe and might inspire good or bad behavior, and whether you think the advice is good or bad depends on which subsets of vaguely implied behavior are salient to you (based on skill estimates, typical situations, or whatever).
Rationalists, especially early on, still get hurt... they just shouldn't get hurt twice in the same way if they're "doing it right".
Any mistake should make you double check both the theory and its interpretation. The core claim of advocates of rationality is simply that there is a "there" there, that's worth pursuing... that seven "rational iterations" into a process, you'll be in much better position than if you'd done ten things "at random" (two of which were basically repetitions of an earlier mistake).
↑ comment by pjeby · 2010-09-20T15:10:21.323Z · LW(p) · GW(p)
In both cases there is some element of explicit calculating prediction (about the value of the bet or the golfing technique) that could be wrong, but whose rightness is likely to correlate with success in either the bet or the technique.
See Seligman's optimism research. Optimists out-perform pessimists and realists in the long run, in any task that requires motivation to develop skill. This strongly implies that an epistemically accurate assessment of your ability is a handicap to actual performance in such areas.
These kinds of research can't just be shrugged off with "seems like something that would help", unless you want to drop epistemic rationality along with the instrumental. ;-)
↑ comment by NancyLebovitz · 2010-09-23T13:18:09.255Z · LW(p) · GW(p)
I'm a fairly good calligrapher-- the sort of good which comes from lots of attentive hours, though not focused experiments.
I've considered it a blessing that my ambition was always just a tiny bit ahead of what I was able to do. If I'd been able to see the difference between what I could do when I started and what I'm able to do now (let alone what people who are much better than I am are able to do), I think I would have given up. Admittedly, it's a mixed blessing-- it doesn't encourage great ambition.
I hear about a lot of people who give up on making music because the difference between the sounds they can hear in their heads and the sounds they can produce at the beginning are simply too large.
In Effortless Mastery, Kenny Werner teaches thinking of every sound you make as the most beautiful sound, since he believes that the effort to sound good is a lot of what screws up musicians. I need to reread to see how he gets from there to directed practice, but he's an excellent musician.
I've also gotten some good results on being able to filter out background noise by using "this is the most beautiful sound I've ever heard" rather than trying to make out particular voices in a noisy bar.
Steve Barnes recommends high goal-setting and a minute of meditation every three hours to lower anxiety enough to pursue the goals. It's worked well for him and seems to work well for some people. I've developed an ugh field about my whole fucking life as a result of paying attention to his stuff, and am currently working on undoing it. Surprisingly, draining the certainty out of self-hatred has worked much better than trying to do anything about the hostility.
A quote about not going head-on against psychological defenses
↑ comment by pjeby · 2010-09-23T14:39:16.931Z · LW(p) · GW(p)
I've considered it a blessing that my ambition was always just a tiny bit ahead of what I was able to do. If I'd been able to see the difference between what I could do when I started and what I'm able to do now (let alone what people who are much better than I am are able to do), I think I would have given up. Admittedly, it's a mixed blessing-- it doesn't encourage great ambition.
That reminds me of another way in which more epistemic accuracy isn't always useful: projects that I never would have started/finished if I had realized in advance how much work they'd end up being. ;-)
↑ comment by Will_Newsome · 2010-09-18T11:27:11.072Z · LW(p) · GW(p)
(I did similarly with the Litany of Gendlin in my post):
If believing something that is false gets me utility,
I desire to believe in that falsity;
If believing something that is true gets me utility,
I desire to believe in that truth;
Let me not become attached to states of belief that do not get me utility.
↑ comment by jimrandomh · 2010-09-18T13:04:01.673Z · LW(p) · GW(p)
I wrote a slightly less general version of the Litany of Gendlin on similar lines, based on the one specific case I know of where believing something can produce utility:
If I can X,
then I desire to believe I can X
If believing that I can not X would make it such that I could not X,
and it is plausible that I can X,
and there are no dire consequences for failure if I X,
then I desire to believe I can X.
It is plausible that I can X.
There are no dire consequences for failure if I X.
The last two lines may be truncated off for some values of X, but usually shouldn't be.
↑ comment by Valentine · 2012-10-23T18:28:42.499Z · LW(p) · GW(p)
I've been wondering about this lately. I don't have a crisp answer as yet, though for practical reasons I'm definitely working on it.
That said, I don't think your golfer example speaks to me about the nature of the potential danger. This looks to me like it's highlighting the value of concretely visualizing goals in some situations.
Here are a few potential examples of the kind of phenomenon that nags at me:
- I'm under the impression that I'm as physically strong as I am because I learned early on how to use the try harder for physical tasks. I noticed when I was a really young kid that if I couldn't make something physically budge and then I doubled my effort, I still had room to ramp up my effort but the object often gave way. (I would regularly test this around age 7 by trying to push buildings over.) Today this has cashed out as simple muscular strength, but when I hit resistance I can barely manage to move (such as moving a "portable" dance floor) my first instinct is still to use the try harder rather than to find an easier way of moving the thing.
- This same instinct does not apply to endurance training, though. I do Tabata intervals and find my mind generating adamant reasons why three cycles is plenty. I attribute this to practicing thinking that I'm "bad at endurance stuff" from a young age.
- Possibly relatedly, I don't encounter injuries from doing weight-lifting at a gym, but every time I start a jogging regimen I get a new injury (iliotibial band syndrome, overstretching a tendon running inside my ankles, etc.). This could be coincidence, but it's a weird one, and oddly consistent.
- My impression is that I am emotionally capable of handling whatever I think I'm emotionally capable of handling, and conversely that I can't handle what I think I can't handle. For instance, when I'm in danger of being rejected in a social setting, I seem to have a good sense of whether that's going to throw me emotionally off-kilter (being upset, feeling really hurt, having a harder time thinking clearly, etc.) and if so by roughly how much. That counts as evidence that I'm just good at knowing the range of emotional impacts I can handle - but the thing is, I seem to be able to game this. If I change how I think about the situation, I'm able to increase or decrease the emotional impact it has on me. Not without bound, but pretty significantly.
- Whether I enjoy an outing with some friends seems to depend at least in part on my anticipation of how much fun we're going to have. If I get excited enough, it takes some pretty major setbacks to keep me from enjoying myself.
I also faintly remember having heard of some research showing that people who think that a puzzle has been solved are better-able to solve it than those who are told it's unsolved. But I could be misremembering this by quite a bit. I do know that some people speculate that the Manhattan Project might owe a lot of its success to rumors that the Nazis already had the bomb and that the Americans were playing catch-up before the Nazis could build one.
comment by darius · 2010-09-17T08:31:39.217Z · LW(p) · GW(p)
'Something to protect' always sounded to me like a term for a defensive attitude, a kind of bias; I have to remind myself it's LW jargon for something quite different. 'Definite major purpose' avoids this problem.
↑ comment by EchoingHorror · 2010-09-20T03:29:31.144Z · LW(p) · GW(p)
I think that, very basically, when it comes to ideas rationalists explicitly don't have anything to protect. Ideas are to be judged by their merits without interference. This has to include the Something to Protect that brought about rationality in the first place, because to the degree that thing isn't rational there is a contradiction in using rationality to protect irrationality, the defensive attitude and bias you mentioned.
Can "definite major purpose" avoid that problem (beyond sounding unlike what is meant)? I'd shorten it to "major purpose" or make it "prime directive" or "main quest" just to avoid anything definite. It should be subject to change with new information or better thinking while the rational methods used to achieve it stay the same.
comment by Kaj_Sotala · 2010-09-17T08:12:09.876Z · LW(p) · GW(p)
I find the analysis presented in this post to be exceptionally good, even by the standards of your usual posting.
↑ comment by Will_Newsome · 2010-09-17T09:30:43.156Z · LW(p) · GW(p)
Seconded: dense with useful content, unlike this comment.
↑ comment by Will_Sawin · 2010-09-17T14:05:49.482Z · LW(p) · GW(p)
If you quoted the most useful sentence of the post, your comment would be more than half as dense, which is still pretty dense.
↑ comment by Will_Newsome · 2010-09-17T23:25:43.280Z · LW(p) · GW(p)
But it would be redundant information, making the post/comment system overall less dense, which would make people sad.
↑ comment by AnnaSalamon · 2010-09-18T00:29:35.840Z · LW(p) · GW(p)
Not so -- re-emphasizing what points hit home, and how one plans to apply them, often helps the useful parts stand out for others. Self-help/business seminars standardly have attendees summarize takeaways, and what personal experiments they plan, after each session.
↑ comment by Will_Newsome · 2010-09-18T01:40:07.464Z · LW(p) · GW(p)
Good point.
↑ comment by Jonathan_Graehl · 2010-09-18T03:36:05.071Z · LW(p) · GW(p)
I find both your comments incredibly dense :)
comment by AnnaSalamon · 2010-09-18T02:30:56.794Z · LW(p) · GW(p)
For example, we may request critiques from those likely to praise our abilities...
In the spirit of learning and not wireheading, could a couple people for whom this post didn't work well explain what didn't work about it? A few folks praised it, but it seems to be getting fewer upvotes than other posts, and I'd love to figure out how to make posts that are widely useful.
↑ comment by Relsqui · 2010-09-18T03:25:12.403Z · LW(p) · GW(p)
Personally, I don't have the foundation in relevant knowledge to easily understand much of the post content, so I'm not qualified to vote on it one way or the other. I may come back later, when I do, and vote then.
↑ comment by AnnaSalamon · 2010-09-18T03:27:20.687Z · LW(p) · GW(p)
Thanks. Was the post useful to you, or just opaque?
↑ comment by Relsqui · 2010-09-18T04:37:51.414Z · LW(p) · GW(p)
Not entirely opaque, but like reading a language which you've learned the 200 most common words of, enabling you to understand 95% of a text and not come away with the point (because the key parts are in the other 5%). Not an error, just a reader mismatch; it wouldn't have been worth mentioning except that you asked.
↑ comment by MichaelVassar · 2010-09-18T07:23:08.549Z · LW(p) · GW(p)
Have you read the sequences yet? If not, can you suggest a good way to encourage people who haven't yet done so to do so?
↑ comment by Relsqui · 2010-09-18T19:19:25.183Z · LW(p) · GW(p)
After trying to figure out where the response would be best suited, I'm splitting the difference; I'll put a summary here, and if it's not obviously stupid and seems to garner comments, I'll post the full thing on its own.
I've read some of the sequences, but not all; I started to, and then wandered off. Here are my theories as to why, with brief explanations.
1) The minimum suggested reading is not just long, it's deceptively long.
The quantity by itself is a pretty big hurdle to someone who's only just developing an interest in its topics, and the way the sequences are indexed hides the actual amount of content behind categorized links. This is the wrong direction in which to surprise the would-be reader. And that's just talking about the core sequences.
2) Many of the sequences are either not interesting to me, or are presented in ways that make them appear not to be.
If the topic actually doesn't interest me, that's fine, because I presumably won't be trying to discuss it, either. But some of the sequence titles are more pithy than informative, and some of the introductory text is dissuasive where it tries to be inviting; few of them give a clear summary of what the subject is and who needs to read it.
3) Even the ones which are interesting to me contain way more information, or at least text, than I needed.
I don't think it's actually true that every new reader needs to read all of the sequences. I'm a bad example, because there's a lot in them I've never heard of or even thought about, but I don't think that's true of everyone who walks up to LW for the first time. On the other hand, just because I'd never heard of Bayes's Theorem by name doesn't mean that I need a huge missive to explain it to me. What I turned out to need was an example problem, the fact that the general form of the math I used to solve it is named after a guy called Bayes, and an explanation of how the term is used in prose. I was frustrated by having to go through a very long introduction in order to get those things (and I didn't entirely get the last one).
My proposal for addressing these is to create a single introductory page with inline links to glossary definitions, and from there to further reading. The idea is that more information is available up front and a new reader can more easily prioritize the articles based on their own knowledge and interest; it would also provide a general overview of the topics LW addresses. (The About page is a good introduction to the site, but not the subjects.) On a quick search, the glossary appears to have been suggested before but not yet exist--unless I just can't find it, in which case it's not doing much good. There are parts of this I'm not qualified to do, but I'd be happy to donate time to the ones that I am.
↑ comment by MichaelVassar · 2010-09-19T17:05:08.399Z · LW(p) · GW(p)
To be clear, do you actually think that time spent reading later posts has been more valuable than marginal time on the sequences would have been? To me that seems like reading Discover Magazine after dropping your intro to mechanics textbook because the latter seems to just tell you things that are obvious.
↑ comment by Relsqui · 2010-09-19T20:18:52.134Z · LW(p) · GW(p)
I think some of my time spent reading articles in the sequences was well spent, and the rest was split between two alternatives: 1) in a minority of cases where the reading didn't feel useful, it was about something I already felt I understood, and 2) in a majority of such cases, it wasn't connected to something I was already curious about.
It's explained a bit better in the longer version of the above comment (which now appears to be homeless). But I think the sequences, or at least the admonition to read them all, are targeted at someone who has done some reading or at least thinking about their subjects before. Not because they demand prior knowledge, but because they demand prior interest. You may have underestimated how much of a newbie you have on your hands.
It's not that I'm claiming to be so smart that I can participate fully in the discussions without reading up on the fundamentals, it's that participating or even just watching the discussion is the thing that's piquing my interest in the subjects in the first place. It feels less like asking me to read about basic physics before trying to set up a physics experiment, and more like asking me to read about music theory without ever having heard any music. It's just not as meaningful before having observed what it's good for--and even a highly talented and technical musician would admit that attending a performance with other people is more interesting than doing theory homework, even if they have a very clever theory teacher who makes the lessons into little stories.
Just to put this into perspective, I don't think any of the above is nearly as significant to my reading habits as the simple amount of material in the sequences. I do keep reading bits and pieces, but how much time in a day I'm able or even willing to focus on it is finite. I've spent a lot of time this week reading LW when I could have been out getting vitamin D or practicing the guitar, and at the current rate it would still take me quite a while to get through all the sequences (less, but not a trivial amount, to get through just the core sequences). That's a time commitment it's difficult to justify if I'm to make it before being allowed to discuss the ideas with human beings in the current blog.
I guess there are two theses here: that the sequences are good at bestowing information, but the current posts are better at garnering interest in them; and that the latter is simply more enjoyable, because it's interactive. (I, like some other commenters here, read LW as play, not work; if it weren't fun I wouldn't be here.) If you want to convince people to read the sequences before participating, those are your obstacles.
↑ comment by komponisto · 2010-09-20T06:07:51.791Z · LW(p) · GW(p)
and even a highly talented and technical musician would admit that attending a performance with other people is more interesting than doing theory homework, even if they have a very clever theory teacher who makes the lessons into little stories
I am struck by the inclusion of the seemingly unnecessary phrase "with other people", which suggests that your real interest is social in nature. And sure enough, you confirm this later in the comment:
That's a time commitment it's difficult to justify if I'm to make it before being allowed to discuss the ideas with human beings in the current blog.
and
[current posts are] simply more enjoyable, because it's interactive
It seems like an important point, and another argument in favor of additional (sub)forums. About that, I'm not sure what I think yet.
Incidentally, against the notion that attending performances is the most enjoyable part of the musical experience, here is Milton Babbitt on the subject:
"I can't believe that people really prefer to go to the concert hall under intellectually trying, socially trying, physically trying conditions, unable to repeat something they have missed, when they can sit at home under the most comfortable and stimulating circumstances and hear it as they want to hear it. I can't imagine what would happen to literature today if one were obliged to congregate in an unpleasant hall and read novels projected on a screen."
↑ comment by Relsqui · 2010-09-20T06:25:27.366Z · LW(p) · GW(p)
suggests that your real interest is social in nature
Well, to say it's my "real" interest suggests that my interest in rationality is fake, which is false, but I am indeed a very social critter and a lot of the appeal of LW is being able to discuss, not just absorb. (I even get shiny karma points for doing it well!)
So, yes--and I was actually realizing that myself over the course of writing that comment (which necessarily involved thinking about why I'm here).
It seems like an important point
Despite the above, I'm not actually sure why it is.
and another argument in favor of additional (sub)forums
Well, I voted for 'em, so it's good to hear that's consistent. :)
here is Milton Babbitt
That quote is pretty funny. We clearly differ in at least these two ways: 1) I either don't know or don't care enough about music to be bothered by period distractions from it (I'm not sure how to tell the difference from inside my own head), and 2) I like the noisy hall.
He's right about the novel, though; that would be appalling. (The difference being that verbal language breaks down a lot faster if you miss a piece.)
↑ comment by Relsqui · 2010-09-24T09:07:51.790Z · LW(p) · GW(p)
I can't imagine what would happen to literature today if one were obliged to congregate in an unpleasant hall and read novels projected on a screen.
Oh, my. Fiction put in a good effort, but truth pulls ahead as always:
Nor is it precisely a theatricalization of the novel .... Rather, in “Gatz” ... the text of “The Great Gatsby” is spoken aloud, all forty-nine thousand words of it
↑ comment by MichaelVassar · 2010-09-20T05:41:00.462Z · LW(p) · GW(p)
Thanks for a very thoughtful answer.
Replies from: Relsqui↑ comment by Perplexed · 2010-09-19T20:42:54.386Z · LW(p) · GW(p)
A clever point, but is it really useful to compare the sequences to a textbook? Maybe a textbook at some community college somewhere. I personally found the sequences to be overloaded with anecdote and motivation, and rather lacking in technical substance.
There is one thing that the post and comment part of this site has that the sequences do not have. Dialog. Posters and commenters are challenged to clarify their positions and to defend their arguments. In the sequences, on the other hand, it often seemed that Eliezer was either busy demolishing strawmen, or he was energetically proving some point which I had never really apprehended.
Replies from: timtyler↑ comment by timtyler · 2010-09-20T20:55:05.777Z · LW(p) · GW(p)
The "sequences" posts have comment sections too - no?
There are only a few posts with disabled comments - such as this one:
http://lesswrong.com/lw/31/what_do_we_mean_by_rationality/
Evidently the definition of "rationality" is not up for debate. Perhaps it is the royal "we".
Replies from: matt, Perplexed↑ comment by Perplexed · 2010-09-20T21:03:53.222Z · LW(p) · GW(p)
The "sequences" posts have comment sections too - no?
Yes, but I don't think the discussion was all that vigorous. Eliezer was making a full size posting every day back then. He really didn't have the time to engage commenters, even if the commenters had tried to engage him.
Evidently the definition of "rationality" is not up for debate.
Cute.
Replies from: timtyler↑ comment by Will_Newsome · 2010-09-18T23:05:49.138Z · LW(p) · GW(p)
Good analysis.
My proposal for addressing these is to create a single introductory page with inline links to glossary definitions, and from there to further reading.
Also briefly explaining where the subjects connect to rationality. It's not immediately obvious what e.g. evolutionary biology or quantum physics have to do with human rationality, which probably puts people off. Actually, it's so non-obvious that I think it'd be easy to miss the point unless one were careful to read most of the posts in the sequence, or at least the ones explaining how everything's connected.
Replies from: Relsqui, Relsqui↑ comment by Relsqui · 2010-09-19T00:17:34.268Z · LW(p) · GW(p)
By the by, is this a vote for or against making an actual post on this subject (or neither)? I'm trying to get a sense of whether that would be acceptable and useful; I've gotten a handful of upvotes on comments about it, but I don't know whether that means I should go ahead or not. (This is an area of local etiquette I'm not yet familiar with, and I don't particularly want to take the karma hit for messing up.)
Replies from: Will_Newsome, Perplexed↑ comment by Will_Newsome · 2010-09-19T01:18:44.162Z · LW(p) · GW(p)
In general, suggestions for site improvements are frowned upon because very few people here are keen on actually implementing them, and the typical response is "Yeah that'd be great, now let's have a long discussion about how great that is and subtle improvements that could make it even better while not actually doing anything."
Less Wrong needs improvements, but more than that it needs people willing to improve it. The Intro Page idea has been around for a while, but the people who have control over the site have a lot of other stuff to focus on, and there's limited time. So overall I don't think a post would be good, but I'm unsure as to how to fix the general problem.
Replies from: Relsqui↑ comment by Relsqui · 2010-09-19T01:30:31.187Z · LW(p) · GW(p)
Thanks, that's the answer I was looking for.
the people who have control over the site have a lot of other stuff to focus on
If it were done on the wiki, would they need to commit time to it? It seems like a dedicated member or set of members could just write the page and present it to the community as a fait accompli. The only reason I haven't done it is that I don't feel I know enough yet. Maybe I'll do it anyway, and that will inspire more experienced LWers to come fix it. ;)
Replies from: Douglas_Knight↑ comment by Douglas_Knight · 2010-09-20T05:01:00.746Z · LW(p) · GW(p)
Yes, write something on the wiki and ask later for it to be placed somewhere useful. There is the problem that the people who need introductions probably aren't going to write them. If you go back to reading the sequences, it would be a good exercise to write summaries.
Replies from: Relsqui↑ comment by Relsqui · 2010-09-20T05:08:49.236Z · LW(p) · GW(p)
There is the problem that the people who need introductions probably aren't going to write them.
Yup. And for people who don't need them, it's pretty tedious.
If you go back to reading the sequences, it would be a good exercise to write summaries.
That occurred to me as well. We'll see how that comes along.
↑ comment by Perplexed · 2010-09-19T01:31:52.382Z · LW(p) · GW(p)
I'll vote for making a post.
I like your characterization of what is "wrong" with the sequences, but I'm not sure what ought to be done about it. I suspect that different people need to read different sequence postings. I would like to have the introduction pages for each sequence be expanded to provide roughly a paragraph of description for each posting in the sequence. If you disagree with the paragraph or don't understand it, then you should probably read that posting.
ETA: After reading Will's comment, I will withdraw my vote. Proceed with caution.
Replies from: Relsqui↑ comment by Relsqui · 2010-09-19T03:51:53.100Z · LW(p) · GW(p)
I suspect that different people need to read different sequence postings.
I agree; that's one of the things I wanted to discuss (and something my solution would theoretically address). I might try to find another useful place to put my longer writeup of the subject, e.g. my own talk page on the wiki.
↑ comment by komponisto · 2010-09-19T02:03:20.658Z · LW(p) · GW(p)
What I turned out to need was an example problem, the fact that the general form of the math I used to solve it is named after a guy called Bayes, and an explanation of how the term is used in prose...(and I didn't entirely get the last one).
You'll want to see this post, if you haven't already.
Replies from: Relsqui↑ comment by Relsqui · 2010-09-18T17:27:27.713Z · LW(p) · GW(p)
I started a reply to this and then noticed that it was getting to be a solid pageful. Is "why don't newbies read the sequences" a sufficiently commonly addressed topic to warrant a post? What I've got so far includes a breakdown of my theory as to the answer, as well as a suggestion for a solution.
Replies from: Perplexed↑ comment by Perplexed · 2010-09-18T17:47:49.658Z · LW(p) · GW(p)
I would like to see that analysis and suggestion very much. But it does sound a bit risky as a topic for a first top-level post. Why not just present it as a comment?
Replies from: Relsqui↑ comment by Relsqui · 2010-09-18T17:52:06.789Z · LW(p) · GW(p)
The reason would be if it were of interest to the community at large, but I trust your (pl.) judgment if you say it would be better suited to a comment. I'll post when I'm done tinkering with it.
Replies from: None↑ comment by [deleted] · 2010-09-18T18:37:06.676Z · LW(p) · GW(p)
Phenomenon -> Theory(s) -> Experiment!
If you make a post, it would probably benefit from a simple poll.
Replies from: Relsqui↑ comment by Relsqui · 2010-09-18T20:13:44.608Z · LW(p) · GW(p)
Good point. I saw Yvain post a poll recently, so I have a general idea of how that works here, but if there's anything non-obvious I need to know, by all means elucidate. (Similarly, I'd welcome any advice on formatting a useful article.)
comment by datadataeverywhere · 2010-09-18T23:07:36.019Z · LW(p) · GW(p)
Thank you, this was an excellent post. It tied together a lot of discussions that have gone on and continue to go on here, and I expect it to be very useful to me.
Among other things, I suffer from every impairment to instrumental rationality that you mention under Type 2.
The first of those is perhaps my most severe downfall; I term it "perpetual student" syndrome, and I think my usage matches how the phrase is used elsewhere. I'm fantastically good at picking up entry-level understandings of things, but once I lose the rewarding feeling of looking good in comparison to my peers, I slack off. I breezed through the first two years of required courses for degrees in chemistry, physics, mathematics, biology, and computer science, but I only finished a degree in one of them. I have a terrible habit of taking introductory dance classes but then never practicing the dance long enough to get good at it. As a professional researcher, it's even worse; I eagerly attack any problem that requires me to learn more about someone else's field, but actually producing original work inside my own is like pulling teeth. I'm working on it!
I've bookmarked this post, and intend to review it periodically to check off my progress against the problems that you point out.
Replies from: None
comment by [deleted] · 2010-09-17T12:29:16.458Z · LW(p) · GW(p)
I like the writing here: very clear and useful.
I have a very simple problem when doing mathematics.
I want to write a proof. But I also want to save time. And so I miss nuances and make false assumptions and often think the answer is simpler than it is. It's almost certainly motivated cognition, rather than inadequate preparation or "stupidity" or any other problem.
I know the answer is "Stop wanting to save time" -- but how do you manipulate your own unvoiced desires?
Replies from: AnnaSalamon, mathemajician, pjeby, None, JoshuaZ, dco, Alicorn, Soki, JGWeissman↑ comment by AnnaSalamon · 2010-09-17T15:25:03.348Z · LW(p) · GW(p)
Do you have any ideas, including guesswork, about where your hurry is coming from? For example, are you in a hurry to go do other activities? Are you stressing about how many problems you have left in your problem set? Do you feel as though you're stupid if you don't immediately see the answer?
Some strategies that might help, depending:
- Block off time; know and visualize that this time is for proof-writing and nothing else (you have this block of time whether you use it or not, and cannot move on to other activities), and visualize that this is the only problem in the world.
- Make a plan for the rest of the day (and write your “must hurry to do” activities down on a list, with their own timeslots) so that you can believe in the blocked-off time from the first item. When your brain tells you you have to hurry and do X, remind it that you’ll do X at 4pm (or whenever), that this is the timeslot for proofs, and that focusing slowly will get the most done.
- Find a context wherein you have the sort of slow, all-absorbing focus that would be helpful here (whether on proof-writing, conversation, or whatever else). Try to understand the relevant variables/mindset and to set up the outside context similarly, and/or copy your internal frame or stance.
- Think of great performers who were utterly absorbed in their tasks, and of the excellence they embodied. Put up their names, photos, or other priming influences. Visualize yourself as embodying that same mindset.
- Use “positive self-talk” to prime yourself as you work, by saying things like “I am moving slowly, with full focus. I am noticing every nuance I can notice. My mission is to do well, regardless of speed.”
- Do your proof with a friend or a student, while showing them what patience looks like and talking about how you’re learning patient, focussed mindsets.
↑ comment by mathemajician · 2010-09-18T12:10:20.326Z · LW(p) · GW(p)
The way it works for me is this:
First I come up with a sketch of the proof and try to formalise it and find holes in it. This is fairly creative and free and fun. After a while I go away feeling great that I might have proven the result.
The next day or so, fear starts to creep in and I go back to the proof with a fresh mind and try to break it in as many ways as possible. What is motivating me is that I know that if I show somebody this half-baked proof, it's quite likely that they will point out a major flaw in it. That would be really embarrassing. Thus, I imagine that it's somebody else's proof and my job is to show why it's broken.
After a while of my trying to break it, I'll then show it to somebody kind who won't laugh at me if it's wrong, but is pretty careful at checking these things. Then another person... slowly my fear of having screwed up lifts. Then I'm ready to submit it for publication.
So in short: I'm motivated to get proofs right (I have yet to have a published proof corrected, not counting blog posts) out of a fear of looking bad. What motivates me to publish at all is the feeling of satisfaction that I draw from the achievement. In my moderate experience of mathematicians, they often seem to have similar emotional forces at work.
↑ comment by pjeby · 2010-09-17T15:58:46.230Z · LW(p) · GW(p)
I know the answer is "Stop wanting to save time" -- but how do you manipulate your own unvoiced desires?
If you think of the brain as having two "programming languages" - the "far" (symbolic) and the "near" (experiential) - and of the "unvoiced desire" as something running on the "near" system, then what you need to do is translate from the symbolic to the experiential.
In this case, you'd begin by asking what experiences you anticipate will happen if you don't "save time", and what your emotional reaction to those experiences is.
Take care, though, to imagine actually experiencing one specific situation (in sensory detail) where you currently want to "save time", and to anticipate the results in sensory detail as well. Otherwise, you'll only engage the "far" (symbolic) system, and won't get any useful information.
↑ comment by [deleted] · 2010-09-18T12:58:42.234Z · LW(p) · GW(p)
Thanks for all the good advice! I think I'll try blocking off time (I've already started tracking how much time a day I spend actually working and found it was much less than I'd assumed) and also try the two-stage process (first try to get something, then try looking for flaws.)
↑ comment by JoshuaZ · 2010-09-18T12:26:07.094Z · LW(p) · GW(p)
I want to write a proof. But I also want to save time. And so I miss nuances and make false assumptions and often think the answer is simpler than it is. It's almost certainly motivated cognition, rather than inadequate preparation or "stupidity" or any other problem.
At least based on personal introspection, the part of my mind that comes up with proofs feels very similar to the part that engages in motivated cognition. This is in some ways ok, because if a proof is valid then counterarguments aren't something that needs to be thought about. But yes, this can lead to the problem of constructing apparently valid proofs that then don't work. One thing that seems to help is to engage in more or less motivated cognition to make a proof and then go through that proof in close detail looking for flaws. So essentially, use motivated cognition to try to get something good, and then use motivated cognition to try to poke holes in it. If you iterate this enough, you will generally end up with an ok proof.
↑ comment by dco · 2010-09-17T18:50:40.722Z · LW(p) · GW(p)
This is a well-known issue. Basically, a mathematical problem tends to involve several non-trivial steps. If you are too pessimistic, it is impossible to see all these steps (because you get bogged down in proving details and lose track of the point of the problem.) On the other hand, if you are too optimistic, you will take too long to debunk an incorrect sequence of steps, leading to the problem you describe.
One solution is to work with someone else, and take turns being optimistic. (E.g., one person proposes a solution, then the other tries to shoot it down; it's much easier to be pessimistic about other people's ideas.) Another solution is what Mr. Weissman proposes: just investigate the problem, look at similar problems, try to falsify the problem, try to prove something stronger, etc.
I'm sure that professional mathematicians deal with this issue all the time, so you might want to ask one of them as well.
↑ comment by Soki · 2010-09-17T21:44:50.838Z · LW(p) · GW(p)
Ask yourself which aspects of what you want to prove are thrilling. Look for what you cannot explain but feel is true.
I want to write a proof.
Before writing, you should be satisfied with your understanding of the problem. Try to find holes in that understanding, as if you were a teacher reading a student's work.
You should also ask yourself why you want to write a correct proof, and remember that a proof that is wrong is not a proof.
↑ comment by JGWeissman · 2010-09-17T17:09:53.866Z · LW(p) · GW(p)
Instead of setting out to prove a proposition, investigate whether or not it is true. Perhaps genuine curiosity will override your desire to save time.
comment by whpearson · 2010-09-17T10:33:46.325Z · LW(p) · GW(p)
“... and so I don’t care about dating anyhow, and I have no reason to risk approaching someone.”
This doesn't seem like it is a distorted reward pathway. Unless people are valuing being virtuous and not wasting time and money on dating?
If it is a problem it seems more likely to be an Ugh field. I.e. someone who had problems with the opposite sex and doesn't want to explore a painful area.
Apart from that I think rwallace's point needs to be addressed. Lack of compartmentalisation can be a bad thing as well as a good thing. Implicit in this piece is the idea that the good behaviours/ideas will win out over the bad.
Replies from: AnnaSalamon↑ comment by AnnaSalamon · 2010-09-17T15:41:00.987Z · LW(p) · GW(p)
“... and so I don’t care about dating anyhow, and I have no reason to risk approaching someone.”
This doesn't seem like it is a distorted reward pathway.
People seem to feel better about not achieving things they “don’t care about” than about ignoring or failing at things they care about. Thus the phenomenon of sour grapes (where, after Aesop’s fox fails to get the grapes, it declares that the grapes “were sour anyway”). I’m not sure if sour grapes arises because we don’t want to expect pain and desire-dissatisfaction in our futures (because one e.g. cares about dating, but plans not to ever work toward it) or because we prefer to think of ourselves as the sorts of people who would act on desires instead of fleeing in fear, or what.
I agree that ugh fields are also involved in the example.
Replies from: Jonathan_Graehl, whpearson↑ comment by Jonathan_Graehl · 2010-09-18T03:35:33.461Z · LW(p) · GW(p)
Sour grapes are essential when what we missed was a one-shot opportunity (perfect world: first learn from any mistake, then emotionally salve with sour grapes).
They're a detriment when the opportunity is ongoing and, even accounting for the fear of further possible failures, likely worth the effort.
Replies from: wedrifid↑ comment by wedrifid · 2010-09-18T04:22:38.570Z · LW(p) · GW(p)
Sour grapes are essential when what we missed was a one-shot opportunity (perfect world: first learn from any mistake, then emotionally salve with sour grapes).
Sour grapes are never essential. Not only are there better emotional salves, it is healthier to just not take emotional damage from missed opportunities or mistakes in the first place. (This is a skill that can be developed.)
Replies from: EchoingHorror, Jonathan_Graehl↑ comment by EchoingHorror · 2010-09-20T04:02:13.265Z · LW(p) · GW(p)
I take the "Meh, I've had worse" approach to deflecting emotional damage. I'm also partial to considering missed opportunities to be trivial additions to the enormous heap of missed opportunities before them.
No need for sour grapes here. In fact, let's keep all grapes sweet and succulent just in case we get them later.
Replies from: Relsqui↑ comment by Jonathan_Graehl · 2010-09-20T22:23:58.873Z · LW(p) · GW(p)
Interesting. Can you be more specific?
I don't feel like I can, or need to, make all of my emotional reactions rational. But if it's easy, of course I prefer to be better integrated.
Replies from: wedrifid↑ comment by wedrifid · 2010-09-21T06:32:19.120Z · LW(p) · GW(p)
People certainly don't need to make their emotional reactions rational if they don't want to - but they can do so to some extent when it helps. This is the cornerstone of things like Cognitive Behavioural Therapy and much of pjeby's mind hacking.
It's hard to describe without going into huge detail but something that works is embracing the frustration in the full degree rather than flinching away from it. Then you can release it. Then rinse and repeat. The emotional trigger is reduced as your mind begins to realise that it really isn't as awful as you thought.
You can also harness the frustration into renewed motivation for reaching the generalised goal that hit a setback or localised failure. This is nearly (but not quite) the opposite of using the frustration to remove your desire for something.
Replies from: Jonathan_Graehl, Relsqui↑ comment by Jonathan_Graehl · 2010-09-21T08:26:07.941Z · LW(p) · GW(p)
I've also read about CBT and agree that it seems helpful. I took from it the idea that if you're avoiding some activity you think you would probably benefit from, you should look at the reasons you expect it to be hard/painful/whatever, and not only think about and defuse them intellectually, but also, through practice (starting with milder efforts), get your toes wet in that direction, comparing the actual results to your overblown negative expectations.
Also, in my experience, I've never been disappointed when I honestly describe some negative emotional reaction I'm already having, and look for some insight into why I'm having it. That is, I'm already feeling terrible, and so coming up with true-seeming stories explaining the feeling (and perhaps deciding that I've learned something, or have some plan for doing better in the future) is a mild relief.
Replies from: wedrifid↑ comment by wedrifid · 2010-09-21T09:46:20.624Z · LW(p) · GW(p)
Also, in my experience, I've never been disappointed when I honestly describe some negative emotional reaction I'm already having, and look for some insight into why I'm having it. That is, I'm already feeling terrible, and so coming up with true-seeming stories explaining the feeling (and perhaps deciding that I've learned something, or have some plan for doing better in the future) is a mild relief.
This reminds me of the popular "what is true is already so; owning up to it doesn't make it worse".
Also, see today's SMBC comic. His timing is incredible. :)
↑ comment by Relsqui · 2010-09-21T06:47:30.353Z · LW(p) · GW(p)
It's hard to describe without going into huge detail but something that works is embracing the frustration in the full degree rather than flinching away from it. Then you can release it.
"I must not be frustrated. .... I will face my frustration, permit it to pass over me and through me ..."
I honestly use the Litany Against Fear quite like this--for frustration, annoyance, pain, or anything else that I have to put up with for a while. The metaphor of passing over and through works well for me.
Replies from: wedrifid↑ comment by wedrifid · 2010-09-21T07:02:31.919Z · LW(p) · GW(p)
My twist on that is that I use 'will' instead of 'must'. Similar to Jonathan I don't think I need to alter my emotional responses and I reject such demands even from myself. "Will", "want" and sometimes "am" all work better for me. (This can just mean leaving off the first sentence there.)
Replies from: Jonathan_Graehl, Relsqui↑ comment by Jonathan_Graehl · 2010-09-21T08:20:20.822Z · LW(p) · GW(p)
I won't look for the study hyperlink, but I was also charmed by a result showing that the self-question "Will I X?" actually motivated people to do X (more so than something like "I must X"). That is, having a curious/wondering tone seemed helpful. I and the reporters of this result may be missing the actual cause, of course.
Replies from: wedrifid↑ comment by Relsqui · 2010-09-21T08:26:28.570Z · LW(p) · GW(p)
That makes sense to me. "Must" implies a moral code; if you decline to accept responsibility from any external moral code, you could interpret it as "must, according to rational methods of achieving my personal goals," but there's no advantage to that circuitous interpretation over the changes you suggest.
Replies from: wedrifid↑ comment by whpearson · 2010-09-17T22:48:46.262Z · LW(p) · GW(p)
Disclaimer: I believe I have a lot less interest in dating than most men. Partially introspection/partially revealed preference when opportunity arose.
I hadn't thought about that view. One thing worth noting is that it is hard to ignore dating. And people tend to ask for some explanation; I tend to go with "I haven't found the right person yet," though.
Although, what would you say the right response is to not being willing to pay a cost for something? Let's say you want a sports car; you lust after it for a bit. Then you find it costs 3 million dollars, and you could always find better things to do with the money.
Should you then say you don't care about the sports car? Or should you leave it as a nagging desire which will never be fulfilled?
Replies from: mattnewport↑ comment by mattnewport · 2010-09-17T23:03:23.913Z · LW(p) · GW(p)
Should you then say you don't care about the sports car? Or should you leave it as a nagging desire which will never be fulfilled?
This seems like a false dichotomy. My answer to this question is something along the lines of "the current price of a sports car is more than I am currently willing to pay for the pleasure of owning a sports car; in the future circumstances may be different, but for now I will make higher expected value choices".
Replies from: whpearson↑ comment by whpearson · 2010-09-17T23:21:29.855Z · LW(p) · GW(p)
To me, things and people I care about are those that I willingly expend some mental energy on every so often. So if I cared about owning a sports car, every so often it would pop into my head, "Darn, I wish the car were cheaper". As it is unlikely to become so, that would be an unfulfilled desire taking up mental energy for no reason; I could spend that mental energy elsewhere.
Care is different from value. Does that explain what I meant?
Replies from: mattnewport↑ comment by mattnewport · 2010-09-17T23:36:32.713Z · LW(p) · GW(p)
I think I understand what you mean; I just don't think it's a good strategy to try to convince yourself you don't care about something because it is not currently attainable. A better alternative might be to think about what appeals to you about owning a sports car and consider whether there are lower-cost ways of getting some of the same benefits, for example.
Replies from: whpearson↑ comment by whpearson · 2010-09-18T00:16:54.965Z · LW(p) · GW(p)
Oh, I agree. But once you have done so, would it be a bad idea to say you no longer care about the sports car?
Aside: I didn't mean to give the impression it was unattainable. The hypothetical still works if you've got 4 million dollars: you could buy a house, donate some money to x-risk charities, found companies, or put it aside for retirement. All better things than the car.
Replies from: mattnewport↑ comment by mattnewport · 2010-09-18T00:57:12.762Z · LW(p) · GW(p)
If you want a sports car, that implies that there is some point at which the best marginal use of your next 3 million dollars would be to buy the sports car. If there is no such point, then it seems to me that you don't really want it in any meaningful sense.
comment by [deleted] · 2010-10-08T22:24:13.167Z · LW(p) · GW(p)
This post has been very useful to me.
If I had to isolate what was personally most useful (it'd be hard but) I'd pick the combination of your discussion of distorted reward signals and your advice about something to protect. I now notice status wireheading patterns quite frequently (often multiple times daily), and put a stop to them because I recognize they don't work towards what I actually care about (or maybe because I identify as a rationalist, I'm not sure). Either way I appreciate being able to halt such patterns before they grow into larger action patterns.
comment by steven0461 · 2010-09-18T20:55:01.356Z · LW(p) · GW(p)
I suspect that an underrated rationality technique is to scream while updating your plans and beliefs on unpleasant subjects, so that any dismay at the unpleasantness finds expression in the scream rather than in your plans and beliefs.
comment by Johnicholas · 2010-09-17T18:25:36.116Z · LW(p) · GW(p)
This is a great post, and I wish to improve only a tiny piece of it:
"Similarly, we often continue to discuss, and care about, concepts that have lost all their moorings in anticipated sense-experience."
In that sentence, I hear a suggestion that the primary or only thing we ought to care about is anticipated sense-experience. However, anticipated sense-experience can be manipulated (via suicide or other eyes-closing techniques), and so cannot be the only or primary thing that we ought to care about.
I admit I don't know precisely what else we ought to care about, but my intuition is that advanced concepts like "anticipated sense-experience" are theoretical constructs, built from a chain of reasoning out from more foundational notions, and must be tested against "common sense", which includes a notion that if you're doing something probably fatal, you should only do it in order to accomplish a goal in the world that you will not experience, rather than a goal in the world where you're surprised to find yourself alive.
Replies from: Jonathan_Graehl↑ comment by Jonathan_Graehl · 2010-09-18T03:29:10.061Z · LW(p) · GW(p)
anticipated sense-experience can be manipulated
This doesn't require any amendment to the original statement. Once you decide to cope by closing your eyes, your future sense experience options are limited - same with suicide. So neither will often be rationally chosen (except perhaps in a scary movie screening).
Replies from: Johnicholas, wedrifid↑ comment by Johnicholas · 2010-09-18T05:44:55.231Z · LW(p) · GW(p)
You're right, no amendments are necessary; I was answering a subtle implication that I heard in the sentence, and which Anna Salamon probably didn't intend to put there, and it's possible that my "hearing" in this matter is faulty.
However, your comment makes me think I haven't been sufficiently clear: A "quantum" suicide strategy would be combining a lottery ticket with a device that kills you if you do not win the lottery (it doesn't really have anything to do with quantum mechanics).
If all we cared about was anticipated sense experience, this combination might seem to be a good idea. However, it is (to my common sense, at least) a bad idea, which is an argument that we care about something more than just anticipated sense experience.
Replies from: AnnaSalamon↑ comment by AnnaSalamon · 2010-09-18T12:59:40.904Z · LW(p) · GW(p)
It's a good point; thanks. I had indeed missed that when I wrote the sentence.
comment by xamdam · 2010-09-17T20:04:26.305Z · LW(p) · GW(p)
enemy's vocabulary.
Is there a war I missed?
Replies from: AnnaSalamon, mattnewport↑ comment by AnnaSalamon · 2010-09-17T21:35:20.809Z · LW(p) · GW(p)
Perhaps I should have used a different term. I just meant that Think and Grow Rich contains much discussion of e.g. "applied faith", and it is easy to hear terms like that and want to spit out the whole book. But if you listen to the concrete actions it is recommending, rather than allowing yourself to react as to an enemy camp, most of them seem sound.
↑ comment by mattnewport · 2010-09-17T20:09:31.968Z · LW(p) · GW(p)
I wondered about this comment as well. Think and Grow Rich has some fairly serious rationality fails and contains some pretty wacky and unsupported ideas, so maybe that's what the comment was getting at.
Replies from: xamdam↑ comment by xamdam · 2010-09-17T20:11:18.792Z · LW(p) · GW(p)
The world is rationality fail, by and large. "Enemy" sounds like there is something extra evil there.
Replies from: mattnewport↑ comment by mattnewport · 2010-09-17T20:16:26.025Z · LW(p) · GW(p)
Agreed.
comment by [deleted] · 2010-09-21T17:17:40.765Z · LW(p) · GW(p)
[2] We receive reward/pain not only from "primitive reinforcers" such as smiles, sugar, warmth, and the like, but also from many long-term predictors of those reinforcers (or predictors of predictors of those reinforcers, or...)
How primitive are these "primitive reinforcers"? For those who know more about the brain, is it known if and how they are reinforced through lower-level systems? Can these systems be (at least partially) brought under conscious control?
comment by Bobertron · 2010-09-17T12:01:46.374Z · LW(p) · GW(p)
Besides the technical posts, LW has many good articles that teach a good mindset for epistemic rationality (like the 12 Virtues and the litanies). Much of this applies to instrumental rationality. But I compartmentalize between epistemic and instrumental rationality: I use different words and thoughts when thinking about beliefs than when thinking about actions or plans.
So I have been reading the 12 Virtues and trying to interpret them in terms of plans, actions, and activities.
The first virtue (curiosity) would obviously become "something to protect".
The fourth virtue is evenness. One who wishes to believe says, "Does the evidence permit me to believe?" One who wishes to disbelieve asks, "Does the evidence force me to believe?"
Here you would substitute the self-talk with something like "Am I allowed to do this?" and "Do I have to do this?", so that it's about what you do, not what you believe.
The virtue of empiricism says that one should concentrate on the experiences to anticipate and not let the debate become about anything else. A corresponding instrumental virtue would be to concentrate on the desired results of an action or a plan, on what you want to achieve.
The virtue of perfectionism could be interpreted in an instrumental way, too. But instead of errors in yourself, you'd think about errors in your actions and behaviour.