Humans are not automatically strategic
post by AnnaSalamon · 2010-09-08T07:02:52.260Z · LW · GW · Legacy · 278 comments
Reply to: A "Failure to Evaluate Return-on-Time" Fallacy
Lionhearted writes:
[A] large majority of otherwise smart people spend time doing semi-productive things, when there are massively productive opportunities untapped.
A somewhat silly example: Let's say someone aspires to be a comedian, the best comedian ever, and to make a living doing comedy. He wants nothing else, it is his purpose. And he decides that in order to become a better comedian, he will watch re-runs of the old television cartoon 'Garfield and Friends' that was on TV from 1988 to 1995....
I’m curious as to why.
Why will a randomly chosen eight-year-old fail a calculus test? Because most possible answers are wrong, and there is no force to guide him to the correct answers. (There is no need to postulate a “fear of success”; most ways writing or not writing on a calculus test constitute failure, and so people, and rocks, fail calculus tests by default.)
Why do most of us, most of the time, choose to "pursue our goals" through routes that are far less effective than the routes we could find if we tried?[1] My guess is that here, as with the calculus test, the main problem is that most courses of action are extremely ineffective, and that there has been no strong evolutionary or cultural force sufficient to focus us on the very narrow behavior patterns that would actually be effective.
To be more specific: there are clearly at least some limited senses in which we have goals. We: (1) tell ourselves and others stories of how we’re aiming for various “goals”; (2) search out modes of activity that are consistent with the role, and goal-seeking, that we see ourselves as doing (“learning math”; “becoming a comedian”; “being a good parent”); and sometimes even (3) feel glad or disappointed when we do/don’t achieve our “goals”.
But there are clearly also heuristics that would be useful to goal-achievement (or that would be part of what it means to “have goals” at all) that we do not automatically carry out. We do not automatically:
- (a) Ask ourselves what we’re trying to achieve;
- (b) Ask ourselves how we could tell if we achieved it (“what does it look like to be a good comedian?”) and how we can track progress;
- (c) Find ourselves strongly, intrinsically curious about information that would help us achieve our goal;
- (d) Gather that information (e.g., by asking how folks commonly achieve our goal, or similar goals, or by tallying which strategies have and haven’t worked for us in the past);
- (e) Systematically test many different conjectures for how to achieve the goals, including methods that aren’t habitual for us, while tracking which ones do and don’t work;
- (f) Focus most of the energy that *isn’t* going into systematic exploration, on the methods that work best;
- (g) Make sure that our "goal" is really our goal, that we coherently want it and are not constrained by fears or by uncertainty as to whether it is worth the effort, and that we have thought through any questions and decisions in advance so they won't continually sap our energies;
- (h) Use environmental cues and social contexts to bolster our motivation, so we can keep working effectively in the face of intermittent frustrations, or temptations based in hyperbolic discounting;
.... or carry out any number of other useful techniques. Instead, we mostly just do things. We act from habit; we act from impulse or convenience when primed by the activities in front of us; we remember our goal and choose an action that feels associated with our goal. We do any number of things. But we do not systematically choose the narrow sets of actions that would effectively optimize for our claimed goals, or for any other goals.
Why? Most basically, because humans are only just on the cusp of general intelligence. Perhaps 5% of the population has enough abstract reasoning skill to verbally understand that the above heuristics would be useful once these heuristics are pointed out. That is not at all the same as the ability to automatically implement these heuristics. Our verbal, conversational systems are much better at abstract reasoning than are the motivational systems that pull our behavior. I have enough abstract reasoning ability to understand that I’m safe on the glass floor of a tall building, or that ice cream is not healthy, or that exercise furthers my goals... but this doesn’t lead to an automatic updating of the reward gradients that, absent rare and costly conscious overrides, pull my behavior. I can train my automatic systems, for example by visualizing ice cream as disgusting and artery-clogging and yucky, or by walking across the glass floor often enough to persuade my brain that I can’t fall through the floor... but systematically training one’s motivational systems in this way is also not automatic for us. And so it seems far from surprising that most of us have not trained ourselves in this way, and that most of our “goal-seeking” actions are far less effective than they could be.
Still, I’m keen to train. I know people who are far more strategic than I am, and there seem to be clear avenues for becoming far more strategic than they are. It also seems that having goals, in a much more pervasive sense than (1)-(3), is part of what “rational” should mean, will help us achieve what we care about, and hasn't been taught in much detail on LW.
So, to second Lionhearted's questions: does this analysis seem right? Have some of you trained yourselves to be substantially more strategic, or goal-achieving, than you started out? How did you do it? Do you agree with (a)-(h) above? Do you have some good heuristics to add? Do you have some good ideas for how to train yourself in such heuristics?
[1] For example, why do many people go through long training programs “to make money” without spending a few hours doing salary comparisons ahead of time? Why do many who type for hours a day remain two-finger typists, without bothering with a typing tutor program? Why do people spend their Saturdays “enjoying themselves” without bothering to track which of their habitual leisure activities are *actually* enjoyable? Why do even unusually numerate people fear illness, car accidents, and bogeymen, and take safety measures, but not bother to look up statistics on the relative risks? Why do most of us settle into a single, stereotyped mode of studying, writing, social interaction, or the like, without trying alternatives to see if they work better -- even when such experiments as we have tried have sometimes given great boosts?
278 comments
Comments sorted by top scores.
comment by patrissimo · 2010-09-09T18:55:22.430Z · LW(p) · GW(p)
I'm disappointed at how few of these comments, particularly the highly-voted ones, are about proposed solutions, or at least proposed areas for research. My general concern about the LW community is that it seems much more interested in the fun of debating and analyzing biases, rather than the boring repetitive trial-and-error of correcting them.
Anna's post lays out a particular piece of poor performance which is of core strategic value to pretty much everyone - how to identify and achieve your goals - and which, according to me and many people and authors, can be greatly improved through study and practice. So I'm very frustrated by all the comments about the fact that we're just barely intelligent and debates about the intelligence of the general person. It's like if Eliezer posted about the potential for AI to kill us all and people debated how they would choose to kill us instead of how to stop it from happening.
Sorry, folks, but compared to the self-help/self-development community, Less Wrong is currently UTTERLY LOSING at self-improvement and life optimization. Go spend an hour reading Merlin Mann's site and you'll learn way more instrumental rationality than you do here. Or take a GTD class, or read a top-rated time-management book on Amazon.
Talking about biases is fun, working on them is hard. Do Less Wrongers want to have fun, or become super-powerful and take over (or at least save) the world? So far, as far as I can tell, LW is much worse than the Quantified Self & time/attention-management communities (Merlin Mann, Zen Habits, GTD) at practical self-improvement. Which is why I don't read it very often. When it becomes a rationality dojo instead of a club for people who like to geek out about biases, I'm in.
↑ comment by FatTonyStarks · 2010-09-09T21:53:05.784Z · LW(p) · GW(p)
I've been disappointed in LessWrong too, and it's caused me to come here less and less frequently. I'm even talking about the lurking. I used to come here every other day, then every week, then it dropped to once a month.
I get the impression many people either didn't give a shit, or despaired so much about their own ability to function better through any reasonable effort that they dismissed everything that came along. It used to make me really mad, or sad. Probably I took it a little too personally too, because I read a lot of EY's classic posts as inspiration not to fucking despair about what seemed like a permanently ruined future. "tsuyoku naritai" and "isshou kenmei" and "do the impossible" and all that said: look, people out there are working on much harder problems--there's probably a way up and out for you too. The sadness was that I wanted other people to get at least that; the anger, that a lot of LessWrongers didn't seem to get the point.
On the other hand, I'm pleased with our OvercomingBias/LessWrong meetup group in NYC. I think we do a good job in-person helping other members with practical solutions to problems--how we can all become really successful. Maybe it's because a lot of our members have integrated ideas from QS, Paleo, CrossFit, Seth Roberts, and PJ Eby. We've counseled members on employment opportunities, how to deal with crushing student and consumer debts, how to make money, and nutrition. By now we all tend to look down on the kind of despairing analysis that's frequently upvoted here on LW. We talk about FAI sparingly these days, unless someone has a particular insight we think would be valuable. Instead, the sentiment is more, "Shit, none of us can do much about it directly. How 'bout we all get freaking rich and successful first!"
I suspect the empathy formed from face to face contact can be a really great motivator. You hear someone's story from their own mouth and think, "Shit man, you're cool, but you're in bad shape right now. Can we all figure out how to help you out?" Little by little people relate, even the successful ones--we've all been there in small ways. This eventually moves towards, "Can we think about how to help all of us out?" It's not about delivering a nice tight set of paragraphs with appropriate references and terminology. When we see each other again, we care that our proposed solutions and ideas are going somewhere because we care about the people. All the EvPsych speculation and calibration admonitions can go to hell if it doesn't fucking help. But if it does, use it, use it to help people, use it to help yourself, use it to help the future light cone of the human world.
Yet if we're intentional about it I think we can keep it real here too. We can give a shit. Okay, maybe I don't know that. Maybe it takes looking for and rewarding the useful insights and then coming back later and talking about how the insights were useful. Maybe it takes getting a little more personal. Maybe I and my suggestions are full of shit but, hell, I want to figure this out. I used to talk about LessWrong with pride and urge people to come check it out because the posts were great, the commenter/comment scheme was great, and it was a shining example of what the rest of the intellectually discursive interwebs could be like. And, man, I'd like it to be that way again.
So damn, what do y'all think?
↑ comment by steven0461 · 2010-09-09T22:27:11.411Z · LW(p) · GW(p)
If there are (relative to LW) many good self-help sites and no good sites about rationality as such, that suggests to me LW should focus on rationality as such and leave self-help to the self-help sites. This is compatible with LW's members spending a lot of time on self-help sites that they recommend each other in open threads.
↑ comment by AnnaSalamon · 2010-09-10T20:57:49.144Z · LW(p) · GW(p)
My impression is that there are two good reasons to incorporate productivity techniques into LW, instead of aiming for a separate community specialized in epistemic rationality that complements self-help communities.
1. Our future depends on producing people who can both see what needs doing (wrt existential risk, and any other high-stakes issues), and can actually do things. This seems far more probable than “our future depends on creating an FAI team”, and than “our future depends on plan X” for any other specific plan X. A single community that teaches both, and that also discusses high-impact philanthropy, may help.
2. There seems to be a synergy between epistemic and instrumental rationality, in the sense that techniques for each give boosts to the other. Many self-help books, for example, spend much time discussing how to think through painful subjects instead of walling them off (instead of allowing ugh fields to clutter up your to-do list, or allowing rationalized “it’s all your fault” reactions to clutter up your interpersonal relations). It would be nice to have a community that could see the whole picture here.
↑ comment by steven0461 · 2010-09-10T22:06:10.701Z · LW(p) · GW(p)
Instrumental rationality and productivity techniques and self-help are three different though overlapping things, though the exact difference is hard to pinpoint. In many cases it can be rational to learn to be more productive or more charismatic, but productivity and charisma don't thereby become kinds of rationality. Your original post probably counts as instrumental rationality in that it's about how to implement better general decision algorithms. In general, LW will probably have much more of an advantage relative to other sites in self-help that's inspired by the basic logic/math of optimal behavior than in other kinds of self-help.
Re: 1, obviously one needs both of those things, but the question is which is more useful at the margin. The average LWer will go through life with some degree of productivity/success/etc even if such topics never get discussed again, and it seems a lot easier to get someone to allocate 2% rather than 1% of their effort to "what needs doing" than to double their general productivity.
I feel like noting that none of the ten most recent posts are about epistemic rationality; there's nothing that I could use to get better at determining, just to name some random examples, whether nanotech will happen in the next 50 years, or whether egoism makes more philosophical sense than altruism.
On the other hand, I think a strong argument for having self-help content is that it draws people here.
↑ comment by patrissimo · 2010-09-12T04:01:18.923Z · LW(p) · GW(p)
But part of my point is that LW isn't "focusing on rationality", or rather, it is focusing on fun theoretical discussions of rationality rather than practical exercises that are hard work to implement but actually make you more rational. The self-help / life hacking / personal development community is actually better (in my opinion) at helping people become more rational than this site ostensibly devoted to rationality.
↑ comment by AnnaSalamon · 2010-09-12T18:24:18.803Z · LW(p) · GW(p)
The self-help / life hacking / personal development community is actually better (in my opinion) at helping people become more rational than this site ostensibly devoted to rationality.
Hmm. The self-help / life hacking / personal development community may well be better than LW at focussing on practice, on concrete life-improvements, and on eliciting deep-seated motivation. But AFAICT these communities are not aiming at epistemic rationality in our sense, and are consequently not hitting it even as well as we are. LW, for all its faults, has had fair success at teaching folks how to think usefully about abstract, tricky subjects on which human discussions often tend rapidly toward nonsense (e.g. existential risk, optimal philanthropy, or ethics). It has done so by teaching such subskills as:
- Never attempting to prove empirical facts from definitions;
- Never saying or implying “but decent people shouldn’t believe X, so X is false”;
- Being curious; participating in conversations with intent to update opinions, rather than merely to defend one’s prior beliefs;
- Asking what potential evidence would move you, or would move the other person;
- Not expecting all sides of a policy discussion to line up;
- Aspiring to have true beliefs, rather than to make up rationalizations that back the group’s notions of virtue.
By all means, let's copy the more effective, doing-oriented aspects of life hacking communities. But let’s do so while continuing to distinguish epistemic rationality as one of our key goals, since, as Steven notes, this goal seems almost unique to LW, is achieved here more than elsewhere, and is necessary for tackling e.g. existential risk reduction.
↑ comment by AnnaSalamon · 2010-09-12T16:42:31.226Z · LW(p) · GW(p)
The self-help / life hacking / personal development community is actually better (in my opinion) at helping people become more rational than this site ostensibly devoted to rationality.
Could you elaborate on what you mean by that claim, or why you believe it?
I love most of your recent comments, but on this point my impression differs. Yes, folks often learn more from practice, exercises, and deep-seated motivation than from having fun discussions. Yes, some self-help communities are better than LW at focussing on practice and life-improvement. But, AFAICT: no, that doesn’t mean these communities do more to boost their participants’ epistemic rationality. LW tries to teach folks skills for thinking usefully about abstract, tricky subjects on which human discussions often tend rapidly toward nonsense (e.g. existential risk, optimal philanthropy, or ethics). And LW, for all its flaws, seems to have had a fair amount of success in teaching its longer-term members (judging from my discussions with many such, in person and online) such skills as:
- Never attempting to prove empirical facts from definitions;
- Never saying or implying “but decent people shouldn’t believe X, so X is false”;
- Being curious; participating in conversations with intent to update opinions, rather than merely to defend one’s prior beliefs;
- Asking what potential evidence would move you, or would move the other person;
- Not expecting all sides of a policy discussion to line up;
- Aspiring to have true beliefs, rather than to make up rationalizations that back the group’s notions of virtue.
Do you mean: (1) self-help sites are more successful than LW at teaching the above, and similar, subskills; (2) the above subskills do not in fact boost folks’ ability to think non-nonsensically about abstract and tricky issues; or (3) LW may better boost folks’ ability to think through abstract issues, but that ability should not be called “rationality”?
↑ comment by lucid_levi_ackerman · 2024-11-26T14:00:27.856Z · LW(p) · GW(p)
It's because they take less continued attention/effort and provide more immediate/satisfying results. LW is almost purely theoretical and isn't designed to be efficient. It's an attempt to logically override bias rather than implement the quirks of human neurochemistry to automate the process.
Computer scientists are notorious for this. They know how brains make thoughts happen, but they don't have a clue how people think, so ego drives them to rationalize a framework to perceive the flaws of others as uncuriousness and lack of dedication. This happens because they're just as human as the rest of us, made of the same biological approximation of inherited "good-enoughness." And the smarter they are, the more complex and well-reasoned that rationalization will be.
We all seek to affirm our current beliefs and blame others for discrepancies. It's instinct, physics, chemistry. No amount of logic and reason can override the instinct to defend one's perception of reality. Or other instincts either. Examples are everywhere. Every fat person in the world has been thoroughly educated on which lifestyle changes will cause them to lose weight, yet the obesity epidemic still grows.
Therefore, we study "rationality" to see ourselves as the good-guy protagonists who strive to be "less wrong," have "accurate beliefs," and "be effective at achieving our goals."
It's important work... for computers. For humanity, you're better off consulting a monk.
↑ comment by olimay · 2010-09-10T03:02:56.065Z · LW(p) · GW(p)
I'm surprised that you seem to be saying that LW shouldn't get more into instrumental rationality! That would seem to imply that you think the good self-help sites are doing enough. I really don't agree with that. I think LWers are uniquely suited to add to the discussion. More bright minds taking a serious, critical look at all this, and, importantly, urgently looking for solutions, could make a significant dent in things.
Major point, though, of GGP is not about what's being discussed, but how. He's bemoaning that when topics related to self-improvement come up, we completely blow it! A lot of ineffectual discussion gets upvoted. I'm guilty of this too, but this little tirade's convinced me that we can do better, and that it's worth thinking about how to do better.
↑ comment by patrissimo · 2010-09-12T03:59:27.453Z · LW(p) · GW(p)
Instead, the sentiment is more, "Shit, none of us can do much about it directly. How 'bout we all get freaking rich and successful first!"
Well, I think that's the rational thing to do for the vast majority of people. Not only due to public good problems, but because if there's something bad about the world which affects many people negatively, it's probably hard to fix or one of the many sufferers would have already. Whereas your life might not have been fixed just because you haven't tried yet. It's almost always a better use of your resources. Plus "money is the unit of caring", so the optimal way to help a charitable cause is usually to earn your max cash and donate, as opposed to working on it directly.
I suspect the empathy formed from face to face contact can be a really great motivator.
Agreed. Not just a motivator to help other people - but f2f contact is more inherently about doing, while web forums are more inherently about talking. In person it is much more natural to ask about someone's life and how it is going - which is where interventions happen.
Yet if we're intentional about it I think we can keep it real here too.
Perhaps. I think it will need a lot of intentionality, and a combination of in-person meetups and online discussions. I've thought about this as a "practicing life" support group, Eliezer's term is "rationality dojo", either way the key is to look at rationality and success just like any other skill - you learn by breaking it down into practiceable components and then practicing with intention and feedback, ideally in a social group. The net can be used to track the skill exercises, comment on alternative solutions for various problems, rank the leverage of the interventions and so forth.
But the key from my perspective is the website would be more of a database rather than an interaction forum. "This is where you go to find your local chapter, and a list of starting exercises / books to work through together / metrics / etc"
↑ comment by [deleted] · 2010-12-02T17:06:08.361Z · LW(p) · GW(p)
I'm new here at LW -- are there any chapters outside of the New York meetup?
If not, is there a LW mechanism to gather location info from interested participants to start new ones? Top-level post and a Wiki page?
I created a Wiki to kick things off, but as a newb I think I can't create an article yet, and quite frankly I'm not confident enough that that's the right way to go about it to do it even if I could. So if you've been here longer and think that's the right way, please do it and direct LWers to the Wiki page.
↑ comment by living_philosophy · 2012-10-10T18:20:33.473Z · LW(p) · GW(p)
"money is the unit of caring", so the optimal way to help a charitable cause is usually to earn your max cash and donate, as opposed to working on it directly.
This is false. Giving food directly to starving people (however it is obtained) is much better than throwing financial aid at a nation or institution and hoping that it manages to "trickle down" past all the middle-men and career politicians/activists and eventually is used to purchase food that eventually actually gets to people who need it. The only reason sayings like the above are so common and accepted is because people assume that there are no methods of Direct Action that will directly and immediately alleviate suffering, and are comparing "throwing money at it" to just petitioning, marching, and lengthy talks/debates. Yes, in those instances, years of political lobbying may do a lot less than just using that lobbying money to directly buy necessities for the needy or donating it to an organization that does (after taking a cut for cost of functioning, and to pay themselves), but compared to actually getting/taking the necessary goods and services directly to the needy (and teaching them methods for doing so themselves), it doesn't hold up. Another way of comparison is to ask "what if everyone (or even most) did what people said was best?" If we compared the rule of "donate money to institutions you trust (after working up to the point where you feel wealthy enough to do so)" with "directly applying your time and energy in volunteer work and direct action", one would lead to immediate relief and learning for those in need, and the other would be a long-term hope that the money would work its way through bureaucracies, survive the continual shaving of funds for institutional funding and employee payment, and eventually get used to buy the necessities the people need (hoping that everything they need can be bought, and that they haven't starved or been exposed to the elements enough to kill them).
↑ comment by chaosmosis · 2012-10-10T18:31:53.161Z · LW(p) · GW(p)
Another way of comparison is to ask "what if everyone (or even most) did what people said was best?" If we compared the rule of "donate money to institutions you trust (after working up to the point where you feel wealthy enough to do so)", and "directly applying their time and energy in volunteer work and direct action", one would lead to immediate relief and learning for those in need, and the other would be a long-term hope that the money would work its way through bureaucracies, survive the continual shaving of funds for institutional funding and employee payment, and eventually get used to buy the necessities the people need (hoping that everything they need can be bought, and that they haven't starved or been exposed to the elements enough to kill them).
This rule is an awful way to evaluate prescriptive statements. For example:
Should I become an artist for a living?
Everyone would die of starvation if everyone did this. Your comparative system prohibits every profession but subsistence agriculture. That means I don't like your moral system and think that it is silly.
Aside from problems like that one, you'll also run into major problems with game theory, such as collective action problems and the prisoner's dilemma. It makes no sense at all to think that by extrapolating from individual action to communal action and evaluating the hypothetical results we will then be able to evaluate which individual actions are good. I don't know why this belief is so common, but it is.
Just-so story: leaders needed to be able to evaluate things this way, evolution had no choice but to give everyone this trait so that the leaders would also receive it. Another just-so story: this is a driving force behind social norms which are useful from an individual perspective because those who violate social norms are outcompeted.
Of course, people who use rules like that to evaluate their actions won't normally run into those sort of silly conclusions. But the reason for that isn't because the rules make sense but because the rules will only be invoked selectively, to support conclusions that are already believed in. It's a way of making personal preferences and beliefs appear to have objective weight behind them, but it's really just an extension of your assumptions and an oversimplification of reality.
↑ comment by living_philosophy · 2012-10-11T23:33:54.409Z · LW(p) · GW(p)
Come on now; I had only recently come out of lurking here because I have found evidence that this site and its visitors welcome dissident debate, and hold high standards for rational discussion.
Should I become an artist for a living? -- Everyone would die of starvation if everyone did this. Your comparative system prohibits every profession but subsistence agriculture. That means I don't like your moral system and think that it is silly.
Could you please present some evidence for this? Your claim rests on the assumption that to "do art" or "be an artist" means that you can only do art 24/7 and would obviously just sit there painting until you starve to death. Everyone can be an artist, just make art; and that doesn't exclude doing other things as well. Can I be an artist for a living; can everyone? Maybe, but it sure would be a lot more likely if our society put its wealth and technology towards giving everyone subsistence-level comfort (if you believe our current technological state is incapable of this, then you'd need to argue for that, and for why it isn't worth trying, or doing the most we could anyways). The argument is that if individuals and groups in our society actually did some of the direct actions that could have immediate and life-changing results, rather than trying to "amass wealth for charity" or "petition for redress of grievances" alone, we would see much better results, and our understanding of what worlds are possible and within our reach would change as well. One can certainly disagree or argue against this claim, but changing the subject to surviving on art, or just asserting that such actions could only be done on subsistence agriculture, are claims that need some evidence, or at least some more rationale. And, as really shouldn't need to be stated, "not liking" something doesn't make it less likely or untrue, and calling an argument silly is itself silly if you don't present justification for why you think that is the case.
As for "extrapolating from individual action into communal action", just because it is not a sure-fire way to certain morality (nothing is) doesn't mean that such thought experiments aren't useful for pulling out implications and comparing ideas/methodologies. I certainly wouldn't claim that such an argument alone should convince anyone of anything; as it says, it is just "another way of comparison" to try and explain a viewpoint and look at another facet of how it interacts with other points of view.
I'm sorry, but I have failed to understand your last paragraph. It reeks of sophistry: claiming that there are a bunch of irrational and bias-based elements to a viewpoint you don't like, without actually citing any specific examples (and assuming that such a position couldn't be stated in any way without them). That last sentence is completely unsupported; it assumes its own conclusion, that such claims only "appear to have objective weight" but are "really just an extension of your assumptions and an oversimplification of reality". Simplified, it states: it is un-objective because of its un-objectivity. Evidence and rationale, please? Please remember: Reverse Stupidity is Not Intelligence.
Replies from: chaosmosis↑ comment by chaosmosis · 2012-10-12T04:48:30.178Z · LW(p) · GW(p)
Your first paragraph attacks the validity of the art example; I'm willing to drop that for simplicity's sake.
Your second paragraph concedes that it's not a good way to approximate morality. You say that nothing is. I interpret that as a reason that we shouldn't approach moral tradeoffs with hard-and-fast decision rules, rather than as a reason that any one particular flawed framework should be considered acceptable. You say that it's a useful thought experiment; I fundamentally disagree. It only muddles the issue, because individual actors do not have agency over each other's actions. I do not see any benefit to using this sort of thought experiment; I only see a risk that the relevancy and quality of the analysis are degraded.
You might be misunderstanding my last paragraph. I'm saying that the type of thought experiment you use is one that is normally, almost always, only used selectively, which suggests that it's not the real reason behind whatever position it's being used to advance. No one considers the implications of what would happen if everyone made the same career choices or if everyone made the same lifestyle choices, and then comes to conclusions about what their own personal lives should be like based on those potential universalizations. For example, in response to my claims about art, you immediately started qualifying exactly how much art would be universal and taken as a profession, and added a variety of caveats. But you didn't attempt to consider similar exemptions when considering whether we should view charity donations on a universal level as well, which tells me that you're applying the technique unfairly.
People only ever seem to imagine these scenarios in cases where they're trying to garner support for individual actions but are having a difficult time justifying their desired conclusion from an individual perspective, so they smuggle in the false assumptions that individuals can control other people and that if an action has good consequences for everyone then it's rational for each individual to take that action (this is why I mentioned game theory previously). These false assumptions are the reason that I don't like your thought experiment.
↑ comment by TheOtherDave · 2012-10-10T18:34:13.961Z · LW(p) · GW(p)
Giving food directly to starving people (however it is obtained) is much better than throwing financial aid at a nation or institution
What's your estimate of how much money and how much time I would have to spend to deliver $100 of food directly to a starving person?
Does that estimate change if 50% of my neighbors are also doing this?
↑ comment by living_philosophy · 2012-10-11T22:48:26.622Z · LW(p) · GW(p)
Actually, my point is that questions like that already guide discussion away from alternative solutions which may be capable of making a real impact (outside of needing to "become rich" first, or risking the cause getting lost in bureaucracy and profiteering). Take a group like Food Not Bombs, for instance: they diminish the "money spent" part of the equation by dumpstering and getting food donations. The time involved would of course depend on where you live, how easily you could find corporate food waste (sometimes physically guarded by locks, wire, and even men with guns to enforce artificial scarcity), and how long it takes to transport it to the people who need it. More people joining in would of course mean more food must be produced and more area covered in search of food waste to be reclaimed. A fortunate thing is that the more people pitch in, the less time it takes to do large amounts of labor that benefits everyone; thus the term mutual aid.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2012-10-12T00:40:56.043Z · LW(p) · GW(p)
I'm not even taking the cost of the food into consideration. I'm assuming there's this food sitting here... perhaps as donations, perhaps by dumpstering, perhaps by theft, whatever. What I was trying to get a feel for was your estimate of the costs of individuals delivering that food to where it needs to go. But it sounds like you endorse people getting together in groups in order to do this more efficiently, as long as they don't become bureaucratic institutions in the process, so that addresses my question. Thanks.
↑ comment by Asymmetric · 2012-11-19T00:59:43.478Z · LW(p) · GW(p)
To people who go to meetups in other parts of the world: are they all like this? How do they vary in terms of satisfaction and progress in achieving goals?
↑ comment by orthonormal · 2010-09-09T22:23:23.895Z · LW(p) · GW(p)
Interestingly, the people who seem most interested in the topic of instrumental rationality never seem to write a lot of posts here, compared to the people interested in epistemic rationality. Maybe that's because you're too busy "doing" to teach (or to ask good open questions), but I'm confident that's not true of all the I-Rationality crowd.
Of course, as an academic, I'm perfectly happy staying on the E-Rationality side.
Replies from: Vladimir_Golovin, patrissimo↑ comment by Vladimir_Golovin · 2010-09-10T05:04:33.085Z · LW(p) · GW(p)
Instrumental rationality is one of my primary interests here, but I don't post much -- the standard here is too high. All I have to offer is personal anecdotal evidence about various self-help / anti-akrasia techniques I tried on myself, and I always feel a bit guilty when posting them because unsubstantiated other-optimizing is officially frowned upon here. Attempting to extract any deep wisdom from these anecdotes would be generalizing from one example.
An acceptable way to post self-help on LW would be in the form of properly designed, properly conducted long-term studies of self-help techniques. However, designing and conducting such studies is a full-time job which ideally requires a degree in experimental psychology.
Replies from: AnnaSalamon↑ comment by AnnaSalamon · 2010-09-10T21:09:37.749Z · LW(p) · GW(p)
If that's true, we absolutely need to lower the bar for such posts. Three good sorts of posts that are not terribly difficult are: (1) a review of a good self-help book and what you personally took from it; (2) a few-sentence summary of an academic study on an income-boosting technique, a method for improving your driving safety, or other useful content, with a link to the same; or (3) a description of a self-intervention you tried and tracked impacts from, quantified-self style.
Replies from: jtolds, xamdam↑ comment by jtolds · 2014-07-30T16:24:50.195Z · LW(p) · GW(p)
When someone says they have anecdotes but want data, I hear an opportunity for crowdsourcing.
Perhaps a community blog is the wrong tool for this? What if we had a tool that supported tracking rationalist intervention efficacy? People could post specific interventions and others could report their personal results. Then the tool would allow for sorting interventions by reported aggregate efficacy. Maybe even just a simple voting system?
That seems like it could be a killer app for lowering the bar toward encouraging newcomers and data-poor interventions from getting posted and evaluated.
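A minimal sketch of what the core of such a tool might look like (all names here are hypothetical, and this assumes simple helped/didn't-help reports rather than anything statistically rigorous):

```python
from collections import defaultdict

class InterventionTracker:
    """Toy model of a crowdsourced intervention-efficacy tracker."""

    def __init__(self):
        # intervention name -> list of reported results (+1 helped, -1 didn't)
        self.reports = defaultdict(list)

    def report(self, intervention, helped):
        """Record one person's result for one intervention."""
        self.reports[intervention].append(1 if helped else -1)

    def ranked(self):
        """Interventions sorted by mean reported efficacy, then report count."""
        return sorted(
            self.reports,
            key=lambda name: (
                sum(self.reports[name]) / len(self.reports[name]),
                len(self.reports[name]),
            ),
            reverse=True,
        )

tracker = InterventionTracker()
tracker.report("pomodoro", True)
tracker.report("pomodoro", True)
tracker.report("inbox zero", False)
print(tracker.ranked())  # "pomodoro" ranks above "inbox zero"
```

A real version would need to worry about selection effects (people who quit an intervention rarely report back) and regression to the mean, but even this much structure would make anecdotes aggregable.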
↑ comment by xamdam · 2010-09-12T17:35:02.576Z · LW(p) · GW(p)
I have been thinking that LW really needs a categorization system for top-level posts; this would create a way to post on 'lighter' topics without feeling like you're not matching people's expectations.
Replies from: matt↑ comment by matt · 2010-09-13T21:52:25.368Z · LW(p) · GW(p)
Tags
Replies from: xamdam↑ comment by xamdam · 2010-09-14T00:27:12.548Z · LW(p) · GW(p)
Tags do not affect how the site is read by most people; some predefined categories could be used to drive navigation.
Replies from: matt↑ comment by matt · 2010-09-14T00:59:05.634Z · LW(p) · GW(p)
I've had this very failure to communicate with Tom McCabe (so the evidence is mounting that the problem is with me, rather than all of you) - [edit]Tags[/edit] are categories, only with more awesome and fewer constraints. If "predefined categories can be used to drive navigation", then surely [edit]Tags[/edit] can be used to drive navigation, without having to be predefined.
Is the problem just that the commonly used [edit]Tags[/edit] need to be positioned differently in the site layout?
Replies from: Douglas_Knight↑ comment by Douglas_Knight · 2010-09-14T01:19:13.279Z · LW(p) · GW(p)
Comments are categories
Tags are categories.
I think xamdam meant that there should be a category of "lighter" posts that people could opt out of (ie, not see in their feed of new posts) so that they wouldn't have the right to complain that they didn't live up to their expectations. Promotion means that there are two tiers, but I'm not sure whether people read the front page or the new posts.
Incidentally, I think people are using the tags too much for subject matter and not enough for indicating this kind of weight or type of post. For example, I don't see a tag for self-experimentation. If the tags were visible in the article editing mode, that would encourage people to reuse the same tags, which is important for making them function (though maybe retagging is the only way to go). If predefined tags were visible in the article editing mode, that would encourage posts on those topics; in particular, it could be used to indicate that some things are acceptable, as in Anna's list above.
Replies from: xamdam, matt↑ comment by xamdam · 2010-09-14T02:21:34.377Z · LW(p) · GW(p)
I think xamdam meant that there should be a category of "lighter" posts that people could opt out of (ie, not see in their feed of new posts) so that they wouldn't have the right to complain that they didn't live up to their expectations. Promotion means that there are two tiers, but I'm not sure whether people read the front page or the new posts.
yes
↑ comment by patrissimo · 2010-09-12T04:03:38.679Z · LW(p) · GW(p)
Maybe that's because you're too busy "doing" to teach
I think there is definitely some of that, and I've heard that from other LW "fringers" like myself - people who love the concept of rationality and support the philosophy of LW, but have no time to write posts because their lives are full to the brim with awesome projects.
One problem, I think, is that teaching and writing things up well and usefully is work. I spend time reading and writing blogs, and I do that in my "fun time" because it is fun. Careful writing about practical rationality would be work and would come out of my work time, and my work time is very, very full. Which suggests that to advance, we need people whose job it is to do this work. Which is part of what we see in the self-improvement world - people get paid to write books and run workshops, and while there is lots of crap out there, generally the result is higher-quality and more useful material.
↑ comment by roland · 2010-09-12T20:05:55.687Z · LW(p) · GW(p)
I agree 100%. This reminds me about a recent interview with Robin Hanson in which he commented something along the lines of: "If you want to really be rational or scientific you need a process with more teeth, just having a bunch of people who read the same web pages is not enough."
↑ comment by Morendil · 2010-09-11T08:20:07.351Z · LW(p) · GW(p)
When it becomes a rationality dojo instead of a club for people who like to geek out about biases, I'm in.
What does a "rationality dojo" as you envision it look like?
One thing you could do to help LW become more the kind of forum you'd like it to be is write a top-level post.
Another, if you don't want to do that, is to comment somewhere with the kind of top-level topics you would like to see addressed.
Replies from: patrissimo↑ comment by patrissimo · 2010-09-12T04:14:17.322Z · LW(p) · GW(p)
rationality dojo - a group of people practicing together to become more rational, not as an intellectual exercise ("I can rattle off dozens of cognitive biases!") but by actually becoming more rational themselves. It would spend a lot more time on boring practical things, and less on shiny ideas. The effort would be directed towards irrationalities weighted by their negative impact on the participants' lives, rather than by how interesting they are.
Sure, I will see if I can find the time to write a top-level post on this, thanks for asking.
Replies from: 27chaos↑ comment by Apprentice · 2010-09-10T22:52:09.939Z · LW(p) · GW(p)
Go spend an hour reading Merlin Mann's site and you'll learn way more instrumental rationality than you do here.
Really? Could you point out some posts you think are particularly helpful? Recent posts? I used to read his site and remember finding it gradually more disappointed and dropping it off my list. I don't really remember why, though.
Replies from: patrissimo↑ comment by patrissimo · 2010-09-12T04:14:43.219Z · LW(p) · GW(p)
I thought his recent "time and attention" talk was excellent, and of course his writing on email is classic.
Replies from: Apprentice↑ comment by Apprentice · 2010-09-12T08:31:56.731Z · LW(p) · GW(p)
Ah, his email theory - I used to think that looked like a message from an alien world. Re-reading it briefly now it still looks completely alien, describing a situation I have never found myself in. I just haven't ever had the feeling of being overwhelmed by email or having any sort of management problem with email. Still, I'm sure there are people who do have that problem and find Mann's writings helpful. I remember a guy back in college who swore by this inbox zero stuff. (I also remember having exchanges with him like: "That info you need is in the email I sent you a few days ago." "Uh, could you resend that? I delete all my email.")
I'll see if I can find the time and attention to check out the time and attention video. I would have strongly preferred text, though. Watching 80 minute lectures is not something I can always easily arrange.
Replies from: matt↑ comment by matt · 2010-09-13T22:08:06.601Z · LW(p) · GW(p)
I remember a guy back in college who swore by this inbox zero stuff. (I also remember having exchanges with him like: "That info you need is in the email I sent you a few days ago." "Uh, could you resend that? I delete all my email.")
Mann (after David Allen) recommends processing your email, then moving it out of your inbox to the place it belongs. He does not recommend deleting emails you have not finished with yet.
Replies from: Apprentice↑ comment by Apprentice · 2010-09-13T22:28:13.639Z · LW(p) · GW(p)
Mann has post titles like Inbox Zero: Delete, delete, delete - my friend took that to heart. I'm personally never 'finished with' an email in the sense that I'm confident that I'll never ever want to look at it again. I search through my email archives all the time.
Admittedly, Mann, in that article, says that he archives his mail and doesn't delete it - but he presents that as a "big chicken" option and a couple of paragraphs up he's lambasting "holding" folders.
Anyway, I've got nothing in particular against Mann - I just don't find what he's saying useful or fun (I tried the recommended video but 10 minutes in I turned it off, he didn't seem to be saying anything interesting I hadn't heard before) while I do find LessWrong frequently useful or fun.
↑ comment by J. Benjamin (j-benjamin) · 2022-10-03T20:11:04.093Z · LW(p) · GW(p)
"frustrated by all the comments about the fact that we're just barely intelligent"
From "frustrated" to hinting at your own take just six words later
↑ comment by DSimon · 2010-09-10T20:48:25.466Z · LW(p) · GW(p)
So now you have a highly-voted comment which contains no solutions to the problem but only a criticism of how many highly-voted comments here contain no solutions but only criticisms?
I'm not saying that pointing out that something is wrong without proposing an alternate solution is necessarily a bad idea. In fact, I think it can often be helpful, and I think the specific complaint your comment makes is a good one.
But, I also think that your statement isn't self-consistent. If you only value comments that propose solutions, then propose a solution!
Replies from: patrissimo↑ comment by patrissimo · 2010-09-12T04:16:48.838Z · LW(p) · GW(p)
I implied solutions. Like: people who want to get more rational should go read self-help / life-hacking books instead of LW. And if LW wants to be more useful, it should become more like the self-help and life-hacking community - focused on practical changes one can make in one's own life, explicit exercises for increasing rationality, groups that work together in person to provide feedback, monitor performance, provide social motivation, etc.
comment by fhe · 2010-09-10T11:08:37.880Z · LW(p) · GW(p)
I can think of at least 3 ways that people fail to make strategic, effective decisions.
1. (As the above post pointed out) it's difficult to analyze options (or even to come up with some of them), for any number of reasons: too many of them (and too little time), lack of information, unforeseeable secondary consequences, etc. One can do one's best in the most rational fashion and still come out with a wrong choice. That's unfortunate, but if this is the only kind of mistake I am making, I am not too worried. It's a matter of learning better heuristics, building better models, gathering more data... or, in the limit, admitting that there's a limit to how far human intelligence and limited time/resources can go, even when correctly applied to problems.
2. A second, more worrisome, mistake is not even realizing that one can step out of one's immediate reactions, stop whatever one is doing, and think about the rationality of it and the alternatives. This mistake differs from (1). As a hypothetical example, suppose the wannabe comedian generated a list of things he could do, and decided to watch the Garfield cartoon. His choice might be wrong, but it's a conscious, deliberate choice that he made. This is a mistake of type (1).
Suppose however, the Garfield idea was the first thing that came to his mind, and after 3 months he was still at it, never stopping to question his own logic. This is mistake of type (2).
Type (2) is more worrisome, because there doesn't seem to be a reliable way that, left alone, one can break out of it. Douglas Hofstadter (of GEB fame) invented a word, "sphexishness", which I think describes this vividly. It's a wonderful label, and I use it to catch myself in the act. Hofstadter coined the word from the sphex's (digger wasp's) inability to break out of its fixed egg-laying routine when disturbed by a human. Hofstadter gave a spectrum of sphexish behaviors, from a stuck music record to teenagers addicted to video games to mathematicians applying the same trick to new discoveries. (Hofstadter, Metamagical Themas, "On the Seeming Paradox of Mechanizing Creativity".)
A lot of the 'unstrategic' decisions people make smell of sphexishness. ('Decision' here is a misnomer, as it's a lack of conscious decision that leads them to take ill-effective actions.)
How do you correct mistakes of such a type? It requires self-awareness. Some kind of an interrupt to break one out of a loop. The ability to spot patterns in unexpected places. Ways to help yourself: hang out with intelligent, observant people (who would do you the favor to point it out for you; return the favor when you see others trapped in such a behavior). Try to develop a mental habit of self-watching.
3. There is yet a third way that people don't do what's best for them: unlike in (1) and (2), they know what they should do, but just can't bring themselves to do it. Take the aspiring comedian example again. Does he really think watching Garfield is the best thing to do? I doubt it. He might know that going to an open-mic event is better learning, but it's so painful (the anxiety of first-time performers, fear of failure) that he procrastinates -- and in the worst way too, by doing something that seems like progress (so he doesn't feel guilty about it) but is actually very ineffective. (The irony is that the mind is actually doing the rational thing, but on a small scale: pain avoidance. On the larger scale, of course, this is detrimental to individual survival, hence irrational.)
This is a situation where the best choice is not hard to figure out, but is so difficult (often the difficulty is psychological, but difficult nonetheless) that the mind avoids it. The solution seems to be to trick the mind into undertaking it. E.g., some people avoid thinking about taking on a large project (because it would be overwhelming), but work on small pieces of it until they build up momentum (in the form of confidence, or having made too much investment to turn back, or having expectations placed on them...).
I suspect type (3) exists because rationality is a recently evolved phenomenon. Our psychology is still by and large that of an unconscious, reactive animal. Rationality and consciousness have to fight every step of the way against hundreds of thousands of years (much longer if you count the time when we were fish, and even before that) of evolved behaviors that were once useful and are hard-wired.
Yet therein lies hope too. If we can find the right tricks, push the primitive buttons, we can get such amazing, barbaric, uncontrollable motivation and energy out of ourselves. The buttons might be designed for something else, but our intelligence can use them to achieve what we know is good for us. The image is using sex to encourage people to learn and act rationally (I have no idea how that might work). But the hope is that, consciousness triumphs over the lizard brain in us.
Replies from: LukeStebbing, SystemsGuy, Apprentice, undermind↑ comment by Luke Stebbing (LukeStebbing) · 2010-09-12T20:39:12.263Z · LW(p) · GW(p)
A few years ago, Paul Graham wrote an essay[1] about type (3) failures which he referred to as type-B procrastination. I've found that just having a label helps me avoid or reduce the effect, e.g. "I could be productive and creative right now instead of wasting my time on type-B procrastination" or "I will give myself exactly this much type-B procrastination as a reward for good behavior, and then I will stop."
(Embarrassing aside: I hadn't looked at the essay for several years and only now realized that I've been mentally calling it type-A procrastination this whole time.)
EDIT: The essay goes on to link type-C procrastination with doing the impossible, yielding a nice example of how I-rationality and self-help are linked.
[1] Paul Graham, Good and Bad Procrastination
↑ comment by SystemsGuy · 2014-11-25T19:18:43.023Z · LW(p) · GW(p)
Once I held passing interest in Mensa, thinking that an org of super-smart people would surely self-organize to impact the world (positively perhaps, but taking it over as a gameboard for the new uberkind would work too). I was disappointed to learn that mostly Mensa does little, and when they get together in meatspace it is for social mixers and such. I also looked at Technocracy, which seemed like a reasonable idea, and that was different but no better.
Now I'm a few decades into my tech career, and I have learned that most technical problems are really people problems in disguise; solving the organizational and motivational aspects is critical to every endeavor, and is essentially my full-time job. What smoker or obese person or spendthrift isn't a Type 3, above? Who doesn't get absorbed into their life with some tunnel vision and make type 2 mistakes? Who, as a manager, hasn't had to knowingly make a decision without sufficient information? I know I have audibly said, "We can't afford to be indecisive, but we can afford to be wrong," after making such decisions, and I mean it.
Reading some of these key posts, though, points out part of the problem faced in this thread: we're trying to operate at higher levels of action without clear connections and action at lower levels. http://lesswrong.com/lw/58g/levels_of_action/
We have a forum for level 3+ thinking, without clear connections to level 1-3 action. The most natural, if not easy, step would be to align as a group in a fashion to impact other policy-making organizations. To me, we are perfecting a box of tools that few are using; we should endeavor to have ways to try them out and hone the cutting edges, and work then to go perform. A dojo approach helps with this by making it personal, but I'm not sure it is sufficient nor necessary, and it is small-scale and from my newbie perspective lacking shared direction.
Take dieting, for a counter-example: I can apply rationality and Bayesian thinking to my dietary choices. I recall listening to 4-4-3-2 on Saturday morning cartoons, and I believed every word. I read about the perils of meats and fat, and the benefits of vegetable oils and margarine. I heard from the American Heart Association to consume much less fat and trade it out for carbs. I learned from the Diabetes Association to avoid simple carbs and use artificial sweeteners. Now I've learned not to blindly trust governments and industries, and have combined personal experience, reading, and internet searching to gain a broader viewpoint that does not agree with any of the above! Much of this research is a sifting and sorting exercise at levels 2-4, but with readily available empirical Level 1 options, as I can try out promising hypotheses on myself. As I see what works, and what doesn't, I can adapt my thinking and research. Anybody else can too.
Would a self-help group assist my progress? Well, an accountability group helps, but it isn't necessary. Does it help to "work harder" at level 1 alone? No....key improvements for me have come with improving my habits and managing desire, and then improving how I go about improving those. Does it help to have others assisting at level 3 and up? To an extent, it is good to share via e-mail and anecdote personal experiences, books, and thoughts.
The easy part is the vision, though -- I want to be healthier, lighter, stronger, and live longer. Seems pretty clear and measurable -- weight, blood pressure, cholesterol, 1-mile run time, bench-press pounds.
So what is the vision here? What are our relevant and empirically measurable goals?
↑ comment by Apprentice · 2010-09-10T19:37:08.827Z · LW(p) · GW(p)
Good stuff. Would you consider turning it into a top level post?
Replies from: fhe↑ comment by fhe · 2010-09-11T00:59:32.395Z · LW(p) · GW(p)
Thanks. How do I turn it into a top-level post? I walked around the site and didn't see a button that lets me do that. I am new to this forum (in fact I registered to reply to the original post, which I saw on some other site).
Replies from: Perplexed, CronoDAS↑ comment by Perplexed · 2010-09-12T01:11:32.119Z · LW(p) · GW(p)
Once you reach 20 points of karma, there will be a "Create new article" button in the upper right - same general area as your name and current karma score. To "turn your comment into a top level post" you mainly need to copy and paste, but you should also include some introductory context information, including a link to the top-level-article that inspired yours.
Replies from: Cyan↑ comment by CronoDAS · 2010-09-11T01:05:10.374Z · LW(p) · GW(p)
You need more karma before you can make a top-level post. (I think you need 20, unless it's been changed since the site started.)
Replies from: komponisto↑ comment by komponisto · 2010-09-12T00:54:05.027Z · LW(p) · GW(p)
It was changed to 50 for a short while, then changed back to 20.
↑ comment by undermind · 2011-04-14T23:20:15.693Z · LW(p) · GW(p)
The image is using sex to encourage people to learn and act rationally (I have no idea how that might work).
There's a grand tradition of women withholding sex for political reasons (usually to end a war), starting with Lysistrata. People resurrect this idea from time to time, and often achieve quite remarkable results.
Replies from: Fleisch↑ comment by Fleisch · 2011-12-12T09:58:15.251Z · LW(p) · GW(p)
As an aside: The interesting thing to remember about Lysistrata is that it's originally intended as humorous, as the idea that women could withhold sex, especially withhold it better than men, was hilarious at the time. Not because they weren't allowed, but because they were the horny sex back then.
comment by wakingnow · 2010-09-08T23:36:08.770Z · LW(p) · GW(p)
There's an important piece missing from the article's analysis.
As humans we are inherently social in nature.
We delegate a lot of our reasoning to the wider social group around us. This is more energy efficient.
The article asks why many people go through long training programs "to make money" without spending a few hours doing salary comparisons ahead of time. We do long training programs (e.g., college degrees) mostly because they are socially esteemed. This social esteem serves as a proxy for their worth, and it's typically information that has a lower personal cost to obtain than going and looking at salary surveys.
The reason we do so little systematic testing for ourselves is that we have trusted our wider social grouping to do it for us. I don't find a rational argument about the bungee-jump mechanism nearly as compelling evidence of safety as talking with an enthusiastic friend who has done it 20 times. If I were to learn about my car's braking mechanism in sufficient detail to convince myself of why it works, I would never go anywhere. Instead, I see others whom I trust driving the car, and 'delegate' to them.
This is simply a heuristic. It doesn't always work. Just because all my friends smoke doesn't mean smoking isn't dangerous (the social influence here is well documented). But the vast majority of the time it's a much more cost/information-efficient way of doing things.
Any analysis of our behaviour in such circumstances must factor in our social aspects, and the fact we don't act as individuals or reach decisions in a vacuum.
Replies from: AnnaSalamon↑ comment by AnnaSalamon · 2010-09-10T21:31:07.273Z · LW(p) · GW(p)
We delegate a lot of our reasoning to the wider social group around us.... the vast majority of the time its a much more cost/information efficient way of doing things.
This strikes me as half right. Specifically: yes, we often use social indicators to take the place of personal reasoning. And, yes, these indicators are better than nothing. But given the rapid pace of change (relative to the EEA) in e.g. what jobs pay well, what we know about how to avoid accidents, what skills can boost your productivity (e.g., typing on computers is now important, and, thus, it's important to learn more than two-fingered typing), etc., and the fact that social recommendations update fairly slowly, it seems that most on this site can do far better by adding some internet research and conscious thought to standard socially recommended productivity heuristics.
comment by Paul Crowley (ciphergoth) · 2010-09-08T11:53:49.659Z · LW(p) · GW(p)
Most basically, because humans are only just on the cusp of general intelligence.
This is a point I've been thinking about a lot recently: the time between the evolution of a species whose smartest members crossed the finish line into general intelligence, and today, is a blink of an eye in evolutionary terms, and therefore we should expect to find that we are roughly as stupid as it's possible to be while still having some of us smart enough to transform the world. You refer to it here in a way that suggests this is a well-understood point. Is it discussed more explicitly elsewhere?
It occurs to me that this is one reason we suffer from the "parochial intelligence scale" Eliezer complains about - that the difference in effect between being just barely at the point of having general intelligence and being slightly better than that is a lot, even if the difference in absolute capacity is slight.
I wonder how easy it would be to incorporate this point into my spiel for newcomers about why you should worry about AGI - what inferential distances am I missing?
Replies from: timtyler, John_Maxwell_IV, jacob_cannell↑ comment by timtyler · 2010-09-08T12:51:58.480Z · LW(p) · GW(p)
Replies from: Jonathan_Graehl, NancyLebovitz, FrF↑ comment by Jonathan_Graehl · 2010-09-08T23:11:29.235Z · LW(p) · GW(p)
I watched the end of this video and liked it quite a lot. Pretty good job, Eliezer. And thanks for the link.
And wow, the Q&A at the end of the talk has some tragically confused questions. And I'm sure these are people who consider themselves intelligent. Very amusing, and maddening.
↑ comment by NancyLebovitz · 2010-09-08T18:36:15.037Z · LW(p) · GW(p)
Selection pressure might be even weaker a lot of the time than "a 3% fitness advantage has a 6% chance of becoming universal in the gene pool" suggests, or at least it's more complicated -- a lot of changes don't offer a stable advantage over long periods.
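(The 3%-to-6% figure is Haldane's classic approximation: a beneficial mutation with selective advantage s fixes with probability roughly 2s. A quick Wright-Fisher sketch reproduces it; note this is a haploid toy model with illustrative parameters of my own choosing, not anything from population-genetics literature beyond the standard textbook setup.)

```python
import random

def fixation_probability(N=100, s=0.03, trials=1500, seed=1):
    """Fraction of runs in which a single copy of a beneficial allele
    (selective advantage s) spreads to the whole population of size N."""
    rng = random.Random(seed)
    fixed = 0
    for _ in range(trials):
        i = 1  # one mutant copy in a population of N
        while 0 < i < N:
            # expected mutant frequency after selection this generation
            p = i * (1 + s) / (i * (1 + s) + (N - i))
            # binomial resampling: N independent draws form the next generation
            i = sum(1 for _ in range(N) if rng.random() < p)
        fixed += (i == N)
    return fixed / trials

print(fixation_probability())  # lands near 2*s = 0.06; most mutants are simply lost
```

The striking part, as the comment says, is how weak the pressure is: even a 3% advantage is lost about 94% of the time.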
I think natural selection and human intelligence at this point can't really be compared for strength. Each is doing things that the other can't-- afaik, we don't know how to deliberately create organisms which can outcompete their wild conspecifics. (Or is it just that there's no reason to try and/or we have too much sense to do the experiments?)
And we certainly don't know how to deliberately design a creature which could thrive in the wild, though some animals which have been selectively bred for human purposes do well as ferals.
This point may be a nitpick since it doesn't address how far human intelligence can go.
Another example of attribution error: Why would Gimli think that Galadriel is beautiful?
Eliezer made a very interesting claim-- that current hardware is sufficient for AI. Details?
Replies from: Kaj_Sotala, CronoDAS, timtyler, Shalmanese, JamesAndrix, wnewman, Nisan↑ comment by Kaj_Sotala · 2010-09-08T22:08:03.349Z · LW(p) · GW(p)
Another example of attribution error: Why would Gimli think that Galadriel is beautiful?
To be fair, the races of Middle-Earth weren't created by evolution, so the criticism isn't fully valid. Ilúvatar gave the dwarves spirits but set them to sleep so that they wouldn't awaken before the elves. It's not unreasonable to assume that as he did so, he also made them admire elven beauty.
↑ comment by CronoDAS · 2010-09-08T19:18:19.855Z · LW(p) · GW(p)
Another example of attribution error: Why would Gimli think that Galadriel is beautiful?
Why do humans think dolphins are beautiful?
Replies from: NancyLebovitz↑ comment by NancyLebovitz · 2010-09-08T19:21:41.507Z · LW(p) · GW(p)
Is a human likely to think that one specific dolphin is so beautiful as to be almost worth fighting a duel about it being the most beautiful?
Replies from: Kaj_Sotala, Jonathan_Graehl↑ comment by Kaj_Sotala · 2010-09-08T21:12:32.007Z · LW(p) · GW(p)
Well, it's always possible that Gimli was a zoophile.
Replies from: jmmcd↑ comment by Jonathan_Graehl · 2010-09-08T22:54:58.311Z · LW(p) · GW(p)
I'm a human and can easily imagine being attracted to Galadriel :) I can't speak for dwarves.
Replies from: JohannesDahlstrom↑ comment by JohannesDahlstrom · 2010-09-09T12:36:13.125Z · LW(p) · GW(p)
Well, elves were intelligently designed to specifically be attractive to humans...
↑ comment by timtyler · 2010-09-08T22:18:26.351Z · LW(p) · GW(p)
Eliezer made a very interesting claim-- that current hardware is sufficient for AI. Details?
Most who think Moravec and Kurzweil got this about right believe that supercomputer hardware could run something similar to a human brain today - if you had the dollars, were prepared for it to run a bit slow, and had the right software.
↑ comment by Shalmanese · 2010-09-09T04:14:02.366Z · LW(p) · GW(p)
"Another example of attribution error: Why would Gimli think that Galadriel is beautiful?"
A waist:hip:thigh ratio between 0.6 & 0.8 & a highly symmetric face.
Replies from: wedrifid↑ comment by JamesAndrix · 2010-09-09T01:29:50.330Z · LW(p) · GW(p)
Another example of attribution error: Why would Gimli think that Galadriel is beautiful?
If I'm not mistaken, all those races were created, so they could reasonably have very similar standards of beauty, and the elves might have been created to match that.
Replies from: NancyLebovitz↑ comment by NancyLebovitz · 2010-09-09T04:19:25.441Z · LW(p) · GW(p)
[From Wikipedia](http://en.wikipedia.org/wiki/Dwarf_%28Middle-earth%29):
In The Lord of the Rings Tolkien writes that they breed slowly, for no more than a third of them are female, and not all marry; also, female Dwarves look and sound (and dress, if journeying — which is rare) so alike to Dwarf-males that other folk cannot distinguish them, and thus others wrongly believe Dwarves grow out of stone. Tolkien names only one female, Dís. In The War of the Jewels Tolkien says both males and females have beards.[18]
On the other hand, I suppose it's possible that if humans find Elves that much more beautiful than humans, maybe Dwarves would be affected the same way, though it seems less likely for them.
Replies from: JamesAndrix, dclayh↑ comment by JamesAndrix · 2010-09-09T06:01:56.064Z · LW(p) · GW(p)
Also, perhaps dwarves don't have their beauty-sense linked to their mating selection. They appreciate elves as beautiful but something else as sexy.
↑ comment by dclayh · 2010-09-09T04:48:48.913Z · LW(p) · GW(p)
Yeah, as JamesAndrix alludes to (warning: extreme geekery), the Dwarves were created by Aulë (one of the Valar (Gods)) because he was impatient for the Firstborn Children of Iluvatar (i.e., the Elves) to awaken. So you might call the Dwarves Aulë's attempt at creating the Elves; at least, he knew what the Elves would look like (from the Great Song), so it's pretty plausible that he impressed in the Dwarves an aesthetic sense which would rank Elves very highly.
Replies from: gnovos↑ comment by gnovos · 2010-09-09T15:08:47.646Z · LW(p) · GW(p)
Yes this is definitively correct. Also, it's a world with magic rings and dragons people.
Replies from: NancyLebovitz↑ comment by NancyLebovitz · 2010-09-09T16:26:28.208Z · LW(p) · GW(p)
There are different kinds of plausibility. There's plausibility for fiction, and there's plausibility for culture. Both pull in the same direction for LOTR to have Absolute Beauty, which by some odd coincidence, is a good match for what most of its readers think is beautiful.
What might break your suspension of disbelief? The usual BEM behavior would probably mean that the Watcher at the Gate preferentially grabbing Galadriel if she were available would seem entirely reasonable, but what about Treebeard? Shelob?
Replies from: Kaj_Sotala↑ comment by Kaj_Sotala · 2010-09-10T08:36:21.546Z · LW(p) · GW(p)
for LOTR to have Absolute Beauty, which by some odd coincidence, is a good match for what most of its readers think is beautiful.
Particularly when referring to the movie versions, you could consider this simply a storytelling device, similar to all the characters speaking English even in movies set in non-English speaking countries (or planets). It's not that the Absolute Beauty of Middle-Earth is necessarily a good match for our beauty standards, it's that it makes it easier for us to relate to the characters and experience what they're feeling.
↑ comment by wnewman · 2010-09-09T15:11:21.129Z · LW(p) · GW(p)
You write "Eliezer made a very interesting claim-- that current hardware is sufficient for AI. Details?"
I don't know what argument Eliezer would've been using to reach that conclusion, but it's the kind of conclusion people typically reach if they do a Fermi estimate. E.g., take some bit of nervous tissue whose function seems to be pretty well understood, like the early visual preprocessing (edge detection, motion detection...) in the retina. Now estimate how much it would cost to build conventional silicon computer hardware performing the same operations; then scale that estimated cost up in proportion to the ratio of the brain's volume of nervous tissue to the retina's.
See http://boingboing.net/2009/02/10/hans-moravecs-slide.html for the conclusion of one popular version of this kind of analysis. I'm pretty sure that the analysis behind that slide is in at least one of Moravec's books (where the slide, or something similar to it, appears as an illustration), but I don't know offhand which book.
The analysis could be grossly wrong if the foundations are wrong, perhaps because key neurons are doing much more than we think. E.g., if some kind of neuron is storing a huge number of memory bits per neuron (which I doubt: admittedly there is no fundamental reason I know of that this couldn't be true, but there's also no evidence for it that I know of) or if neurons are doing quantum calculation (which seems exceedingly unlikely to me; and it is also unclear that quantum calculation can even help much with general intelligence, as opposed to helping with a few special classes of problems related to number theory). I don't know any particularly likely way for the foundations to be grossly wrong, though, so the conclusions seem pretty reasonable to me.
Note also that suitably specialized computer hardware tends to have something like an order of magnitude better price/performance than the general-purpose computer systems which appear on the graph. (E.g., it is much more cost-effective to render computer graphics using a specialized graphics board, rather than using software running on a general-purpose computer board.)
I find this line of argument pretty convincing, so I think it's a pretty good bet that given the software, current technology could build human-comparable AI hardware in quantity 100 for less than a million dollars per AI; and that if the figure isn't yet as low as one hundred thousand dollars per AI, it will be that low very soon.
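In code, the estimate is just a few multiplications. Every constant below is my own illustrative placeholder, not a figure from Moravec's analysis; the point is only to show the shape of the argument and how the conclusion scales with the assumptions.

```python
# Moravec-style Fermi estimate of human-equivalent hardware cost.
# All constants are illustrative assumptions, not measurements.
retina_ops = 1e9          # assumed ops/s for silicon replicating retinal preprocessing
brain_to_retina = 1e5     # assumed volume ratio, whole brain : retina
brain_ops = retina_ops * brain_to_retina   # ~1e14 ops/s for a whole brain

ops_per_dollar = 1e8      # assumed sustained ops/s per hardware dollar (general-purpose, ca. 2010)
cost_general = brain_ops / ops_per_dollar  # dollars per brain-equivalent
cost_special = cost_general / 10           # specialized hardware ~10x cheaper, per the note above

print(cost_general, cost_special)
```

Under these (disputable) assumptions the general-purpose figure lands at about a million dollars per brain-equivalent and the specialized one at about a hundred thousand - the same ballpark as the claim above. The estimate stands or falls with the retina-equivalence numbers.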
Replies from: NancyLebovitz↑ comment by NancyLebovitz · 2010-09-09T16:27:51.917Z · LW(p) · GW(p)
Thanks. I'm not sure how much complexity is added by the dendrites making new connections.
↑ comment by Nisan · 2010-09-09T18:51:11.535Z · LW(p) · GW(p)
Why would Gimli think that Galadriel is beautiful?
The dwarves were intelligently designed by some god or other. That a dwarf can find an elf more beautiful than dwarves could be an unfortunate design flaw.
(Elves were also intelligently designed, but their creator was perhaps more intelligent.)
Edit: The creator-god of dwarves probably imbued them with some of his own sense of beauty.
↑ comment by FrF · 2010-09-09T16:25:26.863Z · LW(p) · GW(p)
With all respect to Eliezer, I think the gravely anachronistic term "village idiot" shouldn't be used anymore. I've wanted to say that almost every time I see the intelligence-scale graphic in his talks.
Replies from: wnoise, ciphergoth↑ comment by wnoise · 2010-09-09T16:56:08.826Z · LW(p) · GW(p)
Why do you think the term "village idiot" is "gravely anachronistic"? It's part of an idiom. "Idiot" was briefly used as a quasi-scientific label for certain range of IQs, and that usage is certainly anachronistic, but "idiot" had meaning before that, and continues to. The same is true for "village idiot".
Replies from: FrF↑ comment by FrF · 2010-09-09T18:25:15.955Z · LW(p) · GW(p)
You're right, wnoise, "village idiot" is part of an idiom, but one I don't like at all, and I don't think I'm alone in this regard.
I should have put my objection as "'Village idiot' is gravely anachronistic unless you want to be insensitive by subsuming a plethora of medical conditions and social determinants under a dated, derogatory term for mentally disabled people."
This may sound like nit-picking but obviously said intelligence graph is an important item in SIAI's symbolic tool kit and therefore every detail should be right. When I see the graph, I'm always thinking: Please, "for the love of cute kittens", change the "village idiot"!
Replies from: Emile↑ comment by Emile · 2010-09-09T18:54:34.217Z · LW(p) · GW(p)
For what it's worth, I don't find anything wrong with the term "village idiot".
However, from previous discussions here, I think I might be on the low side of the community for my preference for "lengths to which Eliezer and the SIAI should go to accommodate the sensibilities of idiots" - there are more important things to do, and a never-ending supply of idiots.
Still, maybe it should be changed. Just because it doesn't offend me doesn't mean it won't offend somebody reasonable.
↑ comment by Paul Crowley (ciphergoth) · 2010-09-09T16:56:42.339Z · LW(p) · GW(p)
In conversation with friends I tend to use George W Bush as the other endpoint - a dig at those hated Greens but it's uncontentious here in the UK, and if it helps keep people listening (which it seems to) it's worth it.
Replies from: mattnewport, Emile↑ comment by mattnewport · 2010-09-09T17:45:50.504Z · LW(p) · GW(p)
This seems a bad example to use given the context. If you are trying to convince people that greater than human intelligence will give AIs an insurmountable advantage over even the smartest humans then drawing attention to a supposed idiot who became the most powerful man in the world for 8 years raises the question of whether you either don't know what intelligence is or vastly overestimate its ability to grant real world power.
Replies from: ciphergoth↑ comment by Paul Crowley (ciphergoth) · 2010-09-12T11:25:55.511Z · LW(p) · GW(p)
For the avoidance of doubt, it seems very unlikely in practice that Bush doesn't have above-average intelligence.
↑ comment by Emile · 2010-09-09T18:59:15.384Z · LW(p) · GW(p)
Wikipedia gives him an estimated IQ of 125, which may be a wee bit high for the low end of the IQ distribution. Still, if that's the example that requires the least explanation in practice, why not.
Maybe Forrest Gump would work as well?
Replies from: ciphergoth↑ comment by Paul Crowley (ciphergoth) · 2010-09-19T07:59:48.703Z · LW(p) · GW(p)
My most recent use of this example got the response George W Bush Was Not Stupid.
↑ comment by John_Maxwell (John_Maxwell_IV) · 2010-09-09T06:29:58.438Z · LW(p) · GW(p)
This is a point I've been thinking about a lot recently - that the time between the evolution of a species whose smartest members crossed the finish line into general intelligence, and today, is a blink of an eye in evolutionary terms, and therefore we should expect to find that we are roughly as stupid as it's possible to be and still have some of us smart enough to transform the world. You refer to it here in a way that suggests this is a well-understood point - is this point discussed more explicitly elsewhere?
OK, but if you buy the idea that environment has a substantial impact on intelligence, which I do, then it seems that the average modern human would have passed the finish line by a somewhat substantial amount.
Really there is no finish line for general intelligence--intelligence is a continuous parameter. Chimpanzees and other apes do experience cultural evolution, even though they're substantially stupider than us.
"I'm just about as stupid as a mind can get while still being able to grasp x. Therefore it's likely that I don't fully understand its ramifications."
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2010-09-09T06:33:35.652Z · LW(p) · GW(p)
Really there is no finish line for general intelligence--intelligence is a continuous parameter. Chimpanzees and other apes do experience cultural evolution, even though they're substantially stupider than us.
You are equivocating "cultural evolution". If you fix the genetic composition of other currently existing apes, they will never build an open-ended technological civilization.
Replies from: John_Maxwell_IV↑ comment by John_Maxwell (John_Maxwell_IV) · 2010-09-09T07:07:39.167Z · LW(p) · GW(p)
Technological progress makes the average person smarter through environmental improvements, and technological progress is dependent on a very small number of people in society. Let's say the human race had gotten lucky very early on in its history and had a streak of accidental geniuses who were totally unrepresentative of the population as a whole. If those geniuses improved the race's technology substantially, that would improve the environment, cause everyone to become smarter due to environmental factors, and bootstrap the race out of their genetic deficits.
Replies from: Vladimir_Nesov, timtyler, wnoise↑ comment by Vladimir_Nesov · 2010-09-09T13:03:05.911Z · LW(p) · GW(p)
I don't see how this note is relevant to either your original argument, or my comment on it.
Replies from: John_Maxwell_IV↑ comment by John_Maxwell (John_Maxwell_IV) · 2010-09-10T08:02:42.894Z · LW(p) · GW(p)
It's basically a new argument. Would you prefer it if I explicitly demarcated that in the future? I briefly started writing out some sort of concession or disclaimer but it seemed like noise.
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2010-09-10T17:34:28.556Z · LW(p) · GW(p)
The problem here is that it's not clear what that comment is an argument for, and so the first thing to assume is that it's supposed to be an argument about the discussion it was made in reply to. It's still unclear to me what you argued in that last comment (and why).
Replies from: John_Maxwell_IV↑ comment by John_Maxwell (John_Maxwell_IV) · 2010-09-10T19:43:05.909Z · LW(p) · GW(p)
Trying to argue against a magical level of average societal genetic intelligence necessary for technological takeoff.
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2010-09-11T06:30:38.671Z · LW(p) · GW(p)
You can't get geniuses who are "totally unrepresentative" in the relevant sense, since we are still the same species, with the same mind design.
↑ comment by timtyler · 2010-09-09T07:30:08.382Z · LW(p) · GW(p)
So: you are arguing that the point where intelligent design "takes off" is a bit fuzzy - due to contingent factors - chance? That sounds reasonable.
There is also a case to be made that the supposed "point" is tricky to pin down. It was obviously around or before the 10,000-year-old agricultural revolution - but a case can be made for tracing it back further - to the origin of spoken language, gestural language, or perhaps to other memetic landmarks.
Replies from: wnewman↑ comment by wnewman · 2010-09-09T15:46:38.382Z · LW(p) · GW(p)
It seems to me that once our ancestors' tools got good enough that their reproductive fitness was qualitatively affected by their toolmaking/toolusing capabilities (defining "tools" broadly enough to include things like weapons, fire, and clothing), they were on a steep slippery slope to the present day, so that it would take a dinosaur-killer-level contingent event to get them off it. (Language and such helps a lot too, but as they say, language and a gun will get you more than language alone. :-) Starting to slide down that slope is one kind of turning point, but it might be hard to define that "point" with a standard deviation smaller than one hundred thousand years.
The takeoff to modern science and the industrial revolution is another turning point. Among other things related to this thread, it seems to me that this takeoff is when the heuristic of not thinking about grand strategy at all seriously and instead just doing what everyone has "always" done loses some of its value, because things start changing fast enough that most people's strategies can be expected to be seriously out of date. That turning point seems to me to have been driven by arrival at some combination of sufficient individual human capabilities, sufficient population density, and sufficient communications techniques (esp. paper and printing) which serve as force multipliers for population density. Again it's hard to define precisely, both in terms of exact date of reaching sufficiency and in terms of quite how much is sufficient; the Chinese ca. 1200AD and the societies around the Mediterranean ca. 1AD seem like they had enough that you wouldn't've needed enormous differences in contingent factors to've given the takeoff to them instead of to the Atlantic trading community ca. 1700.
↑ comment by jacob_cannell · 2010-09-08T22:55:49.417Z · LW(p) · GW(p)
Most basically, because humans are only just on the cusp of general intelligence.
This is a point I've been thinking about a lot recently - that the time between the evolution of a species whose smartest members crossed the finish line into general intelligence, and today, is a blink of an eye in evolutionary terms, and therefore we should expect to find that we are roughly as stupid as it's possible to be and still have some of us smart enough to transform the world.
This point of view drastically oversimplifies intelligence.
We are not 'just on the cusp' of general intelligence - if there was such a cusp it was hundreds of thousands of years ago. We are far far into an exponential expansion of general intelligence, but it has little to do with genetics.
Elephants and whales have larger brains than even our brainiest Einsteins - with more neurons and interconnects, yet the typical human is vastly more intelligent than any animal.
And likewise, if Einstein had been a feral child raised by wolves, he would have been mentally retarded in terms of human intelligence.
Neanderthals had larger brains than us - so evolution actually tried that direction, but it ultimately was largely a dead end. We are probably near some asymptotic limit of brain size. In three very separate lineages - elephant, whale and hominid - brains reached a limit around 200 billion neurons or so and then petered out. In the hominid case it actually receded from the Neanderthal peak with homo sapiens having around 100 billion neurons.
Genetics can surely limit maximum obtainable intelligence, but it's principally a memetic phenomenon.
Replies from: gwern↑ comment by gwern · 2014-07-25T17:55:37.364Z · LW(p) · GW(p)
Elephants and whales have larger brains than even our brainiest Einsteins - with more neurons and interconnects, yet the typical human is vastly more intelligent than any animal.
Yes, because brain size does not equal neuron count; there are scaling laws at play, and not in the whales'/elephants' favor. On neurons, whales and elephants are much inferior to humans. Since it's neurons which compute, and not brain volume, the biological aspect is just fine; we would not expect a smaller number of neurons spread over a larger area (so, slower) to be smarter...
See https://pdf.yt/d/aF9jcFwWGn6c6I7O / https://www.dropbox.com/s/f9uc6eai9eaazko/1954-tower.pdf , http://changizi.com/diameter.pdf , http://onlinelibrary.wiley.com/doi/10.1002/ar.20404/full , http://www.pnas.org/content/early/2012/06/19/1201895109.full.pdf , https://en.wikipedia.org/wiki/List_of_animals_by_number_of_neurons#Whole_nervous_system
In three very separate lineages - elephant, whale and hominid - brains reached a limit around 200 billion neurons or so and then petered out. In the hominid case it actually receded from the Neanderthal peak with homo sapiens having around 100 billion neurons.
Cite for the 200b and 100b neuron claims? My understanding too was that H. sapiens is now thought to have more like 86b neurons & the 100b figure was a myth ( http://revistapesquisa.fapesp.br/en/2012/02/23/n%C3%BAmeros-em-revis%C3%A3o-3/ ), which indicates the imprecision even for creatures which are still around and easy to study...
Replies from: jacob_cannell, army1987↑ comment by jacob_cannell · 2014-09-14T05:11:49.372Z · LW(p) · GW(p)
Elephants and whales have larger brains than even our brainiest Einsteins - with more neurons and interconnects, yet the typical human is vastly more intelligent than any animal.
Yes, because brain size does not equal neuron count; there are scaling laws at play, and not in the whales'/elephants' favor.
Yes. - When I said 'large', I was talking about size in neurons, not physical size. Physical size, within bounds, is mostly irrelevant (although it does affect latency, of course).
On neurons, whales and elephants are much inferior to humans.
No - they really do have more neurons, ~257 billion in the elephant's case. [1] (2014)
Since it's neurons which compute, and not brain volume, the biological aspect is just fine; we would not expect a smaller number of neurons spread over a larger area (so, slower) to be smarter...
According to Google, an elephant brain is about 5 kg vs a human's 1.4 kg. So we have ~51 billion neurons per kg for the elephant vs roughly 60 to 75 billion per kg for the human. This is, by the way, a smaller difference than I would have expected.
The elephant's brain has a larger cerebellum than us but smaller cortex: about 5 billion neurons vs our 15 billion ish. Interestingly the elephant cortex is also sparser while its cerebellum is denser, perhaps suggesting that we should look at more parameters, such as synapse density as well (because of course there are many tradeoffs in neural micro-circuits).
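A quick sanity check of the per-kg arithmetic above, using the thread's own round numbers (all approximate; published counts vary, and the 86-vs-100 billion question is exactly what gwern's cite is about):

```python
# Neuron-density check using the figures quoted in this thread.
elephant_neurons = 257e9       # total neurons, the count quoted above
elephant_brain_kg = 5.0
human_neurons = (86e9, 100e9)  # modern estimate vs the older textbook figure
human_brain_kg = 1.4

print(elephant_neurons / elephant_brain_kg / 1e9)          # ~51 billion per kg
print([n / human_brain_kg / 1e9 for n in human_neurons])   # ~61 and ~71 billion per kg
```

So on these numbers the density gap really is modest, as the comment says, even though the distribution (cortex vs cerebellum) differs sharply.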
Anyway the human cortex's 3x neuron count is a theory for our greater intelligence. But this by itself is insufficient:
- the elephant interacts with the world mainly through its trunk which is cerebellum controlled
- humans/primates use up a large chunk of their cortex for vision, the elephant much less so
- humans rely far more on their cortex for motor control, such that humans completely lacking a cerebellum are largely functional
Now - is having a larger cortex better for general intelligence than a larger cerebellum? - most likely. It appears to be a better hardware platform for unsupervised learning.
But again the key to intelligence is software - we are smart because of our ability to accumulate mental programs, exchange them, and pass them on to later generations. Our brain is unique mainly in that it was the first general platform for language, not because our brains are larger or have some special secret circuit sauce. (Which wouldn't make sense anyway - humans are recent and breed slowly; the key low-level circuit developments were already made many millions of years back in faster-breeding ancestor lineages.)
Cite for the 200b and 100b neuron claims?
See above for elephant neuron counts.
For humans I was probably just using wikipedia or this page based on older research.
↑ comment by A1987dM (army1987) · 2014-07-25T18:26:38.377Z · LW(p) · GW(p)
Elephants and whales have ... more neurons [than humans] ...
Yes, because ... whales and elephants [have fewer neurons than] humans.
[emphasis added]
Wait, what?
Replies from: gwern↑ comment by gwern · 2014-07-25T19:08:26.058Z · LW(p) · GW(p)
I think jacob_cannell is correct in that whales and elephants have larger brains, but that he's extrapolating incorrectly when he implies through the conjunction that larger brain size == more neurons and more interconnects; so I'm agreeing with the first part, but pointing out why the second does not logically follow and providing cites that density decreases with brain size & known neuron counts are lower than humans.
Replies from: jacob_cannell↑ comment by jacob_cannell · 2014-09-14T05:21:24.585Z · LW(p) · GW(p)
I don't always take the time to cite refs, but I should have been clearer that I was talking about elephant and whale brains as being larger in neuron counts.
"We are probably near some asymptotic limit of brain size. In three very separate lineages - elephant, whale and hominid - brains reached a limit around 200 billion neurons or so and then petered out."
Ever since early tool use and proto-language, scaling up the brain was advantageous for our hominid ancestors, and it in some sense even overscaled, such that we have birthing issues.
For big animals like elephants and whales especially, the costs for larger brains are very low. So the key question is then why aren't their brains bigger? Trillions of neurons would have almost no extra cost for a 100 ton monster like a blue whale, which is already the size of a hippo at birth.
But instead a blue whale has just on the order of 10^11 neurons, like us or elephants, even though its brain amounts to a minuscule 0.007% of its mass. The reasonable explanation: there is no advantage to further scaling - perhaps latency? Or, more likely, there are limits to what you can do with one set of largely serial IO interfaces. These are quick theories - I'm not claiming to know why - just that it's interesting.
comment by byrnema · 2010-09-08T14:14:52.654Z · LW(p) · GW(p)
I woke up this morning with a set of goals. After reading this post, my goals abruptly pivoted: I had a strong desire to compose a reply. I like this post and think it is an excellent and appropriate reply to Lionhearted's (also a nice post), and would have liked to proffer some different perspectives. Realizing that this was an exciting but transient passion, I didn't allow my goals to be updated and persisted in my previous plans. An hour or two into my morning's work, I finally recalled the motivation behind my original goals and was grateful. It took some time, though, before I felt emotionally that I had chosen the right set of goals for my morning. Working through those transient periods of no-emotional-reward is tough. You need to have faith in the goal decisions of previous selves, but not too much.
Replies from: byrnema, Jonathan_Graehl↑ comment by byrnema · 2010-09-09T17:38:54.982Z · LW(p) · GW(p)
I believe this comment is along the lines of what I would have written yesterday.
If you measure intelligence against the goals we haven’t met, we certainly come up short. However, zooming out to look at humanity as a whole, I am impressed by how productive we are. Huge cities, dozens of them, with gorgeous and functional buildings and everyone milling about being productive, all over the world. The infrastructure of our civilization is enormous. And all the art we output – books, movies, gardens. I think we’re amazingly successful at achieving some types of goals, when seen as a single complex system.
When you zoom in to the individual, I think it becomes more difficult to judge from among the small-scale effects if humans are meeting their goals. The problem of individual success is so complex not only because we have trouble achieving our goals, but because it is a much more difficult task to decide on appropriate goals, and distribute resources among them.
Whatever our goals are (x, y, z), our goal is rarely to “have x, no matter what”. There’s always a trade-off and a limit to the resources we’re willing to expend towards x. Several comments have already mentioned the cost considerations in decision-making about goals. In particular, it can be argued that considering resource costs, one might better pursue nothing than pursue sub-optimal goals; pursuing goals of unknown value sub-optimally may be a reasonable middle ground.
Choosing goals appropriately so as to not waste effort depends upon an environment we have limited information about. Unknown variables and chance play a very large role in whether you will be successful or not. Instead of choosing a goal and directly pursuing it, it can be wise to do nothing and wait for opportunities. In life philosophies, this is described as ‘not fighting the universe’ or ‘yang instead of yin’.
There is a mind-body ‘wholistic’ aspect to meeting our goals, which unfortunately gives the impression that success in meeting goals is a quality or a talent rather than rationality. Only certain goals can be straightforwardly achieved by designing and following a ‘plan’. I recently finished a terrific book and wondered how that book was written. I doubt the author himself knows. Certainly, there are ingredients: having something to say and recognizing an aptitude for writing, the discipline to keep a writing schedule, etc., but presumably many components of the author’s personality needed to come together to write that book, something that couldn’t be forced but which was permitted. This kind of success in life makes it very difficult to make a connection between ‘plans’ and ‘success’. I personally wasted a lot of mental energy as a child wondering why sometimes things seemed easy and sometimes they seemed hard, because I suspected fate or external intervention. There are many components of our personality we don’t seem to have control of, and the importance of integrating your personality behind a goal often eclipses the importance of having a rational plan. (The point I am making here echoes what was said in this thread.)
↑ comment by Jonathan_Graehl · 2010-09-08T22:45:17.726Z · LW(p) · GW(p)
Working through those transient periods of no-emotional-reward is tough. You need to have faith in the goal decisions of previous selves, but not too much.
Yes.
comment by orthonormal · 2010-09-09T22:11:48.817Z · LW(p) · GW(p)
The fact that we so blatantly fail to optimize for using reason to solve our problems, and so effortlessly use it to rationalize our actions, is another strong piece of evidence for the thesis that reasoning evolved primarily for arguing.
comment by Vladimir_Golovin · 2010-09-08T11:50:38.113Z · LW(p) · GW(p)
Do you agree with (a)-(g) above?
- (a) Yes. I have to do that consciously, verbally.
- (b) Same – I have to mentally talk with myself about this;
- (c) Thankfully, this one comes easy to me – I usually become genuinely interested in whatever I happen to be doing because I'm a damn perfectionist. This held true for all jobs I had during my career and all my past and current hobbies.
- (d) Same as (a) and (b) – I have to consciously gather such information. Thankfully, I usually become interested in the subject, provided that it aligns with my abilities and interests to at least some degree;
- (e) Speaking of "methods that aren’t habitual for us", I'm fascinated with the idea of Nakatomi space (not math), and I'd very much like to level up my own Nakatomi navigation abilities;
- (f) No opinion yet;
- (g) I sort of failed this one last time. I had a conjunction in my goal definition: "Build the best Widget on the planet AND have at least one million dollars per year in profit". The overlap between the two subgoals turned out to be small. Plus, the goal had an internal conflict: I wasn't really ready to sacrifice the perfection of the Widget in exchange for the million. As a result, I spent 10 years and actually built the Best Widget on the Planet, but it's not earning me millions (though it's pretty profitable and will have a healthy long lifecycle). Next time I'll make sure that there are no conjunctions and conflicts in my goals.
- (h) Not sure if I got your meaning right, but I never miss a chance to brag about my achievements.
Do you have some good heuristics to add? Do you have some good ideas for how to train yourself in such heuristics?
I'm not sure if these can be called heuristics, but I do have two techniques that I found to be very successful.
The first technique is "Concentrate on high-order bits". Essentially, it's a generalized 80/20 rule.
There's always a number of activities I can do, and they all contribute towards my goal, but some of them contribute 128 points, some 64 points, and some just 1 point. I consciously try to find activities that contribute the most points, and then execute using various anti-akrasia tactics including those discussed here on LW.
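The scoring idea above can be sketched in a few lines of Python. The activity names and point values below are invented examples for illustration, not anything from the original comment:

```python
# A minimal sketch of the "high-order bits" idea: estimate each candidate
# activity's contribution in arbitrary "points", then work down the sorted
# list so the 128-point items come before the 1-point items.

def rank_activities(estimates):
    """Return (activity, points) pairs sorted by estimated impact, highest first."""
    return sorted(estimates.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical point estimates for a software project:
estimates = {
    "polish icon pixel alignment": 1,
    "fix crash affecting 30% of users": 128,
    "answer routine support email": 4,
    "write launch announcement": 64,
}

for name, points in rank_activities(estimates):
    print(f"{points:>4}  {name}")
```

The point estimates are rough by nature, but the ranking only needs to be right about which items are high-order bits, not about exact values.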
The second technique is complementary to the first: "It's macro time."
Basically, it boils down to spending a day or more doing nothing but looking at the goal and the big picture.
Starcraft players should be familiar with the meaning of 'micro' and 'macro'. Micro and macro refer to micro- and macro-management. Micro is fine-grained real-time control of combat units (concentrating fire on dangerous enemies, using special abilities etc.) and macro refers to higher-level activities like maintaining the resource flow, building the base towards the desired tech, expanding to acquire more resources etc.
Personally, I happen to be pretty good at micro, both in games and in real life (as my co-workers will angrily confirm), so my biggest problem both in Starcraft multiplayer and in business was going too micro and not having enough mind cycles for macro.
To counteract this, I'm working to form a habit of allocating dedicated 'macro days', or perhaps even 'macro weeks' during which my primary task would be figuring out which activities are high-impact and which can be put on low-priority. I recently came back from the longest vacation I had in my last 10 years of work – full 45 days! – and found it to be extremely helpful in figuring out what I should do next.
Added: Did the above strategies help me achieve anything?
I've been using them for just about a year. So far the biggest achievement is a very solid, very powerful Version 2.0 of the Widget, done in a year, with no feature creep (thanks to concentrating on 'high-order bits' and cutting less-important stuff), no burnout on my part (due to macro time and non-conflicting goals) and almost no crunch time (a month prior to the release doesn't count :)
Replies from: komponisto↑ comment by komponisto · 2010-09-08T17:20:38.124Z · LW(p) · GW(p)
The first technique is "Concentrate on high-order bits"
Cf. Umeshisms.
Replies from: Vladimir_Golovin↑ comment by Vladimir_Golovin · 2010-09-08T17:43:06.778Z · LW(p) · GW(p)
Yep, that's where I took it from, couldn't remember the source.
comment by gnovos · 2010-09-09T14:56:52.763Z · LW(p) · GW(p)
There's a reason why we don't think strategically, and it's actually a very good reason, though unfortunately it is also why we will never have an innately strategic mentality: cost. Specifically, the cost of time. It's always cheaper, in terms of time, to make a correct lucky guess on the first try than to work out a solution properly over a significant length of time.
Imagine there were such a thing as a lucky charm, and by holding it you were, say, 70% likely to get the right answer on your calculus test without even needing to completely understand the problem. In this situation, taking the calculus test would take you just a few minutes, and you'd still score well enough to pass the class. In fact, you could take the entire year's worth of tests, perhaps, in the same amount of time that it takes the rest of the students to work their way through the first one, yet still most likely pass. Your lucky charm didn't give you the best grade, but it allowed you to quickly solve all the problems you needed to solve, and now you can spend the rest of the year taking other classes.
Well, the thing is, the human mind has evolved just such a "lucky charm": our highly sophisticated pattern-matching ability. We can look at situations resembling ones we've seen in the past, and can generally make a "mostly right" choice much of the time with very little effort or thought. Those humans who had a particularly powerful pattern-matching ability were able to "coast" through even incredibly complex situations with small effort, leaving them more resources available to survive and propagate, while those who spent more time working out far more optimal solutions would find themselves sorely behind in the evolutionary race, even though they ended up with better answers.
Why is time so valuable? Think about it this way: Imagine you could mathematically work out the winning lottery numbers, but it would take you 50 years, or you could guess every week and never win the jackpot, but, on average, could make a few bucks consistently. Which approach will keep you in food and shelter until you reach the age that you can have children? The jackpot may be orders of magnitude more money, but you need the monetary resources up front in order to pay your way through survival.
Evolution doesn't seek out the optimal long-term solution, it seeks out the "just barely good enough for right now" solution. Unfortunately, as long as we continue to evolve in a world where time is as or more valuable than total available resources, we'll most likely never reach the point where "strategic thinking" is something we do by default.
Replies from: orthonormal, byrnema↑ comment by orthonormal · 2010-09-09T22:29:25.549Z · LW(p) · GW(p)
These unconscious strategies optimized or satisficed in the ancestral environment, when people weren't conscious of enough relevant factors to make long chains of reasoning (or quantitative thinking) obviously superior to their unconscious heuristics and biases.
They're clearly far from optimal (and sometimes far from satisfactory) in the modern developed world.
Some things have changed way too fast for evolution to keep up.
↑ comment by byrnema · 2010-09-09T16:06:10.347Z · LW(p) · GW(p)
I completely agree; we think with our 'gut' as much as with our 'brain'. Only, I wouldn't denigrate "pattern matching". It's much more than a lucky charm; it's a powerful and high-level component of intelligence. It's something that we haven't systematized yet, and so we don’t understand it or always trust it very well.
All my comments today will be defending human intelligence. I wonder about the motive behind this goal, since I agree people could easily be more intelligent, and that would be great. Also, in comparison to what? It's not like my saying 'humans are so intelligent they're at least 8.3!' means anything different from 'humans are so dumb they're no more than 8.6!'.
Replies from: orthonormal, None↑ comment by orthonormal · 2010-09-09T22:24:48.553Z · LW(p) · GW(p)
I think the statement "Humans act a lot stupider than they think they do" has a pretty non-arbitrary meaning.
↑ comment by [deleted] · 2010-10-24T22:14:36.177Z · LW(p) · GW(p)
I completely agree; we think with our 'gut' as much as with our 'brain'.
Stephen Colbert recommends that we think with our gut.
comment by steven0461 · 2010-09-08T21:09:47.299Z · LW(p) · GW(p)
Part of it is that achieving success through means other than doing well at the standard success-conferring things can feel like cheating, possibly for some sort of signaling reason. Part of it is that there are serious psychological and social costs not only to doing things that other people don't do, but to doing things for different kinds of reasons. Part of it is that you're suggesting the benefits of what you call being strategic are larger than they really are, by focusing on available cases where it changed someone's life and ignoring a great many forgettable and hard-to-pinpoint cases where it was just a time/energy sink, or where merely considering it was a time/energy sink, or where there was good reason to believe the relevant strategy had already been taken into account by whatever caused you to be doing the default thing, or where there seemed to be such good reason absent an appreciation of the world's madness.
comment by CronoDAS · 2010-09-08T19:07:14.533Z · LW(p) · GW(p)
Perhaps 5% of the population has enough abstract reasoning skill to verbally understand that the above heuristics would be useful once these heuristics are pointed out.
I think you're underestimating the average person.
Replies from: AnnaSalamon, cypher197↑ comment by AnnaSalamon · 2010-09-08T19:14:07.276Z · LW(p) · GW(p)
I might well be. Given the value of empiricism-type virtues, anyone want to go test it (by creating an operationalized notion of what it is to understand the heuristics, then randomly choosing several people from e.g. your local grocery store and testing it on them), and let us know the results?
Jasen Murray and Marcello and I tried this the other day concerning what portion of native English speaking American adults know what a "sphere" is ("a ball" or "orange-shaped" count; "a circle" doesn't), and found that of the five we sampled, three knew and two didn't.
Replies from: None, None, byrnema, Sniffnoy, Kaj_Sotala↑ comment by [deleted] · 2010-09-08T20:21:49.662Z · LW(p) · GW(p)
I once taught middle- and high-school teachers who wanted to get certified to teach math. I was a TA for a class in geometry (basically 8th or 9th grade Euclidean geometry.) I had an incredibly hard time explaining to them that "draw a circle with center point A" means that A goes in the middle of the circle, instead of on the boundary. As I recall, it took more than a week of daily problem sessions before they got that.
Of course, I may have been a bad teacher. But I was trying.
Replies from: Sniffnoy↑ comment by [deleted] · 2010-09-09T04:17:46.105Z · LW(p) · GW(p)
Marcello and I and (damnit, I can't remember who) tried this the other day concerning what portion of native English speaking American adults know what a "sphere" is ("a ball" or "orange-shaped" count; "a circle" doesn't), and found that of the five we sampled, three knew and two didn't.
Did you do this test by asking them to define the word "sphere" verbally? Because I can easily imagine a less-articulate person saying "circle" when they really do understand the difference between a plane figure and a sphere. It might be better to ask them to select which of a given set of objects is a sphere, or even to name something that is shaped like a sphere, although in the latter case they might use the rote knowledge that the earth is a sphere, which could create bias in the opposite direction.
↑ comment by byrnema · 2010-09-09T15:56:58.362Z · LW(p) · GW(p)
My estimate would be far on the other side: I think at least 95% of the population could understand and agree with those heuristics. I pay less attention to what people say they understand, and look at what they do, and am usually impressed by how intelligent people are -- in ways academic tests would not typically fully measure.
... I think only 5% could compose these heuristics, if asked to. And only half of 1% would know to compose them without being told to...
Regarding your study, I'm not sure what you could deduce other than that 'sphere' is not in common usage, at least not as the geometric object. (For example, any 4 year old child can distinguish a sphere from other shapes, and then 'sphere' is just a label.)
Perhaps 'sphere of influence' is heard slightly more frequently than sphere as a geometric object. I would expect that the former connotation, if superseding the geometric one, would result in a little confusion and waving of hands, since it is so abstract.
↑ comment by Sniffnoy · 2010-09-08T20:14:36.898Z · LW(p) · GW(p)
What about "a 3D circle"?
Replies from: AnnaSalamon↑ comment by AnnaSalamon · 2010-09-08T20:32:48.585Z · LW(p) · GW(p)
We counted that as correct.
Replies from: khafra↑ comment by khafra · 2010-09-08T22:31:53.822Z · LW(p) · GW(p)
Did the ones who failed to give correct answers say something like "a species of worm found in south America," or did they refrain altogether from answering--possibly from fear of a trick question, or that they might be asked to explain the Banach-Tarski theorem about sphere doubling via the axiom of choice if they worded their answer in a way vulnerable to that?
Did you hold clipboards or wear lab coats while doing the questioning?
Replies from: AnnaSalamon, Will_Newsome↑ comment by AnnaSalamon · 2010-09-09T01:43:12.356Z · LW(p) · GW(p)
We tried to be friendly and unintimidating and, if asked, we explained with a bit of embarrassment that it had to do with a bet. Many just assumed we needed to know what a "sphere" was, though. We might have said we weren't looking for a fancy answer, I'm not sure. (Ideal, if you want to repeat this experiment, would be to get a child to do the asking and to say it's for their homework or something.) I don't clearly remember what wrong answers we got; it's possible that someone said "Does it mean circle-shaped?" but couldn't give follow-up detail and someone else, who looked rather blank, said something like "Um. 'Sphere?' Do you know what that is, Frank?" and then asked the man she was with, who answered correctly.
Like SarahC, I used to tutor folks who were en route to becoming high school math teachers, and who had to pass a math exam to be allowed to teach. Many of them genuinely didn't know what a sphere was, in the sense that often their eyes would light up if I told them that "sphere" meant "ball-shaped" (and, if I didn't, they would memorize the formula for the volume of a sphere but would often not know they could apply it to estimate the volume of a ball). This was one of those pieces that I initially didn't realize I needed to teach. Other such pieces included e.g. the fact that a "square centimeter" is a 1-cm by 1-cm square, that "area" is about how many such squares it takes to cover a given shape, that one can find the area of a compound shape by adding or subtracting the area of the components, and that there is a difference in meaning between "If A, then B" and "If B, then A".
↑ comment by Will_Newsome · 2010-09-09T01:09:41.977Z · LW(p) · GW(p)
It is important to note that real Bayesians wear robes, not lab coats. And they carry with them archival quality notebooks and archival quality pens. Lab coats are just silly.
Replies from: khafra↑ comment by khafra · 2010-09-09T02:13:04.918Z · LW(p) · GW(p)
...in the weeks and months that followed, San Franciscans became accustomed to being accosted and asked a brief series of questions by a friendly young person carrying an archival quality notebook and wearing a clown suit.
Replies from: Nisan↑ comment by Kaj_Sotala · 2010-09-08T21:15:49.984Z · LW(p) · GW(p)
(damnit, I can't remember who)
My memory suggests either Jasen or Louie.
Replies from: AnnaSalamon↑ comment by AnnaSalamon · 2010-09-08T21:54:26.744Z · LW(p) · GW(p)
Thanks, Kaj.
comment by Spurlock · 2010-09-08T14:56:53.508Z · LW(p) · GW(p)
Is it really fair to say there has been "no strong evolutionary or cultural force sufficient to focus us on the very narrow behavior patterns that would actually be effective"?
Clearly we've evolved the ability (trainable hardware) to do the kind of planning, abstract reasoning, and analysis that would help us find these optimal courses of action. Furthermore, we've evolved the tendency to do a fair amount of this (compared to other life forms) automatically.
This isn't just a hardcoded ability to execute plans that bring food, shelter, and sex. If you decide you want a new pair of shoes, it's trivial for you to mentally construct and carry out the relatively complex (again, comparing to other species) plan required for you to get them. You'll even carry out some optimizations without too much effort ("wait, there's a closer shoe store east of here").
While it's trivially true that we haven't evolved to automatically seek the optimum path in all things (which there might be a good reason for, e.g. time-constraints on assessing and choosing paths), I think it's fair to say evolution has given us a running start.
And the selective pressures are pretty clear: something like the Machiavellian Intelligence hypothesis is almost certainly at work, selecting those genes that are best at carrying out plans, since in the ancestral environment most plans were directly concerned with survival and breeding. Granted, no shortage of optimization was hacked on towards attaining those goals specifically (which might explain why it's so hard to focus on your bigger goals when you're hungry), but the fact is humans didn't evolve to mindlessly seek out food and mates. We're capable of pursuing other interests.
So I'd say there have been strong selective forces to help us choose effective courses of action. Not absolutely optimal courses of action, but when you consider the massive size of action-space (as you pointed out, rocks fail calculus tests by default), it's apparent that evolution didn't totally leave us out in the cold.
Replies from: patrissimo↑ comment by patrissimo · 2010-09-09T18:28:23.973Z · LW(p) · GW(p)
This is true and a valuable correction; however, I would argue that our planning ability evolved for very different goals in a very different environment, and while it works pretty well at "figuring out if your friend is backstabbing you" or "figuring out how to get calories", when it comes to long-term goals in the modern environment ("how do I manipulate this laptop so as to make me millions of dollars over the next 5 years?") it performs miserably, and all Anna's points then apply.
Paul Graham recently made a related point: the world is getting more and more addictive, and in order to be productive we must develop more effective screening and anti-time-suck methods: http://www.paulgraham.com/addiction.html
On the plus side, there are a few great people working on this problem - Merlin Mann comes to mind.
comment by Liron · 2010-09-08T12:38:26.326Z · LW(p) · GW(p)
Here's a strategic thing I figured out:
When I wake up really early, I get a lot more work done because the morning hours have no distractions and I feel like I'm ahead of the day, like I'm using 100% of the possible day.
Therefore I wake up really early now - 3-5am.
Replies from: gwern, patrissimo, CronoDAS↑ comment by gwern · 2014-07-26T21:21:15.525Z · LW(p) · GW(p)
I wonder how much this differs from person to person. I tried correlating 2.5 years of data (when I got up from bed with my self-ratings of productivity for that day), and looking at the LOESS & cubic fits, it seems merely like getting up a bit after 8AM correlates with productivity but later is worse and earlier is much worse (albeit with limited sampling):
And it's not hard to tell a non-causal or reverse-causation story: I can't be very eager to wake up and get started on work if I'm willing to sleep in to 10AM, now can I...? So I dunno. Maybe it's literally more time from simple sleep deprivation.
That said, I'll have to remember to recheck this later; I'm trying out caffeine pills for causing earlier rising, so if earlier rising itself causes more productivity, there should be an attenuated effect from the caffeine.
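The kind of check gwern describes (fitting a smooth curve to wake-time/productivity pairs and looking for the peak) can be sketched with a plain cubic fit in NumPy. The data below is synthetic, generated under the made-up assumption that productivity peaks around 8 AM; it is not gwern's data, and he used LOESS alongside the cubic:

```python
# Fit a cubic to (wake-up hour, productivity rating) pairs and find
# where the fitted curve peaks within the observed range.
import numpy as np

rng = np.random.default_rng(0)
wake_hour = rng.uniform(5, 12, size=300)
# Synthetic ratings: assumed quadratic peak at 8 AM plus noise.
productivity = -0.3 * (wake_hour - 8) ** 2 + 3 + rng.normal(0, 0.5, 300)

coeffs = np.polyfit(wake_hour, productivity, deg=3)
fit = np.poly1d(coeffs)

grid = np.linspace(5, 12, 200)
best_hour = grid[np.argmax(fit(grid))]
print(f"fitted productivity peak near hour {best_hour:.1f}")
```

With real diary data the same two lines of fitting apply; the hard part, as gwern notes, is ruling out reverse causation, which no curve fit can do on its own.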
Replies from: Liron↑ comment by patrissimo · 2010-09-09T18:44:12.004Z · LW(p) · GW(p)
In general, getting an isolated environment is really important for certain types of work, and early or late are the simplest methods of isolation given how social humans and our environments are.
↑ comment by CronoDAS · 2010-09-08T19:08:44.198Z · LW(p) · GW(p)
Depending on your environment, the late night hours could also serve the same purpose.
Replies from: jimrandomh, Liron, Jonathan_Graehl↑ comment by jimrandomh · 2010-09-08T19:23:16.264Z · LW(p) · GW(p)
The deciding factor there is likely to be biochemistry, not environment. Many people simply can't be very productive late at night. They run into issues like caffeine crashes, as well as other biochemical fatigue causes that're harder to identify.
Replies from: orthonormal, CronoDAS↑ comment by orthonormal · 2010-09-08T22:59:29.447Z · LW(p) · GW(p)
Yup. I'm one of four new hires; two of us keep a relatively normal workday, one wakes up at 5 and does all his work in the morning, and one stays up and does all his work between 10 PM and 4 AM. (Thank goodness for academia.)
↑ comment by Jonathan_Graehl · 2010-09-08T22:50:19.105Z · LW(p) · GW(p)
I find it easy to keep working when it's late. Eventually I realize that I've become slow and tired, and that I would have been better off going to sleep hours earlier and resuming work after the rest. I realize that by "late night hours" you didn't necessarily mean staying awake when tired.
I also think the immediate post-waking hour is potentially valuable, in that I feel different during that time, so I might work differently (in a good way? I don't know). Maybe I just feel different because of what I'm typically doing then, and if I sat down and worked, my state would quickly normalize.
comment by JohnDavidBustard · 2010-09-08T21:17:59.615Z · LW(p) · GW(p)
I've wrestled with this disparity myself, the distance between my goals and my actions. I'm quite emotional, and when my goals and my emotions are aligned I'm capable of rapid and tireless productivity. At the same time, my passions are fickle and frequently fail to match what I might reason out. Over the years I've tried to exert my will over them, developing emotionally powerful personal stories and habits to try to control them. But every time I have done so, it tends to cause more problems than it fixes. I experience a lot of stress fighting with myself in this way and quickly lose the ability to maintain perspective or, more importantly, to prioritise. My reason becomes a tunnel-visioned rationalisation and, rather than being a tool for appropriate action, becomes a tool to reinforce an unwise initial judgement of my priorities.
More recently, I've come to accept that my conscious reasoning self is, to an extent, a passenger in an emotional mind. What's more, that that emotional mind often has a much more sophisticated understanding of what will lead to a satisfying future than my own reasoning can provide. If I have the patience to listen (and occasionally offer it suggestions) I seem to get much closer to solving creative and technical problems, and more importantly, much closer to contentment, than if I try to force myself to follow an existing plan.
I think there is a real risk of having one's culture and community define goals for us that are not actually what we want, causing us to feel a sense of duty towards values that, deep down, we don't share. Is our reasoning flawed, or do we just not understand our utility function?
Replies from: snarles↑ comment by snarles · 2010-09-08T23:30:11.602Z · LW(p) · GW(p)
I've had the same experiences re: passion and productivity. On your last comment:
"I think there is a real risk of having ones culture and community define goals for ourselves that are not actually what we want."
It's not clear to me what your concern is. You draw a distinction between cultural goals and values, and personal goals and values, but how would you be able to draw the line between the two? (What does it mean to feel something "deep down"?) And even if you could draw that distinction, why is it automatically bad to acquire cultural goals? What would be the consequences of pursuing these "incorrect" goals or values?
The most eye-opening article I've read recently, of possible relation to the subject, is a series on hunter-gatherer tribes by Peter Gray (see http://www.overcomingbias.com/2010/08/school-isnt-about-learning.html). While I'm skeptical of Gray's seemingly oversimplified depiction of hunter-gatherer tribes, the salient point of his argument is that there is a strong anti-authority norm in typical hunter-gatherer tribes. This leads me to think that the "natural" human psyche is resistant to authority, and conformity has to be "beaten in." Some of my own emotional conflicts have been due to conflictedness about obeying authority; it seems to me that the "emotional mind" is more in line with these primal psychologies, which are exhibited more strongly in hunter-gatherer tribes than in modern society.
Certainly I would argue that following the emotional mind is not something everyone should do; it seems like there are a few niches in our society for the totally "free", who have the luxury of being able to make a living while largely ignoring the demand for individuals to find and conform to a specific externally-rewarded role in society. The positive and negative feedback individuals receive for following or ignoring their emotional minds, I would hypothesize, plays a large part in determining how much they ultimately listen to their emotional minds.
Replies from: JohnDavidBustard↑ comment by JohnDavidBustard · 2010-09-09T13:37:47.246Z · LW(p) · GW(p)
Thanks for the link.
You make a good point about the lack of a clear distinction, and at a fundamental level I believe that our genes and external environment determine our behaviour (I am a determinist, i.e. I don't believe in free will). However, I think that it is also possible to be highly motivated about different things which can cause a lot of mental stress and conflict. I think this occurs because we have a number of distinct evolved motivations which can drive us in opposing ways (e.g. the desire to eat, the status desire of being thin, the moral desire to eat healthily etc.). What I mean by "deep down" is the result of balancing these motivations to provide a satisfying compromise. The reason I emphasise culture is because I feel that society has developed powerful means of manipulating our motivations. This is good to the extent that it can make our sense of motivation (and enjoyment) more intense but can also lead to these strong internal conflicts, which, at least for myself, are not enjoyable.
I am fascinated by how these manipulations of our motivation occur and like yourself experience a strong resistance towards authority. I think the strength of these feelings is a reflection of my personality. On a Myers Briggs assessment I am an ENTP and descriptions of this type indicate a common resistance to authority. In part I suspect this is because I don't find arguments not based on reason to be that legitimate. I'm not sure whether this personality is 'more natural' or is merely one form of survival strategy reflected by the interaction of my genes with the environment.
I do feel a strong disparity between the world as it is and how I think it could (should?) be. In particular, I think there is a great difference between people's internal stories of why they act as they do and the true dynamics of how they have been influenced. For example, I find the ideas of Adam Curtis, John Taylor Gatto and Alain de Botton very interesting. I recognise that the society we currently have may well require the kind of values and conditioning described by these authors, but I think it would be preferable to have a society with less of it, or at least have it performed much more openly and explicitly. I also feel that a stable society is possible with a much greater degree of emotional 'freedom' than we currently experience, particularly through the use of technology – for example, by providing comfortable technology-based self-sufficiency so that a competitive externally rewarded role is viewed as a lifestyle option rather than a necessity.
comment by patrissimo · 2010-09-09T17:51:29.411Z · LW(p) · GW(p)
I agree with all of this.
Have some of you trained yourselves to be substantially more strategic, or goal-achieving, than you started out?
At my organization, the leaders regularly (every 3-12 months) get together and ask: "What have we been doing? Is it the most useful thing? If not (as has always been the case when we've done this), why not? How can we do better?" We always find ourselves having made substantial errors, and over our 2+ years have found that our activities are slowly getting more focused on what matters – although still much less than we'd like.
Personally, the standard goal-setting / time-management techniques don't work great for me, but they are better than nothing. At least yearly, I explicitly review my life goals and annual sub-goals, which has some effectiveness. I keep them printed out on my laptop, which has had no effect. I have been experimenting lately with tracking time spent on each project (the Pomodoro Technique), which has been going quite well - it is harder to deny that you aren't working on the right thing when the timer is staring you in the face saying "I am off because you are not working on one of your projects, you must work on one of your projects to turn me on". I am training myself to intuit that if the timer isn't on (like now), I'm not really working.
I am starting to feel the potential for good side-effects, like if the timer is not on, why do low-value work-ish type work rather than relax & have fun & re-generate energy to do high-value work?
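The tracking patrissimo describes can be reduced to a very small tool. This is a sketch of the general Pomodoro-style bookkeeping idea, not his actual setup; the class and project names are invented:

```python
# Minimal Pomodoro-style time tracking: log fixed-length work sessions
# against named projects and total them, so "what did I actually work
# on?" has a concrete answer at review time.
from collections import defaultdict

POMODORO_MINUTES = 25  # the classic interval; adjust to taste

class PomodoroLog:
    def __init__(self):
        self.minutes = defaultdict(int)

    def record(self, project, sessions=1):
        """Log one or more completed sessions against a project."""
        self.minutes[project] += sessions * POMODORO_MINUTES

    def summary(self):
        """Return (project, total minutes) pairs, most-worked first."""
        return sorted(self.minutes.items(), key=lambda kv: -kv[1])

log = PomodoroLog()
log.record("book draft", sessions=3)
log.record("email triage")
for project, mins in log.summary():
    print(f"{mins:>4} min  {project}")
```

The value is less in the arithmetic than in the forcing function: a project has to be named before the timer starts, which is exactly the "you must work on one of your projects" pressure described above.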
Anyway, your question basically embraces all of time-management, GTD, and motivation, so it's a huge topic, with many techniques out there and many books written on it. Hard to answer briefly. But I agree it is a crucial skill set for rationalists (how to identify and work effectively towards your goals), and well worth putting a lot of time and study into. I would love to participate in such a training group.
Some book recommendations: "Eat That Frog!", "Getting Things Done" (flawed in many ways, but with enough useful info to be worth reading), "The Four-Hour Workweek" (large parts are irrelevant, yet a few, like those on work efficiency, are outstanding).
Replies from: Vladimir_Golovin, arundelo↑ comment by Vladimir_Golovin · 2010-09-11T08:56:55.011Z · LW(p) · GW(p)
Thanks for mentioning "Eat That Frog". I'm skimming through a PDF version and so far it seems to be an excellent book. I'm ordering a paperback from Amazon.
Replies from: FiftyTwo↑ comment by arundelo · 2010-09-09T20:24:33.289Z · LW(p) · GW(p)
I was going to ask what your biggest complaints with Getting Things Done were, but then I saw that you have a "gtd" tag on your blog.
Replies from: patrissimo↑ comment by patrissimo · 2010-09-09T20:44:37.010Z · LW(p) · GW(p)
It has little to contribute about what to work on when, and how to make that happen. I'm somewhat ADHD, so my problems are filtering my mass of ideas and focusing on the ones that are most important, not most shiny. Tracking all my to-dos just results in my having lots of long lists of things I will never do. GTD has a teeny bit of this with its 50,000-foot through 10,000-foot reviews, but it mostly ignores the question of "how do I decide what to do, what to defer, and what to dump?", and to me that's the crux.
Contrast with something like "Eat That Frog!" which is about repeating again and again the simple message that if you focus your time working on the most useful task for your most important project, you will be much more productive. (Plus various heuristics for identifying such projects, such tasks, and building up the habit). It's a very simple message, yet following it, for me, yields much greater productivity returns than GTD.
comment by [deleted] · 2010-09-08T16:13:03.258Z · LW(p) · GW(p)
Thanks for the list, and to you and Lionhearted for the posts. I haven't yet figured it all out. But I'm trying to get started on this approach:
Time spent "working toward your goals" as you usually do is habitual. There's no harm in writing out a calendar for your pre-existing habits, and it's probably very useful for most people to do so in order to form new habits. My system mostly revolves around calendars.
In my calendar, the habit I've written in is a bit of planning or "meta" time. Twice a week, I plan out a full week. By re-evaluating the course of action halfway through, I'm hoping it will be easier to catch where I go off-track.
Once a month, this planning time must include meta-planning. During this time, the idea is to review whether my planning method is the most effective one. This is the time for reviewing the past month's calendar, and also for reading any books on planning.
As for evaluating sub-goals, I've decided that the best step after some initial self-reflection is consultation. Therapy/coaching can be valuable for anyone working to solve an internal problem that defends itself, and it seems prudent to gain what I can from professional guidance. My stated goal for that time is to make evaluating my volition, and turning it into action, natural and habitual.
Finally, my current method (not perfectly successful, but a start) has been to keep a list of projects in a spreadsheet, each assigned a number for personal importance and for urgency, and each with a column for a link to an outline. Long-term projects are on a separate sheet, and short-term projects either belong to a long-term project or do not. On my calendar, I have a block of time simply called "Project Time" on Saturday, and open space on weekday evenings and Sunday. The idea is that during Planning Time, I take projects from the Projects List and allot them Project Time.
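The spreadsheet scheme described here could be mocked up in a few lines. All names and the scoring rule below are hypothetical, purely for illustration, not the actual system: each project carries an importance and an urgency number, and Planning Time pulls the highest-priority items into Project Time.

```python
# Hypothetical sketch of a projects list with importance/urgency scores.
projects = [
    {"name": "write talk", "importance": 8, "urgency": 3},
    {"name": "fix sink",   "importance": 4, "urgency": 9},
    {"name": "learn math", "importance": 9, "urgency": 2},
]

def priority(project):
    # One arbitrary combination rule: weight importance twice as heavily,
    # to counter the pull toward shiny-but-urgent tasks.
    return 2 * project["importance"] + project["urgency"]

# During Planning Time, schedule the highest-priority projects first.
schedule = sorted(projects, key=priority, reverse=True)
print([p["name"] for p in schedule])  # → ['learn math', 'write talk', 'fix sink']
```

Any scoring rule would do; the point is only that writing one down forces the importance-versus-urgency trade-off to be explicit.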
There is one more step I've figured out in this plan: posting it here and mentioning it to others. Now I know people will ask me occasionally how it's going, which will provide a bit of motivation to get started on using the system.
Has anyone made good use of a planning structure similar to this, with scheduled planning and meta-planning?
Replies from: patrissimo, LukeStebbing↑ comment by patrissimo · 2010-09-09T18:43:18.792Z · LW(p) · GW(p)
Having regular time which is explicitly for planning, not working, is vital. Daily, weekly, monthly, and yearly seems to work pretty well. Daily - what are my most important tasks? Weekly - how did the last week go? What are my critical projects/tasks for next week? And so forth. That's one of the simple-but-massively-effective insights of things like GTD, even though I disagree with their tactics - regularly spend time explicitly planning rather than working.
↑ comment by Luke Stebbing (LukeStebbing) · 2010-09-08T20:40:14.226Z · LW(p) · GW(p)
Yes, my approach is similar.
I schedule planning time where the level of abstraction is proportional to the logarithm of the recurrence period, and it seems effective at pruning cached goals and sanity-checking my meta-goals. (However, it's difficult to test because of the time scales involved and the fact that I can't fork myself.)
Recently, I noticed that my general skills aren't improving as fast as I'd like, so I decided to take advantage of compound interest[1] and created a parallel structure for working, learning, and meta-learning.
- Richard Hamming, "You and Your Research"
EDIT: Fixed link misparse.
Replies from: Jonathan_Graehl↑ comment by Jonathan_Graehl · 2010-09-08T22:43:32.635Z · LW(p) · GW(p)
the level of abstraction is proportional to the logarithm of the recurrence period
This brings to my mind the idea of a complete n-ary tree (with n being the base of your logarithm), with the highest abstraction level at the root - if you spend equal time on each node, then you'll apportion time across levels as you described.
I found this amusing - I'm not sure I know of any generally meaningful meta-thinking levels beyond say, 2.
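The tree picture can be checked with a quick sketch (the branching factor and depth below are hypothetical, chosen just to make the arithmetic concrete): at level k of a complete n-ary tree there are n^k nodes, and a review level with recurrence period n^(d-k) days fires n^k times over a full cycle of n^d days. So "equal time per node" comes out the same as "equal time per review session", with abstraction level linear in the log of the recurrence period.

```python
# Complete n-ary tree of review levels: root = most abstract, leaves = daily.
# Check that nodes-per-level equals review-sessions-per-level over one cycle.
n = 7                # branching factor (base of the logarithm)
depth = 3            # leaf level; levels run 0 (root) .. depth (daily)
cycle = n ** depth   # length of one full planning cycle, in days

for level in range(depth + 1):
    nodes = n ** level                 # nodes at this level of the tree
    period = n ** (depth - level)      # days between reviews at this level
    sessions = cycle // period         # sessions at this level per cycle
    assert nodes == sessions           # equal time per node == per session
    print(level, nodes, period, sessions)
```

With n = 7 and depth = 3, the root is reviewed once per 343-day cycle while each of the 343 leaves gets a daily session, so the raw time actually concentrates at the concrete levels; the per-node (or per-session) time is what stays equal.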
comment by danlucraft · 2012-04-15T09:24:23.322Z · LW(p) · GW(p)
Perhaps the only way to train yourself to achieve long-term goals is to use short-term motivation to improve your automatic behaviours, instead of trying to build a motivational system that works on long-term multi-step plans.
What if we broke down the action steps of your algorithm into:
- ask yourself what kind of person achieves goals like this by habit
- ask yourself how you could change yourself into that kind of person, perhaps by establishing new habits
- evaluate whether your new habits are effectively causing you to do things that work towards your goal.
So, forget about long-term plans. Instead select and implement short-term plans that:
- incrementally improve your position, so that more opportunities you can act on arise.
- change bad habits into more goal-directed habits
- put you into situations where you are likely to take actions that further your goals, automatically
- increase your intrinsic enjoyment of things that are directed towards your goals
So, for example, starting a startup is less Step 1 of a Grand Plan to become a millionaire, and more a way to put yourself in a situation where you will have to do things that you think help towards becoming a millionaire, and will change you into more the kind of person who does things that make you a millionaire.
Of course, this whole thing is just one big long-term plan after all :) But it's a more specific one.
comment by mattnewport · 2010-09-08T18:33:59.964Z · LW(p) · GW(p)
I think you rather overstate your case here. When you say:
But there are clearly also heuristics that would be useful to goal-achievement (or that would be part of what it means to “have goals” at all) that we do not automatically carry out. We do not automatically:
I'm not sure who you are referring to by 'we'. Most of these tactics are fairly commonly advised by everything from management and business books to self help and sports training. Some of them are things that come naturally to me and seem to come naturally to quite a few other people I know (though certainly not everybody).
(a) Ask ourselves what we’re trying to achieve;
This comes naturally to me, but I've noticed it doesn't seem to come naturally to everybody. It is something I've seen others do (and talk about doing) fairly often, however.
(b) Ask ourselves how we could tell if we achieved it (“what does it look like to be a good comedian?”) and how we can track progress;
Very common advice in business/management and in programming. It does seem to require a bit of practice for most people to acquire this habit and it is one of the things I notice separating more experienced programmers from less experienced. It needs pointing out however that this is often very difficult and/or time consuming in practice for many real world goals and is easy to get wrong.
(c) Find ourselves strongly, intrinsically curious about information that would help us achieve our goal;
Comes naturally to me and seems to be reasonably common in others but I'd agree that there are many people for whom it doesn't seem to come naturally.
(d) Gather that information (e.g., by asking how folks commonly achieve our goal, or similar goals, or by tallying which strategies have and haven't worked for us in the past);
Seems obvious and natural to me and pretty common in others. In fact I think this is how most people approach most of their goals. Many people fall down by being too undiscriminating in who they ask for advice and what evidence they require from others that said advice is effective however. Again it should be pointed out that this is much more difficult in practice for many real world problems than is implied here. There are many goals for which there is no straightforward and uncontroversial answer to how best to achieve them.
(e) Systematically test many different conjectures for how to achieve the goals, including methods that aren’t habitual for us, while tracking which ones do and don’t work;
This is one where I could stand to improve. I think it's a common failing and few people do this as much as they should. It is another case of something that is quite difficult in practice however - tracking can be time consuming and difficult for many goals and it can be difficult to gather 'clean' data on what really works best.
(f) Focus most of the energy that isn’t going into systematic exploration, on the methods that work best;
Seems fairly obvious and I think is reasonably common but people are easily distracted from their goals. Sometimes distraction can be a signal that goals need re-evaluating however.
(g) Make sure that our "goal" is really our goal, that we coherently want it and are not constrained by fears or by uncertainty as to whether it is worth the effort, and that we have thought through any questions and decisions in advance so they won't continually sap our energies;
Seems fairly obvious but is something it is useful to get into a mental habit of reminding oneself of periodically. Another one that can be incredibly difficult in practice however. I'd say that figuring out what our 'real' goals are and how to achieve them is the central problem of most people's lives. I know I consciously think about this a lot, I think to a greater extent than is typical, and have yet to reach any entirely satisfactory conclusions.
(h) Use environmental cues and social contexts to bolster our motivation, so we can keep working effectively in the face of intermittent frustrations, or temptations based in hyperbolic discounting;
This strikes me as pretty common advice but it is useful advice and bears repeating.
Overall I don't think you are saying anything here that isn't already fairly widely known and talked about in many contexts. Some of these things come naturally, others require conscious effort to develop as habits. There is clear variation in the population when it comes to which of these come naturally however and certainly there are many people who do few of these things as a matter of course. The real trick is in the execution however - many of these things are difficult to do and failure to do them is just as often a result of this inherent difficulty as it is of a lack of awareness of these heuristics.
Replies from: AnnaSalamon, Jonathan_Graehl↑ comment by AnnaSalamon · 2010-09-08T18:50:57.415Z · LW(p) · GW(p)
I agree that many of these heuristics are discussed in the business and self-help literatures reasonably often. My point was simply that we for the most part do not automatically implement them -- humans seem not to come with goal-achievement software in that sense -- and so it should not be surprising that most human "goal-achievement" efforts are tremendously inefficient. These heuristics are relatively obvious to our verbal/analytic reasoning faculties when we bother to think about them, but, absent training, are mostly not part of our automatic reward-gradients and motives.
If you find that e.g. (a) and (c) come fairly naturally to you, ask yourself why, and see if you can spell out the mechanics in ways that may work for more of us. The question here isn't "are (a)-(h) novel ideas that demonstrate amazing original insight?" but rather: "how can we get our brains to automatically, habitually, reliably, carry out heuristics such as (a)-(h), which seem to offer straightforward gains in goal-achievement but seem not to be what we automatically find ourselves doing?"
Replies from: mattnewport↑ comment by mattnewport · 2010-09-08T20:37:19.117Z · LW(p) · GW(p)
These heuristics are relatively obvious to our verbal/analytic reasoning faculties when we bother to think about them, but, absent training, are mostly not part of our automatic reward-gradients and motives.
I think d) for example (gather information) is pretty 'automatic' for many (if not most) people. It is the natural first step for many people. It is often difficult to find accurate information and detect and ignore misinformation so simply taking this step is not sufficient on its own however and I think it is in the execution that most people fail.
If you find that e.g. (a) and (c) come fairly naturally to you, ask yourself why, and see if you can spell out the mechanics in ways that may work for more of us.
Both a) and c) have come naturally to me for as long as I can remember. I'm afraid I can't offer any more detail through introspection. It still strikes me as odd when people don't do these automatically even though I've learned over time that many people do not.
For some of the other heuristics, e) for example, I've had to consciously work to develop them as habits of thought (still imperfectly in this case). My general approach has been to consciously think through what other heuristics I could apply periodically (usually prompted by getting stuck / not making progress on some goal) and then apply any heuristics that I realize I have neglected. Over time some things can move from this 'meta' level of analysis to become more automatic habits.
Replies from: bigjeff5↑ comment by bigjeff5 · 2011-01-27T05:36:19.063Z · LW(p) · GW(p)
I think d) for example (gather information) is pretty 'automatic' for many (if not most) people. It is the natural first step for many people. It is often difficult to find accurate information and detect and ignore misinformation so simply taking this step is not sufficient on its own however and I think it is in the execution that most people fail.
I disagree for everything about which people have enough information to have formed a prior opinion. Gathering information is predicated on the idea that you do not have enough information. Most people believe they already know what they need to know, and all that is left are the details.
The perfect example is the one in the article: I want to become a comedian, so I will watch Garfield. Where is the intermediate step of finding out whether or not watching a funny show is a good way to learn how to be funny? You need more information to even begin to answer that, yet he skips this step. Why? It is almost certainly because he has already decided that the way to learn to be funny is to study funny things, and he thinks Garfield is funny, so he is going to study.
Now, it is entirely possible he could learn to be funny just by watching Garfield and asking the right questions, but given his track record I seriously doubt it. It's also re-inventing the wheel, because other people have figured out the secret of funny before him (else there would be no one funny to study) and the information is available for those who seek it.
If a person is aware he lacks information, then yes I would agree that gathering information is automatic. However, most people in most situations where this comes up are not aware that they lack information. They believe they know exactly how to do what it is they want to do, even though they are almost certainly wrong, and even though they are wrong on these matters all the time (the many failures to achieve their goals). Therefore, there is no need to seek new information, so seeking information is not automatic.
Another way of putting it is that you can't seek the right information if you aren't looking for it.
I would agree that, when people are aware that they lack information, they generally try to inform themselves.
Replies from: Unnamed, bigjeff5↑ comment by Unnamed · 2011-01-27T19:04:26.969Z · LW(p) · GW(p)
You can edit your comment to fix the quote formatting. We use Reddit Markdown syntax - you can see the most-used options by clicking "Help" below the comment box while you are writing/editing a comment (to the right of the "comment" and "cancel" buttons). To quote something, just start the paragraph with > .
Replies from: bigjeff5↑ comment by Jonathan_Graehl · 2010-09-08T22:00:03.049Z · LW(p) · GW(p)
I'd say that figuring out what our 'real' goals are and how to achieve them is the central problem of most people's lives. I know I consciously think about this a lot, I think to a greater extent than is typical, and have yet to reach any entirely satisfactory conclusions.
Likewise. I somewhat envy those who can form or decide "doing (or achieving) X will make me happy", and it really turns out to be true (whether it's an accurate or merely self-fulfilling prophecy doesn't matter too much).
I've considered whether this sort of confusion (about what goals will give lasting happiness in their pursuit or accomplishment) might have a solution in caring less about some things (to lessen constraints until there's a reachable solution).
For example, I like to do things that give me evidence that I'm unusually talented. Perhaps if I gave up that reward, I would find myself doing things that are more pleasurable or valuable.
I definitely don't think scorched earth Buddhist "don't care about anything" is a good move for me. I'm trying to give up just what seems optional and harmful (while expecting sometimes to find that I can't and so shouldn't try to, even though a hyper-rational person would be able to).
Replies from: pjeby↑ comment by pjeby · 2010-09-09T01:32:38.637Z · LW(p) · GW(p)
I somewhat envy those who can form or decide "doing (or achieving) X will make me happy", and it really turns out to be true (whether it's an accurate or merely self-fulfilling prophecy doesn't matter too much).
Don't ask what will make you happy, ask what future conditions you would prefer to experience, and what self-descriptions you would prefer to judge yourself as having.
Why? Because our brains aren't evolved to optimize happiness, they're evolved to steer the world to more-preferred states, and to optimize our expectations of others' perception of us. So if you start from those points, your inquiry (and subsequent optimizations) will benefit from hardware assistance.
(Whereas, if you try to optimize "what will make me happy", your brain will get confused, and/or try to optimize what things, socially speaking, are "supposed to" make you happy, i.e. what your brain expects would cause your peers/tribe members to judge you as being happy.)
Replies from: None, Jonathan_Graehl↑ comment by [deleted] · 2010-09-09T16:48:49.401Z · LW(p) · GW(p)
Why? Because our brains aren't evolved to optimize happiness, they're evolved to steer the world to more-preferred states, and to optimize our expectations of others' perception of us. So if you start from those points, your inquiry (and subsequent optimizations) will benefit from hardware assistance.
Have you written elsewhere in more detail about this? I'm particularly interested in any tips you have on using our social expectation machinery successfully.
Replies from: pjeby↑ comment by pjeby · 2010-09-09T21:20:21.087Z · LW(p) · GW(p)
Have you written elsewhere in more detail about this?
Well, I did a multi-part video series/audio CD on this topic a couple months ago (called, "The Secrets of 'Meaning' and 'Purpose'"); my comment above was more or less an attempt to summarize one of its key ideas in a couple of sentences. I've also written about it in my newsletter before, but none of these materials are publicly available at the moment, even for sale.
(I keep meaning to put them up for sale but I'm usually too busy getting my current month CD, newsletter, and workshop put together to spend much time on trying to get more business. Probably I should think more strategically and move "posting on LW" a bit lower on my priorities... ;-) )
I'm particularly interested in any tips you have on using our social expectation machinery successfully.
Think character/identity-priming. What "kind of person" do you want to be, in the sense of "the kind of person who would X"... where X is whatever you would like to motivate yourself to be/do. What kind of person do you want to see yourself as? Be sure to see it from the outside, as if it were someone else.
Experiments show that "kind-of-personness" priming has a big effect on people's decisions; when our identity is primed as belonging to a particular group, we automatically behave more like a stereotype of that group. So, pick what group(s) you want to prime yourself as a member of, and go for it. ;-)
↑ comment by Jonathan_Graehl · 2010-09-09T03:25:27.685Z · LW(p) · GW(p)
This seems right. The things people have described to me as being goals they have reached that, as they predicted, made them happy, were definitely of the two broad types you described.
If you construe hedonic experiences as falling under "future conditions you would prefer", then perhaps your dichotomy is exhaustive.
For sure nobody needs to be told to do what feels best locally - and most of us have reached a limit in that respect (there are only so many cheesecakes you can benefit from).
Some complaints, however:
(A) what future conditions you would prefer to experience
seems just as hard as predicting what I can accomplish that will make me happy
also,
(B) what self-descriptions you would prefer to judge yourself as having
I have been hesitant to indulge in such satisfactions, because it seems to me that they're most often achieved by or result in hypocrisy. However, I should probably just do it if it feels good.
Because our brains aren't evolved to optimize happiness, they're evolved to steer the world to more-preferred states, and to optimize our expectations of others' perception of us. So if you start from those points, your inquiry (and subsequent optimizations) will benefit from hardware assistance.
You seem to contradict yourself. Other than (A) and (B), are there any other things that can make me happy? If not, then you seem to be arguing that evolved human brain-nature does in fact help me become happy. Also, why do you argue only from evopsych/biology? I'm mostly limited by the options permitted by the society I live in, and may still be crippled by some religious upbringing or other social programming that lacks force of law or threatened violence.
evolved to steer the world to more-preferred states, and to optimize our expectations of others' perception of us
The second is a subcategory of the first. I assume you mean preferred for various genes' survival. I think there is a lot about us that is accidental and serving no particular gene (it's just some artifact of the reachable or actually reached evolutionary "design").
I do think it's fine to ask of my present state "am I happy (in other words, how do I feel)?", and to wonder "what will make me happier if I get it?" For the latter, I do like your two suggested (vague) subgoals. I think the former is still essential, although I suppose you could ask how you feel in relation to your two general happiness subgoals.
Replies from: pjeby↑ comment by pjeby · 2010-09-09T21:42:25.968Z · LW(p) · GW(p)
You seem to contradict yourself. Other than (A) and (B), are there any other things that can make me happy? If not, then you seem to be arguing that evolved human brain-nature does in fact help me become happy.
What I'm saying is that the machinery is better at answering concrete questions relating to these matters, than abstract ones. To our abstract thinking machinery, it seems like there should be no logical difference between "what will make me happy?" and A) "what kind of world do I want to live in?" or B) "what kind of person do I want to be?"
However, as the saying goes, the difference between theory and practice is that in theory, there's no difference, but in practice, there is. ;-)
I assume you mean preferred for various genes' survival.
No, I meant, "preferred", as in "what would you prefer?" Not your genes. (Your genes already have another level of control over what sort of preferences you're able to learn, but that's not relevant to the issue at hand.)
I do think it's fine to ask of my present state "am I happy (in other words, how do I feel)?", and to wonder "what will make me happier if I get it?"
This is another one of those seemingly nitpicky things that actually makes a difference: try asking what you want, not what will make you happier. (Also, what you feel, not whether you're happy.)
The problem with asking "am I happy" is that it discards information that would be useful to you about what you do feel, in favor of a one-bit, yes-or-no answer. (At minimum, knowing the difference between the broad non-happy categories of sad, afraid, and mad would be good!)
Next, the problem with "what will make me happier" is that it presupposes ("have you stopped beating your wife?"-style) that there is something that will "make" you happy, as though it's something you don't have any control over. Essentially, the question itself is continually re-priming the idea that you are not in control of your happiness!
Keep that up, and pretty soon you'll be thinking things like:
I'm mostly limited
Oops. Too late. ;-)
Truth be told, the question is more a symptom than a cause; I'm not saying you feel limited or stuck because you asked the question, so much as that the question is both an expression and reinforcement of the stuckness you already feel.
To change your answers, change your questions! (And be aware of what those questions are priming, because the questions you habitually ask yourself are the #1 source of priming affecting your thought processes and emotions.)
In contrast, asking "what do I want?" carries a different prime, by implying that what you want matters, and that you intend to go after it and get it. It also does not call for your brain to figure anything out. Either you want a thing or you do not; there is nothing to "figure out" or strategize. Simply tell the truth about what you do or do not want, do or do not know whether you want. Repeat telling the truth until you know.
"What do I want?" is a question about the current state of reality, in other words, and you can keep asking it as much as you want. The answers may change over time, but that's okay, because that's the truth. You need not expect one answer or "the" answer, because there is no one answer.
"What will make me happy(er)?" is problematic precisely because it causes you to think that there is a problem to be solved, a riddle to be answered or a puzzle to be figured out. It engages the parts of your brain that solve that kind of question, but which have absolutely no idea what you want.
That's why I said the questions matter: because it makes a huge difference which parts of your brain are engaged in finding the answer, and therefore what kind of answers you will get.
Replies from: Jonathan_Graehl↑ comment by Jonathan_Graehl · 2010-09-09T23:00:33.553Z · LW(p) · GW(p)
It feels like you're obsessed with the specific words I've used to express a line of introspection/deciding/planning, as if I'm going to verbally ask myself a question, and parts of me will react very superficially to the phrasing. I don't think I need to worry about it, because when I think about something in depth, I really think about it. If I'm really thinking, then it doesn't matter what words I use to describe the topic.
However, I am in general willing to experiment with priming tricks, because it's true that I can't afford to think deeply all the time. I haven't found any such trick yet that I can definitely say works.
You quoted a phrase "I'm mostly limited ..." from my claim that social constraints and programming matter as much as brain architecture, but didn't respond to the substance. I'll assume this means that you agree. Do you have any advice on exploiting those factors? What you've given here is based only on evopsych brain-architecture guesses (a "hardware advantage" reachable by well-phrased self questioning)?
Replies from: pjeby↑ comment by pjeby · 2010-09-10T02:46:23.195Z · LW(p) · GW(p)
It feels like you're obsessed with the specific words I've used to express a line of introspection/deciding/planning, as if I'm going to verbally ask myself a question, and parts of me will react very superficially to the phrasing.
Not quite - I'm also saying that people's choice of words is rarely random or superficial, and tends to reflect the deeper processes by which they are reasoning... and vice versa. (i.e., the choice of words tends to have non-random, non-superficial effects on the thinking process).
Note that how a question is phrased makes a big difference to survey results, so if you think this somehow doesn't apply to you, then you are mistaken.
It only feels like such things don't apply to ourselves, like the people in the "Mindless Eating" popcorn experiments who insist that the size of the popcorn container had nothing to do with how much they ate. They (and you) only think this because of the limited point of view from which the observation is made.
I haven't found any such trick yet that I can definitely say works.
Of course - for the same reason that people don't think the size of the container makes any difference to how much they eat. It's easy to write off unconscious influences.
That being said, choice of questions makes a big difference to answers, but it's not solely a matter of priming. After all, if you use the words "What do I want?" and go on internally translating that in the same way as you asked, "What will make me happy?", then of course nothing will change!
So, it's not merely the surface linguistics that matter, but the deep structure of how you ask yourself, and the kind of thinking you intend to apply. Based on the challenge you described, my guess is that the surface structure of your questions is in fact a reflection of how you're doing the questioning... because for most people, most of the time, it is.
You quoted a phrase "I'm mostly limited ..."
The reason I quoted "I'm mostly limited" is because I wanted to highlight that the thought process you appeared to be using was one in which you already assume you're limited, before you even know what it is that you want! (It sounded to me as though you were implying that it doesn't matter if you know what you want, because you're not really going to get it anyway -- and that wasn't just from that one phrase; that was just the easiest one to highlight.)
This sort of assumption is not a trivial matter; it is inherent to how we limit ourselves. When we make an assumption, our brains do not challenge the assumption, they instead filter out disconfirming evidence. That applies even to things like thinking you're not good at knowing what you want!
social constraints and programming matter as much as brain architecture,
Social constraints aren't that important, since people with the appropriate programming can work around them. And choosing effective questions to ask yourself falls under the heading of "programming", in the verb sense of the word.
Do you have any advice on exploiting those factors? What you've given here seems based only on evopsych brain-architecture guesses (a "hardware advantage" reachable by well-phrased self-questioning).
I have tons of "programming" tricks, especially ones for removing social programming. Teaching them, however, is a non-trivial task, for reasons I've explained here before.
One of the key problems is that people confabulate things and then deny having done so. Alicorn's notion of "luminosity" is closely akin to the required skill, but it is very easy for people to convince themselves they are doing it when they are actually not even close. What's more, unless somebody is seriously motivated to learn, they won't be able to pick it up from a few text comments.
(Contra MoR!Harry's statement that admitting you're wrong is the hardest thing to learn, IMO the hardest thing to learn is to take seriously the idea that you don't already know the answers to what's going on in your head... on an emotional and experiential basis, rather than as merely an intellectual abstraction that you don't really believe. Or, to put it another way, most people claim to "believe" the idea, while still anticipating as if they already know how things in their head work.)
Anyway, for that reason, I mostly don't bother discussing such things on LW in the abstract, as it quickly leads to attempts to have an intellectual discussion about experiential phenomena: dancing about architecture, so to speak.
Instead, I usually try to limit myself to throwing out cryptic hints so that people with the necessary motivation and/or skill can reconstruct the bigger picture for themselves, a bit like Harry and the "42" envelope. ;-)
Replies from: Jonathan_Graehl↑ comment by Jonathan_Graehl · 2010-09-10T07:44:44.170Z · LW(p) · GW(p)
While it's true that I can't rule out things that I can't detect, I can't really believe in them, either.
I understand where you're coming from. You've tried much harder than most people do to understand your own emotions and motivations, and you're pretty sure you've actually done so. I agree that there are many people who think they have, but haven't. Similarly, sometimes people think they're really trying, but aren't.
I'm impressed with how much you know about my thoughts :)
I won't suggest that we're fundamentally different in any way, but I do sometimes wonder if there are significant architectural emotion/motivation differences in "normal" people, other than the obvious (male/female).
The popcorn container example doesn't surprise me or change my views in any way - but cool.
I feel like I'm pretty flexible in what I want - that is, I can ask what it is I currently want, but I also ask what I maybe should want, because I've had some success simply provisionally choosing to care more or less about particular things. I sometimes find out that I couldn't actually maintain that level of (dis)interest, and I take this as evidence (not certainty; just some evidence) that such a (lack of) desire is a fixed part of my personality.
comment by MartinB · 2010-09-08T12:59:46.372Z · LW(p) · GW(p)
More examples please, along the lines of [1]. I am bright enough to understand them, but not to come up with too many on my own.
Replies from: Vladimir_Golovin↑ comment by Vladimir_Golovin · 2010-09-09T15:01:12.619Z · LW(p) · GW(p)
Some examples off the top of my head:
A designer who has spent 12 years working in Photoshop but hasn't learned even basic hotkeys because doing everything with the mouse is "more convenient". The same person also always clicks the Open button in file-opening dialogs with the mouse cursor instead of just double-clicking the filenames.
A guy who often gets out of the building to get some junk food instead of checking out a new affordable cafeteria on the ground floor of the same building, which has been open for 3 months. (That was me, prior to today.)
A manager who has been working in software development companies for 15 years who still uses IE7 as his main web browser.
comment by MugaSofer · 2013-04-04T21:51:18.962Z · LW(p) · GW(p)
Perhaps 5% of the population has enough abstract reasoning skill to verbally understand that the above heuristics would be useful once these heuristics are pointed out. That is not at all the same as the ability to automatically implement these heuristics.
Source?
Replies from: somervta
comment by psyklic · 2010-09-08T21:04:32.925Z · LW(p) · GW(p)
I disagree - I think that people usually do know how they could be more productive. This argument is really about people who TALK versus people who DO - the talkers know that optimally they should be "do"ing. But, being a sheep (talker) is BORING, and being a fox (do-er) is LONELY.
In the author's example, the comedian knows that watching re-runs is the easy way out. He'll be bored, but he'll learn a little bit and he can tell his friends he's working.
He also knows that, ideally, he'd be working comedy all the time instead. But he's already working another job, he's tired at night, and if he REALLY plunged into comedy he'd lose his free time too. He'd be lonely and on his own.
This isn't an issue of the comedian not having "strategic" capabilities. It's an issue of him not being a risk-taker, not being sufficiently motivated, and in many ways of him just being tired out!
Replies from: orthonormal↑ comment by orthonormal · 2010-09-09T22:38:29.596Z · LW(p) · GW(p)
Hi psyklic, and welcome to Less Wrong! Be sure and introduce yourself in the welcome thread.
comment by CronoDAS · 2010-09-08T19:28:45.183Z · LW(p) · GW(p)
What do you do when the answer to (a) is "Nothing in particular"?
Replies from: AnnaSalamon↑ comment by AnnaSalamon · 2010-09-08T19:35:00.588Z · LW(p) · GW(p)
Keep introspecting. If you find yourself preferring to e.g. play a video game, rather than to lie in bed, there's a reason you prefer it. Micro-goals count too.
Replies from: CronoDAS, CronoDAS↑ comment by CronoDAS · 2010-09-08T23:28:00.343Z · LW(p) · GW(p)
Introspection? I try to avoid that, and I think I have a pretty good reason. I don't like to do introspection because I don't like what I find.
When I query my brain for what I ultimately want out of life, the answer that comes back is "I want to die." And it's not that I'm particularly unhappy at the moment; "death" seems to feel like a kind of freedom, freedom from all the annoying things that other people insist that I do (and I can't justify saying "no" to) and all the annoying things that I have to do to maintain this body, such as eat and go to the bathroom, freedom from, as Shakespeare put it, "the heart-ache and the thousand natural shocks / That flesh is heir to". The emotion I feel most strongly when I contemplate the state of being dead is not fear, not sadness, but relief - and that scares me. I don't think I ought to want to die. And if I did die, that would make many people who know me very sad, and I definitely don't want that. So I haven't killed myself yet; I'm waiting for my parents to die first. And until then, I just waste time doing nothing in particular.
Sorry to be so morbid. :(
Replies from: LucasSloan↑ comment by LucasSloan · 2010-09-09T01:07:42.026Z · LW(p) · GW(p)
Umm...
I want to point out the contradiction between you saying introspection says "die" and the fact that you, having reflected on this, decided not to do "introspection" because doing so leads to the thought that dying would be good and you don't want to die. If you could change yourself such that doing "introspection" didn't lead to the thought of death, would you? The fact you haven't killed yourself suggests that you're not actually introspecting on your true values, just some unhappy subset thereof (or perhaps, introspecting with your true values on an incomplete subset of the data you have about the quality of your life/the universe).
Also, if I promise to spend 5 minutes crying upon notice of your death, will you not kill yourself in order to spare me the unpleasantness?
Replies from: CronoDAS↑ comment by CronoDAS · 2010-09-09T03:14:05.238Z · LW(p) · GW(p)
I want to point out the contradiction between you saying introspection says "die" and the fact that you, having reflected on this, decided not to do "introspection" because doing so leads to the thought that dying would be good and you don't want to die. If you could change yourself such that doing "introspection" didn't lead to the thought of death, would you?
Probably.
The fact you haven't killed yourself suggests that you're not actually introspecting on your true values, just some unhappy subset thereof (or perhaps, introspecting with your true values on an incomplete subset of the data you have about the quality of your life/the universe).
I currently have compelling reasons to refrain from killing myself, regardless of my general lack of personal interest in continued existence. Alas, like so many other things, the peace of the grave is denied to me.
Also, if I promise to spend 5 minutes crying upon notice of your death, will you not kill yourself in order to spare me the unpleasantness?
I don't expect you to get such notice. You're just some guy on the internet; if I simply stop posting, you'll probably never know why. But no, that wouldn't be enough to dissuade me from implementing Really Extreme Altruism if I ever decided to actually go through with it.
↑ comment by CronoDAS · 2010-09-09T00:15:07.270Z · LW(p) · GW(p)
Often the main reason that I do anything seems to boil down to "sheer force of habit."
Replies from: erratio, jimrandomh↑ comment by erratio · 2010-09-10T05:47:23.545Z · LW(p) · GW(p)
May I recommend an experiment then? Try ignoring force of habit for a few days and see how you feel about all those activities. It may help you to come up with internal reasons to want to do things rather than relying on the external pressures of habit and expectations. If, after a few days, it turns out that lying in bed doing nothing is actually preferable to escapism through computer games and surfing the Internet, I submit that it means your medication isn't doing everything that it should and that getting that fixed should be your first priority. In all other cases I would expect that it will turn out that you do have reasons to get out of bed that aren't dependent on habit.
For me, no matter how depressed I am I always get out of bed at the very least, even if it's just so I can stare at the wall while I try to focus and motivate myself to do something enjoyable or productive. If I inspect my reasons for doing so, "habit" is definitely a large part of it. But a larger part is "boredom", as in, I can only contemplate my utter worthlessness for so long before my thoughts start feeling repetitive and boring, and I feel the need to distract myself by getting up and doing something that I find at least marginally engaging.
Replies from: CronoDAS↑ comment by CronoDAS · 2010-09-10T07:53:33.642Z · LW(p) · GW(p)
I've tried the whole "lying in bed doing nothing" thing. When I wake up, I'm usually groggy and can end up spending an hour or two in bed half-asleep. I'm usually not thinking about much of anything at all during this time, or at least I'm not thinking in words, so I'm not "contemplating my utter worthlessness". When trying to go to sleep, though, I tend to get frustrated if I don't fall asleep quickly, so I'll often turn on a portable game system (leaving the lights in the room off) and play until I basically can't stay awake any more. I strongly suspect that this is a bad idea, though, as it tends to shift my sleep schedule later and later. I also have a tendency to take naps during the "day" and then get back up. (I do this once or twice a week, I guess.)
Sometimes, I really do play video games because the playing of the game itself is fun. (Persona 3 Portable is the most recent game to have taken over my life.) Some games have both boring parts and more interesting parts, and I play through the boring parts so I can get to the more interesting parts. Once in a while I'm playing one so I can say I've finished it before I go on to another one; I'm a bit of a completionist and often get annoyed if I don't get Hundred Percent Completion. Or sometimes it's because I'm simply curious about what happens next even though the game itself isn't really all that good. (I'll occasionally see a movie I don't expect to be very good simply to satisfy my curiosity about it.) And I've found that carrying around a portable video game system (or a novel) is a great way to avert boredom when doing things like waiting in line. So "habit" and "convenience" aren't the only reasons I play lots of video games.
There is one specific thing that I've noticed about games, though: even a bad game gets a lot more interesting when I have some work to avoid. It's often exciting for me to have something that I should be doing but don't want to, and then not do it. (I noticed this phenomenon when I was in college; it hasn't seemed to apply very much since then.)
Replies from: erratio↑ comment by erratio · 2010-09-13T05:41:16.431Z · LW(p) · GW(p)
Right, so it sounds like you do value engagement over doing nothing. That's certainly a good start.
Basically, I think it should be possible for you to find some better (as in: likely to help you change your terminal value) goals that you actually want to do, without necessarily having to introspect about your desire to kill yourself. Of course, I could well be generalising from one example.
There is one specific thing that I've noticed about games, though: even a bad game gets a lot more interesting when I have some work to avoid. It's often exciting for me to have something that I should be doing but don't want to, and then not do it. (I noticed this phenomenon when I was in college; it hasn't seemed to apply very much since then.)
Oh boy do I know that feeling. The corollary being that after I finally got the work done or sat the exam or whatever I suddenly realised that I'd wasted 20+ hours on some piece of dreck :)
↑ comment by jimrandomh · 2010-09-09T01:14:21.612Z · LW(p) · GW(p)
Sounds more like a biochemical issue to me; that sort of laziness is likely to mean something's wrong that's not just psychological. Are you taking a multivitamin regularly?
Replies from: CronoDAS↑ comment by CronoDAS · 2010-09-09T03:02:45.156Z · LW(p) · GW(p)
No, but I do take antidepressants.
Replies from: jimrandomh↑ comment by jimrandomh · 2010-09-09T05:09:51.996Z · LW(p) · GW(p)
I predict with p=0.95 that you have at least one micronutrient deficiency which is greatly contributing to your depression, and that starting to take a multivitamin regularly would be enormously to your benefit. I predict with p=0.6 that you are specifically deficient in thiamine, and that a single dose of sulbutiamine (a molecule that crosses the blood-brain barrier and then breaks into two thiamine molecules) would cause a large and sudden reduction in your depression. I am basing this on my own experience with thiamine deficiency (caused by T1 diabetes), which produced in me a specific type of apathy which I recognize in your comments.
Unless you either lied about taking a multivitamin to your current doctor, or ignored their advice to take one, fire him or her and find a new one. Also, thoroughly research every drug you're currently taking. At a minimum, search for the name of each one on PubMed, skim the first few pages of titles and read some of the abstracts. Don't adjust anything without consulting a qualified doctor, but do make sure to have that consultation.
Following up on this may be the most important thing you ever do.
EDIT: One other thing - if you're on antidepressants, you should be getting blood work, of the "large checklist of tests" variety, done on a regular basis. Make sure your TSH has been tested at least once in the past two years (result will be interesting with p=0.1, but very interesting if it is).
Replies from: CronoDAS, datadataeverywhere, wnoise, RobinZ↑ comment by CronoDAS · 2010-09-09T06:42:50.041Z · LW(p) · GW(p)
I have been getting blood work; everything always comes out just fine. (Yes, thyroid hormone is one of the things that's been checked.) And none of the many doctors I've been dragged to have told me to take vitamins, although my psychiatrist has occasionally asked about my diet. There are multivitamins in my house, but I stopped taking them a long time ago because they're these really annoying, very large chewable tablets the size of quarters.
In terms of vitamin deficiency, I'm actually most suspicious of vitamin B12. Both my maternal grandmother and my mother have low levels and get B12 injections regularly. (My mom is currently 60.) I once asked my psychiatrist to have my B12 checked, but I don't think it actually has been.
Also, the basic effect of my antidepressants has been "Well, I am more cheerful now, but my life still sucks every bit as much as it did when I wasn't taking them." I'll quote a doctor's anecdote:
“I remember one patient who came in and said she needed to reduce her dosage,” he says. “I asked her if the antidepressants were working, and she said something I’ll never forget. ‘Yes, they’re working great,’ she told me. ‘I feel so much better. But I’m still married to the same alcoholic son of a bitch. It’s just now he’s tolerable.’ ”
Perhaps the difference between me on antidepressants and me off antidepressants is that, while on antidepressants, I was willing to go do my homework even though I'd rather touch a hot stove than do another problem set, while when off them, no amount of social pressure from my parents and other authority figures could make me open up my textbook and get to work, because I just couldn't make myself do it no matter what happened.
Right now, I'm not necessarily depressed because I have screwed up brain chemicals. I'm depressed because I'm a 28-year-old lazy bum who doesn't think he'll ever be able to get a job he can stand and keep it for any length of time, is supported by (and lives with) his parents, doesn't have any close friends, has never been in a romantic relationship, lives in fear of having his parents decide to stop supporting him, is endlessly frustrated by his mother's (completely justified) demands that he help her with various tasks because she has MS and can barely walk, and doesn't have any particular goals in life other than "escape it".
I think I can't cope with being my mother's caretaker any more; I need to get an income and get the hell away from my parents, but I don't think I can do that, so I just stay where I am and put up with the same shit that's been making me miserable for the past eight or so years. (Before then, I was often miserable, but for different reasons.)
Replies from: FrF, Apprentice, Alicorn, AdeleneDawner, None, Jonathan_Graehl↑ comment by FrF · 2010-09-09T15:25:54.110Z · LW(p) · GW(p)
Hello CronoDAS,
Your story sounds somewhat similar to mine (but I'm considerably older than you). My mother had Multiple Sclerosis, too; I was her main caretaker until her death. It's strange that it didn't dawn on me how much my upbringing and my mother's illness have shaped my father's life and mine - and furthermore I didn't really understand until recently how unusually withdrawn my life has been so far. Now, social isolation is a well-known danger when you're severely ill, but I was (at least on a physical level) healthy, and still I wasn't able to break out of the habits that I (to a certain degree) adopted because of my former circumstances and a general inclination towards shyness.
I have a very unoriginal proposition for you: Act as soon as possible and change your situation! Believe me, things don't get easier once you're ten years older than you are now. What about a "trial move"? The way you describe your parents I think you could always return if for one reason or another you can't cope with being "on your own".
I'm "in the process" (as vague as that may sound) of finally getting my act together and making some serious, so-long-overdue-you-won't-believe-it life changes. I know some of the depressive symptoms you're describing: a general world-weariness, an enmity toward my own body, avoidance of "boring" errands up to a point where it got seriously damaging, seeing no sense in dragging this carcass of mine through a pointless world, etc. But somehow things are beginning to click for me a bit more. If it's "meant to be" that I'm going down, then at least I'm putting up a fight (i.e. trying to beat some amount of rationality into my skull, which is thick with irrational beliefs and blocks)!
Take care!
Replies from: CronoDAS↑ comment by CronoDAS · 2010-09-09T20:39:17.048Z · LW(p) · GW(p)
The most immediate change I probably need to make is "get an income". It's a prerequisite for most other changes I'd want to make.
(My mom's MS is unusual, because she started showing symptoms late in life, only a few years ago.)
Replies from: FrF↑ comment by FrF · 2010-09-09T21:52:08.679Z · LW(p) · GW(p)
Then your mom is lucky in more than one regard! Because of medical progress it is very different to be diagnosed with MS today than it was in 1973, when my mother had her first MS episode at the age of 27.
You wrote earlier that a lot of what you don't like about your life is simply due to habits. Personally, I find the key to change is to persistently chip away at my mountain of bad habits (my main nemesis is procrastination) and to think more from day to day - to try to implement some (any!) positive difference in my life on a daily basis, even if it's only to show a friendly face when you're not really feeling like it, or to do that one more household chore you try to avoid, or to confront another uncomfortable truth about yourself and verbalize it to (well-chosen!) friends and acquaintances.
I know, these strategies are so basic they almost don't qualify for Self-Help 101 but once you "really want to change" I found they work quite well.
Replies from: CronoDAS↑ comment by CronoDAS · 2010-09-09T21:56:10.284Z · LW(p) · GW(p)
You wrote earlier that a lot of what you don't like about your life is simply due to habits.
Actually, what I said was that a lot of the activities I do (video games, blog commenting) are generally done because they're what I've gotten used to spending time doing, not that the habits themselves are necessarily causing the problem.
↑ comment by Apprentice · 2010-09-09T15:23:35.319Z · LW(p) · GW(p)
I apologize in advance for the long-shot other-optimizing but, well, here goes.
Something that has repeatedly worked for me to move from a lethargic, somewhat depressed state to an active and happy (if restless) state is to deliberately refrain from sexual release while not refraining from exposure to sexual stimuli. I came upon this independently but I've since found the same basic idea in Taoist literature and in femdom literature. It could also easily be pitched as an evo-psych idea.
Replies from: katydee↑ comment by katydee · 2010-09-09T15:25:31.344Z · LW(p) · GW(p)
Uh, what's the mechanism there?
Replies from: Apprentice, gwern↑ comment by Apprentice · 2010-09-09T17:32:50.438Z · LW(p) · GW(p)
I'm not aware of any research on this exact question so what literature there is is mostly religious or pseudo-scientific. What I do think is fairly well-established is that lack of sexual release makes men restless. Why 'restless' in my case translates to "active and happy" rather than, say, "aggressive and abusive" I don't exactly know. Some factors that may be relevant (but I had not thought of before now): a) My baseline personality is quite docile and submissive, b) Like many people here, I enjoy toying with self-hacking, c) I have lots of projects to pour extra energy into, projects that are satisfying intellectually and status-wise.
↑ comment by gwern · 2010-09-09T16:00:03.827Z · LW(p) · GW(p)
Presumably sublimation. At least, Freud's sublimation reminds me a heck of a lot of the Tantric Buddhism and Taoist ideas of collecting ch'i from sexual activities (or lack thereof) and using it for other purposes.
↑ comment by Alicorn · 2010-09-09T13:36:24.195Z · LW(p) · GW(p)
Do you have a skill that you are willing to offer potential roommates? I'm currently exchanging my culinary expertise for room and board. It's a good deal. I can get away with having a really minuscule income to cover discretionary expenses and mostly I do whatever I want all day until it's time to mix up a batch of muffin batter.
↑ comment by AdeleneDawner · 2010-09-09T12:48:28.010Z · LW(p) · GW(p)
In terms of vitamin deficiency, I'm actually most suspicious of vitamin B12. Both my maternal grandmother and my mother have low levels and get B12 injections regularly. (My mom is currently 60.) I once asked my psychiatrist to have my B12 checked, but I don't think it actually has been.
I suggest trying Emergen-C or your local generic version. It's mostly marketed for the vitamin C megadose, but 416% of the recommended minimum of B12 isn't insignificant. The generic I use has an odd taste when prepared according to the directions, but is good when mixed with a sweet drink like Kool-Aid.
Replies from: CronoDAS↑ comment by CronoDAS · 2010-09-09T20:36:21.598Z · LW(p) · GW(p)
B-12 deficiency is usually caused by problems with absorption, not by a lack of B12 in the diet.
Replies from: Douglas_Knight↑ comment by Douglas_Knight · 2010-09-09T23:37:36.328Z · LW(p) · GW(p)
B-12 deficiency is usually caused by problems with absorption, not by a lack of B12 in the diet.
Yes, but sometimes (often?) it can be cured by increasing dietary sources. Acute doses might not be ideal, though.
↑ comment by [deleted] · 2010-09-09T15:19:12.954Z · LW(p) · GW(p)
Have you ever tried cognitive therapy? If antidepressants made you more cheerful but haven't otherwise changed your outlook then maybe some systematic effort at altering your thought patterns would? Maybe combined with antidepressants if they make you more likely to complete homework assignments (I think cognitive therapy involves those).
↑ comment by Jonathan_Graehl · 2010-09-09T07:55:15.344Z · LW(p) · GW(p)
But you seem to be quite smart. Sigh. I guess you know that you will be happier with a decent-paying and/or intellectually engaging job (even one you "can't stand"), because you'll then have a realistic chance for some of the things you want, so if taking antidepressants lets you tolerate finding and performing a job, then it makes sense to keep on using them. Without knowing you well enough, I'll still guess that it's unlikely that you "don't think you can" based on your actual ability and opportunity, but more because of the helplessness of depression (naturally I could be completely wrong).
Replies from: CronoDAS↑ comment by CronoDAS · 2010-09-09T08:17:18.383Z · LW(p) · GW(p)
Well... I've had some pretty bad experiences with employment. The last time I was employed, I sat in a cubicle and surfed the Internet all day while feeling guilty about not getting anything done. It was really awful. I once signed up with a temp agency. My first assignment lasted a week. After it was done, the customer complained about me (please don't ask why) and I was fired from the temp agency. Another time, I worked as a cashier at a supermarket, and I lasted all of three days before being fired for insubordination.
Money's never been a very big motivator for me. I've got over twenty thousand dollars sitting in the bank, so if I want to spend $50 on a video game, or $300 on a video game system, I can. And I have enough unplayed video games sitting on my shelf to last me a long, long time. What would I do with more money? Well, I did decide within the last 24 hours that I definitely can't cope with being my mom's caretaker any more, so I'd want to move out of my parents' house, and I'd want to get a cat, and I once calculated that it would cost me a few thousand dollars a year to play Magic: the Gathering competitively, but that's about it.
The usual "carrot-and-stick" approach to motivation doesn't work too well on me; I just give up on getting the carrots and resign myself to enduring the sticks. Is that what they call "learned helplessness"? I've had people trying to drum the lesson "you're going to have to do what you're told, regardless of what you want to do, and fighting will only make things worse" into me my whole life, and it seems like they were mostly right: as a child, you're pretty powerless to get what you want, if what you want is "not to go to school".
Replies from: CronoDAS, SilasBarta, jimrandomh, Jonathan_Graehl↑ comment by CronoDAS · 2010-09-09T09:18:08.562Z · LW(p) · GW(p)
On the plus side, I think I could probably teach or tutor math without going crazy.
Replies from: Jonathan_Graehl↑ comment by Jonathan_Graehl · 2010-09-09T16:53:09.371Z · LW(p) · GW(p)
Most people find teaching (well) to be difficult. If you're good at it, then that's quite valuable.
Replies from: mattnewport↑ comment by mattnewport · 2010-09-09T17:38:19.491Z · LW(p) · GW(p)
What sense of valuable are you using here? I've seen very little evidence in my interactions with the education system that being good at teaching is highly valued either in terms of direct financial rewards or career prospects.
Replies from: Jonathan_Graehl↑ comment by Jonathan_Graehl · 2010-09-09T19:49:24.653Z · LW(p) · GW(p)
Effective tutoring would be very valuable to rich parents. Perhaps passively building your reputation wouldn't work; self-promotion would be necessary.
Public school teachers are well compensated overall over an entire career (including pension), although I doubt the job is very fun, and you're right that the rewards are in no way contingent on actually teaching well.
Replies from: Douglas_Knight↑ comment by Douglas_Knight · 2010-09-09T23:18:21.956Z · LW(p) · GW(p)
Effective tutoring would be very valuable to rich parents.
Are rich parents able to distinguish effective tutors? In my experience they largely hire based on elite education. Plus, most of their "tutoring" time is really guarding the child to make sure the child actually does homework. But there are also non-rich parents. I don't think that DAS should have any trouble getting hired and keeping tutoring positions for $20 or maybe $50 hourly, if he can find parents who want a tutor. This is a very different skill and I think the main determinant of people actually tutoring. (ETA: I seem to have missed JG's second sentence. Sorry.)
Replies from: AdeleneDawner↑ comment by AdeleneDawner · 2010-09-09T23:55:33.686Z · LW(p) · GW(p)
I poked around a little earlier today, and found a few sites that do paid online tutoring. This one was the most open about hiring new tutors of the ones I looked at. Their FAQ says that their most active Chemistry tutors earn $800-$1600/month. Even given that that's an upper bound, it may be worth looking into. (I lived pretty comfortably on $1200/month last year, with about Crono's expectation of lifestyle, and without having someone to share bills with.)
↑ comment by SilasBarta · 2010-09-09T16:01:19.434Z · LW(p) · GW(p)
If you are really capable of playing Magic competitively if only you had the cards, etc., I would be glad to start you up, and you can pay me back whenever. But I would need to know that e.g. you are up-to-date on what decks/strategies work, tournament formalities (so you don't lose because of using the wrong "done with turn" indicator or tapping rotation angle), etc.
(I made this offer over a year ago, but was strongly criticized for having the proviso that Crono put his karma at stake to indicate seriousness and as a motivator.)
Replies from: CronoDAS, AdeleneDawner↑ comment by CronoDAS · 2010-09-09T20:32:56.595Z · LW(p) · GW(p)
I'm not yet capable of playing professionally. I might be able to reach that level, but I'm not there yet. And by "playing professionally" I don't mean "play well enough to make a living at it." There are very few people in the world who have ever made enough money from Magic tournaments to live on, although the number of people who at least manage to make back their expenses is much larger. (The "several thousand dollars a year" figure is an upper bound and doesn't take into account potential winnings.)
I actually do have a plan to get better, though; if I can put up a good showing in a few tournaments, Zvi Moshowitz will let me join his Magic playing social circle. (I think.) The current plan is to wait for the next Pro Tour Qualifier season to start - it's Sealed Deck with the soon-to-be-released Scars of Mirrodin set - and just attend as many as I can get to while also getting in plenty of practice by playing on Magic Online.
Replies from: Apprentice, Sniffnoy↑ comment by Apprentice · 2010-09-10T16:07:23.735Z · LW(p) · GW(p)
I once knew a gamer, indeed an MtG player, who made a decent (though certainly not extravagant) living out of playing online poker. Smart guy. I never observed his poker skills first hand but he certainly kicked the shit out of me in MtG.
I don't know how difficult it is to use poker as an income source but you probably have the basic skill set (math/rationality/gaming) required for good poker playing.
Replies from: CronoDAS↑ comment by AdeleneDawner · 2010-09-09T16:29:27.711Z · LW(p) · GW(p)
Erm...
...and I once calculated that it would cost me a few thousand dollars a year to play Magic: the Gathering competitively...
Unless Crono's disregarding his potential winnings, your question about whether he thinks he'd be able to earn money that way seems to have been answered.
Replies from: SilasBarta↑ comment by SilasBarta · 2010-09-09T16:37:24.664Z · LW(p) · GW(p)
Yes, but from earlier discussions he had suggested he'd be able to play professionally, so that's what I interpreted him to mean here, and the cost is gross rather than net, so he'd only need the first year's expenses to be self-sustaining.
So I was indeed sneaking in assumptions from earlier exchanges.
Replies from: AdeleneDawner↑ comment by AdeleneDawner · 2010-09-09T16:44:22.678Z · LW(p) · GW(p)
If that is gross, sure. I did mention that he might be disregarding potential winnings. It seems odd to me that he'd word it that way in that case, though.
↑ comment by jimrandomh · 2010-09-09T12:11:23.386Z · LW(p) · GW(p)
Learned helplessness applies more to specific stimuli and specific rewards; what you're describing sounds more like general lack of energy. My advice is to tweak your biochemistry until you feel more energetic, and try the cubicle environment again.
↑ comment by Jonathan_Graehl · 2010-09-09T16:55:27.819Z · LW(p) · GW(p)
Interesting. For sure you will need to save more money than that in the long run (when you are older and really not able to do much work).
It sounds good that you've decided that you need to move out, provided you actually do so.
↑ comment by datadataeverywhere · 2010-09-09T15:53:29.278Z · LW(p) · GW(p)
I'm with wnoise, but I have a question to clarify my position.
How many diagnoses do you expect a competent physician to get wrong? I would say that more than 1 in 20 is at least reasonable. However, without meeting CronoDAS, or performing tests of any kind, based purely on the scant evidence in his posts, you have diagnosed him with a micronutrient deficiency, and have a confidence of 95% in your diagnosis. Seriously? What's your prior? Even for thiamine, a 60% confidence that this near-stranger is deficient in it seems dramatically too high.
Replies from: Alicorn, jimrandomh↑ comment by Alicorn · 2010-09-09T16:32:28.463Z · LW(p) · GW(p)
How many diagnoses do you expect a competent physician to get wrong?
I expect physicians to be bewildered rather a lot. I spent years severely anemic. My father is an MD, my uncle is an MD, I saw a variety of doctors during this time, I was eating cups and cups and cups of ice every single day and was unremittingly tired and ghostly pale, partway through I became a vegetarian - and it took the Red Cross's little machine that goes beep to figure out that maybe I wasn't getting enough iron. I have a vast host of symptoms less serious than that which no doctor, med student, or random interlocutor has been able to offer plausible guesses about.
I expect bewildered people to make things up.
Replies from: datadataeverywhere↑ comment by datadataeverywhere · 2010-09-09T16:50:38.241Z · LW(p) · GW(p)
Agreed. Even if they don't make things up, the responsible thing to do is to iterate through harmless or nearly-harmless treatments for conditions that the physician thinks are unlikely, but more likely than any other ideas he or she has.
This is exactly the opposite problem; not being at all bewildered or in doubt, despite a paucity of evidence. Doctors do that too.
Both making things up and jumping to conclusions happen because doctors are humans and are wired to see patterns, whether or not they exist. While we're busy refining the art of human rationality, we ought to try to curb that behavior.
↑ comment by jimrandomh · 2010-09-09T17:08:15.630Z · LW(p) · GW(p)
These numbers are uncalibrated estimates (I spent 60s looking for population statistics to use as priors, and didn't find any), but I don't think they're at all unreasonable. Keep in mind that deficiencies come in degrees, and only the most severe ones ever get diagnosed. Anyways, here's a breakdown (again, just estimates) of that 0.95:
P(micronutrient deficiency) = 0.2
P(micronutrient deficiency|no multivitamin) = 0.8
P(micronutrient deficiency|no multivitamin & depressed) = 0.95
I certainly wouldn't say it's the only problem, but it's very likely a contributing factor.
Anyways, we can find this out directly. CronoDAS, could you take a look at the wikipedia page on thiamine, go through the lists of thiamine-containing and thiaminase-containing foods, and estimate your intake? Or better yet, order sulbutiamine and report its effects here?
Replies from: datadataeverywhere, CronoDAS↑ comment by datadataeverywhere · 2010-09-09T18:02:58.445Z · LW(p) · GW(p)
P(micronutrient deficiency) = 0.2
I would go as high as 0.3 if you extend to third-world countries, but suspect it's lower among people like CronoDAS who can afford a variety of food. Either way, it's good enough.
P(micronutrient deficiency|no multivitamin) = 0.8
The law of conditional probability indicates that you think that a minimum of 75% of the population takes a multivitamin. I think this is way too high, especially for a population that has a 20% micronutrient deficiency rate.
P(micronutrient deficiency|no multivitamin & depressed) = 0.95
So the rate of depression among those with micronutrient deficiencies (and who don't take their vitamins) is about 119% that of the general population? I can buy that, but if it's that low, then why are you so sure that a micronutrient deficiency is "greatly contributing" to his depression?
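Both arithmetic claims above follow directly from jimrandomh's three stated numbers; a quick sketch (the inputs are his estimates, not measured data):

```python
# jimrandomh's stated estimates (not measured data)
p_d = 0.2             # P(micronutrient deficiency)
p_d_no_mv = 0.8       # P(deficiency | no multivitamin)
p_d_no_mv_dep = 0.95  # P(deficiency | no multivitamin & depressed)

# Law of total probability: p_d = P(D|mv)*m + p_d_no_mv*(1-m), with m = P(takes multivitamin).
# Even in the extreme case P(D|mv) = 0, consistency forces a minimum multivitamin rate:
m_min = 1 - p_d / p_d_no_mv
print(m_min)  # 0.75 -> at least 75% of the population would have to take a multivitamin

# Bayes: P(depressed | deficient, no mv) / P(depressed | no mv)
#      = P(deficient | no mv & depressed) / P(deficient | no mv)
likelihood_ratio = p_d_no_mv_dep / p_d_no_mv
print(round(likelihood_ratio, 4))  # 1.1875 -> depression only ~119% as common among the deficient
```

So the two consistency checks, not any outside data, are what pin down the "75%" and "119%" figures in the critique.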
I agree that there's no harm in having CronoDAS gather data or experiment a little, since sulbutiamine seems to have very few negative side effects with recommended doses.
My main reason for bringing it up is that I see some very high probabilities tossed about on Less Wrong, and it bothers me when I feel like people are assigning numbers that they can't justify. I'm still skeptical about your 95% confidence, but it's nice to see a breakdown.
Would you be willing to take a bet at a 2/3rds payoff that CronoDAS has no thiamine deficiency? How about a 1/19th payoff that taking a daily multivitamin wouldn't significantly alter how he feels?
[EDIT: Revised payoffs in bet to reflect professed certainty]
↑ comment by CronoDAS · 2010-09-13T16:25:25.033Z · LW(p) · GW(p)
From Wikipedia:
Thiamine is found in a wide variety of foods at low concentrations. Yeast and pork are the most highly concentrated sources of thiamine. In general, cereal grains are the most important dietary sources of thiamine, by virtue of their ubiquity. Of these, whole grains contain more thiamine than refined grains, as thiamine is found mostly in the outer layers of the grain and in the germ (which are removed during the refining process). For example, 100 g of whole-wheat flour contains 0.55 mg of thiamine, while 100 g of white flour contains only 0.06 mg of thiamine. In the US, processed flour must be enriched with thiamine mononitrate (along with niacin, ferrous iron, riboflavin, and folic acid) to replace that lost in processing.
Some other foods rich in thiamine are oatmeal, flax, and sunflower seeds, brown rice, whole grain rye, asparagus, kale, cauliflower, potatoes, oranges, liver (beef, pork and chicken), and eggs.
Hmmm... as it turns out, I've been eating quite a lot of thiamine-fortified pasta lately, and it's also in cold cereal, orange juice, and bread. I don't think I have an unusually low amount of thiamine in my diet when compared to the average American.
↑ comment by wnoise · 2010-09-09T07:17:32.810Z · LW(p) · GW(p)
What Robin said. Good for making easily testable predictions. But it really sounds like you're generalizing from one example here.
comment by Conor (conor) · 2021-01-14T06:15:24.889Z · LW(p) · GW(p)
How has your strategy (a-h) changed since you wrote this? Are there resources you can share for learning to be more strategic? A method for finding quality resources? Methods for practicing and assessing strategic skill?
comment by Adam Zerner (adamzerner) · 2013-12-05T19:06:17.382Z · LW(p) · GW(p)
Thanks for writing this. It has enabled me to articulate the rationales behind a lot of the "crazy" thoughts I have. For example:
People are horrible at choosing careers. They hardly explore their options at all, and thus limit themselves greatly.
People are bad at choosing who their girl/boyfriends are. They make decisions impulsively based on romantic love when they should really be considering the expected value of true attachment. A lot of times it seems that certain relationships "work", but are clearly suboptimal. Also, it seems that if people were genuinely interested in finding someone to enter a relationship with, they should spend more time exploring their options.
Most philanthropists are lazy and impulsive. They don't think about how to best help society, they just make impulsive decisions based on what issues push their buttons.
The great majority of people are stupid. They don't think at all about how to best achieve the supergoal. They just make impulsive decisions based on their high level maps.
I think the reason behind these types of decisions is that humans are not automatically strategic.
Replies from: hyporational↑ comment by hyporational · 2013-12-05T19:32:43.548Z · LW(p) · GW(p)
Both choosing a career and choosing a mate seem to suffer from this weird expectation of finding the one that fits me perfectly. This sort of thinking has always been very alien to me, and to this day I don't understand what causes it. I suppose media has something to do with it.
Replies from: adamzerner, TheOtherDave↑ comment by Adam Zerner (adamzerner) · 2013-12-05T23:27:50.165Z · LW(p) · GW(p)
I think that career/mate are both huge decisions. Both will be a huge part of your life for ~50 years!!! If you could improve your career/mate even a little bit, the impact is multiplied by this large duration of time... thus making career/mate decisions important, and worthy of a lot of thought.
Still, I don't think it's worthy of so much thought that you should be looking for a perfect fit: the chance of finding a perfect fit is small enough to outweigh even that huge reward.
Also, I would say that people look for a mate/career that they think fits them perfectly, not one that actually does. And what they think is just a romantic and general idea based on generalized maps that are many levels above the territory. As for how they develop these maps, I don't have much of an idea.
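Adam's duration-multiplier argument can be made concrete with a toy calculation; every number below is an assumption chosen purely for illustration:

```python
# Illustrative only: a small per-hour improvement compounds over a long duration (all numbers assumed)
years = 50                   # assumed duration the career/mate choice affects
hours_per_year = 2_000       # assume ~2,000 affected hours per year
improvement_per_hour = 0.01  # assume the better choice is worth 0.01 "utility units" more per hour
search_cost = 500            # assume 500 extra hours of deliberation, valued at 1 unit per hour

gain = years * hours_per_year * improvement_per_hour
print(round(gain))            # 1000 units gained over the duration
print(gain > search_cost)     # True: even a tiny edge dwarfs a substantial search cost
```

The point survives large changes to the assumed inputs: as long as the improvement applies over decades, quite expensive deliberation still pays for itself.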
↑ comment by TheOtherDave · 2013-12-05T20:32:00.548Z · LW(p) · GW(p)
It seems to me a special case of a broader habit of inferring individual agents where the reality is more distributed statistical patterns, which I expect pre-dates media in the modern sense (though I suppose we could say modern media has something to do with it in the sense of reinforcing a pre-existing tendency, if we wanted).
comment by AlexFromSafeTransition · 2023-02-26T09:28:30.075Z · LW(p) · GW(p)
Very interesting and revealing post. (I'm new)
I recently picked up a habit that makes me more strategic and goal-achieving that might be useful to share.
I have instituted a rule to start the day by making a list of things I could do and ranking them by importance and by how much effort they cost; the rule is then to do the most important, greatest-effort, or most unpleasant one first. Then, when I have done it, I have moved toward my goal and feel better about myself. Before doing this, I would choose what to do based on what I WANTED to do first, and then feel disappointed by how little progress I had made after a few hours. Another rule that ties in with this is no phone or other distraction before having done one hour of productive work.
comment by HungryTurtle · 2012-02-10T19:20:48.686Z · LW(p) · GW(p)
What a thought-provoking article. Thank you so much for writing this. I am especially interested in the question "why do people spend their Saturdays 'enjoying themselves' without bothering to track which of their habitual leisure activities are actually enjoyable?" When I was younger I spent a large amount of my summer vacations and weekends playing Call of Duty: Modern Warfare online. The bizarre thing was that I would always stop infuriated. It did not make me happy. In fact, there are few things more infuriating than what you hear while playing an online video game.
I have some hypotheses as to why people continue to do things that they do not enjoy, but I was wondering if you know of any other essays on this site, your own ideas, or a body of literature that you are familiar with on the issue.
comment by lionhearted (Sebastian Marshall) (lionhearted) · 2010-09-09T04:15:00.398Z · LW(p) · GW(p)
This was a magnificent post, Anna. I'd like to write a longer reply and more analysis later, but for the moment I wanted to say this was really fantastic and amazing, and there's wisdom and insight packed very densely here. Thank you for writing this up, it's inspiring and insightful.
comment by [deleted] · 2010-09-08T11:37:31.771Z · LW(p) · GW(p)
Why do even unusually numerate people fear illness, car accidents, and bogeymen, and take safety measures, but not bother to look up statistics on the relative risks?
Disease, motor vehicles, and humans are very dangerous. Currently, everyone dies eventually(1), and almost everyone who dies is killed by one of these three things. The CDC has charts about this. See 10 Leading Causes of Death by Age Group, United States – 2007. Boxes that aren't one of the Big Three Of Doom are extremely rare. This chart breaks down the unintentional injuries. As you can see, motor vehicles dominate. I was surprised at first that "Unintentional Poisoning" came in second, especially among the 35-54 age group, then I realized that it's probably drug overdoses, not people thinking that vitamins are candy. After that, it's Unintentional Fall among older people(2).
Do people fear the wrong diseases, wrong motor vehicles, and wrong humans? Sure. But at least the categories are correct.
I suggest "plane crashes".
- If you are cryopreserved, you are dead, but with strange aeons even death may die. (Information-theoretic death is eternal, though.)
- And now you know why there's a Culture ship named Death and Gravity.
As for the rest of the post, which is asking very good questions, when I became a programmer I started with (c) "Find ourselves strongly, intrinsically curious about information that would help us achieve our goal;", bootstrapped up to skill and power, and then figured out what goals I could achieve with it. (In no particular order, improving the state of the art at my employer and the field as a whole, working on my projects at home, and funding my retirement.) I consciously decided to teach myself a real programming language (the wrong one, although it worked out in the end), but I didn't have to mess with my reward gradients or motivational systems. At most, I had to throw away mini-goals that were interesting at the time, but later became trivial or pointless. For me, learning how to do neat stuff was its own reward and its own motivation. After I worked my way out of the depths of newbie confusion, I realized that I could work on problems that previously, or to anyone else, would have seemed like a terrible slog. (My work and hobbies involve staring at angle brackets all day, and not the HTML kind.)
I don't know if that can work for anyone else. It's just my data point.
Replies from: AnnaSalamon, Kyre↑ comment by AnnaSalamon · 2010-09-08T19:20:56.168Z · LW(p) · GW(p)
I agree plane crash concerns are generally more irrational. But I mean... take me, for example. I know plane crashes and sharks are mostly negligible while car accidents and humans present larger risks; that much information reached me by accident. But, even though I regularly go out of my way to "reduce my risk from car accidents", I haven't ever bothered to look up info on e.g. which lane is safest to drive in, or how accident rates scale with sleep deprivation, or which freeways near my home present the largest risk. I'm motivated to do activities I associate with driving safety, but not to systematically estimate and reduce the risks. If a book was published on how to actually reduce my risk, I might read it, but more because it fits my identity as an aspiring rationalist and an aspiring goal-oriented person than to, you know, actually reduce my risk of death. Which is the point.
Replies from: Matt_Duing, MartinB, steven0461↑ comment by Matt_Duing · 2010-09-09T05:48:06.599Z · LW(p) · GW(p)
wrt sleep deprivation, according to a DOT driver's manual, driving without having slept in 18 hours is equivalent in risk to driving with a .08 blood alcohol level. Driving without having slept in 24 hours corresponds to a .10 blood alcohol level.
↑ comment by MartinB · 2010-09-08T20:43:03.050Z · LW(p) · GW(p)
I choose a lifestyle that lets me limit my driving severely. That might be easier in Europe than in the US. If you must drive, taking a safe-driving course can help a bit; it trains some reflexes for emergency situations. Also avoid driving at the specific times when most accidents happen, which here means Friday and Saturday nights, when the drunks drive home from the disco, and the first few days of icy weather each year. One should also have an up-to-date car with airbags. Safety is for the most part a game of statistics, but it is really easy to reduce your risk below the average. And then you will never find out what kind of troubles you managed to avoid.
Replies from: mattnewport↑ comment by mattnewport · 2010-09-08T20:48:13.427Z · LW(p) · GW(p)
I choose a lifestyle that lets me limit my driving severely.
This is a good approach. It's not the primary reason I choose a life style that minimizes car usage but it is definitely an additional benefit of arranging for a largely car free existence and one I am conscious of.
Replies from: MartinB↑ comment by MartinB · 2010-09-08T22:36:01.046Z · LW(p) · GW(p)
It also helps financially, and I picked my current room so I could walk to work in 10-20 minutes. I would hate to have to commute each day. But those preferences might change with different living situations. The general idea is just: if it is dangerous, do it less and learn how to do it well.
Replies from: mattnewport↑ comment by mattnewport · 2010-09-08T22:43:06.115Z · LW(p) · GW(p)
It also helps financially, and I picked my current room so I could walk to work in 10-20 minutes. I would hate to have to commute each day. But those preferences might change with different living situations.
Yes, a primary reason for aiming for a lifestyle where I have a reasonably short (30 minute) walk into work is my dislike of commuting by car. I figured out early on that it made me miserable (and wasted a lot of time) but I've subsequently seen a fair bit of evidence that the common trade off of a longer commute for a larger house is a poor one for most people.
The general idea is just: if it is dangerous, do it less and learn how to do it well.
I don't always follow this rule. Some activities I enjoy are relatively dangerous (snowboarding for example) so I just aim to do them as safely as possible but I don't necessarily try and do less of dangerous activities if I enjoy them. It's a win-win to do less of dangerous activities I don't particularly enjoy however.
↑ comment by steven0461 · 2010-09-08T22:43:13.688Z · LW(p) · GW(p)
even though I regularly go out of my way to "reduce my risk from car accidents"
Why? Car accident death rate is 1/10000 per year for your age/gender and probably substantially less for you personally under ordinary circumstances; do the present-value-of-time math.
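A minimal sketch of the present-value math steven0461 gestures at, with every input an illustrative assumption (and ignoring non-fatal injuries and discounting):

```python
# Back-of-the-envelope check (all inputs are illustrative assumptions, not data)
annual_death_risk = 1e-4        # the 1/10000-per-year rate quoted above
risk_reduction = 0.10           # assume the safety habits cut that risk by 10%
remaining_life_hours = 500_000  # assume ~500k waking hours of remaining life at stake
hours_spent_on_safety = 20      # assume the habits cost ~20 hours per year

expected_hours_saved = annual_death_risk * risk_reduction * remaining_life_hours
print(round(expected_hours_saved, 6))                # 5.0 expected life-hours saved per year
print(expected_hours_saved > hours_spent_on_safety)  # False: a poor trade under these assumptions
```

Under these (debatable) assumptions, the safety effort costs several times the expected hours of life it buys, which is steven0461's point.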
↑ comment by Kyre · 2010-09-09T04:53:20.126Z · LW(p) · GW(p)
- And now you know why there's a Culture ship named Death and Gravity.
I always assumed that "Gravity" was replacing the "Taxes" part of "nothing is inevitable except Death and Taxes", because the Culture had clearly dispensed with taxes.
Replies from: None
comment by nwthomas · 2012-04-26T06:14:24.080Z · LW(p) · GW(p)
In my case, I don't run into "not being able to make myself pursue my goals effectively" a whole lot. What I do run into a lot is, "not being able to figure out what goals I actually want to pursue."
I think that part of what's going on is this: when I find resistance within myself to pursuing some goal (which I read into the comedian watching reruns), I take that as evidence that the goal isn't what I'm really after. I don't spend a lot of time in a state of trying to make myself do something, because of my assumption that whatever I really want to do, I won't need to make myself do. (You seem to be working under a different assumption.)
My experience is that when I hit on something I sincerely want to do, I don't find resistance in myself to doing it. I just do it. Maybe a lot of problems getting ourselves to do what we want are actually misdiagnosed problems of understanding what we want?
(I actually wrote a post on this topic a while back. Realized halfway through this comment that I was repeating what that post said; but it's what leapt into my mind when I read this, so I thought I'd press forward anyway.)
comment by Kevin · 2010-09-08T22:02:50.301Z · LW(p) · GW(p)
On Hacker News: http://news.ycombinator.com/item?id=1673144
Replies from: timtyler
comment by blake.crypto · 2023-09-26T01:22:23.745Z · LW(p) · GW(p)
Even Pythagoras believed in the laity - laypeople.
Laypeople do not have goals, and generally only engage their reason after action has been taken, in order to justify why they did what they weren't thinking about.
I don't see this as a problem. I think it's just the way it is and getting all people to be actors (instead of being acted upon) is a fool's errand (probably).
comment by Falcone[فالكون] · 2020-05-17T04:01:58.228Z · LW(p) · GW(p)
I really enjoyed your writing :)
comment by WannabeChthonic · 2019-10-10T20:44:30.459Z · LW(p) · GW(p)
Thanks so much for writing this great article! I'm new so for all of you this is an old hat. I want to add my 2ct anyways.
Do you agree with (a)-(h) above? Do you have some good heuristics to add? Do you have some good ideas for how to train yourself in such heuristics?
The above-mentioned steps are the best system for progressing in life in general that I have been able to find so far. I've read and applied lots of self-help in recent years, and I can definitely agree that applying the theory is incredibly hard (I fail at it like >90% of the time; only very few things stick, but those really are my superpowers in everyday life). Rewiring habits is really hard.
I can recommend The Power of Habit: Why We Do What We Do in Life and Business and Smarter Faster Better: The Secrets of Being Productive in Life and Business. They are both really good books.
Do you agree with (a)-(h) above?
While I've seen them before, this is the best summary I've found on the internet so far. I'm definitely going to bookmark this!
I don't know how other people do this, but when I want to wire something into my brain I first need to research it. Then I sit down quietly with pen and paper and work through the concept until it feels natural to me. Most of the time this requires regular breaks and/or sleeping on whether I really like it and/or researching some more. Then, when I'm ready to make it part of my identity, I append it to my Horizons of Focus document. It's a 15-ish page document which I review semi-regularly (yeah, it's hard...).
Writing things down won't make me apply it; doing autosuggestive training makes me do things. I became good at math by performing autosuggestive training, and I became self-organized through autosuggestive training. Please note that up to this point I haven't read the core sequences and/or the "How to teach yourself" article yet. Tricking my brain into believing something through constant repetition ("autosuggestive training") is the only tool that has worked for me so far. I'm ready to hear your opinion on how to incorporate these steps into one's life!
comment by WannabeChthonic · 2019-10-10T20:30:47.102Z · LW(p) · GW(p)
Why do many who type for hours a day remain two-finger typists, without bothering with a typing tutor program?
Because science shows that a two-finger typist can be comparable in speed to a ten-finger typist. I'm guilty of being a two-finger typist. But I'm also guilty of having learned the ten-finger way, practiced it for days on end, and then just dropped it when I realized that the learning curve was way too steep for a realistic 5% speed improvement.
Besides, I figured: why the heck do I need to type fast anyway? 90% of my computer time is spent thinking about what to write, not actually writing. My job is solving problems with my brain; my fingers are just a way to communicate with the primary tool of choice: the computer.
Convince me of why I should pursue the ten-finger way of typing. I'm a QWERTZ user, by the way...
comment by BaconServ · 2013-10-17T08:06:21.425Z · LW(p) · GW(p)
...
I automatically do points (a) through (h).
I have always automatically done points (a) through (h).
I always attributed this to the fact that I had no identity with which to value particular opinions. As impossible as I already know it is for anyone to accept, you have to let go of the idea that your opinions are even remotely correct. Not because your opinions are incorrect, but because you will not be able to effectively correct them until you accept that they could all be just outright, blatantly wrong. But if I say it that way, you'll try to retain as many of them as you can, which is wrong. You literally need to let go of the idea that you or the people around you know anything about anything at all. If it helps, pretend you're in a simulation designed to help you cope with the fact that you're now in a simulation. Pretend we're all just participating in this simulation with you until you yourself accept that this could be a simulation and literally everything you know about logic and reason and physics could just be... wrong.
Why should your first guess ever be likely to be correct or accurate?
comment by nwthomas · 2011-07-12T03:49:01.470Z · LW(p) · GW(p)
I've found that the most helpful thing for me in achieving my goals seems to be picking the right goals to begin with. I try to find goals that I really care about with a large portion of my being, rather than goals that only a small portion of my being cares about. This requires a fair amount of introspection. What do I want? It's not an easy question; counterintuitively, we don't know what we want. But, if I know what I want, then I can get it.
I'll give a couple examples. I used to have the conscious goal, "write music." My real goals, though I did not know it, were "express myself," "experience beauty," and "be more intuitive." Now I pursue those goals directly, in more concentrated forms than I could achieve with music-writing.
I now know that I can express myself better through writing than through music, so I now pursue the goal of self-expression much more efficiently by writing.
I now know that I can experience beauty in almost anything, and so my aesthetic interests are no longer limited to the narrow domain of music.
I have integrated the goal of being more intuitive in numerous ways into my life. However, finding efficient ways of becoming more intuitive remains a challenge for me.
So I no longer make music, but I still get all of the things that I wanted out of music-making, and I get them much more efficiently and in larger quantities than I did with music-making. This was made possible by my increased knowledge of what I want.
So to recap, I think that the first step in getting what you want, is knowing what you want. If you're having trouble getting yourself to pursue your goals, maybe you're wrong about what your goals are.
comment by jtanz · 2010-09-20T14:50:57.469Z · LW(p) · GW(p)
In common with all animal species, our sensory-perceptual interpretation and behavioural action are recognisable in the basic physiological structures of (a) the peripheral nervous system, in our case the eyes, ears, etc., and (b) parts of the central nervous system within the brain: the frontal lobes, the visual cortex, hypothalamus, amygdala, etc. These are significant and extensive hardwired components. Using these structures, we can detect, recognise and evaluate a huge number of sensory patterns. For each of us these patterns are given emotional value. This is perception and learning at a distributed physiological level, on a fine-grained scale that works in response to all the changes in our environment as they occur. We have a large memory to store the patterns we 'see', with a facility to recall and match. Emotional experience attaches new or revised values to each pattern. Where a pattern is novel, innate curiosity is aroused. If it is seen as unexplainable or a threat, we avoid it; if it is seen as an opportunity, we approach. If, as is the case most of the time, it is judged neutral, we are initially attentive but tend to ignore or habituate to most of it. Moreover, we physiologically tend to seek situations where our environment is largely 'known' and not one where the unknown continually confronts us. It is very demanding having to make highly aroused conscious decisions; there are only so many we are capable of dealing with in a period of time. However, some degree of non-threatening novelty can brighten a routine day.
For any individual, our daily waking lives are dominated by fine-grained decision-making and action-instigating mechanisms constructed from our perceptions and affective memory. By about 10 years of age these collective processes are extremely well developed. Depending upon circumstances they could be the basis of an independent, survivable life, though in Western society another 6 to 12 years of social support is the norm. Nevertheless, despite extensive experience, there is an innate requirement to make decisions. This is dominant and continuously operational, and manifests in many individual discrete decisions.
As the world is ever changing, we remember the pattern and the experience of our interaction. The second time round the response may be quicker, our action eventually becoming almost subconscious; we gradually habituate to a potentially changing, complex environment. This gives rise to our almost unbounded ability to 'see' and easily decide what to do in the complex world about us - most of the time. Thus, given appropriate worldly experience, scratching an itch, avoiding cars in traffic, jumping on buses, keeping clear of alligators, buying and selling stocks and shares, eating lunch in the park on a sunny day, flying planes, performing surgery or kicking pigeons in Trafalgar Square all seems routine. In fact it is. It is the basis of living our everyday life.
The consequences of the innate and ingrained nature of decision-making mean that for most of the time we are not very interested in decisions we don't have to make or in things we can't actually see. We are 'aroused' or 'activated' by what we see and hence do; by what the environment 'tells' us we need to do, now. We do not easily see the inevitability of something that happens in two years' time; in fact we hardly see beyond two weeks into the future. How often do we wish we could replay a situation? With hindsight surely we could do better. For hindsight read planning, strategy, working out how a situation might play out before it happens. There is an unconscious acceptance of an unfolding world to be negotiated. Most of the time this may appear as rational, considered thought, but in reality it is action man who is king, and planning beyond the weekend is for nerds.
The perception, arousal, appraisal, decision, action sequence is the basis of what we do every day, maybe 99% of the time. Sometimes there are nuances of difference, new things to accept, as for example when we travel to new places on business or on holiday. The bedroom and bathroom are not the same. Which side do I get out of bed? Where have I put the toothbrush? Where can I get a hot drink? In such circumstances we have to make lots of simple everyday decisions. The regular business traveller is frustrated by those of us starting out on our once-a-year holiday. He knows how airport systems work – we don't. However, we quickly learn, habituate to this, and drift with the flow.
Rarely are we stimulated to consider any major discontinuity. When this rare occurrence happens we are usually required to deal with an urgent, threatening or opportunistic situation. In the modern world our usual range of innate coping mechanisms is inadequate. Unless we have been trained to perform in unusual situations, as soldiers or airline pilots are, we tend to do something immediate rather than something appropriate.
Those whose psychological and physiological processes bring significant longer-term consideration into their perceptual decision-making are the exception, not the rule. They are aroused by curiosity, and an even smaller number by abstraction and formalism. Nevertheless, we are all subject to an increasingly complex world that requires more of us to think before we act, rather than act before we think. We are not very proficient at the former course, since in past times it was the less conducive to survival. Now we need to better understand the factors that moderate decisions and improve our scope for performance and learning. How we do this is critically important, and it is what this list is about. However, the list alone is not sufficient. It is necessary to carry the majority along, and human neurophysiology is not in our favour.
comment by peterward · 2010-09-10T05:39:44.928Z · LW(p) · GW(p)
I think the term "abstract reasoning" is being conflated with acting on good or bad information (among other things). E.g., in most cases one basically has to take it on faith that ice cream is good or bad. And since most people aren't in a position to rationally make a confident choice regarding the examples the author provides, or comparable ones that could be imagined, agnosticism would seem the only rational alternative.*
More generally, I think a lot of these problems stem from radically defective education (if people aren't merely mostly morons as the author implies at one point: "Perhaps 5% of the population has enough abstract reasoning skill to verbally understand that the above heuristics would be useful once these heuristics are pointed out.")**. We don't get experience from a young age in figuring things out for ourselves. Instead we are merely told what to believe and not to believe (based on "respect for authority")--a recipe for making terrible decisions in later life if ever there was one.
Finally, I think the author is lumping together a lot of different problems that seemingly shouldn't be. In one case the problem may be "strategy", in another ideology... ignorance, laziness, etc. Apart from the fact that one's stated goal is often not the real goal at all. One really needs, I think, to do a lot more work examining actual cases before attempting to pontificate on the matter. As far as I can see almost no actual work has been done... much like "postulating what one wants..."
*Of course the implication is that we don't eat ice cream to be healthy on the basis of expert claims. But this raises all kinds of further questions, invoking the application of more "abstract reasoning" before we can decide whether to trust these experts (if we are really trying to be rational, that is).
**You're telling me the 5% figure wasn't pulled out of someone's ass--please!
comment by Jonathan_Graehl · 2010-09-08T21:55:17.042Z · LW(p) · GW(p)
We: (1) tell ourselves and others stories of how we’re aiming for various “goals”; (2) search out modes of activity that are consistent with the role, and goal-seeking, that we see ourselves as doing (“learning math”; “becoming a comedian”; “being a good parent”); and sometimes even (3) feel glad or disappointed when we do/don’t achieve our “goals”.
Well put. I've realized that really planning (and acting) in order to reach hard goals is something I almost never do. Most of the time I'm just working on what feels most rewarding locally.
humans are only just on the cusp of general intelligence. Perhaps 5% of the population has enough abstract reasoning skill to verbally understand that the above heuristics would be useful once these heuristics are pointed out
Yes - but perhaps some of the unfortunate remainder could grow to understand (by acquiring some missing prerequisites first).
Our verbal, conversational systems are much better at abstract reasoning than are the motivational systems that pull our behavior
Yes. And I wonder how much non-task-specific training or tricking of the motivational systems is possible. There may be general tricks like (h), but I have no evidence for a sort of general "able to try harder" faculty that can be improved (even though it seems extremely plausible).
comment by adsenanim · 2010-09-08T18:09:40.743Z · LW(p) · GW(p)
The calculus example is a good one for examining goal-achievement.
I am currently taking Calculus 2, in which integration by trigonometric substitution is one of the methods.
The textbook I am using is very implicit in the examples explaining this method, and I have thought many times about how much easier it would be if it were to use more explicit examples.
Implicit examples by nature take more time and effort than explicit examples, making the implicit less likely to be chosen than the explicit.
It would take a very highly motivated 8-year-old to pass the calculus test, or one with an extremely high ability to understand implicit examples.
As for the goals of a comedian, he/she would have to be very highly motivated and very good at implicit learning to gain anything from 'Garfield and Friends'.
Myself, I would choose George Carlin as an explicit example…
comment by Matt_Duing · 2010-09-09T05:57:25.948Z · LW(p) · GW(p)
The examples on www.patrickjmt.com might help.
comment by adsenanim · 2010-09-09T15:59:51.425Z · LW(p) · GW(p)
Thanks, nice link.
I must say, though, that my example is mainly to illustrate the point that implicit learning (breaking the code) is harder than explicit learning (being given a key).
I prefer breaking the code most times.
I guess the double entendre about Carlin was a bit too implicit... maybe just not funny...
:)
comment by patinador · 2010-09-09T20:54:05.110Z · LW(p) · GW(p)
Doing things the wrong way is a good way of discovering new ways and ideas. If we were programmed to always go in the right direction we couldn't explore the landscape, and we would be trapped in local minima. Random behaviour is part of an intelligent design to evolve and mature. Humour is a way of jumping across islands of rationality.
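The local-minima point is literally true in optimization. A minimal sketch in Python (the landscape function and all numbers here are invented purely for illustration): greedy hill climbing always moves in the locally "right" direction and so gets stuck on the nearest peak, while adding random exploration via restarts finds the better one.

```python
import random

def f(x):
    # A toy one-dimensional "landscape" (invented for illustration):
    # a small local peak at x = 2 (height 3) and the global peak at
    # x = 20 (height 10), with a flat valley of zeros in between.
    return max(3 - abs(x - 2), 10 - abs(x - 20), 0)

def hill_climb(start):
    # Greedy search: always step to the best neighbour. It never
    # explores, so it stops on whichever peak it climbs first.
    x = start
    while True:
        best = max([x - 1, x, x + 1], key=f)
        if best == x:
            return x, f(x)
        x = best

# Starting from x = 0 and only ever going "the right way" finds
# just the local peak:
print(hill_climb(0))  # -> (2, 3)

# Random exploration (15 random starting points out of 31) is
# guaranteed here to land at least once in the global peak's basin:
starts = random.sample(range(31), 15)
print(max((hill_climb(s) for s in starts), key=lambda r: r[1]))  # -> (20, 10)
```

The "wrong" moves (starting somewhere arbitrary instead of climbing from where you are) are exactly what lets the search escape the local optimum.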
comment by orthonormal · 2010-09-09T22:16:29.658Z · LW(p) · GW(p)
Firstly, welcome to Less Wrong! Be sure to introduce yourself on the welcome thread.
You raise a valid point, but the benefit you mention doesn't explain doing the wrong thing again and again after enough evidence has accumulated; and it also doesn't explain why we do lots of things wrong in the exact same ways.
comment by ata · 2010-09-09T22:27:54.132Z · LW(p) · GW(p)
The Futility of Chaos is the sequence that responds to this sort of claim. (That sequence depends on Mysterious Answers to Mysterious Questions, if you haven't read it yet.)