Man-with-a-hammer syndrome
post by Shalmanese · 2009-12-14T11:31:48.237Z · LW · GW · Legacy · 41 comments
What gummed up Skinner’s reputation is that he developed a case of what I always call man-with-a-hammer syndrome: to the man with a hammer, every problem tends to look pretty much like a nail.
The Psychology of Human Misjudgment is a brilliant talk given by Charlie Munger that I still return to and read every year to gain a fresh perspective. There’s a lot of wisdom to be distilled from that piece, but the one thing I want to talk about today is man-with-a-hammer syndrome.
Man-with-a-hammer syndrome is pretty simple: you think of an idea and then, pretty soon, it becomes THE idea. You start seeing how THE idea can apply to anything and everything; it’s the universal explanation for how the universe works. Suddenly, everything you’ve ever thought of before must be reinterpreted through the lens of THE idea and you’re on an intellectual high. Utilitarianism is a good example of this. Once you independently discover Utilitarianism, you start to believe that an entire moral framework can be constructed around a system of pleasures and pains and, what’s more, that this moral system is both objective and platonic. Suddenly, everything from the war in the Middle East to taking your mid-morning dump at work (because you need those 15 minutes of reflective time alone with yourself before you can face the onslaught of meaningless drivel that is part of corporate America, but feeling guilty about it because you were raised to be a good Randian and you are not providing value for your employers, so you’re committing an act of theft) can be fit under the Utilitarian framework. And then, hopefully, a few days later, you’re over it: Utilitarianism is just another interesting concept and you’re slightly embarrassed about your behavior a few days prior. Unfortunately, some people never get over it, and they become those annoying people who write long screeds on the internet about their THE idea.
The most important thing to realize about man-with-a-hammer syndrome is that there’s no possible way to avoid having it happen to you. You can be a well-seasoned rationalist who’s well aware of how man-with-a-hammer syndrome works and what the various symptoms are, but it’s still going to hit you fresh with each new idea. The best you can do is mitigate the fallout.
Once you recognize that you’ve been struck with man-with-a-hammer syndrome, there are a number of sensible precautions you can take. The first is to have a good venting spot: letting your thoughts out of your head for some air puts them slightly in perspective. Personally, I have a few trusted friends to whom I expose man-with-a-hammer ideas, with all the appropriate disclaimers to basically ignore the bullshit coming out of my mouth.
The second important thing is to hold back from telling anyone else about the idea. Making an idea public means that you’re, to a degree, committed to it, and this is not what you want. The best way to prolong man-with-a-hammer syndrome is to have other people believing that you believe something.
Unfortunately, the only other thing to do is simply wait. I’ve discovered nothing that can hasten recovery from man-with-a-hammer syndrome beyond some minimum time threshold. If you’ve done everything else right, the only thing left is to outwait it. No amount of clever mental gymnastics will help you get rid of the syndrome any faster, and that’s the most frustrating part. You can be perfectly aware that you have it, know that everything you’re thinking now you won’t believe in a week’s time, and yet you still can’t stop yourself from believing it now.
Man-with-a-hammer syndrome can destroy your life if you’re not careful but, if handled appropriately, is ultimately nothing more than an annoying and tedious cost of coming up with interesting ideas. What’s most interesting about it to me is that even with full awareness of its existence, it’s completely impossible to avoid. While you have man-with-a-hammer syndrome, you end up living in a curious world in which you are unable to disbelieve in something you know to be untrue, and this is a deeply weird state I’ve not seen “rationalists” fully come to terms with.
41 comments
Comments sorted by top scores.
comment by Kaj_Sotala · 2009-12-14T16:32:44.668Z · LW(p) · GW(p)
My experience is somewhat the opposite: I find that as soon as I speak publicly about such an idea, my excitement about it begins to vanish.
I find it curious that your post makes no mention of the possibility of the idea being right. When people first came up with the theory of evolution, I imagine it was much like you described. People started applying it to every plant and animal and, for that matter, every living thing out there, and it was the universal explanation for how things turned out to be the way they did. And guess what? They were right! Even more so for the people who came up with the laws of physics. Yes, there is a danger if you take it too far, as when evolution is used to justify Moral Darwinism, but talking about this period of initial excitement as a purely negative thing doesn't sound right either.
In fact, I might personally prefer to extend these periods somewhat: I find that I often forget to apply such a theory after the initial excitement. Theories are declarative knowledge, but applying them is procedural knowledge, which requires practice to develop. The initial burst of excitement creates a brief period during which you get practice by applying the theory as much as possible. After that, it can easily happen that you don't remember to apply it, and therefore never develop the skill to the point where it comes naturally.
Part of higher education is an artificial replacement for this rush of excitement: I've heard several students remark that the important thing wasn't the specific courses they took or any particular facts they learned. Rather, the important thing was that they learned to think like (economists/psychologists/computer scientists/Cthulhu cultists).
I'd say trying to "fight" this tendency isn't necessarily the most productive approach. Instead, you might want to embrace such periods and seek to experience as many of them as possible. That way, you'll be more likely to end up with a lasting set of mental tools in your toolkit that can be applied as variedly as possible.
Replies from: ERitAbLe
↑ comment by ERitAbLe · 2015-02-18T21:06:16.567Z · LW(p) · GW(p)
I understand your point, and I must admit I think the same about some of your statements, but I think you took what he wrote as a definite claim rather than just a warning. "Man-with-a-hammer syndrome" sounds like it actually applies to people who have the common bad reflex of applying one thing to everything and making it work. That's how I understand "to the man with a hammer, every problem tends to look pretty much like a nail": whether it's an international war or a problem between a dad and a child who won't eat dinner, they reduce it to one simple cause that would resolve everything if we could remove it (I think we can agree there's quite a gap between those two). As I read him, Shalmanese was just warning us against getting to the point where we're too far into it.
And about the point you made on physics, quantum mechanics is one of the biggest examples. Too many people took it as the ultimate law; even people like Heisenberg, a pure genius, said that nothing could surpass it, and today plenty of people are finding problems with it. I won't get too far into that topic, since it's not the point here; I'm just saying you probably read the post as an official rule against "bright ideas that explain a lot" and didn't separate two different things: the one you describe and the one he did.
(Sorry if you can't understand everything; I'm French and didn't have much time to explain my idea properly.)
comment by wedrifid · 2009-12-14T15:41:02.017Z · LW(p) · GW(p)
Man-with-a-hammer syndrome is pretty simple: you think of an idea and then, pretty soon, it becomes THE idea. You start seeing how THE idea can apply to anything and everything; it’s the universal explanation for how the universe works. Suddenly, everything you’ve ever thought of before must be reinterpreted through the lens of THE idea and you’re on an intellectual high.
Which all makes sense if you think about it from the perspective of Perceptual Control Theory.
Replies from: Richard_Kennaway, CronoDAS
↑ comment by Richard_Kennaway · 2009-12-14T16:34:50.692Z · LW(p) · GW(p)
Can you elaborate on that? No particular interpretation of the phenomenon in the light of PCT occurs to me.
Replies from: Eliezer_Yudkowsky, SilasBarta
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-12-14T16:52:39.481Z · LW(p) · GW(p)
I'm glad I'm not the only one whose jokes oft go ungotten.
Replies from: Richard_Kennaway
↑ comment by Richard_Kennaway · 2009-12-14T17:47:02.758Z · LW(p) · GW(p)
The problem with loftily ignoring something is that no-one knows you're loftily ignoring it. So for the record, I did notice that interpretation (i.e. William Powers, or me, or pjeby, is a man with a hammer), and loftily ignored it. In fact, of all the people I know personally who are involved in PCT, I would say that Powers is the least likely to ever manifest the syndrome.
You have to have some positive reason for accusing someone of man-with-a-hammer syndrome beyond the fact that they have a big idea, or it just becomes an all-purpose refutation. "Man-with-a-hammer syndrome" has to be the conclusion of seeing them overapply their big idea, not a privileged hypothesis to give attention to whenever you encounter a big idea. Otherwise it is just a licence to ignore big ideas -- the more fundamental and important, the more licence to dismiss them.
Now, Bayes' theorem...
Replies from: wedrifid
↑ comment by wedrifid · 2009-12-14T19:19:48.626Z · LW(p) · GW(p)
for the record, I did notice that interpretation (i.e. William Powers, or me, or pjeby, is a man with a hammer), and loftily ignored it.
For the record, I too am lofty. I did notice that interpretation (that your question was rhetorical loftiness) but loftily ignored it. (Because I unashamedly appreciate my own joke and also think the PCT example is relevant to this context and demonstrates the trap.)
You have to have some positive reason for accusing someone of man-with-a-hammer syndrome beyond the fact that they have a big idea, or it just becomes an all-purpose refutation.
The positive reason for asserting that pjeby was using PCT as a big hammer for several months there was that he was using it as a big hammer in the full sense of the intended meaning. It isn't particularly bad as far as hammers go; it's a damn good way of modelling systems and has plenty of applicability to human behaviour. Unfortunately, if you try to force the idea too much, it becomes harder to engage with other ways of looking at things. In the case of PCT, over-applying the concept gave it negative connotations where it might otherwise have become one of the useful jargon phrases (like 'cached thought', 'fully general counterargument', 'near vs far' and 'shut up and multiply').
(And that observed reaction is something that PCT could explain quite well.)
Replies from: SilasBarta
↑ comment by SilasBarta · 2009-12-16T18:01:05.551Z · LW(p) · GW(p)
Agreed. There were many specific cases where it was a good model, e.g. where biologists weren't seeing a set of numerous chemical reactions as a giant negative feedback loop. But then pjeby started using extremely metaphorical applications of it for very high-level human behavior, where he was clearly just using his intuition and then post hoc relabeling it in PCT terminology.
Replies from: pjeby
↑ comment by pjeby · 2009-12-16T19:34:45.736Z · LW(p) · GW(p)
Any explanation of anything is a metaphor. Negative feedback loops and finite state machines are not real entities; they're simply descriptions of patterns of behavior, in the same way that the number 3 is not a real entity; it's a label we apply to a quantity.
That is, three by itself can't exist; there have to be three of "something". Feedback loops and FSMs (not to be confused with the Flying Spaghetti Monster) do not exist in themselves either; there has to be some other thing to which we apply the label.
And every model or description of the world at some point becomes either unwieldy in detail, or inaccurate to some degree. But different models or descriptions give different benefits. For example, it's extremely difficult to do arithmetic using Roman numerals.
In my work, the PCT model has led me to two new ways of understanding and changing behavior. The first turned out to be of limited usefulness in practice (unless I wanted to spend hours upon hours walking people through analyzing their control networks, which I don't).
The second insight, however, has turned out to be much more useful, and it's the source of various reasons why I hardly ever post here any more.
Replies from: SilasBarta, wedrifid
↑ comment by SilasBarta · 2009-12-16T21:56:56.231Z · LW(p) · GW(p)
I agree with your first three paragraphs, but that's exactly my point: you can model anything, including high-level behavior like mate-seeking, with feedback loops, and yes, different models give different benefits.
But you were never able to break the high-level behaviors down into a useful model. Just as a literal Turing machine would be unwieldy as a model of a phenomenon, so would a feedback control model be unwieldy for the human behavior you claim it explained.
In contrast, viewing the biological system I referred to above as a negative feedback loop does simplify things and allows you insight into the function of various subsystems.
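To make the contrast concrete, here is a minimal sketch of such a loop: a substance whose production is driven by its deviation from a set point and which decays on its own. The rate constants are invented for illustration; this is not the specific system under discussion.

```python
# Minimal sketch of a negative feedback loop: production counters
# deviation from a set point, decay pulls the level down.
# All constants are invented for illustration.

def simulate(set_point=1.0, gain=0.5, decay=0.1, steps=100):
    x = 0.0  # current concentration
    for _ in range(steps):
        error = set_point - x          # deviation the loop acts to reduce
        x += gain * error - decay * x  # corrective production vs. decay
    return x

# Settles where corrective production balances decay, just below
# the set point: the classic offset of proportional control.
print(round(simulate(), 3))  # ~0.833
```

The point of the loop view is exactly this compression: many individual reactions collapse into a set point, a gain, and a decay term, and the steady state falls out immediately.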
In my work, the PCT model has led me to two new ways of understanding and changing behavior.
I don't want to go over the specific arguments again, but my whole point was that this just isn't true. All evidence showed that you were just going by intuition and then applying PCT labels that gave no explanatory (data-compressive) insight. You were never able to give an example of how PCT modeling got you (or would have gotten you) to a crucial insight any faster.
Replies from: pjeby
↑ comment by pjeby · 2009-12-17T01:02:58.259Z · LW(p) · GW(p)
Sigh. I really don't want to go into this again, but some of the particularly valuable bits that PCT has relative to other negative-feedback theories of human behavior are the aspects of:
- Hierarchical control structure
- Parallel/simultaneous operation
- Quasi-evolutionary unattended learning ("reorganization")
- The central importance of conflicting reference values in behavioral problems
Nothing else that I know of in the fields of psychology or self-help combines these elements in the same framework; even those self-help authors who've addressed high-level feedback loops in human behavior (e.g. Maltz's Psycho-Cybernetics, Eker's "wealth thermostat", etc.) have barely touched on any of the above.
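To give a flavor of the first two items, here is a toy sketch of two stacked control loops (invented gains and dynamics, not Powers's actual model or pjeby's materials):

```python
# Toy sketch of hierarchical control: the upper loop sets the
# reference (set point) that the lower, faster loop then pursues.
# All gains and dynamics are invented for illustration.

def step_lower(x, reference, gain=0.5):
    """Lower loop: move the controlled variable toward its reference."""
    return x + gain * (reference - x)

def upper_reference(goal, x, gain=0.2):
    """Upper loop: emit a reference for the lower loop so that the
    perceived variable x drifts toward the higher-level goal."""
    return x + gain * (goal - x)

x, goal = 0.0, 1.0
for _ in range(30):
    reference = upper_reference(goal, x)  # upper loop outputs a set point
    x = step_lower(x, reference)          # lower loop pursues it
print(round(x, 3))  # converges toward the goal (~1.0)

# "Conflicting reference values" (the fourth item) would correspond to
# two upper loops handing this same lower loop incompatible references,
# leaving it stuck between them.
```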
All evidence showed that you were just going by intuition and then applying PCT labels that gave no explanatory (data-compressive) insight
Be precise. What you have, specifically, is no evidence that you could not also use as evidence for your position, and therefore you choose to assume that I'm lying and/or deluded. (Why, I couldn't say.)
You were never able to give an example of how PCT modeling got you (or would have gotten you) to a crucial insight any faster.
Any faster than what?
Honestly, all of your arguments on this subject have struck me as similar to a creationist saying that evolution isn't any simpler, because you still have to explain how every single creature evolved, so how does that save you anything?
AFAICT, it's the exact same argument, and also a fully-general counterargument for ANY modeling method.
Replies from: wedrifid
↑ comment by wedrifid · 2009-12-22T00:06:47.902Z · LW(p) · GW(p)
Wow. I like this comment, and am surprised it went into karma freefall. The list of four key points that actually do distinguish PCT from not-PCT is useful (although I cannot confirm whether each of the elements distinguished in the model actually matches observations of humans well).
The arguments against PCT have tended to lack rigour. (Of course, they haven't needed to be rigorous, because pj's advocacy was poorly calibrated to this audience. It was too easy to object to PCT primarily based on its association with a disliked style.)
↑ comment by wedrifid · 2009-12-16T20:36:07.126Z · LW(p) · GW(p)
The second insight, however, has turned out to be much more useful, and it's the source of various reasons why I hardly ever post here any more.
Have you made said insight public in any form?
Replies from: pjeby
↑ comment by pjeby · 2009-12-17T01:29:26.221Z · LW(p) · GW(p)
Have you made said insight public in any form?
No. (I assume materials circulated to my private clients do not count as "public".)
Contra SilasBarta's assertion of no data compression in PCT, I can't actually explain it compactly without assuming at least several of PCT's background assumptions or conclusions, plus a variety of other material I've previously alluded to (such as the Somatic Marker Hypothesis).
With that background, it's an "aha" that puts an important piece in place for how people end up with the sort of blocks and compulsions that they do, with a clear implication of how to fix them. Without that background, it's a "huh, wat? lol" instead.
For that reason, I do not plan to include the insight itself in the book I'm working on, even though many of its practical ramifications will be worked out therein. It would be a distraction from the main point of the book. I do have another book I want to write, though, which might be a good place to include it.
Replies from: wedrifid
↑ comment by wedrifid · 2009-12-17T03:09:55.962Z · LW(p) · GW(p)
I can't actually explain it compactly without assuming at least several of PCT's background assumptions or conclusions, plus a variety of other material I've previously alluded to (such as the Somatic Marker Hypothesis).
Do you have a book recommendation for either subject?
Replies from: pjeby
↑ comment by pjeby · 2009-12-17T06:53:11.219Z · LW(p) · GW(p)
"Behavior: The Control Of Perception" by William T. Powers. I first learned about the SMH from Temple Grandin's "Animals In Translation", but it's hardly a reference. I just got the idea of feelings being used as a predictive mechanism from that, and read about the SMH (and other work by D'Amasio) later on. Affective asynchrony and reconsolidation are among the other concepts I've mentioned here in the past that are also involved.
ETA: Almost forgot, the MPF or "Memory Prediction Framework" adds some useful detail to PCT, effectively bridging a bit between the SMH and PCT. (This is another way in which SilasBarta errs in classifying my responses as "hammer syndrome"; I have a lot of stuff in my toolkit besides hammers. PCT just filled a gap and provided a nice organizational structure to connect and classify the other tools with.)
Oh, and btw, these various TLAs (two/three-letter acronyms) come from completely different people. PCT, MPF, AA, SMH, and reconsolidation were researched by entirely unrelated groups or individuals, with AFAIK no mutual interaction or knowledge.
↑ comment by SilasBarta · 2009-12-14T16:47:30.079Z · LW(p) · GW(p)
I do remember there being a discussion here about that general phenomenon in the context of PCT. Someone explained it by an analogy to an approximating function. You have a "function" -- the number of areas an idea is applicable to. You then estimate how widely applicable it is. It turns out you underestimated -- it's more general than you thought. If this happens more than once, you try to err in the opposite direction, overestimating its generality.
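Read as a crude update rule, that analogy might be sketched like this (invented numbers, not from the original discussion):

```python
# Crude sketch of the "approximating function" analogy: after each
# idea whose generality you underestimated, you bias your next
# estimate upward, and eventually overshoot. Numbers are invented.

true_generality = [3, 7, 9]   # how widely applicable each idea really was
bias = 0.0                    # learned correction from past surprises

for truth in true_generality:
    estimate = 2 + bias       # a conservative base, plus the correction
    error = truth - estimate  # positive => underestimated again
    bias += 1.5 * error       # gain > 1: overcorrect toward generality
    print(f"estimate={estimate:.1f}  actual={truth}  new bias={bias:.1f}")

# After a run of underestimates, the bias grows past what any single
# idea warrants: the next hammer gets swung at everything.
```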
Then, I remember pjeby agreed with this comparison to PCT. I'll try to find that discussion.
(Avoided making wisecrack about omg, there's something PCT can't explain, &c.)
ETA: Oops, first instinct and wisecrack were more appropriate...
Replies from: wedrifid
↑ comment by wedrifid · 2009-12-15T03:39:38.870Z · LW(p) · GW(p)
ETA: Oops, first instinct and wisecrack were more appropriate...
It is always more fun when wisecracks can double as literal truths so I appreciated your analysis. I actually think your PCT model fits reasonably well to at least part of the phenomenon and it would quite probably be a useful tool to consider when trying to recalibrate your hammer use.
↑ comment by CronoDAS · 2009-12-14T21:38:52.535Z · LW(p) · GW(p)
I think this post needed [irony][/irony] tags.
ETA: I'm assuming that the above comment was an ironic reference to the tendency of a few posters to use Perceptual Control Theory as such a hammer, not a claim that Perceptual Control Theory explains man-with-a-hammer syndrome.
comment by NancyLebovitz · 2009-12-15T13:39:44.349Z · LW(p) · GW(p)
I suggest that one especially insidious sort of hammer is the idea that you know that other people's motivations all fall into one disreputable category.
Examples are Freudians (possibly less sophisticated than Freud) who think it's all about sex (this one is no longer fashionable), Marxists who think it's all power relationships, the claim that it's all explicable in terms of hypothesized hunter-gatherer optimization, and the recent idea that it's all status signaling.
The thing is, people are almost certainly less reality-based than they think, but that doesn't mean they're all illusion-driven to the same extent in all areas or for the same reasons.
Replies from: Douglas_Knight
↑ comment by Douglas_Knight · 2009-12-16T08:55:50.877Z · LW(p) · GW(p)
I suggest that one especially insidious sort of hammer is the idea that you know that other people's motivations all fall into one disreputable category.
If you have a badge, everyone looks like a criminal.
comment by [deleted] · 2009-12-14T16:19:23.275Z · LW(p) · GW(p)
No amount of clever mental gymnastics will help you get rid of the syndrome any faster, and that’s the most frustrating part
Not my experience. Realizing that "every single time I've gotten super-excited about a new idea and tried applying it to everything, I've been wrong" is effective, for me at least, in tempering how I apply it.
While you have man-with-a-hammer syndrome, you end up living in a curious world in which you are unable to disbelieve in something you know to be untrue, and this is a deeply weird state I’ve not seen “rationalists” fully come to terms with.
Subconscious processes (such as strong feelings of emotion and the certainty that accompanies them) are generally not consciously accessible. For example, no matter how well you know that squares A and B in the familiar checker-shadow illusion are the same color, they will appear different, because the visual system processes the image automatically. Compensating for subconscious judgments that you know are misguided is an enormous part of what being a "rationalist" is.
Replies from: None
↑ comment by [deleted] · 2009-12-16T21:32:01.170Z · LW(p) · GW(p)
I looked at squares A and B and found that if I stared at the image for a few seconds, I could see both squares as being the same color. (Of course, the best way to see it would have been as a checkerboard pattern multiplied by a shadow with a maximum ratio equal to the light/dark ratio; I definitely couldn't do that.) I decided to check whether I could also see a light square outside the shadow and a dark square inside the shadow as being the same color, but by the time I decided that, I noticed that I could no longer see A and B as being the same color. I'll have to have another go at it in a few minutes.
(Which is now, as this post took me a couple of hours to write.)
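That multiplicative decomposition can be made concrete with invented numbers (a sketch of the general reflectance-times-illumination idea, not measurements from the actual image):

```python
# Sketch of the reflectance-times-illumination decomposition:
# perceived luminance ~= surface reflectance * illumination.
# All values are invented; they are not taken from the illusion.

light_square, dark_square = 0.9, 0.3  # checker reflectances (3:1 ratio)
full_light, shadow = 1.0, 1.0 / 3.0   # illumination outside/inside shadow

A = dark_square * full_light  # dark square outside the shadow
B = light_square * shadow     # light square inside the shadow
print(round(A, 6), round(B, 6))  # 0.3 0.3: identical luminance
# The visual system "divides out" the shadow, so B looks lighter anyway.
```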
Replies from: None
comment by Vladimir_Nesov · 2009-12-14T17:23:05.736Z · LW(p) · GW(p)
Systematically writing things down helps me to move on (mainly as applied to research).
If I have a moderately vague idea that isn't written down, it won't leave, as it's possible to go in circles on its terms indefinitely. New thoughts gradually supplant the old thoughts, and old thoughts, not written down, get forgotten and reinvented later.
On the other hand, if everything is sketched in writing, even if it's still raw material to the point that it doesn't make much sense at the time of writing, not to speak of a few months later, then it becomes possible to see the big picture. Systematic problems and limitations become tangible when you put names on them, and lack of meaningful progress is much more visible when you can look over the old notes and see that the new idea is but a rebranding of an old one. It's also easier to let go, to stop thinking about the current idea and thus to start forgetting it, as there remains a path of retreat in rereading the old notes.
comment by Jack · 2009-12-14T15:06:00.849Z · LW(p) · GW(p)
I definitely recognize this tendency and have struggled with it. However, it would be a shame if someone discovered the answer to the ultimate question of life, the universe and everything and then failed to share it because they were afraid of man-with-a-hammer syndrome. Also, I think that even when obviously flawed this kind of thought process can be fruitful. Cross-domain analogies are one of our best ways of generating hypotheses.
comment by Paul Crowley (ciphergoth) · 2009-12-14T13:18:56.236Z · LW(p) · GW(p)
I'm having a hard time getting any value from this post, I'm afraid.
Replies from: dclayh, SilasBarta
↑ comment by dclayh · 2009-12-14T19:29:13.822Z · LW(p) · GW(p)
It does seem to be a rehash of affective death spirals, more or less.
Replies from: CannibalSmith, Liron
↑ comment by CannibalSmith · 2009-12-16T14:17:28.513Z · LW(p) · GW(p)
This one's more accessible I'd say.
↑ comment by SilasBarta · 2009-12-14T15:15:48.743Z · LW(p) · GW(p)
Because it doesn't look like a nail? ;-)
Replies from: aausch
comment by NancyLebovitz · 2009-12-14T11:38:20.331Z · LW(p) · GW(p)
I think there's another defense against "everything looks like a nail" syndrome-- associate with annoying people who ask you about counterexamples.
Replies from: Johnicholas, Bo102010
↑ comment by Johnicholas · 2009-12-14T12:09:05.841Z · LW(p) · GW(p)
Sometimes, when other people probe me, I come up with a face-saving rationalization - it's not rational, but I know I do it.
A group of counterexample-offering people might bring someone out of it - but they might also armor the IDEA unusually well with clever rationalizations.
Replies from: h-H
↑ comment by h-H · 2010-03-14T12:19:33.852Z · LW(p) · GW(p)
Could you elaborate on your last point, please? I can't parse it.
Replies from: Johnicholas
↑ comment by Johnicholas · 2010-03-14T22:01:27.463Z · LW(p) · GW(p)
A person "Pat" has a clever idea "X". A friend of Pat's spots a flaw "f1" in X, and explains the flaw to Pat. Pat, in a face-saving move, rationalizes f1 using excuse "e1". Another friend tries to point out another flaw "f2", Pat adopts excuse "e2". ... Eventually, due to the helpfulness of friends, Pat has ready-made answers to essentially every criticism of the amazing idea, and is stuck.
I think this "face-sensitivity leads to polarization and entrenchment" phenomenon is one of the major problems with most current forms of combining human intelligence into teams more capable than their components.
Wikipedia's mostly-anonymous cooperation seems like a step in the right direction, as do various forms of "wisdom of crowds" cooperation, including Hanson's prediction markets and Netflix Prize-style blending of software experts.
comment by wedrifid · 2009-12-14T15:46:43.409Z · LW(p) · GW(p)
Once you independently discover Utilitarianism, you start to believe that an entire moral framework can be constructed around a system of pleasures and pains and, what’s more, that this moral system is both objective and platonic.
Are you serious? If man-with-a-hammer syndrome involves using rusty hammers that obviously never worked in the first place, then I'm certainly not going to admit to suffering from it!
comment by Jonathan_Graehl · 2009-12-14T23:38:49.911Z · LW(p) · GW(p)
Thank you for pointing out Munger's talk to us.
Man-with-hammer doesn't seem terribly interesting to me; it's just one expression of basic rationality flaws. It consists of:
a) publicly committing to your hammer as the cure for all nails, leading to internal entrenchment
b) defending a continuing stream of status and reward as a leading hammer-man; attacking alternative tools proposed by young upstarts
Of course, there's strategic specialization in competitive research: identify whatever secret weapons you have (that few others working on your problem have), and gamble that those weapons will lead to some advance on the problem (better: develop your arsenal while looking for likely nails).
What's funny is that the majority of published AI papers just apply the latest fashionable tools (which aren't at all unique) to standard tasks. Everybody seems to be grabbing the same hammer. To give a particular example, machine learning methods in natural language processing have moved through Expectation-Maximization (maximum likelihood), Maximum Entropy (set parameters so that the model's predicted feature expectations match those seen in real data), discriminative training (set parameters directly to improve task accuracy), and Bayesian inference by sampling.
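For instance, the Maximum Entropy gloss above corresponds to the standard moment-matching condition for log-linear models (textbook form, not from the original comment; the \lambda_i are the learned weights):

```latex
% Maximum entropy / log-linear fitting: the model's feature
% expectations are constrained to match the empirical ones.
\mathbb{E}_{p_\lambda}[f_i] = \mathbb{E}_{\tilde p}[f_i]
\quad \text{for every feature } f_i,
\qquad
p_\lambda(y \mid x) \propto \exp\Big(\textstyle\sum_i \lambda_i f_i(x, y)\Big).
```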