The quote on conflict reminds me of Jaak Panksepp's "Affective Neuroscience: The Foundations of Human and Animal Emotions", or a refracted view of it presented in John Gottman's book, "The Relationship Cure". Panksepp identifies mammalian emotional command systems he names FEAR, SEEKING, RAGE, LUST, CARE, PANIC/GRIEF, and PLAY; Gottman characterizes these systems as competing cognitive modules: Commander-in-Chief, Explorer, Sentry, Energy Czar, Sensualist, Jester, and Nest-Builder. It is tempting now to think of them as very high-level controllers in the hierarchy.
Why is "be specific" a hard skill to teach?
I think it is because being specific is not really the problem, and by labeling it as such we force ourselves into a dead-end which does not contain a solution to the real problem. The real problem is achieving communication. By 'achieving communication', I mean that concepts in one mind are reproduced with good fidelity in another. By good fidelity, I mean that 90% (arbitrary threshold) of assertions based on my model will be confirmed as true by yours.
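To pin that threshold down a bit, here is a minimal sketch of the fidelity check (Python; the function and its inputs are hypothetical, just illustrating the arithmetic):

```python
# Hypothetical sketch: "good fidelity" = the fraction of assertions derived
# from my model that your model confirms, against an arbitrary 90% threshold.

def good_fidelity(my_assertions, your_model_confirms, threshold=0.9):
    """my_assertions: statements generated from my model.
    your_model_confirms: callable returning True if your model agrees."""
    confirmed = sum(1 for a in my_assertions if your_model_confirms(a))
    return confirmed / len(my_assertions) >= threshold
```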
There are many different ways that the fidelity can be low between my model and yours:
specific vs abstract
mismatched entity-relationship semantic models
ambiguous words
vague concepts
Surely there are many more.
Examples of what I mean by these few:
specific vs abstract: dog vs toy Chihuahua puppy
model mismatch: A contract lawyer, a reservoir modeler, and a mud-logger are trying to share the concept "well". Their models of what a "well" is have some attributes with similar names, but different meanings and uses, like "name" or "location". To the mud-logger, a well is a stack of physical measurements of the drilling mud sampled at different drilling depths. To the lawyer, a well is a feature of land-use contracts, service contracts, etc.
Another kind of model mismatch: I think of two entities as having a "has-a" relationship. A house "has" zero or one garages (a detached garage is a separate entity). But you think of the same two entities using a mixin pattern: a house can have or lack garage attributes (a built-in garage). "I put my car in the house" makes no sense to me, because a car goes in a garage but not in a house, yet it might make sense to you for a house with a built-in garage. We may go a long time before figuring out that my "house" isn't precisely the same as yours. (A small code sketch after these examples makes the two models concrete.)
ambiguity: I'm a cowboy, you're an artist. We are trying to share the concept "draw". We can't, because the same word names different concepts for each of us.
vagueness: I say my decision theory "one-boxes". You have no idea what that means, but you create a place-holder for it in your model. So on some level you feel like you understand, but if you drill down, you can get to a point where something important is not defined well enough to use.
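To make the house/garage mismatch concrete, here is a minimal sketch (Python; the class names are made up for illustration) of the two incompatible models:

```python
# Hypothetical sketch of the two "house" models described above.

# My model: a house HAS-A garage (possibly none). The garage is a separate
# entity, so "put the car in the house" is a type error in my model.
class Garage:
    def __init__(self, detached=True):
        self.detached = detached

class HouseHasA:
    def __init__(self, garage=None):
        self.garage = garage  # zero or one Garage objects

# Your model: garage-ness is mixed into the house itself, so a car can
# sensibly go "in the house" when the garage is built in.
class HouseMixin:
    def __init__(self, has_garage=False, garage_is_built_in=False):
        self.has_garage = has_garage
        self.garage_is_built_in = garage_is_built_in
```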
It is difficult to know when something that is transparent to you is being misrepresented in my head based on how you explain it to me. "I know you think you understand what you thought I said, but I'm not sure you're aware that what I said was not what I meant."
I suggest an exercise/game to train someone to detect and avoid these pitfalls: combine malicious misunderstanding (you tell me to stick the pencil in the sharpener and I insert the eraser end) and fidelity checking.
You make an assertion about your model.
I generate a challenge that is in logical agreement with your assertions, but which I expect will fail to match your actual model. If I succeed, I get a point.
Repeat, until I am unable to create a successful challenge.
The longer it takes you to create an airtight set of assertions, the more points I get.
Then we switch roles.
So I am looking for all the ways your model might be ill-defined, and all the ways your description might be ambiguous or overly abstract. You are trying to close all of those gaps as parsimoniously as possible.
I've left the hardest part for last: the players need to be supplied with a metaphoric tinkertoy set of model parts. The parts need to support all of the kinds of fidelity-failure we can think of. And the set should be extensible, for when we think of more.
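As a rough illustration of how the scoring loop might run (a sketch only; describe(), find_challenge(), and matches_actual_model() stand in for the human steps and are assumptions, not a spec):

```python
# Hypothetical sketch of the challenge game's scoring loop.

def play_round(asserter, challenger, max_turns=10):
    challenger_points = 0
    for _ in range(max_turns):
        assertions = asserter.describe()                  # asserter states the model
        challenge = challenger.find_challenge(assertions)
        if challenge is None:                             # no consistent-but-wrong reading left
            break
        if not asserter.matches_actual_model(challenge):  # fits the words, not the model
            challenger_points += 1
    # The longer the asserter takes to make the description airtight,
    # the more points the challenger accumulates. Then the players swap roles.
    return challenger_points
```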
In Pinker's book "How the Mind Works" he asks the same question. His observation (as I recall) was that much of our apparently abstract logical abilities are done by mapping abstractions like math onto evolved subsystems with different survival purposes in our ancestors: pattern recognition, 3D spatial visualization, etc. He suggests that some problems seem intractable because they don't map cleanly to any of those subsystems.
I like the pithy description of halo bias. I don't like or agree with Mencken's non-nuanced view of idealists. It's sarcastically funny, like "a liberal is one who believes you can pick up a dog turd by the clean end", but being funny doesn't make it more true.
I'm actively interested in optimizing my health, and I take a number of supplements to that end. The survey would seem most interesting if its goal were to find how to optimize your health via supplements. As it turns out, none of the ones I take qualify as "minerals". If it turns out in fact that taking vitamin XYZ is the single best thing you can do to tweak your diet, then this survey's conclusions, whatever they turn out to be (e.g., that calcium is better than selenium), will be misleading. Maybe that's the next survey.
FYI, I'm taking: vitamin C, green tea extract, acetyl-L-carnitine, vitamin D3, fish oil, ubiquinol, and alpha-lipoic acid. I've stopped taking vitamin E and aspirin.
The discussions about signalling reminded me of something in "A Guide to the Good Life" (a book about Stoicism by William Irvine). I remembered a philosopher who wore shabby clothes, but when I went looking for the quote, what I found was: "Cato consciously did things to trigger the disdain of other people simply so he could practice ignoring their disdain." In Stoicism, the utility which is to be maximized is a personal serenity which flows from your secure knowledge that you are spending your life pursuing something genuinely valuable.
I am trying to be more empathetic with someone, and am having trouble understanding her behavior. She practices what I'll call the "stubborn fundamental attribution error": anyone who does not behave as she imagines she would behave in their place is harshly judged (neurotic, stupid, lazy, etc.). Any attempt to help her put herself in another's shoes is implacably resisted. Any explanation which might dispel the harsh judgement is dismissed as a "justification". One related example is what I'll call "metaphor blindness": a metaphor that I expect would clarify the issue, the starkest example being a reductio ad absurdum, is rejected out of hand as "not the same" or "not relevant". In abstract terms, my toolkit for achieving consensus or exploring issues rationally has been rendered useless.
Two questions: does my concept of "metaphor blindness" seem reasonable? And...how can I be more empathetic in this case? I'm being judgemental of her, by my own admission. What am I not seeing?
reminds me of:
"I know that you believe you understand what you think I said, but I'm not sure you realize that what you heard is not what I meant." --Robert McCloskey
This seems to be an argument about definitions. To me, Friedman's "average out" means a measurable change in a consistent direction, e.g., significant numbers of random individuals investing in gold. So, given some agents acting in random directions mixed with other agents acting in the same (rational) direction, you can safely ignore the random ones. (He argued.) I don't think he meant to imply that in the aggregate people are rational. But even in the simplified problem-space in which it appears to make sense, Friedman's basic conclusion, that markets are rational (or "efficient"), has been largely abandoned since the mid-1980s. Reality is more complex.
I have two major comments. First, I took the Scientology Communications class 35 years ago in Boston, and it was basically the same as what has just been described. That's impressive, in a creepy kind of way.
Second, my strongest take-away from the class I took was in response to something NOT mentioned above, so this aspect may have changed. We were given a small book, something like "The History of Scientology". (This is not the huge "Dianetics" book.) We were told to read it on our own, until we understood it, and would move on to the later activities in the class only after attesting that we had done so. The book was loaded with very vague terms, imprecise at best, contrary to familiar usage at worst, but we were not allowed to discuss their meaning with anyone else, or ask instructors for insight. We had to construct a self-consistent interpretation in isolation, and comparing our own with anyone else's was effectively forbidden in perpetuity. So each student auto-brainwashed. I was impressed by the power of this technique.
I'll have to miss this one. Anti-serendipity; I NEVER leave town, except, apparently, next Saturday. Hope there's another one soon.
My deficiency is common manners. I think it's a lack of attention to the world outside of my own thoughts. I've been known to just wander away from a conversation that is clearly not over to the other participants. I notice a sneeze about 10 seconds too late to say "bless you!". I'm appropriately thankful, but assume that's clear without my actually saying or writing something to convey the feeling. Depending on the context, my preoccupation leads me to be perceived as everything from a lovable nerd to an arrogant jerk. It's something I'd like to change.
You can get a warm fuzzy feeling from doing it yourself with a downloaded form (say from Nolo) or a cheap app (like WillMaker), but there are subtle ways to mess up, so professional advice is highly recommended. Doing it yourself, you may tend to shy away from thinking about low-probability or painful scenarios, and you don't get to debug it by changing it and trying again. A will is just one part of estate planning, and sometimes a will isn't needed (if the estate is in a trust, its beneficiaries take precedence). Usually you'll need to coordinate the will and your insurance coverage, at a minimum. But don't procrastinate; you can really get burned by having no will at all. In Texas, for example, if you die married and intestate (no will), and you have kids by a previous marriage, your spouse is legally bound to give half of the estate to his/her step-children immediately. Probably not what you would have planned.
Charles de Secondat, Baron de Montesquieu: "If triangles had a god, he would have three sides." [Lettres Persanes, no. 59]
I'm a relatively new lurker, still working through the Sequences. It strikes me that patrissimo's disaffection and resultant call to action are targeted at "the more advanced students", or where I hope to be at some point. To use a shop-class analogy, once you've finished Shop 101, sitting around reading back issues of Woodcrafts magazine will be lower ROI than designing and building a Mission chest of drawers. But until you've been through the basics, "go build" is less productive and potentially dangerous. I've discovered that reading LW has helped me notice a common thread in my haphazard intellectual explorations, and align my current ones. So a follow-up question I'll pose in two parts is: a) is it a fallacy to presume one must walk before learning to run? and b) if not, how can one judge when it's time to "go build"?
This is a classic time-management issue, often titled "ants vs. elephants", i.e., using your time to tackle small tasks you can complete easily for some immediate gratification instead of investing in the large ones with big payoffs. In my own experience, it almost feels like tasks have an "activation energy". I have a list of prioritized goals, but if I'm low on energy I avoid the big but important tasks and do something relatively mindless like reading Science News or doing a sudoku. In college I used to despise myself for not being able to study on Saturday. Finally I accepted it, and used Saturdays for relaxing. I know you are not suggesting I should still despise myself, or somehow trick myself into not needing down-time. But I think this "energy effect" may partially explain why we don't always choose optimal tasks.
I've been lurking on LW for a couple of months, trying to work through all of the major sequences. I don't remember how I discovered it; it might have been a link in the Bad Astronomy blog. I studied astronomy in school and grad school and ended up becoming a software engineer, which I've done for almost 30 years now. Most of the content here resonates powerfully with the intellectual searching I've been doing my whole life, and I'm finding it both stimulating and humbling.
Spurred by what I've read here, I've just acquired Judea Pearl's "Causality" and Barbour's "The End of Time", and I'm working through the Jaynes book on Bayesian probability (though the study group seems pretty inactive). There's a lot of synchronicity going on in my life; much of my software work over the last decade has involved causality graphs and Bayesian belief networks, but I hadn't taken the time to delve very deeply into the underlying fundamentals. I recently read Lee Smolin's "The Trouble With Physics", and he mentioned Barbour's work as a possibly promising new direction, so reading Eliezer's comments on it struck a chord.
Finally, I'm becoming increasingly aware of transformative change in society (though I wouldn't go so far as to anticipate the Singularity any time soon) and am trying on new ideas and concepts that might make me more successfully adaptive, like those found in Seth Godin's blog and books or Pamela Slim's "Escape from Cubicle Nation". I recognize a similar leap facing me here: if I come to believe that the Singularity and AI are "real", can I stop lurking and take meaningful action?
Newton focused on forces and gravity. Later physicists generalized Newtonian mechanics, coming up with formalisms for expressing a host of different problems using a common approach (Lagrangian mechanics with generalized coordinates). They weren't losing precision or sacrificing any power to anticipate reality by having the insight that many apparently different problems can be treated as essentially the same problem. A cylinder accelerating down a ramp as it rolls is the same problem as a satellite orbiting the L5 Lagrangian point. Another unification was Maxwell's equations for electrodynamics, which unified and linked a large number of earlier, more focused understandings (e.g., Ampere's law, Coulomb's law, the Biot-Savart law).
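To make the "same machinery" point concrete, here is the textbook Euler-Lagrange form in generalized coordinates, with the rolling cylinder worked as a quick example (a generic sketch, not anything from Eliezer's post):

```latex
% Euler-Lagrange equation in generalized coordinates q_i, with L = T - V:
\frac{d}{dt}\frac{\partial L}{\partial \dot{q}_i} - \frac{\partial L}{\partial q_i} = 0

% Uniform cylinder rolling down a ramp of angle \theta, with s measured along
% the ramp, rolling constraint \dot{\phi} = \dot{s}/R, and I = \tfrac{1}{2} m R^2:
L = \tfrac{1}{2} m \dot{s}^2 + \tfrac{1}{2} I \dot{\phi}^2 + m g s \sin\theta
  = \tfrac{3}{4} m \dot{s}^2 + m g s \sin\theta
\;\Rightarrow\; \tfrac{3}{2} m \ddot{s} = m g \sin\theta
\;\Rightarrow\; \ddot{s} = \tfrac{2}{3} g \sin\theta

% A Kepler orbit runs through the very same equation, just with different
% coordinates: L = \tfrac{1}{2} m (\dot{r}^2 + r^2 \dot{\theta}^2) + G M m / r.
```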
One more example: a physics-trained researcher studying the dynamic topology of the internet recognized a mathematical similarity between the dynamics of the network and the physics of bosons, and realized that the phenomenon of Google's huge connectedness is, in a very real sense, following the same mathematics as a Bose-Einstein condensate.
Eliezer's post seemed to denigrate people's interest in finding such connections and generalizations. Or did I miss the point? Are these sorts of generalizations not the kind he was referring to?