Erroneous Visualizations
post by BecomingMyself · 2011-01-19T01:44:13.376Z · LW · GW · Legacy · 10 comments
Buried somewhere among Eliezer's writings is something essentially the same as the following phrase:
"Intentional causes are made of neurons. Evolutionary causes are made of ancestors."
I remember this quite well because of my strange reaction to it. I understood what it meant pretty well, but upon seeing it, some demented part of my brain immediately constructed a mental image of what it thought an "evolutionary cause" looked like. The result was something like a mountain of fused-together bodies (the ancestors) with gears and levers and things (the causation) scattered throughout. "This," said that part of my brain, "is what an evolutionary cause looks like, and like a good reductionist I know it is physically implicit in the structure of my brain." Luckily it didn't take me long to realize what I was doing and reject that model, though I am just now realizing that the one I replaced it with still had some physical substance called "causality" flowing from ancient humans to my brain.
This is actually a common error for me. I remember I used to think of computer programs as these glorious steampunk assemblies of wheels and gears and things (apparently gears are a common visual metaphor in my brain for things it labels as complex) floating just outside the universe with all the other platonic concepts, somehow exerting their patterns upon the computers that ran them. It took me forever to figure out that these strange thingies were physical systems in the computers themselves, and a bit longer to realize that they didn't look anything like what I thought they did. (I still haven't bothered to find out what they really are, despite having a non-negligible desire to know.) And even before that -- long before I started reading Less Wrong, or even adopted empiricism (which may or may not have come earlier), I decided that because the human brain performs computation, and (it seemed to me) all computations were embodiments of some platonic ideal, souls must exist. Which could have been semi-okay, if I had realized that calling it a "soul" shouldn't allow you to assume it has properties that you ascribe to "souls" but not to "platonic ideals of computation".
Are errors like this common? I talked to a friend about it and she doesn't make this mistake, but one person is hardly a good sample. If anyone else is like this, I'd like to know how often it causes really big misconceptions and whether you have a way to control it.
10 comments
Comments sorted by top scores.
comment by JenniferRM · 2011-01-19T04:45:37.011Z · LW(p) · GW(p)
The quote was in an article titled Evolutionary Psychology and went like this (emphases in original):
Cognitive causes are ontologically distinct from evolutionary causes. They are made out of a different kind of stuff. Cognitive causes are made of neurons. Evolutionary causes are made of ancestors.
The most obvious kind of cognitive cause is deliberate, like an intention to go to the supermarket, or a plan for toasting toast. But an emotion also exists physically in the brain, as a train of neural impulses or a cloud of spreading hormones. Likewise an instinct, or a flash of visualization, or a fleetingly suppressed thought; if you could scan the brain in three dimensions and you understood the code, you would be able to see them...
Evolutionary selection pressures are ontologically distinct from the biological artifacts they create. The evolutionary cause of a bird's wings is millions of ancestor-birds who reproduced more often than other ancestor-birds, with statistical regularity owing to their possession of incrementally improved wings compared to their competitors. We compress this gargantuan historical-statistical macrofact by saying "evolution did it".
...When we're told that "The evolutionary purpose of anger is to increase inclusive genetic fitness," there's a tendency to slide to "The purpose of anger is reproduction" to "The cognitive purpose of anger is reproduction." No! The statistical regularity of ancestral history isn't in the brain, even subconsciously, any more than the designer's intentions of toast are in a toaster!
comment by ata · 2011-01-19T04:16:40.333Z · LW(p) · GW(p)
Platonism can be a bit of a double-edged sword. On the one hand, it can make certain concepts a bit easier to visualize, like imagining that probabilities are over a space of "possible worlds" — you certainly don't want to develop your understanding of probability in those terms, but once you know what probabilities are about, that can still be a helpful way to visualize Bayes's theorem and related operations. On the other hand, this seems to be one of the easiest ways to get caught in the mind projection fallacy and some of the standard non-reductionist confusions.
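To make that "possible worlds" picture concrete, here is a minimal sketch in Python (my own toy numbers, not anything from the comment): Bayes's theorem computed by simply weighing the worlds consistent with the evidence.

```python
# A toy medical-test example: each "possible world" is a (condition, test result)
# pair carrying a probability weight. Bayes's theorem is then just:
# keep the worlds consistent with the evidence, and renormalize.
from fractions import Fraction

worlds = [
    ("sick",    "positive", Fraction(9, 1000)),   # 1% prior, 90% sensitivity
    ("sick",    "negative", Fraction(1, 1000)),
    ("healthy", "positive", Fraction(99, 1000)),  # 10% false-positive rate
    ("healthy", "negative", Fraction(891, 1000)),
]

# P(sick | positive) = weight of (sick and positive) worlds / weight of all positive worlds
positive = [w for w in worlds if w[1] == "positive"]
p_sick_given_positive = sum(w[2] for w in positive if w[0] == "sick") / sum(w[2] for w in positive)
print(p_sick_given_positive)  # 1/12, i.e. about 8.3%
```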
Generally, I allow myself to use Platonist and otherwise imaginary visualizations, as long as I can keep the imaginariness in mind. This has worked well enough so far, particularly because I'm rather confused about what "existence" means, and am wary of letting it make me think I understand strange concepts like "numbers", "universes", etc. better than I really do. Though sometimes I do wonder if any of my visualizations are leading me astray. My visualization of timeless physics, for instance; I'm a bit suspicious of it since I don't really know how to do the math involved, and so I try not to take the visualization too seriously in case I'm imagining the wrong sort of structure altogether.
It took me forever to figure out that these strange thingies were physical systems in the computers themselves, and a bit longer to realize that they didn't look anything like what I thought they did. (I still haven't bothered to look it up, despite having a non-negligible desire to know.)
Look what up, exactly?
...and (it seemed to me) all computations were embodiments of some platonic ideal, souls must exist. Which could have been semi-okay, if I had realized that calling it a "soul" shouldn't allow you to assume it has properties that you ascribe to "souls" but not to "platonic ideals of computation".
Well said.
Replies from: BecomingMyself
↑ comment by BecomingMyself · 2011-01-19T14:04:12.589Z · LW(p) · GW(p)
It took me forever to figure out that these strange thingies were physical systems in the computers themselves, and a bit longer to realize that they didn't look anything like what I thought they did. (I still haven't bothered to look it up, despite having a non-negligible desire to know.)
Look what up, exactly?
Oh, sorry, I thought that was clear. I want to find out what the physical systems in a computer actually look like. Right now all I (think I) know is that RAM is electricity.
Edited to make this more clear.
Replies from: ata
comment by jsteinhardt · 2011-01-19T16:16:58.363Z · LW(p) · GW(p)
I frequently visualize things in ways that aren't quite correct. Not to perform precise calculations, but for the sake of my intuition. That might sound just as bad, but I find that intuition works better when it has internal visual cues to support it; your "flow of causality" example is what I'm thinking about here. When I'm thinking about some complicated system, imagining something flowing between the relevant parts to create the cause and effect helps me get what's going on.
So I would say that visualizations that aren't completely correct are not in and of themselves bad, as long as they don't start affecting your predictions directly.
comment by CronoDAS · 2011-01-19T11:35:02.408Z · LW(p) · GW(p)
It took me forever to figure out that these strange thingies were physical systems in the computers themselves, and a bit longer to realize that they didn't look anything like what I thought they did. (I still haven't bothered to look it up, despite having a non-negligible desire to know.)
I think my computer engineering degree makes me qualified to explain the physical workings of a computer. I'm a bit too sleepy right now to write up a detailed explanation, but I'll give it a try later if you ask.
Anyway, a much better metaphor for a computer program is a list of directions, like the directions that you find in the box when you buy furniture that you have to put together yourself. They could be really complicated directions, but they're still just directions, sitting there doing nothing until something reads them and follows them. Obviously most computers don't read programs that are written on paper, and I don't know how all the different storage media work, but whenever a computer actually executes a program, that program is "written down" somewhere.
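A minimal sketch of that "list of directions" picture (my own illustration with a made-up instruction set, not anything CronoDAS describes): the program is just data sitting in memory, and nothing happens until a loop reads the directions and follows them.

```python
# The "directions": a list of (operation, argument) pairs. On its own it does
# nothing at all; it is just written down.
program = [
    ("push", 2),
    ("push", 3),
    ("add", None),
    ("print", None),
]

def run(directions):
    """The thing that reads the directions and follows them, one at a time."""
    stack = []
    for op, arg in directions:
        if op == "push":
            stack.append(arg)
        elif op == "add":
            stack.append(stack.pop() + stack.pop())
        elif op == "print":
            print(stack[-1])

run(program)  # prints 5; until run() is called, `program` is just inert data
```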
On the other hand, the steampunk-like assembly of gears you're envisioning actually is a reasonable metaphor for a CPU. And you really can build a computer out of that sort of thing; Charles Babbage actually designed a fully programmable mechanical computer in 1837, but it was never built. Today's computers are made out of microscopic devices that manipulate electricity instead of things that move chunks of wood and metal around, but it's still not a bad metaphor - it doesn't matter what you build your NAND gates out of, just that they're all connected properly.
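And a minimal sketch of the NAND point (again my own illustration): once you can make a NAND gate out of anything at all, the other gates, and eventually the adders a CPU is built from, are just a matter of wiring them together.

```python
# Every gate below is built only from NAND; it makes no difference whether the
# underlying NAND is transistors, gears, or this Python function.
def nand(a, b):
    return not (a and b)

def not_(a):
    return nand(a, a)

def and_(a, b):
    return not_(nand(a, b))

def or_(a, b):
    return nand(not_(a), not_(b))

def xor(a, b):
    return and_(or_(a, b), nand(a, b))

def half_adder(a, b):
    """Add two bits: returns (sum bit, carry bit). CPUs are stacked out of pieces like this."""
    return xor(a, b), and_(a, b)

print(half_adder(True, True))  # (False, True): 1 + 1 = binary 10
```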
Replies from: Vaniver
↑ comment by Vaniver · 2011-01-19T20:19:59.271Z · LW(p) · GW(p)
Charles Babbage actually designed a fully programmable mechanical computer in 1837, but it was never built.
The British Science Museum built his Difference Engine No. 2 about 20 years ago. Granted, it was never built while it was relevant :P
Replies from: CronoDAS
comment by TheOtherDave · 2011-01-19T17:05:48.391Z · LW(p) · GW(p)
I do a lot of... not "visualization," exactly, as I'm not really a visual thinker, but sensory-conceptualization ("kinesthization" would be the right word, I suppose, were it a word) of systems and of states that don't map to anything I know to exist in the world.
For example, I frequently model the urge to do things as something not unlike a current (aquatic, not electrical) running through my body. I frequently model social interactions similarly, as a flow running between and among people. I frequently model complicated thought-structures -- arguments and software designs and so forth -- as physical networks of strings and beads, like games of cat's-cradle.
When I was doing a lot of cognitive testing after my stroke, and thus being pushed to remember long sequences of numbers and letters in various combinations, I found myself using a lot of kinesthetic markers to augment memory... e.g., I would "hold" items in my hands and order them with my fingers, or "hold" numbers in my right hand and letters in my left, or various things along those lines. It seemed to help, though I never compared results with and without those techniques so it could just as easily have been superstition.
It isn't quite the same thing as what you're describing, I think, but seems related.
I am not generally motivated to replace these imagined metaphorical models with more literally accurate ones, as I find them useful for various purposes. (The ones I don't find useful, I generally discard.)
That said, I'm willing to believe that they are local maxima that I could profitably replace with other models that map better to the underlying phenomena.