Ephemeral correspondence
post by So8res · 2015-04-10T03:46:20.479Z · LW · GW · Legacy · 22 comments
This is the third of four short essays that say explicitly some things that I would tell an intrigued proto-rationalist before pointing them towards Rationality: AI to Zombies (and, by extension, most of LessWrong). For most people here, these essays will be very old news, as they talk about the insights that come even before the sequences. However, I've noticed recently that a number of fledgling rationalists haven't actually been exposed to all of these ideas, and there is power in saying the obvious.
This essay is cross-posted on MindingOurWay.
Your brain is a machine that builds up mutual information between its insides and its outsides. It is not only an information machine. It is not intentionally an information machine. But it is bumping into photons and air waves, and it is producing an internal map that correlates with the outer world.
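To make "builds up mutual information" a bit more concrete, here is a minimal sketch (not part of the original essay) that computes the mutual information between the world's state and the brain's internal label for it; the joint distribution is made up purely for illustration.

```python
import numpy as np

# Toy joint distribution over (world state, internal representation).
# Rows: world clear / overcast; columns: map says "blue" / "grey".
# The numbers are assumptions chosen only for the example.
joint = np.array([[0.45, 0.05],
                  [0.05, 0.45]])

p_world = joint.sum(axis=1, keepdims=True)   # marginal over world states
p_map = joint.sum(axis=0, keepdims=True)     # marginal over map states

# I(world; map) = sum over cells of p(w, m) * log2( p(w, m) / (p(w) p(m)) )
mi = np.nansum(joint * np.log2(joint / (p_world * p_map)))
print(f"mutual information: {mi:.2f} bits (out of a possible 1 bit)")  # ~0.53
```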
However, there's something very strange going on in this information machine.
Consider: part of what your brain is doing is building a map of the world around you. This is done automatically, without much input on your part into how the internal model should look. When you look at the sky, you don't get a query which says
Readings from the retina indicate that the sky is blue. Represent sky as blue in world-model? [Y/n]
No. The sky just appears blue. That sort of information, gleaned from the environment, is baked into the map.
You can choose to claim that the sky is green, but you can't choose to see a green sky.
Most people don't identify with the part of their mind that builds the map. That part fades into the background. It's easy to forget that it exists, and pretend that the things we see are the things themselves. If you didn't think too carefully about how the brain works, you might think that brains implement people in two discrete steps: (1) build a map of the world; (2) implement a planner that uses this map to figure out how to act.
This is, of course, not at all what happens.
Because, while you can't choose to see the sky as green, you do get to choose how some parts of the world-model look. When your co-worker says "nice job, pal," you do get to decide whether to perceive it as a compliment or an insult.
Well, kinda-sorta. It depends upon the tone and the person. Some people will automatically take it as a compliment, others will automatically take it as an insult. Others will consciously dwell on it for hours, worrying. But nearly everyone experiences more conscious control over whether to perceive something as complimentary or insulting than over whether to perceive the sky as blue or green.
This is intensely weird as a mind design, when you think about it. Why is the executive process responsible for choosing what to do also able to modify the world-model? Furthermore, WHY IS THE EXECUTIVE PROCESS RESPONSIBLE FOR CHOOSING WHAT TO DO ALSO ABLE TO MODIFY THE WORLD-MODEL? This is just obviously going to lead to horrible cognitive dissonance, self-deception, and bias! AAAAAAARGH.
There are "reasons" for this, of course. We can look at the evolutionary history of human brains and get hints as to why the design works like this. A brain has a pretty direct link to the color of the sky, whereas it has a very indirect link on the intentions of others. It makes sense that one of these would be set automatically, while the other would require quite a bit of processing. And it kinda makes sense that the executive control process gets to affect the expensive computations but not the cheap ones (especially if the executive control functionality originally rose to prominence as some sort of priority-aware computational expedient).
But from the perspective of a mind designer, it's bonkers. The world-model-generator isn't hooked up directly to reality! We occasionally get to choose how parts of the world-model look! We, the tribal monkeys known for self-deception and propensity to be manipulated, get a say on how the information engine builds the thing which is supposed to correspond to reality!
(I struggle with the word "we" in this context, because I don't have words that differentiate between the broad-sense "me" which builds a map of the world in which the sky is blue, and the narrow-sense "me" which doesn't get to choose to see a green sky. I desperately want to shatter the word "me" into many words, but these discussions already have too much jargon, and I have to pick my battles.)
We know a bit about how machines can generate mutual information, you see, and one of the things we know is that in order to build something that sees the sky as the appropriate color, the "sky-color" output should not be connected to an arbitrary monkey answering a multiple choice question under peer pressure, but should rather be connected directly to the sky-sensors.
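As a toy numerical illustration of that point (mine, with made-up channel probabilities): in the chain world → sensor → belief, inserting a "peer pressure" stage between the sensor and the belief can only destroy information about the world, never add it (the data processing inequality).

```python
import numpy as np

def mutual_information(joint):
    """I(X;Y) in bits for a 2-D joint probability table."""
    px = joint.sum(axis=1, keepdims=True)
    py = joint.sum(axis=0, keepdims=True)
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = joint * np.log2(joint / (px * py))
    return float(np.nansum(terms))

p_world = np.array([0.5, 0.5])                 # clear / overcast
sensor_given_world = np.array([[0.9, 0.1],     # sky-sensor: 90% reliable
                               [0.1, 0.9]])
belief_given_sensor = np.array([[1.0, 0.0],    # "peer pressure" stage: 30% of the
                                [0.3, 0.7]])   # time it overrides "grey" with "blue"

joint_ws = p_world[:, None] * sensor_given_world   # p(world, sensor reading)
joint_wb = joint_ws @ belief_given_sensor          # p(world, reported belief)

print(f"I(world; sensor) = {mutual_information(joint_ws):.2f} bits")  # ~0.53
print(f"I(world; belief) = {mutual_information(joint_wb):.2f} bits")  # ~0.28, never higher
```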
And sometimes the brain does this. Sometimes it just friggin' puts a blue sky in the world-model. But other times, for one reason or another, it tosses queries up to conscious control.
Questions like "is the sky blue?" and "did my co-worker intend that as an insult?" are of the same type, and yet one we get input on, and the other we don't. The brain automatically builds huge swaths of the map, but important features of it are left up to us.
Which is worrying, because most of us aren't exactly natural-born masters of information theory. This is where rationality training comes in.
Sometimes we get conscious control over the world-model because the questions are hard. Executive control isn't needed in order to decide what color the sky is, but it is often necessary in order to deduce complex things (like the motivations of other monkeys) from sparse observations. Studying human rationality can improve your ability to generate more accurate answers when executive-controller-you is called upon to fill in features of the world-model that subconscious-you could not deduce automatically: filling in the mental map accurately is a skill that, like any skill, can be trained and honed.
Which almost makes it seem like it's ok for us to have conscious control over the world model. It almost makes it seem fine to let humans control what color they see the sky: after all, they could always choose to leave their perception of the sky linked up to the actual sky.
Except, you and I both know how that would end. Can you imagine what would happen if humans actually got to choose what color to perceive the sky as, in the same way they get to choose what to believe about the loyalty of their lovers, the honor of their tribe, the existence of their deities?
About six seconds later, people would start disagreeing about the color of the freaking sky (because who says that those biased sky-sensors are the final authority?) They'd immediately split along tribal lines and start murdering each other. Then, after things calmed down a bit, everyone would start claiming that because people get to choose whatever sky color they want, and because different people have different favorite colors, there's no true sky-color. Color is subjective, anyway; it's all just in our heads. If you tried to suggest just hooking sky-perception up to the sky-sensors, you'd probably wind up somewhere between dead and mocked, depending on your time period.
The sane response, upon realizing that internal color-of-the-sky is determined not by the sky-sensors, but by a tribal monkey-mind prone to politicking and groupthink, is to scream in horror and then directly re-attach the world-model-generator to reality as quickly as possible. If your mind gave you a little pop-up message reading
For political reasons, it is now possible to disconnect your color-perception from your retinas and let peer pressure determine what colors to see. Proceed? [Y/n]
then the sane response, if you are a human mind, is a slightly panicked "uh, thanks but no thanKs I'd like to pLeASE LEAVE THE WORLD-MODEL GENERATOR HOOKED UP TO REALITY PLEASE."
But unfortunately, these occasions don't feel like pop-up windows. They don't even feel like choices. They're usually automatic, and they barely happen at the level of consciousness. Your world-model gets disconnected from reality every time that you automatically find reasons to ignore evidence which conflicts with the way you want the world to be (because it comes from someone who is obviously wrong!); every time you find excuses to disregard observations (that study was poorly designed!); every time you find reasons to stop searching for more data as soon as you've found the answer you like (because what would be the point of wasting time by searching further?)
Somehow, tribal social monkeys have found themselves in control of part of their world-models. But they don't feel like they're controlling a world-model, they feel like they're right.
You yourself are part of the pathway between reality and your map of it, part of a fragile link between what is, and what is believed. And if you let your guard down, even for one moment, it is incredibly easy to flinch and shatter that ephemeral correspondence.
22 comments
comment by JenniferRM · 2015-04-15T06:06:09.666Z · LW(p) · GW(p)
It feels like there's a never-directly-claimed but oft-implied claim lurking in this essay.
The claim goes: the reason we can't consciously control our perception of the color of the sky is because if we could then human partisanship would ruin it.
The sane response, upon realizing that internal color-of-the-sky is determined not by the sky-sensors, but by a tribal monkey-mind prone to politicking and groupthink, is to scream in horror and then directly re-attach the world-model-generator to reality as quickly as possible.
If you squint, and treat partisanship as an ontologically basic thing that could exert evolutionary pressure, it almost seems plausible that avoidance of partisanship failure modes might actually be the cause of the wiring of the occipital cortex :-)
However, I don't personally think that "avoiding the ability of partisanship to ruin vision" is the reason human vision is wired up so that we can't see whatever we consciously choose to see.
Part of the reason I don't believe this is that the second half of the implication is simply not universally true. I know people who report having the ability to modify their visual sensorium at will, so for them, it seems to actually be the case that they could choose to do all sorts of things to their visual world model if they put some creativity and effort into it. Also: synesthesia is a thing, and can probably be cultivated...
But even if you skip over such issues as non-central outliers...
It makes conceptual sense to me that there is probably something like a common cortical algorithm (though maybe not exactly like the precise algorithmic sketch being discussed under that name) that actually happens in the brain. Coarsely: it probably has to do with neuron metabolism and how neurons measure and affect each other. Separately from this, there are lots of processes for controlling which neurons are "near" to which other neurons.
My personal guess is that in actual brains, the process mixes sparse/bayesian/pooling/etc perception with negative feedback control... and of course "maybe other stuff too". But fundamentally I think we start with "all the computing elements potentially measuring and controlling their neighbors" and then when that causes terrible outcomes (like nearly instantaneous subconscious wireheading by organisms with 3 neurons) evolution prunes that particular failure mode out, and then iterates.
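A minimal toy sketch of that wireheading attractor (my illustration, not JenniferRM's model), assuming a single scalar "world" quantity and an agent that may or may not be allowed to bias its own sensor: the self-measuring agent "succeeds" by corrupting the reading while leaving the world untouched.

```python
import random

def run(agent_can_edit_sensor: bool, steps: int = 50):
    world = 0.0          # the quantity the agent is nominally trying to raise
    sensor_bias = 0.0    # offset the agent may add to its own readings
    for _ in range(steps):
        reading = world + sensor_bias
        if reading < 10.0:                          # "goal not yet satisfied"
            if agent_can_edit_sensor:
                sensor_bias += 1.0                  # cheap: nudge the measurement
            else:
                world += random.uniform(0.0, 0.4)   # costly: actually act on the world
    return world, world + sensor_bias

print("sensor wired to reality:  (world, reading) =", run(False))  # world actually improves
print("sensor under agent control: (world, reading) =", run(True)) # reading hits 10, world stays 0
```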
However, sometimes top down control of measurement is functional. It happens subconsciously in ancient and useful ways in our own brain, as when efferent cochlear innervation projects something like "expectations about what is to be heard" that make the cochlea differentially sensitive to inputs, effectively increasing the dynamic range in what sounds can be neurologically distinguished.
This theory predicts new wireheading failures at every new level of evolved organization. Each time you make several attempts to build variations on a new kind of measuring and optimizing process/module/layer, some of those processes will use their control elements to manipulate their perception elements, and sometimes they will do poorly rather than well, with wireheading as a large and probably dysfunctional attractor.
"Human partisanship" does seem to be an example of often-broken agency in an evolutionarily recent context (ie the context of super-Dunbar socially/verbally coordinated herds of meme-infected humans) and human partisanship does seem pretty bad... but as far as I can see, partisanship is not conceptually central here. And it isn't even the conceptually central negative thing.
The central negative thing, in my opinion, is wireheading.
comment by Vaniver · 2015-04-10T13:53:59.802Z · LW(p) · GW(p)
But from the perspective of a mind designer, it's bonkers. The world-model-generator isn't hooked up directly to reality!
I can't help but feel that this is a "lies to children" approach; it makes great sense to me why the module that determines other people's intentions takes input from conscious control. If nothing else, it allows the brain to throw highly variable amounts of resources at the problem--if Master Control thinks it's trivial, the problem can be dropped, but if it thinks it's worthwhile, the brain can spend hours worrying about the problem and calling up memories and forming plans for how to acquire additional information. (Who to ask for advice is itself a political play, and should be handed over to the political modules for contemplation.)
That is, it seems to me that physical reality (the color of the sky) and social reality (whether or not the coworker is complimenting or insulting you) are different classes of reality, that must be perceived and manipulated in different ways.
The valuable rationality lesson seems to be acknowledging that social reality, while it may seem the same sort of real as physical reality, is a different kind. I'm reading you, though, as claiming that they're treated differently, when they should be treated the same way.
↑ comment by So8res · 2015-04-10T17:57:11.307Z · LW(p) · GW(p)
I agree that there's some simplification going on. Also, though, I think your objection is perhaps a bit status quo blinded -- there are many better ways to come up with a brain design that can throw "highly variable amounts of resources" at problems depending on how important they are, which doesn't have nearly as much leakage between goals and beliefs (such as motivated cognition, confirmation bias, etc.). For example, you could imagine a mind where the Master Planner gets to ask for more resolution on various parts of the map, but doesn't get to choose how the get-more-resolution-process works, or you can imagine various other minds in the space between human and full consequentialist gene-propagator.
In other words, there are wide chasms between "some process in the brain regulates how much processing power goes towards building various different parts of the map" and "the planner that is trying to achieve certain goals gets to decide how certain parts of the map are filled in."
it seems to me that physical reality (the color of the sky) and social reality (whether or not the coworker is complimenting or insulting you) are different classes of reality, that must be perceived and manipulated in different ways.
They're both part of the same territory, of course :-) I agree that social monkeys have to treat these things differently, especially in contexts where it's easy to be killed for having unpopular beliefs in cache (even if they're accurate), and again, I can see reasons why evolution took the low road instead of successfully building a full consequentialist. But that doesn't make the mind design (where peer pressure is allowed to actually disconnect the map from the sensors) any more sane :-)
↑ comment by Vaniver · 2015-04-11T00:24:29.303Z · LW(p) · GW(p)
But that doesn't make the mind design (where peer pressure is allowed to actually disconnect the map from the sensors) any more sane :-)
On reflection, I think the claim I most want to make is something along the lines of "if you identify rationality!sane with 'hook the world-model-generator up to reality', then people will eventually realize rationality is insufficient for them and invent postrationality." If you identify rationality!sane with winning, then it seems much less likely that people will eventually realize rationality is insufficient for them.
I think your objection is perhaps a bit status quo blinded -- there are many better ways to come up with a brain design that can throw "highly variable amounts of resources" at problems depending on how important they are, which doesn't have nearly as much leakage between goals and beliefs (such as motivated cognition, confirmation bias, etc.).
I don't want to defend the claim that humans are optimal. I want to register discomfort that I think the claims you put forward trivialize the other view and overstate your position.
They're both part of the same territory, of course
Yes, but. We're really discussing maps, rather than territory, because we're talking about things like "skies" and "compliments," which while they could be learned from the territory are not atomic elements of the territory. I'd say they exist at higher 'levels of abstraction,' but I'm not sure that clarifies anything.
↑ comment by So8res · 2015-04-12T01:47:23.130Z · LW(p) · GW(p)
Thanks! I agree that this isn't the best set-up for getting people interested in instrumental rationality, but remember that these essays were generated as a set of things to say before reading Rationality: AI to Zombies -- they're my attempt to convey the unstated background assumptions that motivate R:A-Z a little better before people pick it up. For that reason, the essays have a strong "why epistemics?" bent :-)
↑ comment by dxu · 2015-04-11T18:25:30.278Z · LW(p) · GW(p)
On reflection, I think the claim I most want to make is something along the lines of "if you identify rationality!sane with 'hook the world-model-generator up to reality', then people will eventually realize rationality is insufficient for them and invent postrationality." If you identify rationality!sane with winning, then it seems much less likely that people will eventually realize rationality is insufficient for them.
I tend to be skeptical of claims that a rational agent cannot in principle do better with greater epistemic accuracy, much in the same way that I distrust claims that a rational agent cannot in principle do better with more information or greater computational power. To me, then, "hooking the world-model-generator up to reality" would equate with "winning", or at least "making winning a hell of a lot easier". So from my perspective, a rational agent should always seek greater epistemic accuracy.
Of course, if you're human, this does not always apply since humans are far from rational agents, but by much the same reasoning as leads to ethical injunctions, it seems like an excellent heuristic to follow.
↑ comment by dxu · 2015-04-10T15:37:52.871Z · LW(p) · GW(p)
The thing is, some people are really adept at perceiving (and manipulating) "social reality", as you put it. (Think politicians and salesmen, to name but a few.) Furthermore, this perception of "social reality" appears to occur in large part through "intuition"; things like body language, tone of voice, etc. all play a role, and these things are more or less evaluated unconsciously. It's not just the really adept people that do this, either; all neurotypical people perform this sort of unconscious evaluation to some extent. In that respect, at least, the way we perceive "social reality" is remarkably similar to the way we perceive "physical reality". That makes sense, too; the important tasks (from an evolutionary perspective) need to be automated in your brain, but the less important ones (like doing math, for example) require conscious control. So in my opinion, reading social cues would be an example of (in So8res' terminology) "leaving the world-model-generator hooked up to (social) reality".
However, we do in fact have a control group (or would that be experimental group?) for what happens when you attach the "world-model generator" to conscious thought: people with Asperger's Syndrome, for instance, are far less capable of picking up social cues and reading the general flow of the situation. (Writing this as someone who has Asperger's Syndrome, I should note that I'm speaking largely from personal experience here.) For them, the art of reading social situations needs to be learned pretty much from scratch, all at the level of conscious introspection. They don't have the benefit of automated, unconscious social evaluation software that just activates; instead, every decision has to be "calculated", so to speak. You'll note that the results are quite telling: people with Asperger's do significantly worse in day-to-day social interactions than neurotypical people, even after they've been "learning" how to navigate social interactions for quite some time.
In short, manual control is hard to wield, and we should be wary of letting our models be influenced by it. (There's also all the biases that humans suffer from that make it even more difficult to build accurate world-models.) Unfortunately, there's no real way to switch everything to "unconscious mode", so instead, we should strive to be rational so we can build the best models we can with our available information. That, I think, is So8res' point in this post. (If I'm mistaken, he should feel free to correct me.)
↑ comment by Vaniver · 2015-04-10T16:47:41.842Z · LW(p) · GW(p)
In that respect, at least, the way we perceive "social reality" is remarkably similar to the way we perceive "physical reality".
I agree that a neurotypical sees social cues on the perceptual level in much the same way as they recognize some photons as coming from "the sky" on the perceptual level. I think my complaint is that the question of "is my coworker complimenting or insulting me?" is operating on a higher level of abstraction, and has a strategic and tactical component. Even if your coworker has cued their statement as a compliment, that may in fact be evidence for it being an insult--and in order to determine that, you need a detailed model of your coworker and possibly conscious deliberation. Even if your coworker has genuinely intended a compliment, you may be better served by perceiving it as an insult.
To give a somewhat benign example, if the coworker cued something ambiguously positive and you infer that they wanted to compliment you, you might want to communicate to them that they would be better off cuing something unambiguously positive if they want to be perceived as complimenting others. (Less benign examples of deliberate misinterpretation probably suggest themselves.)
comment by John_Maxwell (John_Maxwell_IV) · 2015-04-12T00:42:04.698Z · LW(p) · GW(p)
Your world model vs executive framing is interesting, but I suspect the two-system model described in Thinking Fast and Slow carves reality at the joints better. From reading Kahneman, I get the impression that it’s not so much our conscious desires to modify our world model that trip us up. In fact, the opposite appears to be true in the sense that conscious, deliberative thinking, when applied well, almost always leaves you with a better model of the world than your subconscious System 1 processes, which have a tendency to quickly generate an explanation of the evidence that seems coherent and pleasing and then stop (why keep thinking if you’ve already figured it out?)
I guess maybe your framing is a useful way to think about motivated cognition, which Kahneman doesn’t discuss as much. I think I stopped doing as much motivated cognition when I installed a process in my brain that watches my thought patterns for things that look like they might be stereotypical examples of motivated cognition and makes me anxious until I dig a little deeper. (For example, writing the previous sentence caused the process to fire because it was an instance of me claiming that I’m less biased than I used to be, and “I’m not biased the way everyone else is” matches my motivated cognition pattern recognizer.) I have a personal hunch that motivated cognition is a basic function call that your brain uses all over the place, e.g. “what company should I start?” or “what sounds appealing for dinner?” initiates the same basic sort of thought process as “why is this blog post whose conclusion I disagree with wrong?” If my hunch is correct, the best way to use motivated cognition in a lot of cases may be to run it in both directions, e.g. spend a timed minute thinking of reasons the blog post is wrong, then spend a timed minute thinking of reasons the blog post is right. I suspect you’ll generate more ideas this way than if you spend a timed two minutes in directionless mulling.
I guess if anyone wants to install this observer process in themselves, probably the most general way to do it is flag instances where you find yourself searching for evidence to support a particular conclusion.
Another story about how confirmation bias arises is through the global approach/avoid assessments generated by System 1. Under this view, it’s not so much motivated cognition that gets you… even if you set out with the intention of seeing the world the way it really is, you’ll find yourself naturally shying away from explanations that make you uncomfortable and devoting less thought to them, while devoting more thought to pleasant explanations. On a community site like Less Wrong, say, users could find themselves mysteriously procrastinating more on writing a post that disputes a popularly held view than a post which no one is likely to disagree with, even when their conscious deliberative judgements about the value of writing each post are identical. Under this view, stoic resolve to think thoughts even when they seem uncomfortable would be key. If this prediction is correct, giving people the stoic resolve to overcome aversions would help them in many areas of their life and in particular help them fight confirmation bias.
I guess another strategy would be rather than learning to overcome aversions, reframe potentially uncomfortable truths by reminding yourself that you want to know what’s true?
By the way, I’m not sure why Thinking Fast and Slow is not discussed here on Less Wrong more. It’s such a fabulous book, and compares quite favorably with the sequences in my view. I found the content overlap to be surprisingly small though; I suspect it is worth many peoples’ time to read both.
comment by [deleted] · 2015-04-11T00:30:43.537Z · LW(p) · GW(p)
But from the perspective of a mind designer, it's bonkers. The world-model-generator isn't hooked up directly to reality! We occasionally get to choose how parts of the world-model look! We, the tribal monkeys known for self-deception and propensity to be manipulated, get a say on how the information engine builds the thing which is supposed to correspond to reality!
In theory, perception and cognition are not really different. In practice, the mental algorithms that implement "perception" are different ones, evolved at a different stage of development, from the ones that implement "decisions" and your internal locus of control. Certain inferences have to be bounced back and forth between the two in order to come out right -- a mind designed a priori would just do bounded-rational cognition the whole way down, but your evolved brain has to bounce percepts and inductive biases back and forth between algorithms.
You yourself are part of the pathway between reality and your map of it, part of a fragile link between what is, and what is believed. And if you let your guard down, even for one moment, it is incredibly easy to flinch and shatter that ephemeral correspondence.
It's worse/better than that. The slash is because, well, I'm fairly sure this is a fairly intrinsic part of being a conscious being rather than a blind optimizer, so on the one hand, you've got the privilege of conscious experience, and on the other hand, it helps render you a delusional nutcase. You are the map. You are also the little monkey voice that screams about things. You are also the still, small voice who does the joined-up thinking and tries to manage all the other voices in the head, and tries to make the map correspond to the territory, but still irrevocably lives inside the map.
Welcome to life! It's sometimes fun. We're working on improving that.
comment by Unknowns · 2015-04-27T06:50:28.166Z · LW(p) · GW(p)
One obvious reason for being able to choose to determine various parts of the map is that it contributed to survival. For example, the leader of the tribe hates you and makes a few insulting remarks. You can choose to interpret these in a fairly neutral way, or you can interpret them as they are. If you choose to interpret them as hateful and insulting, as they are, you may have a hard time not responding in a corresponding manner, and so you may end up dead. You will be better off if you can choose to interpret them in the neutral way. Or again, the leader of the tribe proclaims an obviously false religious dogma. If you can choose to accept it, things will go on as usual. If you cannot accept it, you may have a hard time pretending well enough to avoid getting killed as a heretic. Again you will be better off if you are in control of your map.
Also, I disagree that there is any rigid distinction between beliefs we can control and others we cannot (as I suggested in my post on belief in belief). We cannot generally change the visual sensation when we look at the sky. But whether or not we believe the statement, "the sky is blue," is indeed up to us, and some people will e.g. deny that the sky is blue, since it is not really colored in the same way as other things. Or someone could indeed believe that the sky is fundamentally green, if that were e.g. a religious dogma.
comment by EphemeralNight · 2015-04-27T02:19:11.136Z · LW(p) · GW(p)
There is a wrong note in the reasoning of this post that immediately started niggling at me, but it's subtle and I'm having trouble teasing out the underlying assumption. I want to say that you're taking "The purpose of consciousness is consciousness" as a given, when that is arguably false. Likewise, I want to accuse you of drawing causal arrows from consciousness to other modules of human mind design, which as far as I know is ruled out, evolutionarily speaking.
I offer this:
The "executive process" as you call it is part of the world-modeler. It is the world-modeling module that evolved in response to a very unique world-modeling challenge. There is a critical difference between sky-color and insult-vs-complement that you seem to be glossing over. A given wavelength of light always has the same properties. A given array of posture, facial expression, tone, etc. does not always map directly to the same social reality.
We can't choose to see a smile on a scowling face any more than we can choose to see a green sky, but unlike the sky, the same facial expression can mean vastly different things depending on context, because the causes underlying any given expression depend on a thing that is just as complicated as you are.
The "executive process" is how evolution solved the entirely new problem of adding other world-modelers to the world-model and that's what it does. If it is glitchy and unreliable, well, it is still very new. The very first functional wing to evolve probably wasn't all that good at producing lift, either.
↑ comment by [deleted] · 2015-04-27T06:19:41.395Z · LW(p) · GW(p)
I want to accuse you of drawing causal arrows from consciousness to other modules of human mind design, which as far as I know is ruled out, evolutionarily speaking.
Why would that be? Did evolution stop once man became conscious? Even if all the modules were there before consciousness arose that does not mean that evolution could not have given consciousness some sort of causal effects on some mind modules.
In fact, if consciousness did not have effects on our mind modules, what would it have an effect on?
↑ comment by EphemeralNight · 2015-04-28T18:47:30.740Z · LW(p) · GW(p)
Consciousness is the most recent module, and that does mean that. I'm sorry, I thought this was one point that wasn't even in dispute. It was laid out pretty clearly in the Evolution Sequence:
Complex adaptations take a very long time to evolve. First comes allele A, which is advantageous of itself, and requires a thousand generations to fixate in the gene pool. Only then can another allele B, which depends on A, begin rising to fixation. A fur coat is not a strong advantage unless the environment has a statistically reliable tendency to throw cold weather at you. Well, genes form part of the environment of other genes, and if B depends on A, B will not have a strong advantage unless A is reliably present in the genetic environment.
Evolutions Are Stupid (But Work Anyway)
↑ comment by [deleted] · 2015-04-28T21:04:05.584Z · LW(p) · GW(p)
Sorry, but no.
Yes, maybe some very similar forms of all the other modules which can still be found in our human mind design today were necessary for consciousness to arise. But this does not mean that the modules which are actually there now could not have evolved out of these modules because they work better for a conscious agent.
↑ comment by Cyan · 2015-04-28T20:11:04.254Z · LW(p) · GW(p)
Consciousness is the most recent module, and that does mean [that drawing causal arrows from consciousness to other modules of human mind design is ruled out, evolutionarily speaking.]
The causes of the fixation of a genotype in a population are distinct from the causal structures of the resulting phenotype instantiated in actual organisms.
↑ comment by Lumifer · 2015-04-28T20:30:09.090Z · LW(p) · GW(p)
First comes allele A, which is advantageous of itself, and requires a thousand generations to fixate in the gene pool. Only then can another allele B, which depends on A, begin rising to fixation.
That doesn't sound right to me.
First, fixation is much faster. The earliest known DNA sequence with the lactase persistence gene is 4300 years old. That's less than 200 generations ago and the gene looks to me to be quite fixed in Northern European populations.
Second, allele B can piggyback. Allele A spreads through children of allele A carriers. If some subpopulation of A carriers also has allele B, their children will also have both A and B and these children have even more of an evolutionary advantage than children of just A (but not B) carriers.
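A toy Wright-Fisher-style sketch of this piggybacking (my illustration; the population size, selection coefficients, and complete linkage are all assumptions): allele B, which only helps when A is present, starts in half of A's carriers and can rise alongside A rather than waiting for A to fixate first.

```python
import random

N, sA, sB = 1000, 0.05, 0.03    # population size; B's extra advantage applies only when A is present
pop = [("a", "-")] * N          # each individual carries a fully linked (A-locus, B-locus) pair
pop[:20] = [("A", "B")] * 10 + [("A", "-")] * 10   # A starts rare; half of its carriers also carry B

gen = 0
while 0 < sum(g[0] == "A" for g in pop) < N:       # stop when A is lost or fixed
    weights = [(1 + sA + (sB if g[1] == "B" else 0)) if g[0] == "A" else 1.0 for g in pop]
    pop = random.choices(pop, weights=weights, k=N)  # each offspring copies one parent's linked pair
    gen += 1

freq_A = sum(g[0] == "A" for g in pop) / N
freq_B = sum(g[1] == "B" for g in pop) / N
print(f"after {gen} generations: A frequency = {freq_A:.2f}, B frequency = {freq_B:.2f}")
```

In typical runs A fixes within a few hundred generations with B already at a substantial frequency alongside it, though with only twenty initial carriers drift sometimes eliminates one or both alleles instead.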