Posts

Minimum computation and data requirements for consciousness. 2010-08-23T23:53:50.656Z

Comments

Comment by daedalus2u on Minimum computation and data requirements for consciousness. · 2010-08-27T02:43:49.345Z · LW · GW

Inklesspen's argument (which you said you agreed with) was that my belief in a lack of personal identity continuity was incompatible with being unwilling to accept a painless death, and that this constitutes a fatal flaw in my argument.

If there are things you want to accomplish, and you believe the most effective way to accomplish them is by uploading what you believe will be a version of your identity into an electronic gizmo, all I can say is good luck with that. You are welcome to your beliefs.

In no way does that address Inklesspen's argument that my unwillingness to immediately experience a painless death somehow contradicts or disproves my belief in a lack of personal identity continuity, or constitutes a flaw in my argument. I don't associate my “identity” with my consciousness; I associate my identity with my body and especially with my brain, though it is coupled to the rest of it. That my consciousness is not the same from day to day is not an issue for me. My body very much is alive and is quite good at doing things. It would be a waste to kill it. That it is not static is actually quite a feature: I can learn and do new things.

I have an actual body with which I can do actual things and with which I am doing actual things. All that can be said about the uploading you want to do is that it is very hypothetical. There might be electronic gizmos in the future that might be able to hold a simulation of an identity that might be able to be extracted from a human brain and that electronic gizmo might then be able to do things.

Your belief that you will accomplish things once a version of your identity is uploaded into an electronic gizmo is about you and your beliefs. It is not in the slightest bit about me or my reasoning that a belief in personal identity continuity is an illusion.

That people profess a belief in an actual Heaven where they will receive actual rewards doesn't constitute evidence that such beliefs are not illusory either. Such people are usually unwilling to allow themselves to be killed to reach those rewards sooner. That unwillingness does not prove their beliefs are illusory, any more than a willingness to be killed would prove they were non-illusory. The members of the Heaven's Gate group believed they were uploading their identities to some kind of Mother Ship electronic gizmo, and they were willing to take cyanide to accelerate the process. Their willingness to take poison does not constitute evidence (to me) that their beliefs were not illusory.

Comment by daedalus2u on Minimum computation and data requirements for consciousness. · 2010-08-26T20:26:08.271Z · LW · GW

Yes. I would consider those states to be “unconscious”. I am not using “conscious” or “unconscious” as pejorative terms or as terms with any type of value, but purely as descriptive terms that describe the state of an entity. If an entity is not self-aware in the moment, then it is not conscious.

People are not self-aware of the data processing their visual cortex is doing (at least I am not). When you are not aware of the data processing you are doing, the outcome of that data processing is “transparent” to you; that is, the output is achieved without an understanding of the path by which it was achieved. Because you don't have the ability to influence the data processing your visual cortex is doing, the output is susceptible to optical illusions.

Dissociation is not uncommon. In thinking about it, I think I dissociate quite a bit, and that it is fairly easy for me to dissociate. I do my best intellectual work when I am in what I call a “dissociative focus”, where I really am quite oblivious to a lot of extraneous things and even to my physical state: hunger, fatigue, those kinds of things.

I think that entering a dissociative state is not uncommon, particularly under conditions of very high stress. I think there is a reason for that: under conditions of very high stress, all computational resources of the brain are needed to deal with whatever is causing that stress. Spending computational resources on being conscious or self-aware is a luxury that an entity can't afford while it is “running from a bear” (to use my favorite extreme stress state).

I haven't looked at the Living Luminously sequence carefully, but I think I mostly disagree with it as something to strive for. It is OK, and if that is what you want to do, that is fine, but I don't aspire to think that way. Trying to think that way would interfere with what I am trying to accomplish.

I see living while being extremely conscious of self (i.e., what I understand to be the luminous state) and being dissociated from being conscious as two extremes along a continuum: thinking with your “theory of mind” (the self-conscious luminous state) versus thinking with your “theory of reality” (what I consider to be the dissociative state). I discuss that in great detail on my blog about autism.

If you are not in a mode where you are thinking about entities, then you are not using your “theory of mind”. If you are thinking about things in purely non-anthropomorphic terms, you are not using your “theory of mind”.

I think these two different states are useful for thinking about different kinds of problems. Interpersonal problems, interactions with other people, and communication are best dealt with by the “theory of mind”. All the examples in the Seven Shining Stories are what I would consider pretty much pure theory-of-mind-type problems. Theory-of-reality-type problems are things like the traveling salesman problem, multiplying numbers, and other more algorithm-like tasks such as counting: problems where there is little or no interpersonal or communication component.

Comment by daedalus2u on Minimum computation and data requirements for consciousness. · 2010-08-26T19:17:10.664Z · LW · GW

I see this as analogous to what some religious people say when they are unable to conceive of a sense of morality or any code of behavior that does not come from their God.

If you are unable to conceive of a sense of purpose that is not attached to a personal sense of continued personal identity, I am not sure I can convince you otherwise.

But why you consider my ability to conceive of a sense of purpose without a belief in a continued sense of personal identity to be somehow a "flaw" in my reasoning is not something I quite understand.

Are you arguing that because some people "need" a personal sense of continued personal identity, reality "has to" be that way?

People made (and still make) similar arguments about the existence of God.

Comment by daedalus2u on Minimum computation and data requirements for consciousness. · 2010-08-26T02:02:13.872Z · LW · GW

Yes, if you are not aware of being conscious then you are unconscious. You may have the capacity to be conscious, but if you are not using that capacity, because you are asleep, are under anesthesia, or because you have sufficiently dissociated from being conscious, then you are not conscious at that moment.

There are states where people do “black out”, that is, where they seemingly function appropriately but have no memory later of those periods. Those states can occur due to drug use; they can also happen via a psychogenic process called a fugue state.

There is also the term semiconscious. Maybe that would be the appropriate term to use when an entity capable of consciousness is not using that capacity.

Comment by daedalus2u on Minimum computation and data requirements for consciousness. · 2010-08-25T21:14:40.291Z · LW · GW

If a being is not aware of being conscious, then it is not conscious no matter what else it is aware of.

I am not saying that all consciousness entails is being aware of being conscious, but it does at a minimum entail that. If an entity does not have self-awareness, then it is not conscious, no matter what other properties that entity has.

You are free to make up any hypothetical entities and states that you want, but the term “consciousness” has a generally recognized meaning. If you want to deviate from that meaning you have to tell me what you mean by the term, otherwise my default is the generally recognized meaning.

Could you give me a definition of "consciousness" that allows for being unaware of being conscious?

Comment by daedalus2u on Minimum computation and data requirements for consciousness. · 2010-08-25T18:37:12.985Z · LW · GW

It is your contention that an entity can be conscious without being aware that it is conscious?

There are entities that are not aware of being conscious. To me, if an entity is not aware of being conscious (i.e. is unconscious of being conscious), then it is unconscious.

By my understanding of the term, the one thing an entity must be aware of to be conscious is its own consciousness. I see that as an inherent part of the definition. I cannot conceive of a definition of “consciousness” that allows for a conscious entity to be unaware that it is conscious.

Could you give me a definition of "consciousness" that allows for being unaware of being conscious?

Comment by daedalus2u on Minimum computation and data requirements for consciousness. · 2010-08-25T03:11:59.892Z · LW · GW

perplexed, how do you know you do not have a consciousness detector?

Do you see because you use a light detector? Or because you use your eyes? Or because you learned what the word “see” means?

When you understand spoken language do you use a sound detector? A word detector? Do the parts of your brain that you use to decode sounds into words into language into meaning not do computations on the signals those parts receive from your ears?

The only reason you can think a thought is because there are neural structures that are instantiating that thought. If your neural structures were incapable of instantiating a thought, you would be unable to think that thought.

Many people are unable to think many thoughts. It takes many years to train a brain to be able to think about quantum mechanics. I am unable to think accurately about quantum mechanics. My brain does not have the neural structures to do so. My brain also does not have the neural structures to understand Chinese. If it did, I would be able to understand Chinese, which I cannot do.

There has to be a one-to-one correspondence between the neural structures that instantiate a mental activity and the ability to do that mental activity. The brain is not magic; it is chemistry and physics just like everything else. If a brain can do something it is because it has the structures that can do it.

Why is consciousness different from sight or hearing? If consciousness is something that can be detected, there need to be brain structures that are doing the detecting. If consciousness is not something that can be detected, then what is it that we are talking about? This is very basic stuff. I am just stating logical identities here. I don't understand where the disagreement is coming from.

Comment by daedalus2u on Minimum computation and data requirements for consciousness. · 2010-08-24T22:09:22.918Z · LW · GW

GuySrinivasan, I really can't figure out what is being meant.

In my next sentence I say I am not trying to describe all computations that are necessary, and in the sentence after that I start talking about entity detection computation structures being necessary.

First, an entity must have a “self detector”: a pattern-recognition computation structure which it uses to recognize its own state of being an entity and of being the same entity over time. If an entity is unable to recognize itself as an entity, then it can't be conscious that it is an entity.

I think that is a pretty clear description of a certain cognitive structure that requires computational resources for an entity to recognize itself.
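As a toy sketch of what I mean, assuming nothing about how brains actually implement it (all names here are invented for illustration):

```python
import random

class Entity:
    """A toy entity with a "self detector": a pattern recognizer that
    labels incoming signals as self-generated or external."""

    def __init__(self, name):
        self.name = name
        self.own_outputs = []            # data: a record of its own activity

    def act(self):
        signal = (self.name, random.random())
        self.own_outputs.append(signal)  # keeping this record costs memory
        return signal

    def detect_self(self, signal):
        # Pattern recognition: compare the observed signal against the
        # stored record of the entity's own activity. The record is data
        # and the comparison is computation; neither comes for free.
        return signal in self.own_outputs

e = Entity("e")
s = e.act()
assert e.detect_self(s)               # recognizes its own output as "self"
assert not e.detect_self(("x", 0.5))  # an external signal reads as "other"
```

Even in this trivial form, the detector needs stored data and a comparison step; remove either and self-recognition fails.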

What is it that cousin_it is disputing and wants me to provide evidence for? That an entity doesn't need a “self-detector” to recognize itself? That a “self-detector” doesn't require pattern recognition? That pattern recognition doesn't require computation?

I really don't understand. But some other people must have understood it, because they upvoted the comment; maybe some of those people could explain it to me.

Comment by daedalus2u on Minimum computation and data requirements for consciousness. · 2010-08-24T21:38:49.732Z · LW · GW

Yes, and 1, 2, 3, 4, 5, 6, and 7 all require data and computation resources.

And to compare a map with a territory one needs a map (i.e., data), a comparator (i.e., a pattern-recognition device), and the computational resources to compare the data with the territory using the comparator.

When one is thinking about internal states, the map, the territory and the comparator are all internal. That they are internal does not obviate the need for them.
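A minimal sketch of the point, with a made-up map and territory (the feature names are invented for illustration):

```python
# The map is data, the territory is sensed into more data, and the
# comparator is a computation over both.
territory = {"door": "open", "light": "off"}       # the world as sensed
internal_map = {"door": "closed", "light": "off"}  # the entity's stored model

def comparator(map_, sensed):
    """The pattern-recognition step: one comparison per feature."""
    return {key: map_[key] == sensed[key] for key in map_}

print(comparator(internal_map, territory))
# {'door': False, 'light': True} -- the mismatch is only detected because
# data (map and senses) and computation (the comparator) were both spent.
```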

Comment by daedalus2u on Minimum computation and data requirements for consciousness. · 2010-08-24T19:26:44.869Z · LW · GW

perplexed, if detecting consciousness in someone else requires data and computation, why is our own consciousness special such that it doesn't require data and computation to be detected? No one has presented any evidence or any arguments that our own consciousness is special. Until I see a reasonable argument otherwise, my default will be that my own consciousness is not special and that everyone else's consciousness is not special either.

I appreciate that some people do privilege their own consciousness. My interpretation of that self-privileging is that it is not based on any rational examination of the issue but merely on feelings. If there is a rational examination of the issue I would like to see it.

If every other instance of detecting consciousness requires data and pattern recognition, then why doesn't the self-detection of self-consciousness require data and pattern recognition?

If people are exhausted by a topic, they should not read posts on it. If people are afraid of getting caught in quicksand, they should stay away from it. If people find their intuition not useful, they should not rely on it.

When I asserted that self-detection of self-consciousness requires data and computation resources, I anticipated it being labeled a self-evident and/or obvious and/or trivial statement. To have it labeled as “opinion” is completely perplexing to me. To have that labeling as “opinion” upvoted means that multiple people must share it.

How can any type of cognition happen without data and computation resources? Any type of information processing requires data and computation resources. Even a dualism treatment posits mythical immaterial data and mythical immaterial computation resources to do the necessary information processing. To be asked for “evidence” that cognition requires computation resources is something I find bizarre. It is not something I know how to respond to. When multiple people need to see evidence that cognition requires computation resources, this may be the wrong forum for me to discuss such things.

Comment by daedalus2u on Minimum computation and data requirements for consciousness. · 2010-08-24T18:15:06.975Z · LW · GW

To be a car, a machine at a minimum must have wheels. Wheels are not sufficient to make a machine into a car.

To be conscious, an entity must be self-aware of self-consciousness. To be self-aware of self-consciousness, an entity must have a "self-consciousness detector". A self-consciousness detector requires data and computation resources to do the pattern recognition necessary to detect self-consciousness.

What else consciousness requires I don't know, but I know it must require detection of self-consciousness.

Comment by daedalus2u on Minimum computation and data requirements for consciousness. · 2010-08-24T15:50:57.083Z · LW · GW

My purpose in pointing this out was to say that yes, people today are making the same types of category errors Kelvin made: the mistaken belief that some types of objects are fundamentally not comparable (in Kelvin's case, living things and machines; in my example, computations by a sensory neural network and computations by a machine pattern-recognition system).

They are both doing computations and can both be compared as computing devices; they both need computation resources to accomplish the computations and data to do the computations on.

For either of them to detect something, they both need both data and computation resources, even when the thing being detected is consciousness. Why there is the need/desire to keep “consciousness” as a special category of things/objects for which the normal rules of sensory detection do not apply is not something I understand.

My experience has always been that if you look hard enough for errors you will find them. If someone wants to look for trivial errors and so discount whatever is present that is not trivial error, then discussion of difficult problems becomes impossible.

My motivation for my original post was not to do battle, but to discuss the computational requirements of consciousness and consciousness detection. If that is such a hot-button topic that people feel the need to attack me, my arguments, and my ineptness at text formatting, to pile on, and to vote my karma down to oblivion, then perhaps LW is not ready to discuss such things and I should move it to my blog.

Comment by daedalus2u on Minimum computation and data requirements for consciousness. · 2010-08-24T15:45:31.428Z · LW · GW

Is there something wrong with my interpretation of Stockholm Syndrome other than that it is not the “natural interpretation”? Is it inconsistent with anything known about Stockholm Syndrome, how people interact, or how humans evolved?

Would we consider it surprising if humans did have a mechanism to try to emulate a “green beard” if having a green beard became essential for survival?

We know that some people find many green-beard-type reasons for attacking and even killing other humans. Race, ethnicity, religion, sexual orientation, gender, and so on are all reasons for hating and even killing other humans. How do the victims prevent themselves from being victimized? Usually by obscuring their identity, by attempting to display the “green beard” the absence of which brings attack.

Stockholm Syndrome happens in a short period of time, so it is easier to study than the “poser” habits that occur over a lifetime. Is it fundamentally different, or is it just one point on a spectrum?

Comment by daedalus2u on Minimum computation and data requirements for consciousness. · 2010-08-24T13:27:54.501Z · LW · GW

Yes, and some people today don't realize that the brain does computations on sensory input in order to accomplish pattern recognition, and without that computation there is no pattern recognition and no perception. Of anything.

Comment by daedalus2u on Minimum computation and data requirements for consciousness. · 2010-08-24T13:22:48.271Z · LW · GW

I had read Mysterious Answers to Mysterious Questions. I think I do have an explanation that makes consciousness seem less mysterious and which does not introduce any additional mysteries. Unfortunately I seem to be the only one who appreciates that.

Maybe if I had started out to discuss the computational requirements of the perception of consciousness there would have been less objection. But I don't see any way to differentiate between perception of consciousness and consciousness. I don't think you can have one without the other.

Comment by daedalus2u on Minimum computation and data requirements for consciousness. · 2010-08-24T13:09:42.638Z · LW · GW

nawitus, my post was too long as it is. If I had included multiple discussions of multiple definitions of consciousness and qualia, you would either still be reading it or would have stopped because it was too long.

Comment by daedalus2u on Minimum computation and data requirements for consciousness. · 2010-08-24T12:49:03.845Z · LW · GW

With all due respect to Lord Kelvin, he personally knew of heavier than air flying machines. We now call them birds. He called them birds too.

Comment by daedalus2u on Minimum computation and data requirements for consciousness. · 2010-08-24T12:43:28.254Z · LW · GW

We can't “know for sure” because consciousness is a subjective experience. The only way you could “know for sure” would be if you simulated an entity and so knew from how you put the simulation together that the entity you were simulating did experience self-consciousness.

So how does this hypothetical biologist calibrate his consciousness scanner? Calibrate it so that he “knows for sure” that it is reading consciousness correctly? His degree of certainty in the output of his consciousness scanner is limited by his degree of certainty in his calibration standards. Even if it worked perfectly.

In order to be aware of something, you need to detect something. To detect something you need to receive sensory data and then process that data via pattern recognition into a detection or a non-detection.

To detect consciousness your hypothetical biologist needs a “consciousness scanner”. So does any would-be detector of any consciousness. That “consciousness scanner” has to have certain properties whether it is instantiated in electronics or in meat. Those properties include receipt of sufficient data and then pattern recognition on that data to determine a detection or a non-detection. That pattern recognition will be subject to type 1 errors and type 2 errors.
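Here is a toy model of such a scanner; the error rates and the calibration set are invented for illustration:

```python
import random

def scanner(truly_conscious, fp_rate=0.05, fn_rate=0.10):
    """A detector subject to type 1 (false positive) and
    type 2 (false negative) errors."""
    if truly_conscious:
        return random.random() > fn_rate   # may miss: type 2 error
    return random.random() < fp_rate       # may false-alarm: type 1 error

# "Calibration": run the scanner against entities whose status we *assume*
# we know. Any error in those assumed labels propagates into every later
# reading; the scanner can never be more certain than its standards.
standards = [True] * 500 + [False] * 500
hits = sum(scanner(status) == status for status in standards)
print(f"apparent accuracy: {hits / len(standards):.1%}")
```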

Comment by daedalus2u on Minimum computation and data requirements for consciousness. · 2010-08-24T03:43:10.600Z · LW · GW

I am talking about minimum requirements, not sufficient requirements.

I am not sure what you mean by "understand relevant features of its own source code".

I don't know any humans that I would consider conscious that don't fit the definition of consciousness that I am using. If you have a different definition I would be happy to consider it.

Comment by daedalus2u on Minimum computation and data requirements for consciousness. · 2010-08-24T03:31:15.980Z · LW · GW

Yvain, what I mean by illusion is:

perceptions not corresponding to objective reality due to defects in sensory information processing used as the basis for that perception.

Optical illusions are examples of perceptions that don't correspond to reality because of how our nervous system processes light signals. Errors in perception, either false positives or false negatives, are illusions.

In some of the meditative traditions there is the goal of "losing the self". I have never studied those traditions and don't know much about them. I do know about dissociation from PTSD.

There can be entities that are not self-aware. I think that most animals that don't recognize themselves in a mirror fit in the category of not recognizing themselves as entities. That was not the focus of what I wanted to talk about.

To be self-aware, an entity must have an entity detector that registers “self” upon exposure to certain stimuli.

Some animals do recognize other entities but don't recognize themselves as “self”. They perceive another entity in a mirror, not themselves.

Comment by daedalus2u on Minimum computation and data requirements for consciousness. · 2010-08-24T02:46:36.899Z · LW · GW

[Consciousness]: The subjective state of being self-aware that one is an autonomous entity that can differentially regulate what one is thinking about.

Comment by daedalus2u on Minimum computation and data requirements for consciousness. · 2010-08-24T02:42:01.723Z · LW · GW

No, there are useful things I want to accomplish with the remaining lifespan of the body I have. That there is no continuity of personal identity is irrelevant to what I can accomplish.

That continuity of personal identity is an illusion simply means that the goal of indefinite extension of personal identity is a useless goal that can never be achieved.

I don't doubt that a machine could be programmed to think it was the continuation of a flesh-and-blood entity. People have posited paper clip maximizers too.

Comment by daedalus2u on Minimum computation and data requirements for consciousness. · 2010-08-23T23:58:36.630Z · LW · GW

This is my first article on LW, so be gentle.

Comment by daedalus2u on Open Thread: July 2010 · 2010-08-02T12:53:21.961Z · LW · GW

I think the misconception is the belief that what is generally considered “quality of life” is correlated with things like affluence; it is not. People like to believe (pretend?) that it is, and by ever striving for more affluence feel that they are somehow improving their “quality of life”.

When someone is depressed, their “quality of life” is quite low. That “quality of life” can only be improved by resolving the depression, not by adding the bells and whistles of affluence.

How to resolve depression is not well understood. A large part of the problem is that people who have never experienced depression don't understand what it is and believe that things like more affluence will resolve it.

Comment by daedalus2u on Open Thread: July 2010 · 2010-08-01T17:58:07.780Z · LW · GW

Suicide rates are a measure of depression, not of how good life is. Depression can hit people even when they otherwise have a very good life.

Comment by daedalus2u on Open Thread, August 2010 · 2010-08-01T15:49:36.544Z · LW · GW

The framing of the end of life issue as a gain or a loss as in the monkey token exchange probably makes a gigantic difference in the choices made.

http://lesswrong.com/lw/2d9/open_thread_june_2010_part_4/2cnn?c=1

When you feel you are in a desperate situation, you will do desperate things and clutch at straws, even when you know those choices are irrational. I think this is the mindset behind the clutching at straws that quacks exploit with CAM, as in the Gonzalez Protocol for pancreatic cancer.

http://www.sciencebasedmedicine.org/?p=1545

It is actually worse than doing nothing, worse than doing what mainstream medicine recommends; but because there is the promise of complete recovery (even if it is a false promise), that is what people choose, based on their irrational aversion to risk.

Comment by daedalus2u on Open Thread: July 2010, Part 2 · 2010-07-30T18:32:32.135Z · LW · GW

This is how people with Asperger's or autism experience interacting with people who are neurotypically developed (for the most part).

Comment by daedalus2u on Open Thread: July 2010, Part 2 · 2010-07-30T02:49:21.668Z · LW · GW

I am not a dualist. I used the TM to avoid issues of quantum mechanics. TM equivalence is not compatible with a dualist view either.

Only a part of what the brain does is conscious. The visual cortex isn't conscious. The processing of signals from the retina is not under conscious control. That is why optical illusions work: the signal processing happens a certain way, and that way cannot be changed even when you consciously know that what you are seeing is counterfactual.

There are many aspects of brain information processing that are like this. Sound processing is like this: sounds are decoded and pattern-matched to communication symbols.

Since we know that the entity instantiating itself in our brain is not identical with the entity that was there a day ago, a week ago, a year ago, and will not be identical to the entity that will be there next year, why do we perceive there to be continuity of consciousness?

Is that illusion of continuity the same as the way the visual cortex fills in the blind spot on the retina? Is that illusion of continuity the same as pareidolia?

I suspect that the question of consciousness isn't so much why we experience consciousness, but why we experience a continuity of consciousness when we know there is no continuity.

Comment by daedalus2u on Open Thread: July 2010, Part 2 · 2010-07-30T00:33:38.319Z · LW · GW

Except human entities are dynamic objects, unlike a static object like a book. Books are not considered to be “alive” or “self-aware”.

If two humans can both be represented by TMs with different tapes, then one human can be turned into another human by feeding one tape in backwards and then feeding in the other tape frontwards. If one human can be turned into another by a purely mechanical process, how does the “life”, or “entity identity”, or “consciousness” change as that transformation is occurring?

I don't have an answer, I suspect that the problem is tied up in our conceptualization of what consciousness and identity actually is.

My own feeling is that consciousness is an illusion, and that illusion is what produces the illusion of identity continuity over a person's lifetime. Presumably there is an “identity module”, and that “identity module” is what self-identifies an individual as “the same” individual over time (not a complete one-to-one correspondence between entities, which we know does not happen), even as the individual changes. If that is correct, then change the “identity module” and you change the self-perception of identity.

Comment by daedalus2u on Open Thread: July 2010, Part 2 · 2010-07-29T22:50:01.597Z · LW · GW

SilasBarta, yes, I was thinking about purely classical entities, the kind of computers that we would make now out of classical components. You can make an identical copy of a classical object. If you accept substrate independence for entities, then you can't “dissolve” the question.

If Ebborians are classical entities, then exact copies are possible. An Ebborian can split and become two entities and accumulate two different sets of experiences. What if those two Ebborians then transfer memory files such that they now have identical experiences? (I appreciate this is not possible with biological entities because memories are not stored as discrete files).

Turing Machines are purely classical entities. They are all equivalent, except for the data fed into them. If humans can be represented by a TM, then all humans are identical except for the data fed into the TM that is simulating them. Where is this wrong?
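A minimal sketch of that claim, with a made-up two-instruction machine standing in for a universal TM:

```python
def universal(tape):
    """One fixed machine; its behavior depends only on the tape."""
    state = 0
    for instruction in tape:
        if instruction == "inc":
            state += 1
        elif instruction == "double":
            state *= 2
    return state

# Two "entities": the same machine, differing only in the data fed to it.
alice_tape = ["inc", "inc", "double"]
bob_tape = ["inc", "double", "inc"]
print(universal(alice_tape), universal(bob_tape))  # 4 3
```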

Comment by daedalus2u on Open Thread: July 2010, Part 2 · 2010-07-29T18:18:15.187Z · LW · GW

I am pretty new to LW, and have been looking for something and have been unable to find it.

What I am looking for is a discussion on when two entities are identical, and if they are identical, are they one entity or two?

The context for this is continuity of identity over time. Obviously an entity that has extra memories added is not identical to an entity without those memories, but if there is a transform that can be applied to the first entity (the transform of experience over time), then in one sense the second entity can be considered to be an older (and wiser) version of the earlier entity.

But the selection of the transform that changes the first entity into the second one is arbitrary. In principle there is a transform that will change any Turing equivalent into any other Turing equivalent. Is every entity that can be instantiated as a TM equivalent to every other TM entity?
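To illustrate that such a transform always exists (and how arbitrary it is), one can compute it mechanically; here is a toy version using edit operations between two tapes:

```python
import difflib

def transform(old, new):
    """Mechanically derive and apply the edits that turn one tape into
    another. Such a transform exists for any pair of tapes; its mere
    existence says nothing about the two being "the same entity"."""
    result = []
    matcher = difflib.SequenceMatcher(a=old, b=new)
    for tag, i1, i2, j1, j2 in matcher.get_opcodes():
        result.extend(old[i1:i2] if tag == "equal" else new[j1:j2])
    return result

young = list("entity at time t")
older = list("entity at time t+1, changed by experience")
assert transform(young, older) == older  # a transform always exists
```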

I appreciate this does not apply to entities instantiated in a biological format, because such substrates are not stable over time (even a few seconds). However, that does raise another problem: how can a human be “the same” entity over their lifetime?

Comment by daedalus2u on Open Thread: July 2010, Part 2 · 2010-07-29T00:07:22.284Z · LW · GW

I am disappointed. I have just started on LW, and found many of Roko's posts and comments interesting, consilient with my current thinking, and a useful bridge between aspects of LW that are less consilient. :(

Comment by daedalus2u on Metaphilosophical Mysteries · 2010-07-28T02:06:26.811Z · LW · GW

I think this is correct. Using my formulation, the Bayesian system is what I call a "theory of reality", and the timeless one is the "theory of mind", which I see as the trade-off along the autism spectrum.

Comment by daedalus2u on 3 Levels of Rationality Verification · 2010-07-26T23:48:42.498Z · LW · GW

Yes, thank you. Just one problem:

  • too obvious

and

  • too easy

Comment by daedalus2u on Some Thoughts Are Too Dangerous For Brains to Think · 2010-07-26T23:43:53.284Z · LW · GW

I see the problem of bigotry in terms of information and knowledge: I see bigotry as occurring when there is too little knowledge. I have quite an extensive blog post on this subject.

http://daedalus2u.blogspot.com/2010/03/physiology-behind-xenophobia.html

My conceptualization of this may seem contrived, but I give a much more detailed explanation on my blog along with multiple examples.

I see it as essentially the lack of an ability to communicate with someone that triggers xenophobia. As I see it, when two people meet and try to communicate, they do a “Turing test”, where they exchange information and try to see if the person they are communicating with is “human enough”; that is, human enough to communicate with, be friends with, trade with, or simply human enough not to kill.

What happens when you try to communicate is that you both use your “theory of mind”, which is what I call the communication protocols that translate the mental concepts you have in your brain into the data stream of language that you transmit: sounds, gestures, facial expressions, tone of voice, accents, etc. If the two “theories of mind” are compatible, then communication can proceed at a very high data rate, because the two theories of mind do so much data compression to fit the mental concepts into the puny data stream of language and to then extract them from the data stream.

However, if the two theories of mind are not compatible, then the error rate goes up, and then, via the uncanny valley effect, xenophobia is triggered. This initial xenophobia is a feeling and so is morally neutral. How one then acts is not morally neutral. If you seek to understand the person who has triggered the xenophobia, then your theory of mind will self-modify, eventually you will be able to understand the person, and the xenophobia will go away. If you seek not to understand the individual, or block that understanding, then the xenophobia will remain.

It is exactly analogous to Nietzsche's quote “if you look into the abyss, the abyss looks back into you”. We can only perceive something if we have pattern recognition for that something instantiated in our neural networks. If we don't have the neuroanatomy to instantiate an idea, we can't perceive the idea, we can't even think the idea. To see into the abyss, you have to have a map of the abyss in your visual cortex to decode the image of the abyss that is being received on your retina.

Bigots as a rule are incapable of understanding the objects of their bigotry (I am not including self-loathing here because that is a special case), and it shows: they attribute all kinds of crazy, wild, and completely non-realistic thinking processes to the objects of their bigotry. I think this was the reason why many invader cultures committed genocide on native cultures by taking children away from natives and fostering them with the invader culture (examples: the US, Canada, Australia) (I go into more detail on that).

What bigots often do is make up reasons out of pure fantasy to justify the hatred they feel toward the objects of their bigotry. The Blood Libel against the Jews is a good example. This was the lie that Jews used the blood of Christians in Passover rituals. This could not be correct: Passover long predated Christianity, blood is never kosher, human blood is never kosher, and no observant Jew could ever use human blood in any religious ceremony. It never happened; it was a total lie. A lie used to justify the hatred that some Christians felt toward Jews. The hate came first; the lie was used to justify the feelings of hatred.

Bigots as a rule are afraid of associating with the objects of their bigotry because they will then come to understand them. The term “xenophobia” is quite correct. There is a fear of associating with the other because then some of “the other” will rub off on you and you will necessarily become more “other-like”. You will have a map that understands “the other” in your neuroanatomy.

In one sense, to the bigot, understanding “the other” is a “dangerous thought” because it changes the bigot's utility function such that certain individuals are no longer so low on the social hierarchy as to be treated as non-humans.

There are some thoughts that are dangerous to humans. These activate the “fight or flight” state in an uncontrolled manner, and that can be lethal. This usually requires a lot of priming (years); there are too many safeties that kick in for it to happen by accident. I think this is what Kundalini kindling is. For the most part there isn't enough direct coupling between the part of the brain that thinks thoughts and the part that controls the stuff that keeps you alive. There is some, and it can be triggered in a heartbeat when you are being chased by a bear, but there is lots of feedback via feelings before you get to dangerous levels. I don't recommend trying to work yourself into that state, because the safeties do get turned off and that is quite dangerous (unless a bear is actually chasing you).

Drugs of abuse can trigger the same things, which is one of the reasons they are so dangerous.

Comment by daedalus2u on 3 Levels of Rationality Verification · 2010-07-26T23:15:44.578Z · LW · GW

Thanks, I was trying to make a list; maybe I will figure it out. I just joined and am trying to focus on getting up to speed on the ideas; the syntax of formatting things is more difficult for me and less rewarding.

Comment by daedalus2u on Some Thoughts Are Too Dangerous For Brains to Think · 2010-07-26T23:06:05.058Z · LW · GW

I disagree. I think there is the functional equivalent of a “social co-processor”, what I see as the fundamental trade-off along the autism spectrum: the trading of a “theory of mind” (necessary for good and nuanced communication with neurotypically developing individuals) against a “theory of reality” (necessary for good ability at tool making and tool using).

http://daedalus2u.blogspot.com/2008/10/theory-of-mind-vs-theory-of-reality.html

Because the maternal pelvis is limited in size, the infant brain is limited in size at birth (in the wild, still ~1% of women die per childbirth due to cephalopelvic disproportion). The “best” time to program the fundamental neuroanatomy of the brain is in utero, during the first trimester, when the fundamental neuroanatomy of the brain is developing and when the epigenetic programming of all the neurons in the brain is occurring.

The two fundamental human traits, language and tool making and tool using, both require a large brain with substantial plasticity over the individual's lifetime. But other than that they are pretty much orthogonal. I suspect there has been evolutionary pressure to optimize the neuroanatomy of the human infant brain at birth so as to optimize the neurological tasks that brain is likely to need to do over that individual's lifetime.

Comment by daedalus2u on Contrived infinite-torture scenarios: July 2010 · 2010-07-26T22:08:54.424Z · LW · GW

For me, essentially zero; that is, I would act (or attempt to act) as if I had zero credence that I was in a rescue sim.

Comment by daedalus2u on 3 Levels of Rationality Verification · 2010-07-26T16:08:21.621Z · LW · GW

Test for data, factual knowledge, and counterfactual knowledge. True rationalists will have less counterfactual knowledge than non-rationalists because they will have filtered it out. Non-rationalists will have more false data because their counterfactual knowledge will feed back and cause them to believe that things that are false are actually true; for example, that Iraq or Iran was involved in 9/11.

What you really want to measure is the relative proportion of factual and counterfactual knowledge someone has, and in what particular areas. Then including areas like religion, medicine, alternative medicine, and politics in the testing space is advantageous because then you can see where the idea space is that the individuals are most non-rational in.

This can be tricky because many individuals are extremely invested in their counterfactual knowledge and will object to it being identified as counterfactual. A lot of fad-driven science is based on counterfactual knowledge, but the faddists don't want to acknowledge that.

A way to test this would be to see how well people can differentiate correct facts (data) from factual knowledge (based on and consistent with only data) from counterfactual knowledge (based on false facts and not consistent with correct facts) from opinion consistent with facts or opinion consistent with false facts.

An example: in the neurodegenerative disease of Alzheimer's, there is the association of the accumulation of amyloid with dementia. It remains not established whether amyloid is a cause, an effect, or merely associated with dementia. However, there have been studies where amyloid has been removed via vaccination against amyloid, with clearing of amyloid by the immune system but no improvement.

I imagine a list of a very large number of statements to be labeled as:

  1. true (>99% likelihood)
  2. false (>99% likelihood to be false) [edited to improve definition of false]
  3. opinion based on true facts
  4. opinion based on false ideas
  5. no one knows
  6. I don't know

A list of some examples:

  • Iraq caused 9/11: 2
  • WMD were found in Iraq: 2
  • Amyloid is found in Alzheimer's: 1
  • Amyloid causes Alzheimer's: 2 (this happens to be a field I am working in, so I have non-public knowledge as to the real cause)
  • Greenhouse gases are causing GW: 1
  • Vaccines cause autism: 2
  • Acupuncture is a placebo: 1
  • There is life on Mars: 5

You don't want to test for obscure things, you want to test for common things that are believed but which are wrong. I think you also want to explicitly tell people that you are testing them for rationality, so they can put themselves into “rational-mode” (a state that is not always socially acceptable).

The table-like lists look fine in the edit box but not fine once I post. :(

Comment by daedalus2u on Open Thread: July 2010, Part 2 · 2010-07-26T13:15:53.935Z · LW · GW

The issues that are dealt with in psychotherapy are fundamentally non-rational issues. Rational issues are trivial to deal with (for people who are rationalists). The substrate of the issues dealt with in psychotherapy is feelings, not thoughts.

I see feelings as an analog component of the human utility function. That analog component affects the gain and feedback in the non-analog components. The feedback by which thoughts affect feelings is slow and tenuous and takes a long time and considerable neuronal remodeling. That is why psychotherapy takes a long time: the neuronal remodeling necessary to affect feelings is much slower than the neuronal remodeling that affects thoughts.

A common response to trauma is to dissociate and suppress the coupling between feelings and thoughts. The easiest and most reliable way to do this is to not have feelings because feelings that are not felt cannot be expressed and so cannot be observed and so cannot be used by opponents as a basis of attack. I think this is the basis of the constricted affect of PTSD.

Comment by daedalus2u on Open Thread: July 2010, Part 2 · 2010-07-26T13:13:31.067Z · LW · GW

I have exactly the same problem. I think I understand where mine comes from, from being abused by my older siblings. I have Asperger's, so I was an easy target. I think they would sucker me in by being nice to me, then when I was more vulnerable whack me psychologically (or otherwise). It is very difficult for me to accept praise of any sort because it reflexively puts me on guard and I become hypersensitive.

You can't get psychotherapy from a friend; it doesn't work and can't work, because the friendship dynamic gets in the way (from both directions). A good therapist can help a great deal, but that therapist needs to be not connected to your social network.

Comment by daedalus2u on Contrived infinite-torture scenarios: July 2010 · 2010-07-26T00:16:48.197Z · LW · GW

Human utility functions change all the time. They are usually not easily changed through conscious effort, but drugs can change them quite readily, for example exposure to nicotine changes the human utility function to place a high value on consuming the right amount of nicotine. I think humans place a high utility on the illusion that their utility function is difficult to change and an even higher utility in rationalizing false logical-seeming motivations for how they feel. There are whole industries (tobacco, advertising, marketing, laws, religions, brainwashing, etc.) set up to attempt to change human utility functions.

Human utility functions do change over time, but they have to, because humans have needs that vary with time. Inhaling has to be followed by exhaling, ingesting food has to be followed by excretion of waste, being awake has to be followed by being asleep. Also, humans evolved as biological entities; their utility function evolved so as to enhance reproduction and survival of the organism. There are plenty of evolved “back doors” in human utility functions that can be used to hack into and exploit human utility functions (as the industries mentioned earlier do).

I think that human utility functions are not easily modified in certain ways because of the substrate they are instantiated in, biological tissues, and because they evolved; not because humans don't want to modify their utility function. They are easily modified in some ways (the nicotine example) for the same reason. I think the perceived inconsistency in human utility functions more relates to the changing needs of their biological substrate and its limitations rather than poor specification of the utility function.

Since an AI is artificial, it would have an artificial utility function. Since even an extremely powerful AI will still have finite resources (including computational resources), an efficient allocation of those resources is a necessary part of any reasonable utility function for that AI. If the resources the AI has change over time, then the utility function the AI uses to allocate those resources has to change over time also. If the AI can modify its own utility function (optimal, but not strictly necessary for it to match its utility function to its available resources), reducing contradictory and redundant allocations of resources is what a reasonable utility function would do.

Comment by daedalus2u on But Somebody Would Have Noticed · 2010-07-25T20:56:13.568Z · LW · GW

I happen to work with someone who was working on his PhD thesis at MIT and found this gigantic peak in his mass spec where C-60 was, but didn't pursue it because he didn't have time.

Comment by daedalus2u on A speculation on Near and Far Modes · 2010-07-25T18:57:08.907Z · LW · GW

I would really like an answer to this question because it is the predicament that I am quite sure I find myself in. I can't get people to pay enough attention to even tell me where I am wrong. :(

Comment by daedalus2u on A speculation on Near and Far Modes · 2010-07-25T13:48:25.687Z · LW · GW

When the ToMs don't match, I think it triggers xenophobia.

http://daedalus2u.blogspot.com/2010/03/physiology-behind-xenophobia.html

Effectively when people meet and try to communicate, they do a Turing Test, and if the error rate is too high, it triggers feelings of xenophobia via the uncanny valley effect. If you allow your ToM to change to accommodate and understand the person you feel xenophobia for, then the xenophobia will go away. If you don't, then the feelings of xenophobia remain. The decision to allow your ToM to change is what differentiates a non-racist from a racist.

Comment by daedalus2u on A speculation on Near and Far Modes · 2010-07-25T13:47:50.736Z · LW · GW

I think this idea is essentially correct, but instead of near-mode vs far-mode, I think the balance is more between a "theory of mind" and a "theory of reality" which I have written about.

http://daedalus2u.blogspot.com/2008/10/theory-of-mind-vs-theory-of-reality.html

The only things that can be communicated are mental concepts. To communicate a concept, the concept needs to be converted into the communication data stream using a communication protocol that can be decoded at the other end of the communication link. The communication protocols that convert mental concepts into language (and back) are what I call the “theory of mind”. A good ToM is necessary for communication, but it can only be used for communicating with a ToM that matches it. If the two ToMs don't match, then they can't be used for communication.
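A toy sketch of the idea, with made-up protocol tables standing in for two theories of mind:

```python
# Each ToM is modeled as a codec: a table that compresses mental concepts
# into a small symbol stream and decompresses on the other end.
protocol_a = {"greeting": "hi", "threat": "grr", "food": "yum"}
protocol_b = {"greeting": "hi", "threat": "yum", "food": "grr"}  # mismatched

def send(concepts, protocol):
    return [protocol[concept] for concept in concepts]

def receive(stream, protocol):
    inverse = {word: concept for concept, word in protocol.items()}
    return [inverse[word] for word in stream]

message = ["greeting", "food", "threat"]
heard = receive(send(message, protocol_a), protocol_b)
errors = sum(a != b for a, b in zip(message, heard)) / len(message)
print(f"error rate: {errors:.0%}")  # 67%; matched protocols would give 0%
```

With matched tables the stream decodes perfectly; with mismatched tables the error rate climbs even though both sides are "speaking" fluently.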

Comment by daedalus2u on Fight Zero-Sum Bias · 2010-07-25T01:14:24.768Z · LW · GW

I think the 416,000 US military dead and their families would disagree that the war made them better off.

Comment by daedalus2u on Contrived infinite-torture scenarios: July 2010 · 2010-07-25T01:05:43.042Z · LW · GW

To me a reasonable utility function has to have a degree of self-consistency. A reasonable utility function wouldn't value both doing and undoing the same action simultaneously.

If an entity is using a utility function to determine its actions, then for every action the entity can perform, its utility function must be able to determine a utility value which then determines whether the entity does the action or not. If the utility function does not return a value, then the entity still has to act or not act, so the entity still has a utility function for that action (non-action).

The purpose of a utility function is to inform the entity so it seeks to perform actions that result in greater utility. A utility function that is self-contradictory defeats the whole purpose of a utility function. While an arbitrary utility function can in principle occur, an intelligent entity with a self-contradictory utility function would achieve greater utility by modifying its utility function until it was less self-contradictory.

It is probably not possible to have a utility function that is both complete (in that it returns a utility for each action the entity can perform) and consistent (in that it returns a single value for the utility of each action the entity can perform), except for very simple entities. An entity complex enough to instantiate arithmetic is complex enough to invoke Gödel's theorem. An entity can substitute a random choice when its utility function does not return a value, but that will result in sub-optimal results.
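A toy sketch of that last point; the actions and utility values are invented for illustration:

```python
import random

def utility(action):
    """An incomplete utility function: returns None for actions it
    cannot value."""
    table = {"rest": 1.0, "eat": 3.0, "touch_fire": -5.0}
    return table.get(action)

def should_do(action):
    u = utility(action)
    if u is None:                      # incomplete: fall back on a random
        return random.random() < 0.5   # choice, generally sub-optimal
    return u > 0                       # otherwise act iff utility is positive

for action in ["eat", "touch_fire", "juggle"]:
    print(action, should_do(action))
```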

In the example that FAWS used, a utility function that seeks to annoy me as much as possible is inconsistent with the entity being an omnipotent AI that can simulate something as complex as me, an entity which can instantiate arithmetic. The only annoyance the AI has caused me is a -1 karma, which to me is less than a single dust mote in the eye.

Comment by daedalus2u on Contrived infinite-torture scenarios: July 2010 · 2010-07-24T19:05:52.639Z · LW · GW

I agree, if the utility function were unknown and arbitrary. But an AI that has already done 3^^^3 simulations and believes it then derives further utility from doing 3^^^3+1 simulations, while sending (for the 3^^^3+1th time) an avatar to influence the entities it is simulating through intimidation and fear, while offering no rationale for those fears, and doing so on a website inhabited by individuals attempting to be ever more rational, does not have an unknown and arbitrary utility function.

I don't think there is any reasonable utility function that is consistent with the actions the AI is claiming to have done. There may be utility functions that are consistent with those actions, but an AI exhibiting one of those utility functions could not be an AI that I would consider effectively omnipotent.

An omnipotent AI would know that, so this AI cannot be omnipotent and so is lying.

Comment by daedalus2u on Contrived infinite-torture scenarios: July 2010 · 2010-07-24T15:55:03.321Z · LW · GW

I deduce you are lying.

If you were an AI and had simulated me 3^^^3 times, there would be no utility in running my simulation a 3^^^3+1th time, because it would simply be a repetition of an earlier case. Either you don't appreciate this and are running the simulation again anyway, or you and your simulation of me are so imperfect that you are unable to appreciate that I appreciate it. In the most charitable case, I can deduce you are far from omnipotent.

That must be quite torturous for you, to have a lowly simulation deduce your feet of clay.