Friendly AI and the limits of computational epistemology

post by Mitchell_Porter · 2012-08-08T13:16:27.269Z · LW · GW · Legacy · 148 comments

Very soon, Eliezer is supposed to start posting a new sequence, on "Open Problems in Friendly AI". After several years in which the Singularity Institute's activities were dominated by the topic of human rationality, this ought to mark the beginning of a new phase, one in which it is visibly working on artificial intelligence once again. If everything comes together, then it will now be a straight line from here to the end.

I foresee that, once the new sequence gets going, it won't be that easy to question the framework in terms of which the problems are posed. So I consider this my last opportunity for some time to set out an alternative big picture. It's a framework in which all those rigorous mathematical and computational issues still need to be investigated, so a lot of "orthodox" ideas about Friendly AI should carry across. But the context is different, and it makes a difference.

Begin with the really big picture. What would it take to produce a friendly singularity? You need to find the true ontology, find the true morality, and win the intelligence race. For example, if your Friendly AI was to be an expected utility maximizer, it would need to model the world correctly ("true ontology"), value the world correctly ("true morality"), and it would need to outsmart its opponents ("win the intelligence race").

Now let's consider how SI will approach these goals.

The evidence says that the working ontological hypothesis of SI-associated researchers will be timeless many-worlds quantum mechanics, possibly embedded in a "Tegmark Level IV multiverse", with the auxiliary hypothesis that algorithms can "feel like something from inside" and that this is what conscious experience is.

The true morality is to be found by understanding the true decision procedure employed by human beings, and idealizing it according to criteria implicit in that procedure. That is, one would seek to understand conceptually the physical and cognitive causation at work in concrete human choices, both conscious and unconscious, with the expectation that there will be a crisp, complex, and specific answer to the question "why and how do humans make the choices that they do?" Undoubtedly there would be some biological variation, and there would also be significant elements of the "human decision procedure", as instantiated in any specific individual, which are set by experience and by culture, rather than by genetics. Nonetheless one expects that there is something like a specific algorithm or algorithm-template here, which is part of the standard Homo sapiens cognitive package and biological design; just another anatomical feature, particular to our species.

Having reconstructed this algorithm via scientific analysis of the human genome, brain, and behavior, one would then idealize it using its own criteria. This algorithm defines the de facto value system that human beings employ, which is not necessarily the value system they would wish to employ; yet human self-dissatisfaction itself arises from the use of this algorithm to judge ourselves, so the algorithm contains the seeds of its own improvement. The value system of a Friendly AI is to be obtained from the recursive self-improvement of the natural human decision procedure.

Finally, this is all for naught if seriously unfriendly AI appears first. It isn't good enough just to have the right goals; you must be able to carry them out. In the global race towards artificial general intelligence, SI might hope to "win" either by being the first to achieve AGI, or by having its prescriptions adopted by those who do first achieve AGI. They have some in-house competence regarding models of universal AI like AIXI, and they have many contacts in the world of AGI research, so they're at least engaged with this aspect of the problem.

Upon examining this tentative reconstruction of SI's game-plan, I find I have two major reservations. The big one, and the one most difficult to convey, concerns the ontological assumptions. In second place is what I see as an undue emphasis on the idea of outsourcing the methodological and design problems of FAI research to uploaded researchers and/or a proto-FAI which is simulating or modeling human researchers. This is supposed to be a way to finesse philosophical difficulties like "what is consciousness anyway"; you just simulate some humans until they agree that they have solved the problem. The reasoning goes that if the simulation is good enough, it will be just as good as if ordinary non-simulated humans solved it.

I also used to have a third major criticism, that the big SI focus on rationality outreach was a mistake; but it brought in a lot of new people, and in any case that phase is ending, with the creation of CFAR, a separate organization. So we are down to two basic criticisms.

First, "ontology". I do not think that SI intends to just program its AI with an apriori belief in the Everett multiverse, for two reasons. First, like anyone else, their ventures into AI will surely begin with programs that work within very limited and more down-to-earth ontological domains. Second, at least some of the AI's world-model ought to be obtained rationally. Scientific theories are supposed to be rationally justified, e.g. by their capacity to make successful predictions, and one would prefer that the AI's ontology results from the employment of its epistemology, rather than just being an axiom; not least because we want it to be able to question that ontology, should the evidence begin to count against it.

For this reason, although I have campaigned against many-worlds dogmatism on this site for several years, I'm not especially concerned about the possibility of SI producing an AI that is "dogmatic" in this way. For an AI to independently assess the merits of rival physical theories, the theories would need to be expressed with much more precision than they have been in LW's debates, and the disagreements about which theory is rationally favored would be replaced with objectively resolvable choices among exactly specified models.
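As a concrete illustration of what "objectively resolvable choices among exactly specified models" could look like in practice, here is a minimal sketch of Bayesian model comparison between two fully specified models. The models and data are invented purely for illustration; nothing here stands in for actual interpretations of quantum mechanics.

```python
# Minimal sketch: once rival theories are pinned down as exactly specified
# probabilistic models, choosing between them on data becomes mechanical.
# The "theories" here are just two hypothetical coin biases.
import math

def log_likelihood(p_heads, data):
    """Log-likelihood of a flip sequence under a fixed heads-probability."""
    return sum(math.log(p_heads if flip == "H" else 1 - p_heads) for flip in data)

data = ["H", "H", "T", "H", "T", "H", "H", "H"]
theory_a = 0.5   # hypothetical theory A: fair coin
theory_b = 0.7   # hypothetical theory B: biased coin

# Positive log Bayes factor favors B; negative favors A.
log_bayes_factor = log_likelihood(theory_b, data) - log_likelihood(theory_a, data)
print(f"log Bayes factor (B over A): {log_bayes_factor:.3f}")
```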

The real problem, which is not just SI's problem, but a chronic and worsening problem of intellectual culture in the era of mathematically formalized science, is a dwindling of the ontological options to materialism, platonism, or an unstable combination of the two, and a similar restriction of epistemology to computation.

Any assertion that we need an ontology beyond materialism (or physicalism or naturalism) is liable to be immediately rejected by this audience, so I shall explain straight away what I mean. It's just the usual problem of "qualia". There are qualities which are part of reality - we know this because they are part of experience, and experience is part of reality - but which are not part of our physical description of reality. The problematic "belief in materialism" is actually the belief in the completeness of current materialist ontology, a belief which prevents people from seeing any need to consider radical or exotic solutions to the qualia problem. There is every reason to think that the world-picture arising from a correct solution to that problem will still be one in which you have "things with states" causally interacting with other "things with states", and a sensible materialist shouldn't find that objectionable.

What I mean by platonism is an ontology which reifies mathematical or computational abstractions, and says that they are the stuff of reality. Thus assertions that reality is a computer program, or a Hilbert space. Once again, the qualia are absent; but in this case, instead of the deficient ontology being based on supposing that there is nothing but particles, it's based on supposing that there is nothing but the intellectual constructs used to model the world.

Although the abstract concept of a computer program (the abstractly conceived state machine which it instantiates) does not contain qualia, people often treat programs as having mind-like qualities, especially by imbuing them with semantics - the states of the program are conceived to be "about" something, just like thoughts are. And thus computation has been the way in which materialism has tried to restore the mind to a place in its ontology. This is the unstable combination of materialism and platonism to which I referred. It's unstable because it's not a real solution, though it can live unexamined for a long time in a person's belief system.

An ontology which genuinely contains qualia will nonetheless still contain "things with states" undergoing state transitions, so there will be state machines, and consequently, computational concepts will still be valid; they will still have a place in the description of reality. But the computational description is an abstraction; the ontological essence of the state plays no part in this description; only its causal role in the network of possible states matters for computation. The attempt to make computation the foundation of an ontology of mind is therefore proceeding in the wrong direction.

But here we run up against the hazards of computational epistemology, which is playing such a central role in artificial intelligence. Computational epistemology is good at identifying the minimal state machine which could have produced the data. But it cannot by itself tell you what those states are "like". It can only say that X was probably caused by a Y that was itself caused by Z.
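To make the point concrete, here is a toy sketch (my own illustration, not anything SI proposes) of computational epistemology in the narrow sense described above: a brute-force search for the smallest deterministic state machine that reproduces an observation sequence. Note that the recovered states are opaque labels, identified only by their causal roles.

```python
# Toy "computational epistemology": find the smallest deterministic machine
# (each state emits a symbol and has a single successor) that reproduces the
# observed sequence. The inferred states carry no intrinsic character; they
# are defined entirely by how they cause one another.
from itertools import product

def smallest_machine(observations, max_states=5):
    symbols = sorted(set(observations))
    for n in range(1, max_states + 1):
        for trans in product(range(n), repeat=n):     # successor of each state
            for emit in product(symbols, repeat=n):   # symbol emitted by each state
                state, ok = 0, True
                for obs in observations:
                    if emit[state] != obs:
                        ok = False
                        break
                    state = trans[state]
                if ok:
                    return n, trans, emit
    return None

# "A B A B A B" is explained by a two-state machine; the explanation says only
# that state 0 causes state 1 and vice versa, nothing about what they are like.
print(smallest_machine(["A", "B", "A", "B", "A", "B"]))
```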

Among the properties of human consciousness are knowledge that something exists, knowledge that consciousness exists, and a long string of other facts about the nature of what we experience. Even if an AI scientist employing a computational epistemology managed to produce a model of the world which correctly identified the causal relations between consciousness, its knowledge, and the objects of its knowledge, the AI scientist would not know that its X, Y, and Z refer to, say, "knowledge of existence", "experience of existence", and "existence". The same might be said of any successful analysis of qualia, knowledge of qualia, and how they fit into neurophysical causality.

It would be up to human beings - for example, the AI's programmers and handlers - to ensure that entities in the AI's causal model were given appropriate significance. And here we approach the second big problem, the enthusiasm for outsourcing the solution of hard problems of FAI design to the AI and/or to simulated human beings. The latter is a somewhat impractical idea anyway, but here I want to highlight the risk that the AI's designers will have false ontological beliefs about the nature of mind, which are then implemented a priori in the AI. That strikes me as far more likely than implanting a wrong a priori assumption about physics; computational epistemology can discriminate usefully between different mathematical models of physics, because it can judge one state machine model as better than another, and current physical ontology is essentially one of interacting state machines. But as I have argued, not only must the true ontology be deeper than state-machine materialism, there is no way for an AI employing computational epistemology to bootstrap to a deeper ontology.

In a phrase: to use computational epistemology is to commit to state-machine materialism as your a priori ontology. And the problem with state-machine materialism is not that it models the world in terms of causal interactions between things-with-states; the problem is that it can't go any deeper than that, yet apparently we can. Something about the ontological constitution of consciousness makes it possible for us to experience existence, to have the concept of existence, to know that we are experiencing existence, and similarly for the experience of color, time, and all those other aspects of being that fit so uncomfortably into our scientific ontology.

It must be that the true epistemology, for a conscious being, is something more than computational epistemology. And maybe an AI can't bootstrap its way to knowing this expanded epistemology - because an AI doesn't really know or experience anything, only a consciousness, whether natural or artificial, does those things - but maybe a human being can. My own investigations suggest that the tradition of thought which made the most progress in this direction was the philosophical school known as transcendental phenomenology. But transcendental phenomenology is very unfashionable now, precisely because of a priori materialism. People don't see what "categorial intuition" or "adumbrations of givenness" or any of the other weird phenomenological concepts could possibly mean for an evolved Bayesian neural network; and they're right, there is no connection. But the idea that a human being is a state machine running on a distributed neural computation is just a hypothesis, and I would argue that it is a hypothesis in contradiction with so much of the phenomenological data, that we really ought to look for a more sophisticated refinement of the idea. Fortunately, 21st-century physics, if not yet neurobiology, can provide alternative hypotheses in which complexity of state originates from something other than concatenation of parts - for example, from entanglement, or from topological structures in a field. In such ideas I believe we see a glimpse of the true ontology of mind, one which from the inside resembles the ontology of transcendental phenomenology; which in its mathematical, formal representation may involve structures like iterated Clifford algebras; and which in its biophysical context would appear to be describing a mass of entangled electrons in that hypothetical sweet spot, somewhere in the brain, where there's a mechanism to protect against decoherence.

Of course this is why I've talked about "monads" in the past, but my objective here is not to promote neo-monadology; that's something I need to take up with neuroscientists and biophysicists and quantum foundations people. What I wish to do here is to argue against the completeness of computational epistemology, and to caution against the rejection of phenomenological data just because it conflicts with state-machine materialism or computational epistemology. This is an argument and a warning that should be meaningful for anyone trying to make sense of their existence in the scientific cosmos, but it has a special significance for this arcane and idealistic enterprise called "friendly AI". My message for friendly AI researchers is not that computational epistemology is invalid, or that it's wrong to think about the mind as a state machine, just that all that isn't the full story. A monadic mind would be a state machine, but ontologically it would be different from the same state machine running on a network of a billion monads. You need to do the impossible one more time, and make your plans bearing in mind that the true ontology is something more than your current intellectual tools allow you to represent.

Comments sorted by top scores.

comment by Cyan · 2012-08-09T04:15:17.207Z · LW(p) · GW(p)

Upvoted for the accurate and concise summary of the big picture according to SI.

There are qualities which are part of reality - we know this because they are part of experience, and experience is part of reality - but which are not part of our physical description of reality.

This continues to strike me as a category error akin to thinking that our knowledge of integrated circuit design is incomplete because we can't use it to account for Java classes.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-08-10T16:39:43.205Z · LW(p) · GW(p)

I have been publicly and repeatedly skeptical of any proposal to make an AI compute the answer to a philosophical question you don't know how to solve yourself, not because it's impossible in principle, but because it seems quite improbable and definitely very unreliable to claim that you know that computation X will output the correct answer to a philosophical problem and yet you've got no idea how to solve it yourself. Philosophical problems are not problems because they are well-specified and yet too computationally intensive for any one human mind. They're problems because we don't know what procedure will output the right answer, and if we had that procedure we would probably be able to compute the answer ourselves using relatively little computing power. Imagine someone telling you they'd written a program requiring a thousand CPU-years of computing time to solve the free will problem.

And once again, I expect that the hardest part of the FAI problem is not "winning the intelligence race" but winning it with an AI design restricted to the much narrower part of the cognitive space that integrates with the F part, i.e., all algorithms must be conducive to clean self-modification. That's the hard part of the work.

Replies from: Wei_Dai, None
comment by Wei Dai (Wei_Dai) · 2012-08-17T17:10:23.274Z · LW(p) · GW(p)

What do you think the chances are that there is some single procedure that can be used to solve all philosophical problems? That for example the procedure our brains are using to try to solve decision theory is essentially the same as the one we'll use to solve consciousness? (I mean some sort of procedure that we can isolate and not just the human mind as a whole.)

If there isn't such a single procedure, I just don't see how we can possibly solve all of the necessary philosophical problems to build an FAI before someone builds an AGI, because we are still at the stage where every step forward we make just lets us see how many more problems there are (see Open Problems Related to Solomonoff Induction for example), and we are making forward steps so slowly, and worse, there's no good way of verifying that each step we take really is a step forward and not some erroneous digression.

Replies from: Eliezer_Yudkowsky, Mitchell_Porter
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-08-17T21:46:47.768Z · LW(p) · GW(p)

What do you think the chances are that there is some single procedure that can be used to solve all philosophical problems?

Very low, of course. (Then again, relative to the perspective of nonscientists, there turned out to be a single procedure that could be used to solve all empirical problems.) But in general, problems always look much more complicated than solutions do; the presence of a host of confusions does not indicate that the set of deep truths underlying all the solutions is noncompact.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2012-08-17T22:50:13.791Z · LW(p) · GW(p)

Do you think it's reasonable to estimate the amount of philosophical confusion we will have at some given time in the future by looking at the amount of philosophical confusions we currently face, and comparing that to the rate at which we are clearing them up minus the rate at which new confusions are popping up? If so, how much of your relative optimism is accounted for by your work on meta-ethics? (Recall that we have a disagreement over how much progress that work represents.) Do you think my pessimism would be reasonable if we assume for the sake of argument that that work does not actually represent much progress?

comment by Mitchell_Porter · 2012-08-24T18:32:22.782Z · LW(p) · GW(p)

some single procedure that can be used to solve all philosophical problems

This is why I keep mentioning transcendental phenomenology. It is for philosophy what string theory is for physics, a strong candidate for the final answer. It's epistemologically deeper than natural science or mathematics, which it treats as specialized forms of rational subjective activity. But it's a difficult subject, which is why I mention it more often than I explain it. To truly teach it, I'd first need to understand, reproduce, and verify all its claims and procedures for myself, which I have not done. But I've seen enough to be impressed. Regardless of whether it is the final answer philosophically, I guarantee that mastering its concepts and terminology is a goal that would take a person philosophically deeper than anything else I could recommend.

comment by [deleted] · 2012-08-16T04:37:04.749Z · LW(p) · GW(p)

AI design restricted to the much narrower part of the cognitive space that integrates with the F part, i.e., all algorithms must be conducive to clean self-modification.

So many questions! Excited for the Open Problems Sequence.

comment by Steve_Rayhawk · 2012-08-08T22:02:29.529Z · LW(p) · GW(p)

You invoke as granted the assumption that there's anything besides your immediately present self (including your remembered past selves) that has qualia, but then you deny that some anticipatable things will have qualia. Presumably there are some philosophically informed epistemic-ish rules that you have been using, and implicitly endorsing, for the determination of whether any given stimuli you encounter were generated by something with qualia, and there are some other meta-philosophical epistemology-like rules that you are implicitly using and endorsing for determining whether the first set of rules was correct. Can you highlight any suitable past discussion you have given of the epistemology of the problem of other minds?

eta: I guess the discussions here, or here, sort of count, in that they explain how you could think what you do... except they're about something more like priors than like likelihoods.

In retrospect, the rest of your position is like that too, based on sort of metaphysical arguments about what is even coherently postulable, though you treat the conclusions with a certainty I don't see how to justify (e.g. one of your underlying concepts might not be fundamental the way you imagine). So, now that I see that, I guess my question was mostly just a passive-aggressive way to object to your argument procedure. The objectionable feature made more explicit is that the constraint you propose on the priors requires such a gerrymandered-seeming intermediate event -- that consciousness-simulating processes which are not causally (and, therefore, in some sense physically) 'atomic' are not experienced, yet would still manage to generate the only kind of outward evidence about their experiencedness that anyone else could possibly experience without direct brain interactions or measurements -- in order to make the likelihood of the (hypothetical) observations (of the outward evidence of experiencedness, and of the absence of that outward evidence anywhere else) within the gerrymandered event come out favorably.

comment by Kingoftheinternet · 2012-08-08T14:29:49.564Z · LW(p) · GW(p)

the problem with state-machine materialism is not that it models the world in terms of causal interactions between things-with-states; the problem is that it can't go any deeper than that, yet apparently we can.

I may have missed the part where you explained why qualia can't fit into a state machine-model of the universe. Where does the incompatibility come from? I'm aware that it looks like no human-designed mathematical objects have experienced qualia yet, which is some level of evidence for it being impossible, but not so strong that I think you're justified in saying a materialist/mathematical platonist view of reality can never account for conscious experiences.

Replies from: dbc
comment by dbc · 2012-08-08T15:17:07.973Z · LW(p) · GW(p)

I may have missed the part where you explained why qualia can't fit into a state machine-model of the universe.

I think Mitchell's point is that we don't know whether state-machines have qualia, and the costs of making assumptions could be large.

comment by Grognor · 2012-08-08T14:19:42.565Z · LW(p) · GW(p)

Parts of this I think are brilliant, other parts I think are absolute nonsense. Not sure how I want to vote on this.

there is no way for an AI employing computational epistemology to bootstrap to a deeper ontology.

This strikes me as probably true but unproven.

My own investigations suggest that the tradition of thought which made the most progress in this direction was the philosophical school known as transcendental phenomenology.

You are anthropomorphizing the universe.

Replies from: Mitchell_Porter, David_Allen
comment by Mitchell_Porter · 2012-08-10T03:39:43.174Z · LW(p) · GW(p)

the philosophical school known as transcendental phenomenology.

You are anthropomorphizing the universe.

Phenomenology is the study of appearances. The only part of the universe that it is directly concerned with is "you experiencing existence". That part of the universe is anthropomorphic by definition.

comment by David_Allen · 2012-08-09T17:25:18.195Z · LW(p) · GW(p)

there is no way for an AI employing computational epistemology to bootstrap to a deeper ontology.

This strikes me as probably true but unproven

It seems possible for an AI to engage in a process of search within the ontological Hilbert space. It may not be efficient, but a random search should make all parts of any particular space accessible, and a random search across a Hilbert space of ontological spaces should make other types of ontological spaces accessible, and a random search across a Hilbert space containing Hilbert spaces of ontological spaces should... and on up the meta-chain. It isn't clear why such a system wouldn't have access to any ontology that is accessible by the human mind.

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2012-08-10T07:28:51.289Z · LW(p) · GW(p)

My original formulation is that AI = state-machine materialism = computational epistemology = a closed circle. However, it's true that you could have an AI which axiomatically imputes a particular phenomenology to the physical states, and such an AI could even reason about the mental life associated with transhumanly complex physical states, all while having no mental life of its own. It might be able to tell us that a certain type of state machine is required in order to feel meta-meta-pain, meta-meta-pain being something that no human being has ever felt or imagined, but which can be defined combinatorially as a certain sort of higher-order intentionality.

However, an AI cannot go from just an ontology of physical causality, to an ontology which includes something like pain, employing only computational epistemology. It would have to be told that state X is "pain". And even then it doesn't really know that to be in state X is to feel pain. (I am assuming that the AI doesn't possess consciousness; if it does, then it may be capable of feeling pain itself, which I take to be a prerequisite for knowing what pain is.)
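A minimal sketch of this point, with hypothetical state names of my own invention: the causal model below is all that computational epistemology delivers, while the qualia labels live in a separate dictionary that has to be supplied axiomatically by the designers rather than derived from the dynamics.

```python
# Causal model of a toy agent, as computational epistemology would see it:
# opaque states and the transitions between them.
transitions = {
    ("S0", "tissue_damage"): "S1",
    ("S1", "rest"): "S0",
    ("S0", "rest"): "S0",
}

# Axiomatic phenomenology: imputed from outside, not derivable from the
# transition table above. (Purely illustrative labels.)
imputed_qualia = {
    "S0": None,
    "S1": "pain",
}

def step(state, event):
    next_state = transitions[(state, event)]
    return next_state, imputed_qualia[next_state]

print(step("S0", "tissue_damage"))  # ('S1', 'pain') -- but only by stipulation
```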

Replies from: David_Allen, David_Allen
comment by David_Allen · 2012-08-15T00:02:43.357Z · LW(p) · GW(p)

Continuing my argument.

It appears to me that you are looking for an ontology that provides a natural explanation for things like "qualia" and "consciousness" (perhaps by way of phenomenology). You would refer to this ontology as the "true ontology". You reject Platonism, "an ontology which reifies mathematical or computational abstractions", because things like "qualia" are absent.

From my perspective, your search for the "true ontology"--which privileges the phenomenological perspective of "consciousness"--is indistinguishable from the scientific realism that you reject under the name "Platonism"--which (by some accounts) privileges a materialistic or mathematical perspective of everything.

For example, using a form of your argument I could reject both of these approaches to realism because they fail to directly account for the phenomenological existence of SpongeBob SquarePants, and his wacky antics.

Much of what you have written roughly matches my perspective, so to be clear I am objecting to the following concepts and many of the conclusions you have drawn from them:

  • "true ontology"
  • "true epistemology"
  • "Consciousness objectively exists"

I claim that variants of antirealism have more to offer than realism. References to "true" and "objective" have implied contexts from which they must be considered, and without those contexts they hold no meaning. There is nothing that we can claim to be universally true or objective that does not have this dependency (including this very claim (meta-recursively...)). Sometimes this concept is stated as "we have no direct access to reality".

So from what basis can we evaluate "reality" (whatever that is)? We clearly are evaluating reality from within our dynamic existence, some of which we refer to as consciousness. But consciousness can't be fundamental, because its identification appears to depend upon itself performing the identification; and a description of consciousness appears to be incomplete in that it does not actually generate the consciousness it describes.

Extending this concept a bit, when we go looking for the "reality" that underpins our consciousness, we have to model it in terms of our experience, which is dependent upon... well, it depends on our consciousness and its dynamic dependence on "reality". Also, these models don't appear to generate the phenomena they describe, and so it appears that circular reasoning and incompleteness are fundamental to our experience.

Because of this I suggest that we adopt an epistemology that is based on the meta-recursive dependence of descriptions on dynamic contexts. Using an existing dynamic context (such as our consciousness) we can explore reality in the terms that are accessible from within that context. We may not have complete objective access to that context, but we can explore it and form models to describe it, from inside of it.

We can also form new dynamic contexts that operate in terms of the existing context, and these newly formed inner contexts can interact with each other in terms of dynamic patterns of the terms of the existing context. From our perspective we can only interact with our child contexts in the terms of the existing context, but the inner contexts may be generating internal experiences that are very different from those existing outside of them, based on the interaction of the dynamic patterns we have defined for them.

Inverting this perspective, perhaps our consciousness is formed from the experiences generated from the dynamic patterns formed within an exterior context, and that context is itself generated from yet another set of interacting dynamic patterns... and so on. We could attempt to identify this nested set of relationships as its own ontology... only it may not actually be so well structured. It may actually be organized more like a network of partially overlapping contexts, where some parts interact strongly and other parts interact very weakly. In any case, our ability to describe this system will depend heavily on the dynamic perspective from which we observe the related phenomena; and our perspective is of course embedded within the system we are attempting to describe.

I am not attempting to confuse the issues by pointing out how complex this can be. I am attempting to show a few things:

  • There is no absolute basis, no universal truth, no center, no bottom layer... from our perspective which is embedded in the "stuff of reality". I make no claims about anything I don't have access to.
  • Any ontology or epistemology will inherently be incomplete and circularly self-dependent, from some perspective.
  • The generation of meaning and existence is dependent on dynamic contexts of evaluation. When considering meaning or existence it is best to consider them in the terms of the context that is generating them.
  • Some models/ontologies/epistemologies are better than others, but the label "better" is dependent on the context of evaluation and is not fundamental.
  • The joints that we are attempting to carve the universe at are dependent upon the context of evaluation, and are not fundamental.
  • Meaning and existence are dynamic, not static. A seemingly static model is being dynamically generated, and stops existing when that modeling stops.
  • Using a model of dynamic patterns, based in terms of dynamic patterns we might be able to explain how consciousness emerges from non-conscious stuff, but this model will not be fundamental or complete, it will simply be one way to look at the Whole Sort of General Mish Mash of "reality".

To apply this to your "principle of non-vagueness": there is no reason to expect that a mapping between pairs of arbitrary perspectives--between physical and phenomenological states in this case--is necessarily precise (or even meaningful). Because they are two different ways of describing arbitrary slices of "reality", they may refer to not-entirely-overlapping parts of "reality". Certainly physical and phenomenological states are modeled and measured in very different ways, so a great deal of uncertainty/vagueness caused by that non-overlap should be expected.

And this claim:

But as I have argued, not only must the true ontology be deeper than state-machine materialism, there is no way for an AI employing computational epistemology to bootstrap to a deeper ontology.

Current software is rarely programmed to directly model state machines. It may be possible to map the behavior of existing systems to state machines, but that is not generally the perspective held by the programmers, or by the dynamically running software. The same is true for current AI, so from that perspective your claim seems a bit odd to me. The perspective that an AI can be mapped to a state machine is based on a particular perspective on the AI involved, but in fact that mapping does not discount that the AI is implemented within the same "reality" that we are. If our physical configuration (from some perspective) allows us to generate consciousness then there is no general barrier that should prevent AI systems from achieving a similar form of consciousness.

I recognize that these descriptions may not bridge our inference gap; in fact they may not even properly encode my intended meaning. I can see that you are searching for an epistemology that better encodes your understanding of the universe; I'm just tossing in my thoughts to see if we can generate some new perspectives.

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2012-08-17T10:12:10.265Z · LW(p) · GW(p)

People have noticed circular dependencies among subdisciplines of philosophy before. A really well-known one is the cycle connecting ontology and epistemology: your epistemology should imply your ontology, and your ontology must permit your epistemology. More arcane is the interplay between phenomenology, epistemology, and methodology.

Your approach to ontology seems to combine these two cycles, with the p/e/m cycle being more fundamental. All ontological claims are said to be dependent on a cognitive context, and this justifies ontological relativism.

That's not my philosophy; I see the possibility of reaching foundations, and also the possibility of countering the relativistic influence of the p/e/m perspective, simply by having a good ontological account of what the p/e/m cycle is about. From this perspective, the cycle isn't an endless merry-go-round, it's a process that you iterate in order to perfect your thinking. You chase down the implications of one ology for another, and you keep that up until you have something that is complete and consistent.

Or until you discover the phenomenological counterpart of Gödel's theorem. In what you write I don't see a proof that foundations don't exist or can't be reached. Perhaps they can't, but in the absence of a proof, I see no reason to abandon cognitive optimism.

Replies from: David_Allen
comment by David_Allen · 2012-08-18T23:45:25.972Z · LW(p) · GW(p)

A really well-known one is the cycle connecting ontology and epistemology: your epistemology should imply your ontology, and your ontology must permit your epistemology. More arcane is the interplay between phenomenology, epistemology, and methodology.

I have read many of your comments and I am uncertain how to model your meanings for 'ontology', 'epistemology' and 'methodology', especially in relation to each other.

Do you have links to sources that describe these types of cycles, or are you willing to describe the cycles you are referring to--in the process establishing the relationship between these terms?

Your approach to ontology seems to combine these two cycles, with the p/e/m cycle being more fundamental. All ontological claims are said to be dependent on a cognitive context, and this justifies ontological relativism.

The term "cycles" doesn't really capture my sense of the situation. Perhaps the sense of recurrent hypergraphs is closer.

Also, I do not limit my argument only to things we describe as cognitive contexts. My argument allows for any type of context of evaluation. For example, an antenna interacting with a photon creates a context of evaluation that generates meaning in terms of the described system.

...and this justifies ontological relativism.

I think that this epistemology actually justifies something more like an ontological perspectivism, but it generalizes the context of evaluation beyond the human centric concepts found in relativism and perspectivism. Essentially it stops privileging human consciousness as the only context of evaluation that can generate meaning. It is this core idea that separates my epistemology from most of the related work I have found in epistemology, philosophy, linguistics and semiotics.

In what you write I don't see a proof that foundations don't exist or can't be reached.

I'm glad you don't see those proofs because I can't claim either point from the implied perspective of your statement. Your statement assumes that there exists an objective perspective from which a foundation can be described. The problem with this concept is that we don't have access to any such objective perspective. We can only identify the perspective as "objective" from some perspective... which means that the identified "objective" perspective depends upon the perspective that generated the label, rendering the label subjective.

You do provide an algorithm for finding an objective description:

I see the possibility of reaching foundations, and also the possibility of countering the relativistic influence of the p/e/m perspective, simply by having a good ontological account of what the p/e/m cycle is about. From this perspective, the cycle isn't an endless merry-go-round, it's a process that you iterate in order to perfect your thinking. You chase down the implications of one ology for another, and you keep that up until you have something that is complete and consistent.

Again from this it seems that while you reject some current conclusions of science, you actually embrace scientific realism--that there is an external reality that can be completely and consistently described.

As long as you are dealing in terms of maps (descriptions) it isn't clear to me that you ever escape the language hierarchy, and therefore you are never free of Gödel's theorems. To achieve the level of completeness and consistency you strive for, it seems that you need to describe reality in terms equivalent to those it uses... which means you aren't describing it so much as generating it. If this description of a reality is complete then it is rendered in terms of itself, and only itself, which would make it a reality independent of ours, and so we would have no access to it (otherwise it would simply be a part of our reality and therefore not complete). Descriptions of reality that generate reality aren't directly accessible by the human mind; any translation of these descriptions to human accessible terms would render the description subject to Gödel's theorems.

I see no reason to abandon cognitive optimism.

I don't want anybody to abandon the search for new and better perspectives on reality just because we don't have access to an objective perspective. But by realizing that there are no objective perspectives we can stop arguing about the "right" way of viewing all of reality and spend that time finding "good" or "useful" ways to view parts of it.

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2012-08-24T07:17:10.590Z · LW(p) · GW(p)

Do you have links to sources that describe these types of cycles, or are you willing to describe the cycles you are referring to--in the process establishing the relationship between these terms?

Let's say that ontology is the study of that which exists, epistemology the study of knowledge, phenomenology the study of appearances, and methodology the study of technique. There's naturally an interplay between these disciplines. Each discipline has methods; the methods might be employed before you're clear on how they work, so you might perform a phenomenological study of the methods in order to establish what it is that you're doing. Reflection is supposed to be a source of knowledge about consciousness, so it's an epistemological methodology for constructing a phenomenological ontology... I don't have a formula for how it all fits together (but if you do an image search on "hermeneutic circle" you can find various crude flowcharts). If I did, I would be much more advanced.

For example, an antenna interacting with a photon creates a context of evaluation that generates meaning in terms of the described system.

I wouldn't call that meaning, unless you're going to explicitly say that there are meaning-qualia in your antenna-photon system. Otherwise it's just cause and effect. True meaning is an aspect of consciousness. Functionalist "meaning" is based on an analogy with meaning-driven behavior in a conscious being.

it stops privileging human consciousness as the only context of evaluation that can generate meaning. It is this core idea that separates my epistemology from most of the related work

Does your philosophy have a name? Like "functionalist perspectivism"?

Replies from: David_Allen
comment by David_Allen · 2012-08-29T05:14:57.980Z · LW(p) · GW(p)

Let's say that ontology is the study of that which exists, epistemology the study of knowledge, phenomenology the study of appearances, and methodology the study of technique.

Thanks for the description. That would place the core of my claims as an ontology, with implications for how to approach epistemology and phenomenology.

I wouldn't call that meaning, unless you're going to explicitly say that there are meaning-qualia in your antenna-photon system. Otherwise it's just cause and effect. True meaning is an aspect of consciousness. Functionalist "meaning" is based on an analogy with meaning-driven behavior in a conscious being.

I recognize that my use of meaning is not normative. I won't defend this use because my model for it is still sloppy, but I will attempt to explain it.

The antenna-photon interaction that you refer to as cause and effect I would refer to as a change in the dynamics of the system, as described from a particular perspective.

To refer to this interaction as cause and effect requires that some aspect of the system be considered the baseline; the effect then is how the state of the system is modified by the influencing entity. Such a perspective can be adopted and might even be useful. But the perspective that I am holding is that the antenna and the photon are interacting. This is a process that modifies both systems. The "meaning" that is formed is unique to the system; it depends on the particulars of the systems and their interactions. Within the system that "meaning" exists in terms of the dynamics allowed by the nature of the system. When we describe that "meaning" we do so in the terms generated from an external perspective, but that description will only capture certain aspects of the "meaning" actually generated within the system.

How does this description compare with your concept of "meaning-qualia"?

Does your philosophy have a name? Like "functionalist perspectivism"?

I think that both functionalism and perspectivism are poor labels for what I'm attempting to describe, because both philosophies pay too much attention to human consciousness and neither is set up to explain the nature of existence generally.

For now I'm calling my philosophy the interpretive context hypothesis (ICH), at least until I discover a better name or a better model.

comment by David_Allen · 2012-08-10T17:28:39.237Z · LW(p) · GW(p)

The contexts from which you identify "state-machine materialism" and "pain" appear to be very different from each other, so it is no surprise that you find no room for "pain" within your model of "state-machine materialism".

You appear to identify this issue directly in this comment:

My position is that a world described in terms of purely physical properties or purely computational properties does not contain qualia. Such a description itself would contain no reference to qualia.

Looking for the qualia of "pain" in a state-machine model of a computer is like trying to find out what my favorite color is by using a hammer to examine the contents of my head. You are simply using the wrong interface to the system.

If you examine the compressed and encrypted bit sequence stored on a DVD as a series of 0 and 1 characters, you will not be watching the movie.

If you don't understand Russian, you will not find the subtle plot twists of a novel written in Russian compelling.

If you choose some perspectives on Searle's Chinese room thought experiment you will not see the Chinese speaker; you will only see the mechanism that generates Chinese symbols.

So stuff like "qualia", "pain", "consciousness", and "electrons" only exist (hold meaning) from perspectives that are capable of identifying them. From other perspective they are non-existent (have no meaning).

If you choose a perspective on "conscious experience" that requires a specific sort of physical entity to be present, then a computer without that will never qualify as "conscious", for you. Others may disagree, perhaps pointing out aspects of its responses to them, or how some aspects of the system are functionally equivalent to the physical entity you require. So, which is the right way to identify consciousness? To figure that out you need to create a perspective from which you can identify one as right, and the other as wrong.

comment by cousin_it · 2012-08-08T16:17:32.559Z · LW(p) · GW(p)

Your point sounds similar to Wei's point that solving FAI requires metaphilosophy.

comment by novalis · 2012-08-08T18:29:58.159Z · LW(p) · GW(p)

Maybe I missed this, but did you ever write up the Monday/Tuesday game with your views on consciousness? On Monday, consciousness is an algorithm running on a brain, and when people say they have consciously experienced something, they are reporting the output of this algorithm. On Tuesday, the true ontology of mind resembles the ontology of transcendental phenomenology. What's different?

I'm also confused about why an algorithm couldn't represent a mass of entangled electrons.

Replies from: novalis
comment by novalis · 2012-08-08T18:35:57.135Z · LW(p) · GW(p)

Oh, also: imagine that SIAI makes an AI. Why should they make it conscious at all? They're just trying to create an intelligence, not a consciousness. Surely, even if consciousness requires whatever it is you think it requires, an intelligence does not.

Replies from: David_Gerard
comment by David_Gerard · 2012-08-08T23:33:07.229Z · LW(p) · GW(p)

Indeed. Is my cat conscious? It's certainly an agent (it appears to have its own drives and motivations), with considerable intelligence (for a cat) and something I'd call creativity (it's an ex-stray with a remarkable ability to work out how to get into places with food it's after).

Replies from: David_Gerard
comment by David_Gerard · 2012-08-13T19:26:20.842Z · LW(p) · GW(p)

And the answer appears to be: yes. “The absence of a neocortex does not appear to preclude an organism from experiencing affective states. Convergent evidence indicates that non-human animals have the neuroanatomical, neurochemical, and neurophysiological substrates of conscious states along with the capacity to exhibit intentional behaviors. Consequently, the weight of evidence indicates that humans are not unique in possessing the neurological substrates that generate consciousness. Nonhuman animals, including all mammals and birds, and many other creatures, including octopuses, also possess these neurological substrates.”

comment by Richard_Kennaway · 2012-08-08T15:06:41.268Z · LW(p) · GW(p)

That is all very interesting, but what difference does it practically make?

Suppose I were trying to build an AGI out of computation and physical sensors and actuators, and I had what appeared to me to be a wonderful new approach, and I was unconcerned with whether the device would "really" think or have qualia, just with whether it worked to do practical things. Maybe I'm concerned with fooming and Friendliness, but again, only in terms of the practical consequences, i.e. I don't want the world suddenly turned into paperclips. At what point, if any, would I need to ponder these epistemological issues?

Replies from: Mitchell_Porter, David_Gerard, thomblake
comment by Mitchell_Porter · 2012-08-09T04:43:13.387Z · LW(p) · GW(p)

what difference does it practically make?

It will be hard for your AGI to be an ethical agent if it doesn't know who is conscious and who is not.

Replies from: Richard_Kennaway, David_Gerard
comment by Richard_Kennaway · 2012-08-09T06:04:25.710Z · LW(p) · GW(p)

It's easy enough for us (leaving aside edge cases about animals, the unborn, and the brain dead, which in fact people find hard, or at least persistently disagree on). How do we do it? By any other means than our ordinary senses?

Replies from: JQuinton, haig, Mitchell_Porter
comment by JQuinton · 2012-08-10T18:04:14.197Z · LW(p) · GW(p)

I would argue that humans are not very good at this. If by "good" you mean a high success rate and a low false positive rate for detecting consciousness. It seems to me that the only reason that we have a high success rate for detecting consciousness is that our false positive rate for detecting consciousness is also high (e.g. religion, ghosts, fear of the dark, etc.).

comment by haig · 2012-08-10T03:58:43.243Z · LW(p) · GW(p)

We have evolved moral intuitions such as empathy and compassion that underlie what we consider to be right or wrong. These intuitions only work because we consciously internalize another agent's subjective experience and identify with it. In other words, without the various qualia that we experience we would have no foundation to act ethically. An unconscious AI that does not experience these qualia could, in theory, act the way we think it should act by mimicking behaviors from a repertoire of rules (and ways to create further rules) that we give it, but that is a very brittle and complicated route, and is the route the SIAI has been taking because they have discounted qualia, which is what this post is really all about.

comment by Mitchell_Porter · 2012-08-10T02:41:50.122Z · LW(p) · GW(p)

A human being does it by presuming that observed similarities, between themselves and the other humans around them, extend to the common possession of inner states. You could design an AI to employ a similar heuristic, though perhaps it would be pattern-matching against a designated model human, rather than against itself. But the edge cases show that you need better heuristics than that, and in any case one would expect the AI to seek consistency between its ontology of agents worth caring about and its overall ontology, which will lead it down one of the forking paths in philosophy of mind. If it arrives at the wrong terminus...

Replies from: Richard_Kennaway, scav
comment by Richard_Kennaway · 2012-08-10T08:46:46.325Z · LW(p) · GW(p)

You could design an AI to employ a similar heuristic, though perhaps it would be pattern-matching against a designated model human, rather than against itself. But the edge cases show that you need better heuristics than that

I don't see how this is different from having the AI recognise a teacup. We don't actually know how we do it. That's why it is difficult to make a machine to do it. We also don't know how we recognise people. "Presuming that observed similarities etc." isn't a useful description of how we do it, and I don't think any amount of introspection about our experience of doing it will help, any more than that sort of thinking has helped to develop machine vision, or indeed any of the modest successes that AI has had.

comment by scav · 2012-08-10T08:19:14.782Z · LW(p) · GW(p)

Firstly, I honestly don't see how you came to the conclusion that the qualia you and I (as far as you know) experience are not part of a computational process. It doesn't seem to be a belief that makes testable predictions.

Since the qualia of others are not accessible to you, you can't know that any particular arrangement of matter and information doesn't have them, including people, plants, and computers. You also cannot know whether my qualia feel anything like your own when subjected to the same stimuli. If you have any reason to believe they do (for your model of empathy to make sense), what reason do you have to believe it is due to something non-computable?

It seems intuitively appealing that someone who is kind to you feels similarly to you and is therefore similar to you. It helps you like them, and reciprocate the kindness, which has advantages of its own. But ultimately, your experience of another's kindness is about the consequences to you, not their intentions or mental model of you.

If a computer with unknowable computational qualia is successfully kind to me, I'll take that over a human with unknowable differently-computational qualia doing what they think would be best for me and fucking it up because they aren't very good at evaluating the possible consequences.

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2012-08-10T08:56:56.355Z · LW(p) · GW(p)

I honestly don't see how you came to the conclusion that the qualia you and I (as far as you know) experience are not part of a computational process ... what reason do you have to believe it is due to something non-computable?

Qualia are part of some sort of causal process. If it's cognition, maybe it deserves the name of a computational process. It certainly ought to be a computable process, in the sense that it could be simulated by a computer.

My position is that a world described in terms of purely physical properties or purely computational properties does not contain qualia. Such a description itself would contain no reference to qualia. The various attempts of materialist philosophers of mind to define qualia solely in terms of physical or computational properties do not work. The physical and computational descriptions are black-box descriptions of "things with states", and you need to go into more detail about those states in order to be talking about qualia. Those more detailed descriptions will contain terms whose meaning one can only know by having been conscious and thereby being familiar with the relevant phenomenological realities, like pain. Otherwise, these terms will just be formal properties p, q, r... known only by how they enter into causal relations.

Moving one step up in controversialness, I also don't believe that computational simulation of qualia will itself produce qualia. This is because the current theories about the physical correlates of conscious states already require an implausible sort of correspondence between mesoscopic "functional" states (defined e.g. by the motions of large numbers of ions) and the elementary qualia which together make up an overall state of consciousness. The theory that any good enough simulation of this will also have qualia requires that the correspondence be extended in ways that no-one anywhere can specify (thus see the debates about simulations running on giant look-up tables, or the "dust theory" of simulations whose sequential conscious states are scattered across the multiverse, causally disconnected in space and time).

The whole situation looks intellectually pathological to me, and it's a lot simpler to suppose that you get a detailed conscious experience, a complex-of-qualia, if and only if a specific sort of physical entity is realized. One state of that entity causes the next state, so the qualia have causal consequences and a computational model of the entity could exist, but the computational model of the entity is not an instance of the entity itself. I have voiced ideas about a hypothetical locus of quantum entanglement in the brain as the conscious entity. That idea may be right or wrong. It is logically independent of the claims that standard theories of consciousness are implausible, and that you can't define consciousness just in terms of physics or computation.

Replies from: scav, torekp
comment by scav · 2012-08-11T09:31:21.151Z · LW(p) · GW(p)

it's a lot simpler to suppose that you get a detailed conscious experience, a complex-of-qualia, if and only if a specific sort of physical entity is realized.

How is that simpler? If there is a theory that qualia can only occur in a specific sort of physical entity, then that theory must delineate all the complicated boundary conditions and exceptions as to why similar processes on entities that differ in various ways don't count as qualia.

It must be simpler to suppose that qualia are informational processes that have certain (currently unknown) mathematical properties.

When you can identify and measure qualia in a person's brain and understand truly what they are, THEN you can say whether they can or can't happen on a semiconductor and WHY. Until then, words are wind.

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2012-08-11T13:15:17.108Z · LW(p) · GW(p)

It must be simpler to suppose that qualia are informational processes that have certain (currently unknown) mathematical properties.

Physically, an "informational process" involves bulk movements of microphysical entities, like electrons within a transistor or ions across a cell membrane.

So let's suppose that we want to know the physical conditions under which a particular quale occurs in a human being (something like a flash of red in your visual field), and that the physical correlate is some bulk molecular process, where N copies of a particular biomolecule participate. And let's say that we're confident that the quale does not occur when N=0 or 1, and that it does occur when N=1000.

All I have to do is ask: for what magic value of N does the quale start happening? People characteristically evade such questions; they wave their hands and say that it doesn't matter, that there doesn't have to be a definite answer. (Just as most MWI advocates do, when asked exactly when it is that you go from having one world to two worlds.)

But let's suppose we have 1000 people, numbered from 1 to 1000, and in each one the potentially quale-inducing process is occurring, with that many copies of the biomolecule participating. We can say that person number 1 definitely doesn't have the quale, and person number 1000 definitely does, but what about the people in between? The handwaving non-answer, "there is no definite threshold", means that for people in the middle, with maybe 234 or 569 molecules taking part, the answer to the question "Are they having this experience or not?" is "none of the above". There's supposed to be no exact fact about whether they have that flash of red or not.

There is absolutely no reason to take that seriously as an intellectual position about the nature of qualia. It's actually a reductio ad absurdum of a commonly held view.

The counterargument might be made: what about electrons in a transistor? There doesn't have to be an exact answer to the question of how many electrons are enough for the transistor to really be in the "1" state rather than the "0" state. But the reason there doesn't have to be an exact answer is that we only care about the transistor's behavior, and then only its behavior under conditions that the device might encounter during its operational life. If under most circumstances there are only 0 electrons or 1000 electrons present, and if those numbers reliably produce "0 behavior" or "1 behavior" from the transistor, then that is enough for the computer to perform its function as a computational device. Maybe a transistor with 569 electrons is in an unstable state that functionally is neither definitely 0 nor definitely 1, but if those conditions almost never come up in the operation of the device, that's OK.
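To make the circuit designer's escape route concrete, here is a minimal sketch (my own toy example, not a model of any real device) of a functional reading that tolerates an undefined middle band precisely because the device is engineered never to sit in it:

```python
# Toy illustration of "functional pragmatism": intermediate electron counts are never
# classified as 0 or 1, and that's fine, because operation is designed to avoid them.

def logic_level(n_electrons, low=100, high=900):
    """Map an electron count to a functional logic state."""
    if n_electrons <= low:
        return 0        # reliably produces "0 behavior"
    if n_electrons >= high:
        return 1        # reliably produces "1 behavior"
    return None         # forbidden band: functionally undefined, by design rarely visited

print(logic_level(3), logic_level(987), logic_level(569))  # 0 1 None
```

The None branch never matters in practice, because the device is always driven toward the extremes.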

With any theory about the presence of qualia, we do not have the luxury of this escape via functional pragmatism. A theory about the presence of qualia needs to have definite implications for every physically possible state - it needs to say whether the qualia are present or not in that state - or else we end up with situations as in the reductio, where we have people who allegedly neither have the quale nor don't have the quale.

This argument is simple and important enough that it deserves to have a name, but I've never seen it in the philosophy literature. So I'll call it the sorites problem for functionalist theories of qualia. Any materialist theory of qualia which identifies them with bulk microphysical processes faces this sorites problem.

Replies from: scav
comment by scav · 2012-08-12T16:15:35.860Z · LW(p) · GW(p)

Why?

I don't seem to experience qualia as all-or-nothing. I doubt that you do either. I don't see a problem with the amount of qualia experienced being a real number between 0 and 1 in response to varying stimuli of pain or redness.

Therefore I don't see a problem with qualia being measurable on a similar scale across different informational processes with more or fewer neurons or other computing elements involved in the structure that generates them.

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2012-08-13T03:35:34.140Z · LW(p) · GW(p)

Do you think that there is a slightly different quale for each difference in the physical state, no matter how minute that physical difference is?

Replies from: scav
comment by scav · 2012-08-13T08:09:50.165Z · LW(p) · GW(p)

I don't know. But I don't think so, not in the sense that it would feel like a different kind of experience. More or less intense, more definite or more ambiguous perhaps. And of course there could always be differences too small to be noticeable.

As a wild guess based on no evidence, I suppose that different kinds of qualia have different functions (in the sense of uses, not mathematical mappings) in a consciousness, and equivalent functions can be performed by different structures and processes.

I am aware of qualia (or they wouldn't be qualia), but I am not aware of the mechanism by which they are generated, so I have no reason to believe that mechanism could not be implemented differently and still have the same outputs, and feel the same to me.

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2012-08-13T08:40:36.485Z · LW(p) · GW(p)

I have just expanded on the argument that any mapping between "physics" and "phenomenology" must fundamentally be an exact one. This does not mean that a proposed mapping which is inexact by microphysical standards is necessarily false; it just means that it is necessarily incomplete.

The argument for exactness still goes through even if you allow for gradations of experience. For any individual gradation, it's still true that it is what it is, and that's enough to imply that the fundamental mapping must be exact, because the alternative would lead to incoherent statements like "an exact physical configuration has a state of consciousness associated with it, but not a particular state of consciousness".

The requirement that any "law" of psychophysical correspondence must be microphysically exact in its complete form, including for physical configurations that we would otherwise regard as edge cases, is problematic for conventional functionalism, precisely because conventional functionalism adopts the practical rough-and-ready philosophy used by circuit designers. Circuit designers don't care if states intermediate between "definitely 0" and "definitely 1" are really 0, 1, or neither; they just want to make sure that these states don't show up during the operation of their machine, because functionally they are unpredictable and their semantics would be unclear.

Scientists and ontologists of consciousness have no such option, because the principle of ontological non-vagueness (mentioned in the other comment) applies to consciousness. Consciousness objectively exists, it's not just a useful heuristic concept, and so any theory of how it relates to physics must admit of a similarly objective completion; and that means there must be a specific answer to the question, exactly what state(s) of consciousness, if any, are present in this physical configuration... there must be a specific answer to that question for every possible physical configuration.

But the usual attitude of functionalists is that they can be fuzzy about microscopic details; that there is no need, not even in principle, for their ideas to possess a microphysically exact completion.

In these monad theories that I push, the "Cartesian theater", where consciousness comes together into a unitary experience, is defined by a set of exact microphysical properties, e.g. a set of topological quantum numbers (a somewhat arbitrary example, but I need to give an example). For a theory like that, the principle associating physical and phenomenological states could be both functional and exact, but that's not the sort of theory that today's functionalists are discussing.

Replies from: scav
comment by scav · 2012-08-14T07:53:56.359Z · LW(p) · GW(p)

What predictions does your theory make?

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2012-08-14T08:46:35.063Z · LW(p) · GW(p)

The idea, more or less, is that there is a big ball of quantum entanglement somewhere in the brain, and that's the locus of consciousness. It might involve phonons in the microfilaments, anyons in the microtubules, both or neither of these; it's presumably tissue-specific, involving particular cell types where the relevant structures are optimized for this role; and it must be causally relevant for conscious cognition, which should do something to pin down its anatomical location.

You could say that one major prediction is just that there will be such a thing as respectable quantum neurobiology and cognitive quantum neuroscience. From a quantum-physical and condensed-matter perspective, biomolecules and cells are highly nontrivial objects. By now "quantum biology" has a long history, and it's a topic that is beloved of thinkers who are, shall we say, more poetic than scientific, but we're still at the very beginning of that subject.

We basically know nothing about the dynamics of quantum coherence and decoherence in living matter. It's not something that's easily measured, and the handful of models that have been employed in order to calculate this dynamics are "spherical cow" models; they're radically oversimplified for the sake of calculability, and just a first step into the unknown.

What I write on this subject is speculative, and it's idiosyncratic even when compared to "well-known" forms of quantum-mind discourse. I am more interested in establishing the possibility of a very alternative view, and also in highlighting implausibilities of the conventional view that go unnoticed, or which are tolerated because the conventional picture of the brain appears to require them.

comment by torekp · 2012-08-11T15:51:03.583Z · LW(p) · GW(p)

My position is that a world described in terms of purely physical properties or purely computational properties does not contain qualia. Such a description itself would contain no reference to qualia.

If this is an argument with the second sentence as premise, it's a non sequitur. I can give you a description of the 1000 brightest objects in the night sky without mentioning the Evening Star; but that does not mean that the night sky lacked the Evening Star or that my description was incomplete.

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2012-08-11T16:41:32.288Z · LW(p) · GW(p)

The rest of the paragraph covers the case of indirect reference to qualia. It's sketchy because I was outlining an argument rather than making it, if you know what I mean. I had to convey that this is not about "non-computability".

comment by David_Gerard · 2012-08-10T23:11:11.414Z · LW(p) · GW(p)

Is a human who is dissociating conscious? Or one who spaces out for a couple of seconds then retcons continuous consciousness later (as appears to be what brains actually do)? Or one who is talking and doing complicated things while sleepwalking?

comment by David_Gerard · 2012-08-08T16:04:04.482Z · LW(p) · GW(p)

Indeed. We're after intelligence that behaves in a particular way. At what point do qualia enter our model? What do they do in a model? To answer these questions we need to be using an expansion of the term "qualia" into something which can be observed from the outside.

comment by thomblake · 2012-08-09T15:25:55.797Z · LW(p) · GW(p)

I was unconcerned with whether the device would "really" think or have qualia

I get the impression that Mitchell_Porter is tentatively accepting Eliezer's assertion that FAI should not be a person, but nonetheless those "epistemological issues" seem relevant to the content of ethics. A machine with the wrong ideas about ontology might make huge mistakes regarding what makes life worth living for humans.

comment by Steve_Rayhawk · 2012-08-08T19:54:43.417Z · LW(p) · GW(p)

Some brief attempted translation for the last part:

A "monad", in Mitchell Porter's usage, is supposed to be a somewhat isolatable quantum state machine, with states and dynamics factorizable somewhat as if it was a quantum analogue of a classical dynamic graphical model such as a dynamic Bayesian network (e.g., in the linked physics paper, a quantum cellular automaton). (I guess, unlike graphical models, it could also be supposed to not necessarily have a uniquely best natural decomposition of its Hilbert space for all purposes, like how with an atomic lattice you can analyze it either in terms of its nuclear positions or its phonons.) For a monad to be a conscious mind, the monad must also at least be complicated and [this is a mistaken guess] capable of certain kinds of evolution toward something like equilibria of tensor-product-related quantum operators having to do with reflective state representation[/mistaken guess]. His expectation that this will work out is based partly on intuitive parallels between some imaginable combinatorially composable structures in the kind of tensor algebra that shows up in quantum mechanics and the known composable grammar-like structures that tend to show up whenever we try to articulate concepts about representation (I guess mostly the operators of modal logic).

(Disclaimer: I know almost only just enough quantum physics to get into trouble.)

A monadic mind would be a state machine, but ontologically it would be different from the same state machine running on a network of a billion monads.

Not all your readers will understand that "network of a billion monads" is supposed to refer to things like classical computing machinery (or quantum computing machinery?).

Replies from: CarlShulman, Steve_Rayhawk, Mitchell_Porter
comment by CarlShulman · 2012-08-08T20:17:07.713Z · LW(p) · GW(p)

This needs further translation.

comment by Steve_Rayhawk · 2012-08-08T22:28:03.736Z · LW(p) · GW(p)

His expectation that this will work out is based partly on [...]

(It's also based on an intuition I don't understand that says that classical states can't evolve toward something like representational equilibrium the way quantum states can -- e.g. you can't have something that tries to come up with an equilibrium of anticipation/decisions, like neural approximate computation of Nash equilibria, but using something more like representations of starting states of motor programs that, once underway, you've learned will predictably try to search combinatorial spaces of options and/or redo a computation like the current one but with different details -- or that, even if you can get this sort of evolution in classical states, it's still knowably irrelevant. Earlier he invoked bafflingly intense intuitions about the obviously compelling ontological significance of the lack of spatial locality cues attached to subjective consciousness, such as "this quale is experienced in my anterior cingulate cortex, and this one in Wernicke's area", to argue that experience is necessarily nonclassically replicable. (As compared with, what, the spatial cues one would expect a classical simulation of the functional core of a conscious quantum state machine to magically become able to report experiencing?) He's now willing to spontaneously talk about non-conscious classical machines that simulate quantum ones (including not magically manifesting p-zombie subjective reports of spatial cues relating to its computational hardware), so I don't know what the causal role of that earlier intuition is in his present beliefs; but his reference to a "sweet spot", rather than a sweet protected quantum subspace of a space of network states or something, is suggestive, unless that's somehow necessary for the imagined tensor products to be able to stack up high enough.)

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2012-08-10T06:15:30.422Z · LW(p) · GW(p)

bafflingly intense intuitions about the obviously compelling ontological significance of the lack of spatial locality cues attached to subjective consciousness

Let's go back to the local paradigm for explaining consciousness: "how it feels from the inside". On one side of the equation, we have a particular configuration of trillions of particles, on the other side we have a conscious being experiencing a particular combination of sensations, feelings, memories, and beliefs. The latter is supposed to be "how it feels to be that configuration".

If I ontologically analyze the configuration of particles, I'll probably do so in terms of nested spatial structures - particles in atoms in molecules in organelles in cells in networks. What if I analyze the other side of the equation, the experience, or even the conscious being having the experience? This is where phenomenology matters. Whenever materialists talk about consciousness, they keep interjecting references to neurons and brain computations even though none of this is evident in the experience itself. Phenomenology is the art of characterizing the experience solely in terms of how it presents itself.

So let's look for the phenomenological "parts" of an experience. One way to divide it up is into the different sensory modalities, e.g. that which is being seen versus that which is being heard. We can also distinguish objects that may be known multimodally, so there can be some cross-classification here, e.g. I see you but I also hear you. This synthesis of a unified perception from distinct sensations seems to be an intellectual activity, so I might say that there are some visual sensations, some auditory sensations, a concept of you, and a belief that the two types of sensations are both caused by the same external entity.

The analysis can keep going in many directions from here. I can focus just on vision and examine the particular qualities that make up a localized visual sensation (e.g. the classic three-dimensional color schemes). I can look at concepts and thoughts and ask how they are generated and compounded. When I listen to my own thinking, what exactly is going on, at the level of appearance? Do I situate my thoughts and/or my self as occurring at a particular place in the overall environment of appearances, and if so, from where does the sense that I am located there arise?

I emphasize that, if one is doing phenomenology, these questions are to be answered, not by introducing one's favorite scientific guess as to the hidden neural mechanism responsible, but on the basis of introspection and consciously available evidence. If you can't identify a specific activity of your conscious mind as responsible for the current object of inquiry, then that is the phenomenological datum: no cause was apparent, no cause was identified. Of course you can go back to speculation and science later.

The important perspective to develop here is the capacity to think like a systematic solipsist. Suppose that appearances are all there is, including the thoughts passing through your mind. If that is all there is, then what is there, exactly? This is one way to approach the task of analyzing the "nature" or "internal structure" of consciousness, and a reasonably effective one if habitual hypotheses about the hidden material underpinnings of everything keep interfering. Just suppose for a moment that appearances don't arise from atoms and neurons, but that they arise from something else entirely, or that they arise from nothing at all. Either way, they're still there and you can still say something about them.

Having said that, then you can go back to your science. So let's do that now. What we have established is that there is structure on both sides of the alleged equation between a configuration of atoms and a conscious experience. The configuration of atoms has a spatial structure. The "structure" of a conscious experience is something more abstract; for example, it includes the fact that there are distinct sensory continua which are then conceptually synthesized into a world of perceived objects. The set of all sensations coming from the same sense also has a structure which is not exactly spatial, not in the physical sense. There is an "auditory space", a "kinesthetic space", and even the "apparent visual space" is not the same thing as physical space.

On both sides we have many things connected by specific structural relations. On the physical side, we have particles in configurations that can be defined by distances and angles. On the phenomenological side, we have elementary qualia of diverse types which are somehow conceptually fused into sense objects, which in turn become part of intentional states whose objects can also include the self and other intentional states.

Since we have a lot of structure and a lot of relation on both sides, it might seem we have a good chance of developing a detailed theory of how physical structure relates to phenomenological structure. But even before we begin that analysis, I have to note that we are working with really different things on both sides of the equation. On one side we have colors, thoughts, etc., and on the other side we have particles. On one side we have connecting relations like "form part of a common perception" and "lie next to each other in a particular sensory modality"; on the other side we have connecting relations like "located 3 angstroms apart". This is when it becomes obvious to me that any theory we make out of this is going to involve property dualism. There is no way you can say the two sides of the equation are the same thing.

The business with the monads is about saying that the physical description in terms of configurations in space is wrong anyway; physical reality is instead a causal network of objects with abstractly complex internal states. That's very underspecified, but it also gives the phenomenological structure a chance to be the ontological structure, without any dualism. The "physical description" of the conscious mind is then just the mathematical characterization of the phenomenology, bleached of all ontological specifics, so it's just "entity X with properties a,b,c, in relation R to entity Y". If we can find a physics in which there are objects with states whose internal structure can be directly mapped onto the structure of phenomenological states, and only if we can do that, then we can have a nondualistic physical ontology.
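As a purely illustrative toy (my own, not a claim about the actual structure of experience or of any physical theory), here is what "directly mapping phenomenological structure onto physical structure" could mean in the most stripped-down formal sense: both sides are treated as small relational structures, and one looks for a relation-preserving correspondence between them.

```python
# Both "sides" are tiny relational structures: entities plus named binary relations.
# The contents below are arbitrary placeholders; only the mapping idea matters.

from itertools import permutations

phenomenal = {
    "entities": ["red_patch", "sound", "self"],
    "relations": {"attended_by": [("red_patch", "self"), ("sound", "self")]},
}
physical = {
    "entities": ["state_a", "state_b", "state_c"],
    "relations": {"attended_by": [("state_b", "state_c"), ("state_a", "state_c")]},
}

def structure_maps(src, dst):
    """Yield bijections from src entities to dst entities that preserve every relation."""
    for perm in permutations(dst["entities"]):
        f = dict(zip(src["entities"], perm))
        if all((f[x], f[y]) in set(dst["relations"][name])
               for name, pairs in src["relations"].items()
               for (x, y) in pairs):
            yield f

for mapping in structure_maps(phenomenal, physical):
    print(mapping)  # e.g. red_patch -> state_a or state_b, self -> state_c
```

If no such mapping exists for the real structures, the nondualistic option fails; if one does, the "physical description" just is the phenomenological structure under another name.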

comment by Mitchell_Porter · 2012-08-09T05:23:11.048Z · LW(p) · GW(p)

I don't know where you got the part about representational equilibria from.

My conception of a monad is that it is "physically elementary" but can have "mental states". Mental states are complex so there's some sort of structure there, but it's not spatial structure. The monad isn't obtained by physically concatenating simpler objects; its complexity has some other nature.

Consider the Game of Life cellular automaton. The cells are the "physically elementary objects" and they can have one of two states, "on" or "off".

Now imagine a cellular automaton in which the state space of each individual cell is a set of binary trees of arbitrary depth. So the sequence of states experienced by a single cell, rather than being like 0, 1, 1, 0, 0, 0,... might be more like (X(XX)), (XX), ((XX)X), (X(XX)), (X(X(XX)))... There's an internal combinatorial structure to the state of the single entity, and ontologically some of these states might even be phenomenal or intentional states.

Finally, if you get this dynamics as a result of something like the changing tensor decomposition of one of those quantum CAs, then you would have a causal system which mathematically is an automaton of "tree-state" cells, ontologically is a causal grid of monads capable of developing internal intentionality, and physically is described by a Hamiltonian built out of Pauli matrices, such as might describe a many-body quantum system.

Furthermore, since the states of the individual cell can have great or even arbitrary internal complexity, it may be possible to simulate the dynamics of a single grid-cell in complex states, using a large number of grid-cells in simple states. The simulated complex tree-states would actually be a concatenation of simple tree-states. This is the "network of a billion simple monads simulating a single complex monad".
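A minimal sketch of that tree-state automaton, with an arbitrary placeholder update rule of my own (graft the left neighbour's tree onto your own, then prune to a bounded depth); the only point is that each cell's state has internal combinatorial structure, unlike the on/off cells of the Game of Life:

```python
# Trees are nested tuples: 'X' is a leaf, (a, b) joins two subtrees.
LEAF = 'X'

def prune(t, max_depth=3):
    """Cut a tree off at a fixed depth so states stay finite (placeholder rule)."""
    if t == LEAF or max_depth == 0:
        return LEAF
    return (prune(t[0], max_depth - 1), prune(t[1], max_depth - 1))

def step(cells):
    """One synchronous update of a ring of tree-valued cells."""
    n = len(cells)
    return [prune((cells[i - 1], cells[i])) for i in range(n)]

def show(t):
    return 'X' if t == LEAF else '(' + show(t[0]) + show(t[1]) + ')'

cells = [LEAF, (LEAF, LEAF), ((LEAF, LEAF), LEAF)]
for _ in range(3):
    print(' '.join(show(c) for c in cells))   # e.g. X (XX) ((XX)X)
    cells = step(cells)
```

The dynamics here are not meant to be the quantum-derived ones described above; the sketch only shows what "an automaton whose single cells have combinatorially structured states" looks like.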

comment by CarlShulman · 2012-08-08T18:42:24.660Z · LW(p) · GW(p)

Do you think that the outputs of human philosophers of mind, or physicists thinking about consciousness, can't be accurately modeled by computational processes, even with access to humans? If they can be predicted or heard, then they can be deferred to.

Replies from: Wei_Dai, fubarobfusco, Mitchell_Porter
comment by Wei Dai (Wei_Dai) · 2012-08-08T22:59:34.710Z · LW(p) · GW(p)

CEV is supposed to extrapolate our wishes "if we knew more", and the AI may be so sure that consciousness doesn't really exist in some fundamental ontological sense that it will override human philosophers' conclusions and extrapolate them as if they also thought consciousness doesn't exist in this ontological sense. (ETA: I think Eliezer has talked specifically about fixing people's wrong beliefs before starting to extrapolate them.) I share a similar concern, not so much about this particular philosophical problem, but that the AI will be wrong on some philosophical issue and reach some kind of disastrous or strongly suboptimal conclusion.

Replies from: Pentashagon
comment by Pentashagon · 2012-08-09T20:06:14.018Z · LW(p) · GW(p)

I share a similar concern, not so much about this particular philosophical problem, but that the AI will be wrong on some philosophical issue and reach some kind of disastrous or strongly suboptimal conclusion.

There's a possibility that we are disastrously wrong about our own philosophical conclusions. Consciousness itself may be ethically monstrous in a truly rational moral framework. Especially when you contrast the desire for immortality with the heat death. What is the utility of 3^^^3 people facing an eventual certain death versus even just 2^^^2 or a few trillion?

I don't think there's a high probability that consciousness itself will turn out to be the ultimate evil but it's at least a possibility. A more subtle problem may be that allowing consciousness to exist in this universe is evil. It may be far more ethical to only allow consciousness inside simulations with no defined end and just run them as long as possible with the inhabitants blissfully unaware of their eventual eternal pause. They won't cease to exist so much as wait for some random universe to exist that just happens to encode their next valid state...

Replies from: moridinamael
comment by moridinamael · 2012-08-10T00:51:48.140Z · LW(p) · GW(p)

They won't cease to exist so much as wait for some random universe to exist that just happens to encode their next valid state...

You could say the same of anyone who has ever died, for some sense of "valid" ... This, and similar waterfall-type arguments lead me to suspect that we haven't satisfactorily defined what it means for something to "happen."

Replies from: Pentashagon
comment by Pentashagon · 2012-08-10T18:36:15.240Z · LW(p) · GW(p)

You could say the same of anyone who has ever died, for some sense of "valid" ... This, and similar waterfall-type arguments lead me to suspect that we haven't satisfactorily defined what it means for something to "happen."

It depends on the natural laws the person lived under. The next "valid" state of a dead person is decomposition. I don't find the waterfall argument compelling because the information necessary to specify the mappings is more complex than the computed function itself.

comment by fubarobfusco · 2012-08-08T19:32:57.809Z · LW(p) · GW(p)

I'm hearing an invocation of the Anti-Zombie Principle here, i.e.: "If simulations of human philosophers of mind will talk about consciousness, they will do so for the same reasons that human philosophers do, namely, that they actually have consciousness to talk about" ...

Replies from: CarlShulman
comment by CarlShulman · 2012-08-08T20:18:30.815Z · LW(p) · GW(p)

I'm hearing an invocation of the Anti-Zombie Principle here, i.e.: "If simulations of human philosophers of mind will talk about consciousness, they will do so for the same reasons that human philosophers do,

Yes.

namely, that they actually have consciousness to talk about" ...

Not necessarily, in the mystical sense.

Replies from: fubarobfusco
comment by fubarobfusco · 2012-08-08T20:46:30.314Z · LW(p) · GW(p)

Okay, to clarify: If 'consciousness' refers to anything, it refers to something possessed both by human philosophers and accurate simulations of human philosophers. So one of the following must be true: ① human philosophers can't be accurately simulated, ② simulated human philosophers have consciousness, or ③ 'consciousness' doesn't refer to anything.

Replies from: CarlShulman
comment by CarlShulman · 2012-08-08T21:02:01.476Z · LW(p) · GW(p)

Dualists needn't grant your first sentence, claiming epiphenomena. I am talking about whether mystical mind features would screw up the ability of an AI to carry out our aims, not arguing for physicalism (here).

comment by Mitchell_Porter · 2012-08-09T04:56:12.548Z · LW(p) · GW(p)

I agree that a totally accurate simulation of a philosopher ought to arrive at the same conclusions as the original. But a totally accurate simulation of a human being is incredibly hard to obtain.

I've mentioned that I have a problem with outsourcing FAI design to sim-humans, and that I have a problem with the assumption of "state-machine materialism". These are mostly different concerns. Outsourcing to sim-humans is just wildly impractical, and it distracts real humans from gearing up to tackle the problems of FAI design directly. Adopting state-machine materialism is something you can do, right now, and it will shape your methods and your goals.

The proverbial 500-subjective-year congress of sim-philosophers might be able to resolve the problem of state-machine materialism for you, but then so would the discovery of communications from an alien civilization which had solved the problem. I just don't think you can rely on either method, and I also think real humans do have a chance of solving the ontological problem by working on it directly.

comment by Decius · 2012-08-08T20:46:53.559Z · LW(p) · GW(p)

The simple solution is to demystify qualia: I don't understand the manner in which ionic transfer within my brain appears to create sensation, but I don't have to make the jump from that to 'sensation and experience are different from brain state'. All of my sense data comes through channels - typically as an ion discharge through a nerve or a chemical in my blood. Those ion discharges and chemicals interact with brain cells in a complicated manner, and "I" "experience" "sensation". The experience and sensation are no more mysterious than the identity.

comment by Giles · 2012-08-08T23:05:55.344Z · LW(p) · GW(p)

I find Mitchell_Porter difficult to understand, but I've voted this up just for the well-written summary of the SI's strategy (can an insider tell me whether the summary is accurate?)

Just one thing though - I feel like this isn't the first time I've seen How An Algorithm Feels From Inside linked to as if it was talking about qualia - which it really isn't. It would be a good title for an essay about qualia, but the actual text is more about general dissolving-the-question stuff.

Replies from: David_Gerard
comment by David_Gerard · 2012-08-08T23:27:03.607Z · LW(p) · GW(p)

What is the expansion of your usage of "qualia"? The term used without more specificity is too vague when applied to discussions of reducibility and materialism (and I did just check SEP on the matter and it marked the topic "hotly debated"); there is a meaning that is in philosophical use which could indeed be reasonably described as something very like what How An Algorithm Feels From Inside describes.

Replies from: haig, Giles
comment by haig · 2012-08-10T03:46:54.966Z · LW(p) · GW(p)

"How an algorithm feels from inside" discusses a particular quale, that of the intuitive feeling of holding a correct answer from inside the cognizing agent. It does not touch upon what types of physically realizable systems can have qualia.

Replies from: David_Gerard
comment by David_Gerard · 2012-08-10T06:35:01.887Z · LW(p) · GW(p)

Um, OK. What types of physically realizable systems can have qualia? Evidently I'm unclear on the concept.

Replies from: haig
comment by haig · 2012-08-10T06:42:05.415Z · LW(p) · GW(p)

That is the $64,000 question.

Replies from: David_Gerard
comment by David_Gerard · 2012-08-10T23:05:42.770Z · LW(p) · GW(p)

It's not yet clear to me that we're talking about anything that's anything. I suppose I'm asking for something that does make that a bit clearer.

Replies from: haig
comment by haig · 2012-08-11T01:08:59.357Z · LW(p) · GW(p)

Ok, so we can with confidence say that humans and other organisms with developed neural systems experience the world subjectively, maybe not exactly in similar ways, but conscious experience seems likely for these systems unless you are a radical skeptic or solipsist. Based on our current physical and mathematical laws, we can reductively analyse these systems and see how each subsystem functions, and, eventually, with sufficient technology we'll be able to have a map of the neural correlates that are active in certain environments and which produce certain qualia. Neuroscientists are on that path already.

But are only physical nervous systems capable of producing a subjective experience? If we emulate with enough precision a brain with sufficient input and output to an environment, computationalists assume that it will behave and experience the same as if it were a physical wetware brain. Given this assumption, we conclude that the simulated brain, which is just some machine code operating on transistors, has qualia. So now qualia are attributed to a software system. How much can we diverge from this perfect software emulation and still have some system that experiences qualia? From the other end, by building a cognitive agent piecemeal in software without reference to biology, what types of dynamics will cause qualia to arise, if at all? The simulated brain is just data, as is Microsoft Windows, but Windows isn't conscious, or so we think. Looking at the electrons moving through the transistors tells us nothing about which running software has qualia and which does not.

On the other hand, it might be the case that deeper physics beyond the classical must be involved for the system to have qualia. In that case, classical computers will be unable to produce software that experiences qualia and machines that exploit quantum properties may be needed. This is still speculative, but then the whole question of qualia is still speculative.

So now, when designing an AI that will learn and grow and behave in accordance with human values, how important is qualia for it to function along those lines? Can an unconscious optimizing algorithm be robust enough to act morally and shape a positive future for humanity? Will an unconscious optimizing algorithm, without the same subjectivity that we take for granted, be able to scale up in intelligence to the level we see in biological organisms, let alone humans and beyond, or is subjective experience necessary for the level of intelligence we have? If possible, will an optimizing algorithm actually become conscious and experience qualia after a certain threshold, and how does that affect its continued growth?

On a side note, my hypothetical friendly AGI project that would directly guarantee success without wondering about speculations on the limits of computation, qualia, or how to safely encode meta-ethics in a recursively optimizing algorithm, would be to just grow a brain in a vat as it were, maybe just neural tissue cultures on biochips with massive interconnects coupled to either a software or hardware embodiment, and design its architecture so that its metacognitive processes are hardwired for compassion and empathy. A bodhisattva in a box. Yes, I'm aware of all the fear-mongering regarding anthropomorphized AIs, but I'm willing to argue that the possibility space of potential minds, at least the ones we have access to create from our place in history, is greatly constricted and that this route may be the best, and possibly, the only way forward.

comment by Giles · 2012-08-09T14:36:00.682Z · LW(p) · GW(p)

there is a meaning that is in philosophical use which could indeed be reasonably described as something very like what How An Algorithm Feels From Inside describes.

Can you explain?

Replies from: David_Gerard
comment by David_Gerard · 2012-08-09T16:45:08.491Z · LW(p) · GW(p)

I'm not sure how to break that phrase down further. Section 3 of that SEP article covers the issue, but the writing is as awful as most of the SEP. It's a word for the redness of red as a phenomenon of the nervous system, which is broadly similar between humans (since complex adaptations have to be universal to evolve).

But all this is an attempt to rescue the word "qualia". Again, I suggest expanding the word to whatever it is we're actually talking about in terms of the problem it's being raised as an issue concerning.

comment by Risto_Saarelma · 2012-08-08T17:54:10.727Z · LW(p) · GW(p)

So what should I make of this argument if I happen to know you're actually an upload running on classical computing hardware?

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2012-08-09T05:05:10.299Z · LW(p) · GW(p)

That someone managed to produce an implausibly successful simulation of a human being.

There's no contradiction in saying "zombies are possible" and "zombie-me would say that zombies are possible". (But let me add that I don't mean the sort of zombie which is supposed to be just the physical part of me, with an epiphenomenal consciousness subtracted, because I don't believe that consciousness is epiphenomenal. By a zombie I mean a simulation of a conscious being, in which the causal role of consciousness is being played by a part that isn't actually conscious.)

Replies from: Risto_Saarelma, Cyan, None
comment by Risto_Saarelma · 2012-08-09T07:48:00.705Z · LW(p) · GW(p)

So if you accidentally cut the top of your head open while shaving and discovered that someone had gone and replaced your brain with a high-end classical computing CPU sometime while you were sleeping, you couldn't accept actually being an upload, since the causal structure that produces the thoughts that you are having qualia is still there? (I suppose you might object to the assumed-to-be-zombie upload you being referred to as 'you' as well.)

The reason I'm asking is that I'm a bit confused about exactly where the problems from just the philosophical part would come in with the outsourcing-to-uploaded-researchers scenario. Some kind of more concrete prediction, like that a neuromorphic AI architecturally isomorphic to a real human central nervous system just plain won't ever run as intended until you build a quantum octonion monad CPU to house the qualia bit, would be a much less confusing stance, but I don't think I've seen you take that.

comment by Cyan · 2012-08-09T14:21:06.372Z · LW(p) · GW(p)

I'm going to collect some premises that I think you affirm:

  • consciousness is something most or all humans have; likewise for the genes that encode this phenotype
  • consciousness is a quantum phenomenon
  • the input-output relation of the algorithm that the locus of consciousness implements can be simulated to arbitrary accuracy (with difficulty)
  • if the simulation isn't implemented with the right kind of quantum system, it won't be conscious

I have some questions about the implications of these assertions.

  • Do you think the high penetrance of consciousness is a result of founder effect + neutral drift or the result of selection (or something else)?
  • What do you think is the complexity class of the algorithm that the locus of consciousness implements?
  • If you answered "selection" to the first question, what factors do you think contributed to the selection of the phenotype that implements that algorithm in a way that induces consciousness as a "causal side-effect"?

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2012-08-11T15:25:43.746Z · LW(p) · GW(p)

It's anthropically necessary that the ontology of our universe permits consciousness, but selection just operates on state machines, and I would guess that self-consciousness is adaptive because of its functional implications. So this is like looking for an evolutionary explanation of why magnetite can become magnetized. Magnetite may be in the brain of birds because it helps them to navigate, and it helps them to navigate because it can be magnetized; but the reason that this substance can be magnetized has to do with physics, not evolution. Similarly, the alleged quantum locus may be there because it has a state-machine structure permitting reflective cognition, and it has that state-machine structure because it's conscious; but it's conscious because of some anthropically necessitated ontological traits of our universe, not because of its useful functions. Evolution elsewhere may have produced unconscious intelligences with brains that only perform classical computations.

Replies from: Cyan
comment by Cyan · 2012-08-11T16:38:39.997Z · LW(p) · GW(p)

this is like looking for an evolutionary explanation of why magnetite can become magnetized... the alleged quantum locus... [is] conscious because of some anthropically necessitated ontological traits of our universe, not because of its useful functions

I think you have mistaken the thrust of my questions. I'm not asking for an evolutionary explanation of consciousness per se -- I'm trying to take your view as given and figure out what useful functions one ought to expect to be associated with the locus of consciousness.

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2012-08-11T16:47:22.798Z · LW(p) · GW(p)

What does conscious cognition do that unconscious cognition doesn't do? The answer to that tells you what consciousness is doing (though not whether these activities are useful...).

comment by [deleted] · 2012-08-09T09:25:52.333Z · LW(p) · GW(p)

So if you observed such a classical upload passing exceedingly carefully designed and administered Turing tests, you wouldn't change your position on this issue? Is there any observation which would falsify your position?

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2012-08-10T04:21:37.391Z · LW(p) · GW(p)

Uploads are a distraction here. It's the study of the human brain itself which is relevant. I already claim that there is a contradiction between physical atomism and phenomenology, that a conscious experience is a unity which cannot plausibly be identified with the state of a vaguely bounded set of atoms. If you're a materialist who believes that the brain ultimately consists of trillions of simple particles, then I say that the best you can hope for, as an explanation of consciousness, is property dualism, based on some presently unknown criterion for saying exactly which atoms are part of the conscious experience and which ones aren't.

(I should emphasize that it would be literally nonsensical to say that a conscious experience is a physical part of the brain but that the boundaries of this part, the criteria for inclusion and exclusion at the atomic level, are vague. The only objectively vague things in the world are underspecified concepts, and consciousness isn't just a "concept", it's a fact.)

So instead I bet on a new physics where you can have complex "elementary" entities, and on the conscious mind being a single, but very complex, entity. This is why I talk about reconstructions of quantum mechanics in terms of tensor products of semilocalized Hilbert spaces of varying dimensionality, and so on. Therefore, the real test of these ideas will be whether they make sense biophysically. If they just don't, then the options are to try to make dualism work, or paranoid hypotheses like metaphysical idealism and the Cartesian demon. Or just to deny the existence and manifest character of consciousness; not an option for me, but evidently some people manage to do this.

comment by fubarobfusco · 2012-08-08T16:51:22.488Z · LW(p) · GW(p)

Although the abstract concept of a computer program (the abstractly conceived state machine which it instantiates) does not contain qualia, people often treat programs as having mind-like qualities, especially by imbuing them with semantics - the states of the program are conceived to be "about" something, just like thoughts are.

So far as I can tell, I am also in the set of programs that are treated as having mind-like qualities by imbuing them with semantics. We go to a good deal of trouble to teach people to treat themselves and others as people; this seems to be a major focus of early childhood education, language learning, and so on. "Semantics" and "aboutness" have to do with language use, after all; we learn how to make words do things.

Consciousness may not be a one-place property ("I am conscious"); it is a two-place property ("I notice that I am conscious"; "I notice that you are conscious"; "You notice that I am conscious"). After all, most of the time we are not aware of our consciousness.

Replies from: David_Gerard
comment by David_Gerard · 2012-08-08T23:36:02.862Z · LW(p) · GW(p)

Consciousness is not continuous - it appears to be something we retcon after the fact.

comment by timtyler · 2012-08-08T23:31:47.630Z · LW(p) · GW(p)

But the idea that a human being is a state machine running on a distributed neural computation is just a hypothesis, and I would argue that it is a hypothesis in contradiction with so much of the phenomenological data, that we really ought to look for a more sophisticated refinement of the idea.

I'm not aware of any "phenomenological data" that contradicts computationalism.

You have to factor in the idea that human brains have evolved to believe themselves to be mega-special and valuable. Once you have accounted for this, no phenomenological data contradicts computationalism.

Replies from: hankx7787
comment by hankx7787 · 2012-08-09T00:52:40.763Z · LW(p) · GW(p)

If you want to downvote this comment, I think you should provide some kind of rebuttal...

comment by Steve_Rayhawk · 2012-08-08T22:42:22.456Z · LW(p) · GW(p)

You need to do the impossible one more time, and make your plans bearing in mind that the true ontology [...] something more than your current intellectual tools allow you to represent.

With the "is" removed and replaced by an implied "might be", this seems like a good sentiment...

...well, given scenarios in which there were some other process that could come to represent it, such that there'd be a point in using (necessarily-)current intellectual tools to figure out how to stay out of those processes' way...

...and depending on the relative payoffs, and the other processes' hypothetical robustness against interference.

(To the extent that decomposing the world into processes that separately come to do things, and can be "interfered with" or not, makes sense at all, of course.)

A more intelligible argument than the specific one you have been making is merely "we don't know whether there are any hidden philosophical, contextual, or further-future gotchas in whether or not a seemingly valuable future would actually be valuable". But in that case it seems like you need a general toolset to try to eventually catch the gotcha hypotheses you weren't by historical accident already disposed to turn up, the same way algorithmic probability is supposed to help you organize your efforts to be sure you've covered all the practical implications of hypotheses about non-weird situations. As a corollary: it would be helpful to propose a program of phenomenological investigation that could be expected to cover the same general sort of amount of ground where possible gotchas could be lurking as would designing an AI to approximate a universal computational hypothesis class.

If it matters, the only scenario I can think of specifically relating to quantum mechanics is that there are forms of human communication which somehow are able to transfer qubits, that these matter for something, and that a classical simulation wouldn't preserve them at the input (and/or the other boundaries).

comment by moridinamael · 2012-08-08T16:28:06.035Z · LW(p) · GW(p)

In addition to being a great post overall, the first ~half of the post is a really excellent and compact summary of huge and complicated interlocking ideas. So, thanks for writing that, it's very useful to be able to see how all the ideas fit together at a glance, even if one already has a pretty good grasp of the ideas individually.

I've formed a tentative hypothesis that some human beings experience their own subjective consciousness much more strongly than others. An even more tentative explanation for why this might happen is that perhaps the brain regions responsible for reflective self-awareness, leading to that particular Strange Loop, become activated or used more intensely in some people, and at an earlier age. Perhaps thinking a lot about "souls" at a young age causes you to strongly anchor every aspect of your awareness to a central observer-concept, including awareness of that observer-concept.

I don't know.

Anyway, this subpopulation of highly self-conscious people might feel strongly that their own qualia are, in fact, ontologically fundamental, and all their other sense data are illusory. (I say "other" sense data because fundamentally your prefrontal cortex has no idea which objects of its input stream represent what we would naively classify as processed "sense data" versus which objects consist of fabrication and abstraction from other parts of the brain.)

The rest of the human population would be less likely to make statements about their subjective experience of their own awareness because that experience is less immediate for them.

I developed this hypothesis upon realizing that some people immediately know what you mean when you start talking about reflective self-awareness, and for these people the idea of an observer-spirit consciousness seems like a natural and descriptive explanation of their awareness. But then there are other people who look at you blankly when you make statements about your sense of yourself as a floating observer, no matter how you try to explain it, as if they had really never reflected on their own awareness.

For myself, I think of "redness" as belonging to the same ontological class as an object in a C++ program - possessing both a physical reality in bits on a chip and complex representational properties within the symbolic system of C++. And I see no reason why my C++ program couldn't ultimately form an 'awareness' object which then becomes aware of itself. And that could explain sentences the C++ program outputs about having an insistent sense of its own awareness. It actually sticks in my craw to say this. I, personally, have always had a strong sense of my own awareness and a powerful temptation to believe that there is some granular soul within me. I am not entirely satisfied with how this soul is treated by a reductionistic approach, but nor can I formulate any coherent objections, despite my best efforts.
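For illustration only (sketched in Python rather than C++, just for brevity): an object that keeps a representation of itself and reports on its own state. Nothing here is claimed to be conscious; the point is just that this kind of self-reference is cheap to implement.

```python
# A toy "awareness" object: it maintains a model of itself and can describe that model.

class Awareness:
    def __init__(self):
        self.observations = []
        self.self_model = {"kind": "Awareness", "observation_count": 0}

    def observe(self, percept):
        self.observations.append(percept)
        # Update the model the object keeps of its own state.
        self.self_model["observation_count"] = len(self.observations)

    def report(self):
        count = self.self_model["observation_count"]
        return f"I am an {self.self_model['kind']} object and I have recorded {count} observations."

a = Awareness()
a.observe("redness")
print(a.report())
```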

ed: Are people literally downvoting every reply that has anything good to say about the parent?

comment by Alejandro1 · 2012-08-08T14:19:44.326Z · LW(p) · GW(p)

Upvoted for clarity.

I think, along with most LWers, that your concerns about qualia and the need for a new ontology are mistaken. But even granting that part of your argument, I don't see why it is problematic to approach the FAI problem through simulation of humans. Yes, you would only be simulating their physical/computational aspects, not the ineffable subjectiveness, but does that loss matter, for the purposes of seeing how the simulations react to different extrapolations and trying to determine CEV? Only if a) the qualia humans experience are related to their concrete biology and not to their computational properties, and b) the relation is two-way, so the qualia are not epiphenomenal to behavior but affect it causally, and physics as we understand it is not causally closed. But in that case, you would not be able to make a good computational simulation of a human's behavior in the first place!

In conclusion, assuming that faithful computational simulations of human behavior are possible, I don't see how the qualia problem interferes with using them to determine CEV and/or help program FAI. There might be other problems with this line of research (I am not endorsing it) but the simulations not having an epiphenomenal inner aspect that true humans have does not interfere. (In fact, it is good--it means we can use simulations without ethical qualms!)

Replies from: Filipe
comment by Filipe · 2012-08-08T21:26:28.178Z · LW(p) · GW(p)

This seems essentially the same answer as the most upvoted comment on the thread. Yet, you were at -2 just a while ago. I wonder why.

Replies from: Alejandro1
comment by Alejandro1 · 2012-08-08T22:00:29.131Z · LW(p) · GW(p)

I wondered too, but I don't like the "why the downvotes?" attitude when I see it in others, so I refrained from asking. (Fundamental attribution error lesson of the day: what looks like a legitimate puzzled query from the inside, looks like being a whiner from the outside).

My main hypothesis was that the "upvoted for clarity" may have bugged some who saw the original post as obscure. And I must admit that the last paragraphs were much more obscure than the first ones.

comment by scientism · 2012-08-08T20:31:33.702Z · LW(p) · GW(p)

I think you're on the right track with your initial criticisms but qualia is the wrong response. Qualia is a product of making the same mistake as the platonists; it's reifying the qualities of objects into mental entities. But if you take the alternative (IMO correct) approach - leaving qualities in the world where they belong - you get a similar sort of critique because clearly reducing the world to just the aspects that scientists measure is a non-starter (note that it's not even the case that qualitative aspects can't be measured - i.e., you can identify colours using standardised samples - it's simply that these don't factor into the explanations reductionists want to privilege). This is where the whole qualia problem began: Descartes wanted to reduce the world to extended bodies and so he had to hide all the stuff that didn't fit in a vastly expanded concept of mind.

comment by JenniferRM · 2012-08-10T20:20:39.556Z · LW(p) · GW(p)

I've been trying to find a way to empathically emulate people who talk about quantum consciousness for a while, so far with only moderate success. Mitchell, I'm curious if you're aware of the work of Christof Koch and Giulio Tononi, and if so, could you speak to their approach?

For reference (if people aren't familiar with the work already) Koch's team is mostly doing experiments... and seems to be somewhat close to having mice that have genes knocked out so that they "logically would seem" to lack certain kinds of qualia that normal mice "logically would seem" to have. Tononi collaborates with him and has proposed a way to examine a thing that computes and calculates that thing's "amount of consciousness" using a framework he called Integrated Information Theory. I have not sat down and fully worked out the details of IIT such that I could explain it to a patient undergrad at a chalkboard, but the reputation of the people involved is positive (I've seen Koch's dog and pony show a few times and it has improved substantially over the years and he is pimping Tononi pretty effectively)... basically the content "smells promising" but I'm hoping I can hear someone else's well informed opinion to see if I should spend more time on it.

Also, it seems to be relevant to this philosophic discussion? Or not? That's what I'm wondering. Opinions appreciated :-)

Replies from: shminux, Mitchell_Porter
comment by shminux · 2012-08-20T20:26:08.961Z · LW(p) · GW(p)

It bugs me when people talk about "quantum consciousness", given that classical computers can do anything quantum computers can do, only sometimes slower.

comment by Mitchell_Porter · 2012-08-12T05:44:04.228Z · LW(p) · GW(p)

IIT's measure of "information integration", phi, is still insufficiently exact to escape the "functionalist sorites problem". It could be relevant for a state-machine analysis of the brain, but I can't see it being enough to specify the mapping between physical and phenomenological states. Also, Tononi's account of conscious states seems to be just at the level of sensation. But this is an approach which could converge with mine if the right extra details were added.

I've been trying to find a way to empathically emulate people who talk about quantum consciousness

"We" are a heterogeneous group. Chopra and Penrose - not much in common. Besides, even if you believe consciousness can arise from classical computation but you also believe in many worlds, then quantum concepts do play a role in your theory of mind, in that you say that the mind consists of interactions between distinct states of decohered objects. Figure out how Tononi's "phi" could be calculated for the distinct branches of a quantum computer, and lots of people will want to be your friend.

Replies from: JenniferRM
comment by JenniferRM · 2012-08-13T07:21:28.854Z · LW(p) · GW(p)

If I understand what you're calling the "functionalist sorites problem", it seems to me that Integrated Information Theory is meant to address almost exactly that issue, with its "phi" parameter being a measure of something like the degree (in bits) to which an input is capable of exerting influence over a behavioral outcome.
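For concreteness, here is a minimal sketch of that "bits of influence" intuition in Python. This is only a toy effective-information calculation for a deterministic system driven by a uniform input distribution, not Tononi's full phi (which also minimizes over partitions of the system); the function and example names are invented for illustration.

```python
import itertools
import math
from collections import Counter

def effective_information(transition, n_bits):
    # Drive the n_bits inputs with a maximum-entropy (uniform) distribution and
    # measure how many bits of that variation survive into the output. For a
    # deterministic map this equals the entropy of the output distribution.
    inputs = list(itertools.product([0, 1], repeat=n_bits))
    outputs = Counter(transition(x) for x in inputs)
    total = len(inputs)
    return -sum((c / total) * math.log2(c / total) for c in outputs.values())

def copy_two(x):
    return (x[0], x[1])   # two bits of input influence survive

def constant(x):
    return (0, 0)         # no input influence at all

print(effective_information(copy_two, 3))  # -> 2.0 bits
print(effective_information(constant, 3))  # -> 0 bits (printed as -0.0)
```

The point of the toy is just that "how much input variation makes a difference to the output" is a graded, well-defined quantity for a given system, which is the flavor of thing phi is supposed to measure.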

Moreover, qualia, at least as I seem to experience them, are non-binary. Merely hearing the word "red" causes aspects of my present environment to leap to salience in a way that I associate with those facets of the world being more able to influence my subsequent behavior... or to put it much more prosaically: reminders can, in fact, bring reminded content to my attention and thereby actually work. Equally, however, I frequently notice my output having probably been influenced by external factors that were in my consciousness to only a very minor degree such that it would fall under the rubric of priming. Maybe this is ultimately a problem due to generalizing from one example? Maybe I have many gradations of conscious awareness and you have binary awareness and we're each assuming homogeneity where none exists?

Solving a fun problem and lots of people wanting to be my friend sounds neat... like a minor goad to working on the problem in my spare time and seeing if I can get a neat paper on it? But I suspect you're overestimating people's interest, and I still haven't figured out the trick of being paid well to play with ideas, so until then, schema inference software probably pays the bills more predictably than trying to rid the world of quantum woo. There are about 1000 things I could spend the next few years on, and I only get to do maybe 2-5 of them, and then only in half-assed ways unless I settle on ONLY one of them. Hobby quantum consciousness research is ~8 on the list and unlikely to actually get many brain cycles in the next year :-P

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2012-08-13T08:04:40.924Z · LW(p) · GW(p)

qualia ... are non-binary

I posed the functionalist sorites problem in the form of existence vs nonexistence of a specific quale, but it can equally be posed in the form of one state of consciousness vs another, where the difference may be as blatant or as subtle as you wish.

The question is, what are the exact physical conditions under which a completely specific quale or state of consciousness exists? And we can highlight the need for exactness, by asking at the same time what the exact conditions are, under which no quale occurs, or under which the other state of consciousness occurs; and then considering edge cases, where the physical conditions are intermediate between one vague specification and another vague specification.

For the argument to work, you must be clear on the principle that any state of consciousness is exactly something, even if we are not totally aware of it or wouldn't know how to completely describe it. This principle - which amounts to saying that there is no such thing as entities which are objectively vague - is one that we already accept when discussing physics, I hope.

Suppose we are discussing what the position of an unmeasured electron is. I might say that it has a particular position; I might say that it has several positions or all positions, in different worlds; I might say that it has no position at all, that it just isn't located in space right now. All of those are meaningful statements. But to say that it has a position, but it doesn't have a particular position, is conceptually incoherent. It doesn't designate a possibility. It most resembles "the electron has no position at all", but then you don't get to talk as if the electron nonetheless has a (nonspecific) position at the same time as not actually having a position.

The same principle applies to conscious experience. The quale is always a particular quale, even if you aren't noticing its particularities.

Now let us assume for the moment that this principle of non-vagueness is true for all physical states and all phenomenological states. That means that when we try to understand the conditions under which physical states and phenomenological states are related, we are trying to match up two sets of definite "things".

The immediate implication is that any definite physical state will be matched with a definite phenomenology (or with no phenomenology at all). Equally it implies that any definite phenomenological state will correspond to a definite physical state or to a set of definite physical states. The boundary between "physical states corresponding to one phenomenological state", and "physical states corresponding to another phenomenological state", must be sharp. The only way to avoid a sharp boundary is if there's a continuum on both sides - a continuum of physical states, and a continuum of phenomenological states - but again there must be an exact mapping between them, because of non-vagueness.

IIT does not provide an exact mapping because it doesn't really concern itself with exact microphysical facts, like exact microphysical states, or exact microscopic boundaries between the physical systems that are coupled to each other. Everything is just being described in a coarse-grained fashion; which is fine for computational or other practical causal analyses.

I don't think I would find many people willing to defend the position that conscious states are objectively vague. I also wouldn't find many willing to say that any law of correspondence between physical and phenomenological states must be exact on the microphysical level. But this is the implication of the principle of ontological non-vagueness, applied to both sides of the equation.

Replies from: JenniferRM
comment by JenniferRM · 2012-08-13T17:19:01.805Z · LW(p) · GW(p)

Someone downvoted you, but I upvoted you to correct it. I only downvote when I think there is (1) bad faith communication or (2) an issue above LW's sanity line is being discussed tactlessly. Neither seems to apply here.

That said, I think you just made a creationist "no transitional forms" move in your argument? A creationist might deny that 200-million-year-separated organisms, seemingly obviously related by descent, are really related at all, insisting they belong to magically/essentially distinct "kinds". There's a gap between them! When pressed (say by being shown some intermediate forms that have been found given the state of the scientific excavation of the crust) a creationist could point in between each intermediate form to more gaps, which might naively seem to make their "gaps exist" point a stronger argument against the general notion of "evolution by natural selection". But it doesn't. It's not a stronger argument thereby, but a weaker one.

Similarly, you seem to have a rhetorical starting point where you verbally deploy the law of the excluded middle to say that either a quale "is or is not" experienced due to a given micro-physical configuration state (notice the similarity of focusing on simplistic verbal/propositional/logical modeling of rigid "kinds" or "sets" with magically perfect inclusion/exclusion criteria). I pushed on that and you backed down. So it seems like you've retreated to a position where each verbally distinguishable level of conscious awareness should probably have a different physical configuration, and in fact this is what we seem to observe with things like fMRI...if you squint your eyes and acknowledge limitations in observation and theory that are being rectified by science even as we write. We haven't nanotechnogically consumed the entire crust of the earth to find every fossil, and we haven't simulated a brain yet, but these things may both be on the long term path of "the effecting of all things possible".

My hope in trying to empathically emulate people who take quantum consciousness seriously is that I'll gain a new insight... but it is hard because mostly what I see is things I find very easy to interpret as second rate thinking (like getting confused in abstractions of philosophical handwaving) while ignoring the immanent physical vastness of the natural world (with its trillions of moving parts that have had billions of years of selection to become optimized) in the manner of Penrose and Searle and so on. I want there to be something interesting to understand. Some "aha moment" when it all snaps into place, and I don't want that moment to be a final decisive insight into what's going wrong in your heads that makes you safe to write off...

The only way to avoid a sharp boundary is if there's a continuum on both sides - a continuum of physical states, and a continuum of phenomenological states - but again there must be an exact mapping between them, because of non-vagueness.

This just sorta sounds to me like you've been infected with dualism and have no other metaphysical theories to play off against the dualistic metaphysics in your head. I might try to uncharitably translate you to be saying something like "I have a roughly verbalizable model of my phenomenological experience of my brain states for a brain that is self-monitoring, self-regulating, world-modeling, agents-in-world-modeling, self-as-agent-modeling, and behavior-generating (and btw, I promoted my model to 'ontological realness' and started confusing my experience of my belief in ontologically non-physical mental states with there being a platonic ghost in my head or something), but brains and my ghost model are both really complicated, and it seems hard to map them into each other with total fidelity... and this means that brains must be very magic, you might say quantumly magic, in order to match how confused I am about the lack of perfect match between my ghost model and my understanding of the hardware that might somehow compute the ghost model... and since my ghost model is ontologically real this means there are ghosts... in my brain... because it's a quantum brain... or something... I'm not sure..."

I want something to fall out of conversation with (and reading of) quantum consciousness theorists that shows that something like a quantum Fourier transform is running on our neurons to allow "such and such super powers" to be demonstrated by humans, with a run time in our brains that clearly beats what would be possible for a classical Turing machine. What would classical-Turing zombies look like that is different from how quantum-soulful people would look? All I can hear is mediocre philosophy of mind. I think? I don't intend meanness.

I'm just trying to communicate the problem I'm having hearing whatever it is that you're really trying to say that makes sense to you. I'm aware of inferential distances and understand that I might need to spend 200 weekends (which would make it a four year hobby project) reading traditionally-understood-as-boring non-fiction to understand what you're saying, but my impression is that no such course of reading exists for you to point me towards... which would be weak but distinct evidence for you being confused rather than me being ignorant.

Is there something I should read? What am I missing?

ETA: I re-read this and find my text to be harsher than I'd like. I really don't want this to be harsh, but actually want enlightenment here and find myself groping for words that will get you to engage with my vocabulary and replace an accessible but uncharitable interpretation in my head with a better theory. If you'd like to not respond in public, PM me your email and I'll respond via that medium? Maybe IRC would be better, to reduce the latency on vocabulary development?

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2012-08-13T23:04:04.464Z · LW(p) · GW(p)

I think you just made a creationist "no transitional forms" move in your argument?

No, I explicitly mentioned the idea that there might be a continuum of possible quale states; you even quoted the sentence where I brought it up. But it is irrelevant to my argument, which is that for a proposed mapping between physical and phenomenological states to have any chance of being true, it must possess an extension to an exact mapping between fundamental microphysical states and phenomenological states (not necessarily a 1-to-1 mapping) - because the alternative is "objective vagueness" about which conscious state is present in certain physical configurations - and this requirement is very problematic for standard functionalism based on vaguely defined mesoscopic states, since any specification of how all the edge cases correspond to the functional states will be highly arbitrary.

Let me ask you this directly: do you think it would be coherent to claim that there are physical configurations in which there is a state of consciousness present, but it's not any particular state of consciousness? It doesn't have to be a state of consciousness that we presently know how to completely characterize, or a state of consciousness that we can subjectively discriminate from all other possible states of consciousness; it just has to be a definite, particular state of consciousness.

If we agree that ontological definiteness of physical state implies ontological definiteness in any accompanying state of consciousness (again I'll emphasize that this is ontological definiteness, not phenomenological definiteness; I must allow for the fact that states of consciousness have details that aren't noticed by the experiencer), then that immediately implies the existence of an exact mapping from microphysically exact states to ontologically definite states of consciousness. Which implies an inverse mapping from ontologically definite states of consciousness, to a set of exact microphysical states, which are the physical states (or state, there might only be one) in which that particular state of consciousness is realized.

Replies from: JenniferRM
comment by JenniferRM · 2012-08-14T02:35:29.839Z · LW(p) · GW(p)

OK, I hope I'm starting to get it. Are you looking for a basis to power a pigeonhole argument about equivalence classes?

If we're going to count things, then a potential source of confusion is that there are probably more ontologically distinct states of "being consciously depressed" than can be detected from the inside, because humans just aren't very good at internal monitoring and stuff, but that doesn't mean there aren't differences that a Martian with Awesome Scanning Equipment could detect. So a mental patient could be phenomenologically depressed in a certain way and say "that feeling I just felt was exactly the same feeling as in the past, modulo some mental trivia about vaguely knowing it is Tuesday rather than Sunday", and the Martian anthropologist might check the scanner logs and truthfully agree, but more likely the Martian would truthfully say, "Technically no: you were more consciously obsessed about your ex-boyfriend than you were consciously obsessed about your cellulite, which is the opposite ordering of every time in the past, though until I said this you were not aware of this difference in your awareness", and then the patient might introspect based on the statement and say "Huh, yeah, I guess you're right, curious that I didn't notice that from the inside while it was happening... oh well, time for more crying now..." And in general, absent some crazy sort of phenomenological noise source, there are almost certainly fewer phenomenologically distinct states than ontologically distinct states.

So then the question arises as to how the Martian's "ontology monitoring" scanner worked.

It might have measured physical brain states via advanced but ultimately prosaic classical-Turing-neuron technology, or it might have used some sort of quantum-chakra-scanner that detects qualia states directly. Perhaps it has both, and can run either or both scanners and compare their results over time? One of them can report that a stray serotonin molecule was different, and the other can identify an ontologically distinct feeling of satisfaction. Which leads to a second question of number: can the quantum chakra scanner detect exactly the same cardinality of qualia states as the classical Turing scanner can detect brain states? If I'm reconstructing/interpreting your claim properly, this starts to get at the heart of a sort of "quantum qualia pigeonhole puzzle"?

Except even if this is what you're proposing, I don't see how it implies quantum stuff is likely to be very important...

If the scanners give exactly the same counts, that would be surprising and probably very few people expect this outcome because there are certainly unconscious mental processes and those are presumably running on "brain tissue" and hence contribute to brain state counts but not qualia state counts.

So the likely answer is that there are fewer qualia states than brain states. Conversely, if somehow there were more qualia states than brain states, then I think that would be evidence for "mind physics" above and beyond "particle physics", and upon learning of the existence of a physics that includes ontologically real Cartesian mental entities that run separately from but affect raw brain matter... well, then I guess my brain would explode... and right afterwards I'd get curious about how "computational chakronics" works :-)

Assuming the Martian's scanners came out with more brain-states than qualia-states, this would confirm my expectations, and would also confirm the (already dominant?) theory that there was something interesting about the operation, interconnection, and/or embodied-embedding of certain kinds of brain tissue in the relatively boring way that is the obvious target of research for computationally-inspired neuro-physical science. This is what all the fMRIs and Halle Berry neuron probing and face/chalice experiments are for.

A result of |brainstates| > |qualiastates| would be consistent with the notion that consciousness is "substrate independent" in potentially two ways. First, it might allow us to port the "adaptively flexible self monitoring conscious architecture dynamic" to a better medium by moving the critical patterns of interaction to microchips or something (allowing us to drop all the slimy proteins and the ability to be denatured at 80 Celsius and so on). Second, it might allow us to replace significant chunks of nervous tissue (spinal tissue and retina and so on) with completely different and better stuff without even worrying, because they probably aren't even involved in "consciousness" except as simple data pipes.

Which implies an inverse mapping from ontologically definite states of consciousness, to a set of exact microphysical states, which are the physical states (or state, there might only be one) in which that particular state of consciousness is realized.

This would be pretty spooky to me if it was possible. My current expectations (call this B>Q>P) are:

|brainstates| > |qualiastates| > |phenomenologystates|

If my expected ordering is right, then an inverse mapping from qualiastates to brain states should be impossible by the pigeonhole principle... and then substrate independence probably "goes through". Quantum mechanics, in this model, could totally be "just a source of noise", with some marginal value as a highly secure random number generator to use in mixed strategies, but this result would be perfectly consistent with quantum effects mostly existing as a source of error that makes it harder to build a classical computation above it that actually does cognitive work rather than merely thrashing around doing "every possible thing".

I mean... quantum stuff could still matter if B>Q>P is true. Like it might be involved in speedup tricks for some neural algorithms that we haven't yet understood? But it doesn't seem like it would be an obvious source of "magical qualia chakras" that make the people who have them more conscious in a morally-important ghost-in-the-brain way that would be lost from porting brain processes to a faster and more editable substrate. If it does, then that result is probably really really important (hence my interest)... it just seems very unlikely to me at the present time.

Are we closer to coherent now? Do we have a simple disagreement of expectations that can be expressed in a mutually acceptable vocabulary? That seems like it would be progress, if true :-)

Replies from: Tyrrell_McAllister, Mitchell_Porter
comment by Tyrrell_McAllister · 2012-08-15T22:28:32.011Z · LW(p) · GW(p)

Which implies an inverse mapping from ontologically definite states of consciousness, to a set of exact microphysical states, which are the physical states (or state, there might only be one) in which that particular state of consciousness is realized.

This would be pretty spooky to me if it was possible. My current expectations (call this B>Q>P) are:

|brainstates| > |qualiastates| > |phenomenologystates|

If my expected ordering is right, then an inverse mapping from qualiastates to brain states should be impossible by the pigeonhole principle...

I think that there was a miscommunication here. To be strictly correct, Mitchell should have written "Which implies an inverse mapping from ontologically definite states of consciousness, to sets of exact microphysical states...". His additional text makes it clear that he's talking about a map f sending every qualia state q to a set f(q) of brain states, namely, the set of brain states b such that being in brain state b implies experiencing qualia state q. This is consistent with the ordering B>Q>P that you expect.
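A tiny sketch of the distinction, with invented labels, may help: the underlying map from brain states to qualia states is many-to-one, and the "inverse" sends each qualia state to the set of brain states that realize it, which is perfectly compatible with there being more brain states than qualia states.

```python
# Toy illustration only; the state labels are made up.
brain_to_qualia = {            # g : B -> Q, many-to-one
    "b1": "red-ish",
    "b2": "red-ish",
    "b3": "blue-ish",
    "b4": "blue-ish",
    "b5": "blue-ish",
}

def realizers(qualia_state):
    """f(q) = the set of brain states b such that g(b) = q."""
    return {b for b, q in brain_to_qualia.items() if q == qualia_state}

print(realizers("red-ish"))   # {'b1', 'b2'}
print(realizers("blue-ish"))  # {'b3', 'b4', 'b5'}
```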

comment by Mitchell_Porter · 2012-08-15T05:20:25.375Z · LW(p) · GW(p)

This is not about counting the number of states. It is about disallowing vagueness at the fundamental level, and then seeing the implications of that for functionalist theories of consciousness.

A functionalist theory of consciousness says that a particular state of consciousness occurs, if and only if the physical object is in a particular "functional state". If you classify all the possible physical states into functional states, there will be borderline cases. But if we disallow vagueness, then every one of those borderline cases must correspond to a specific state of consciousness.

Someone with no hair is bald, someone with a head full of hair is not bald, yet we don't have a non-arbitrary criterion for where the exact boundary between bald and not-bald lies. This doesn't matter because baldness is a rough judgment and not an objective property. But states of consciousness are objective, intrinsic attributes of the conscious being. So objective vagueness isn't allowed, and there must be a definite fact about which conscious state, if any, is present, for every possible physical state.

If we are employing the usual sort of functionalist theory, then the physical variables defining the functional states will be bulk mesoscopic quantities, there will be borderline areas between one functional state and another, and any line drawn through a borderline area, demarcating an exact boundary, just for the sake of avoiding vagueness, will be completely arbitrary at the finest level. The difference between experiencing one shade of red and another will be that you have 4000 color neurons firing rather than 4001 color neurons, and a cell will count as a color neuron if it has 10 of the appropriate receptors but not if it only has 9, and a state of this neuron will count as firing if the action potential manages to traverse the whole length of the axon, but not if it's just a localized fizzle...

The arbitrariness of the distinctions that would need to be made, in order to refine this sort of model of consciousness all the way to microphysical exactness, is evidence that it's the wrong sort of model. This sort of inexact functionalism can only apply to unconscious computational states. It would seem that most of the brain is an unconscious coprocessor of the conscious part. We can think about the computational states of the unconscious part of the brain in the same rough-and-ready way that we think about the computational states of an ordinary digital computer - they are regularities in the operation of the "device". We don't need to bother ourselves over whether a transistor halfway between a 0 state and a 1 state is "really" in one state or the other, because the ultimate criterion of semantics here is behavior, and a transistor - or a neuron - in a computational "halfway state" is just one whose behavior is unpredictable, and unreliable compared to the functional role it is supposed to perform.

This is not an option when thinking about conscious states, because states of consciousness are possessed intrinsically, and not just by ascription on the basis of behavior. Therefore I deduce that the properties defining the physical correlate of a state of consciousness, are not fuzzy ones like "number of neurons firing in a particular ganglion", but are instead properties that are microphysically exact.

Replies from: Tyrrell_McAllister, Tyrrell_McAllister, JenniferRM, Richard_Kennaway
comment by Tyrrell_McAllister · 2012-08-18T05:31:24.475Z · LW(p) · GW(p)

I see that you already addressed precisely the points that I made here. You wrote

The counterargument might be made, what about electrons in a transistor? There doesn't have to be an exact answer to the question, how many electrons is enough for the transistor to really be in the "1" state rather than the "0" state. But the reason there doesn't have to be an exact answer, is that we only care about the transistor's behavior, and then only its behavior under conditions that the device might encounter during its operational life. If under most circumstances there are only 0 electrons or 1000 electrons present, and if those numbers reliably produce "0 behavior" or "1 behavior" from the transistor, then that is enough for the computer to perform its function as a computational device. Maybe a transistor with 569 electrons is in an unstable state that functionally is neither definitely 0 nor definitely 1, but if those conditions almost never come up in the operation of the device, that's OK.

With any theory about the presence of qualia, we do not have the luxury of this escape via functional pragmatism. A theory about the presence of qualia needs to have definite implications for every physically possible state - it needs to say whether the qualia are present or not in that state - or else we end up with situations as in the reductio, where we have people who allegedly neither have the quale nor don't have the quale.

I agree that any final "theory of qualia" should say, for every physically possible state, whether that state bears qualia or not. I take seriously the idea that such a final theory of qualia is possible, meaning that there really is an objective fact of the matter about what the qualia properties of any physically possible state are. I don't have quite the apodeictic certainty that you seem to have, but I take the idea seriously. At any rate, I feel at least some persuasive force in your argument that we shouldn't be drawing arbitrary boundaries around the microphysical states associated with different qualia states.

But even granting the objective nature of qualia properties, I'm still not getting why vagueness or arbitrariness is an inevitable consequence of any assignment of qualia states to microphysical states.

Why couldn't the property of bearing qualia be something that can, in general, be present with various degrees of intensity, ranging from intensely present to entirely absent? Perhaps the "isolated islands" normally traversed by our brains are always at one extreme or another of this range. In that case, it would be impossible for us to imagine what it would "be like" to "bear qualia" in only a very attenuated sense. Nonetheless, perhaps a sufficiently powerful nano-manipulator could rearrange the particles in your brain into such a state.

To be clear, I'm not talking about states that experience specific qualia — a patch of red, say — very dimly. I'm talking about states that just barely qualify as bearing qualia at all. I'm trying to understand how you rule out the possibility that "bearing qualia" is a continuous property, like the geometrical property of "being longer than a given unit". Just as a geometrical figure can have a length varying from not exceeding, to just barely exceeding, to greatly exceeding that of a given unit, why might not the property of bearing qualia be one that can vary from entirely absent, to just barely present, to intensely present?

It's not obviously enough to point out, as you did to Jennifer, that I feel myself to be here, full stop, rather than just barely here or partly here, and that I can't even imagine myself feeling otherwise. That doesn't rule out the possibility that there are possible states, which my brain never normally enters, in which I would just barely be a bearer of qualia.

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2012-08-18T10:13:52.388Z · LW(p) · GW(p)

why might not the property of bearing qualia be one that can vary from entirely absent, to just barely present, to intensely present?

There are two problems here. First, you need to make the idea of "barely having qualia" meaningful. Second, you need to explain how that can solve the arbitrariness problem for a microphysically exact psychophysical correspondence.

Are weak qualia a bridge across the gap between having qualia and not having qualia? Or is the axis intense-vs-weak, orthogonal to the axis there-vs-not-there-at-all? In the latter case, even though you only have weak qualia, you still have them 100%.

The classic phenomenological proposition regarding the nature of consciousness, is that it is essentially about intentionality. According to this, even perception has an intentional structure, and you never find sense-qualia existing outside of intentionality. I guess that according to the later Husserl, all possible states of consciousness would be different forms of a fundamental ontological structure called "transcendental intentionality"; and the fundamental difference between a conscious entity and a non-conscious entity is the existence of that structure "in" the entity.

There are mathematical precedents for qualitative discontinuity. If you consider a circle versus a line interval, there's no topological property such as "almost closed". In the context of physics, you can't have entanglement in a Hilbert space with less than four dimensions. So it's conceivable that there is a discontinuity in nature, between states of consciousness and states of non-consciousness.

Twisty distinctions may need to be made. At least verbally, I can distinguish between (1) an entity whose state just is a red quale (2) an entity whose state is one of awareness of the red quale (3) an entity which is aware that it is aware of the red quale. The ontological position I described previously would say that (3) is what we call self-awareness; (2) is what we might just call awareness; there's no such thing as (1), and intentionality is present in (2) as well as in (3). I'm agnostic about the existence of something like (1), as a bridge between having-qualia and not-having-qualia. Also, even looking for opportunities for continuity, it's hard not to think that there's another discontinuity between awareness and self-awareness.

If I was a real phenomenologist, I would presumably have a reasoned position on such questions. Or at least I could state the options with much more rigor. I'll excuse the informality of my exposition by saying that one has to start somewhere.

On the arbitrariness problem: I think this is most apparent when it's arbitrariness of the physical boundary of the conscious entity. Consider a single specific microphysical state that has an observer in it. I don't see how you could have an exact principle determining the presence and nature of an observer from such a state, if you thought that observers don't have exact and unique physical boundaries, as you were suggesting in another comment. It seems to involve a one-to-many-to-one mapping, where you go from one exact physical state, to many possible observer-boundaries, to just one exact conscious state. I don't see how the existence of a conscious-to-nonconscious continuum of states deals with that.

Replies from: Tyrrell_McAllister
comment by Tyrrell_McAllister · 2012-08-27T22:46:52.550Z · LW(p) · GW(p)

There are two problems here. First, you need to make the idea of "barely having qualia" meaningful. Second, you need to explain how that can solve the arbitrariness problem for a microphysically exact psychophysical correspondence.

I'm still not sure where this arbitrariness problem comes from. I'm supposing that the bearing of qualia is an objective structural property of certain physical systems. Another mathematical analogy might be the property of connectivity in graphs. A given graph is either connected or not, though connectivity is also something that exists in degrees, so that there is a difference between being highly connected and just barely connected.

On this view, how does arbitrariness get in?

Are weak qualia a bridge across the gap between having qualia and not having qualia? Or is the axis intense-vs-weak, orthogonal to the axis there-vs-not-there-at-all? In the latter case, even though you only have weak qualia, you still have them 100%.

I'm suggesting something more like your "bridge across the gap" option. Analogously, one might say that the barely connected graphs are a bridge between disconnected graphs and highly connected graphs. Or, to repeat my analogy from the grandparent, the geometrical property of "being barely longer than a given unit" is a bridge across the gap between "being shorter that the given unit" and "being much longer than the given unit".
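To make the graph analogy concrete, here is a minimal sketch (the example graphs are invented): whether a graph is connected is a yes/no structural property, while its algebraic connectivity - the second-smallest eigenvalue of the graph Laplacian - grades how robustly connected it is, and is zero exactly when the graph is disconnected.

```python
import numpy as np

def algebraic_connectivity(n, edges):
    """Second-smallest eigenvalue of the Laplacian of an n-node graph."""
    laplacian = np.zeros((n, n))
    for i, j in edges:
        laplacian[i, i] += 1
        laplacian[j, j] += 1
        laplacian[i, j] -= 1
        laplacian[j, i] -= 1
    return sorted(np.linalg.eigvalsh(laplacian))[1]

path = [(0, 1), (1, 2), (2, 3)]                       # barely connected
dense = [(i, j) for i in range(4) for j in range(i)]  # complete graph K4
split = [(0, 1), (2, 3)]                              # disconnected

for name, edges in [("path", path), ("K4", dense), ("split", split)]:
    print(name, round(algebraic_connectivity(4, edges), 3))
# path ~0.586, K4 = 4.0, split = 0.0
```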

On the arbitrariness problem: I think this is most apparent when it's arbitrariness of the physical boundary of the conscious entity. Consider a single specific microphysical state that has an observer in it. I don't see how you could have an exact principle determining the presence and nature of an observer from such a state, if you thought that observers don't have exact and unique physical boundaries, as you were suggesting in another comment. It seems to involve a one-to-many-to-one mapping, where you go from one exact physical state, to many possible observer-boundaries, to just one exact conscious state. I don't see how the existence of a conscious-to-nonconscious continuum of states deals with that.

I'm afraid that I'm not seeing the difficulty. I am suggesting that the possession of a given qualia state is a certain structural property of physical systems. I am suggesting that this structural property is of the sort that can be possessed by a variety of different physical systems in a variety of different states. Why couldn't various parts be added or removed from the system while leaving intact the structural property corresponding to the given qualia state?

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2012-08-28T02:14:55.626Z · LW(p) · GW(p)

Give me an example of an "objective structural property" of a physical system. I expect that it will either be "vague" or "arbitrary"...

Replies from: Tyrrell_McAllister
comment by Tyrrell_McAllister · 2012-08-28T03:36:30.912Z · LW(p) · GW(p)

I'm not sure that I understand the question. Would you agree with the following? A given physical system in a given state satisfies certain structural properties, in virtue of which the system is in that state and not some other state.

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2012-08-28T06:30:45.824Z · LW(p) · GW(p)

I just want a specific example, first. You're "supposing that the bearing of qualia is an objective structural property of certain physical systems". So please give me one entirely concrete example of "an objective structural property".

Replies from: Tyrrell_McAllister
comment by Tyrrell_McAllister · 2012-08-28T14:12:07.827Z · LW(p) · GW(p)

A sentence giving such a property would have to be in the context of a true and complete theory of physics, which I do not possess.

I expect that such a theory will provide a language for describing many such structural properties. I have this expectation because every theory that has been offered in the past, had it been literally true, would have provided such a language. For example, suppose that the universe were in fact a collection of indivisible particles in Euclidean 3-space governed by Newtonian mechanics. Then the distances separating the centers of mass of the various particles would have determinate ratios, triples of particles would determine line segments meeting at determinate angles, etc.

Since Newtonian mechanics isn't an accurate description of physical reality, the properties that I can describe within the framework of Newtonian mechanics don't make sense for actual physical systems. A similar problem bedevils any physical theory that is not literally true. Nonetheless, all of the false theories so far describe structural properties for physical systems. I see no reason to expect that the true theory of physics differs from its predecessors in this regard.

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2012-08-29T06:08:54.155Z · LW(p) · GW(p)

suppose that the universe were in fact a collection of indivisible particles in Euclidean 3-space governed by Newtonian mechanics. Then the distances separating the centers of mass of the various particles would have determinate ratios, triples of particles would determine line segments meeting at determinate angles, etc.

Let's use this as an example (and let's suppose that the main force in this universe is like Newtonian gravitation). It's certainly relevant to functionalist theories of consciousness, because it ought to be possible to make universal Turing machines in such a universe. A bit might consist in the presence or absence of a medium-sized mass orbiting a massive body at a standard distance, something which is tested for by the passage of very light probe-bodies and which can be rewritten by the insertion of an object into an unoccupied orbit, or by the perturbation of an object out of an occupied orbit.

I claim that any mapping of these physical states onto computational states is going to be vague at the edges, that it can only be made exact by the delineation of arbitrary exact boundaries in physical state space with no functional consequence, and that this already exemplifies all the problems involved in positing an exact mapping between qualia-states and physics as we know it.

Let's say that functionally, the difference between whether a given planetary system encodes 0 or 1 is whether the light probe-mass returns to its sender or not. We're supposing that all the trajectories are synchronized such that, if the orbit is occupied, the probe will swing around the massive body, do a 180-degree turn, and go back from whence it came - that's a "1"; but otherwise it will just sail straight through.

If we allow ourselves to be concerned with the full continuum of possible physical configurations, we will run into edge cases. If the probe does a 90-degree turn, probably that's not "return to sender" and so can't count as a successful "read-out" that the orbit is occupied. What about a 179.999999-degree turn? That's so close to 180 degrees, that if our orrery-computer has any robustness-against-perturbation in its dynamics, at all, it still ought to get the job done. But somewhere in between that almost-perfect turn and the 90-degree turn, there's a transition between a functional "1" and a functional "0".

Now the problem is, if we are trying to say that computational properties are objectively possessed by this physical system, there has to be an exact boundary. (Or else we simply don't consider a specific range of intermediate states; but then we are saying that the exact boundary does exist, in the form of a discontinuity between one continuum of physically realizable states, and another continuum of physically realizable states.) There is some exact angle-of-return for the probe-particle which marks the objective difference between "this gravitating system is in a 1-state" and "this gravitating system is in a 0-state".

To specify such an angle is to "delineate an arbitrary exact boundary in physical state space with no functional consequence". Consider what it means, functionally, for a gravitating system in this toy universe to be in a 1-state. It means that a probe-mass sent into the system at the appropriate time will return to sender, indicating that the orbit is occupied. But since we are talking about a computational mechanism made out of many systems, "return to sender" can't mean that the returning probe-particle just heads off to infinity in the right direction. The probe must have an appropriate causal impact on some other system, so that the information it conveys enters into the next stage of the computation.

But because we are dealing with a physics in which, by hypothesis, distances and angles vary on a continuum, the configuration of the system to which the probe returns can also be counterfactually varied, and once again there are edge cases. Some specific rearrangement of masses and orbits has to happen in that system for the probe's return to count as having registered, and whether a specific angle-of-return leads to the required rearrangement depends on the system's configuration. Some configurations will capture returning probes on a broad range of angles, others will only capture it for a narrow range.

I hope this is beginning to make sense. The ascription of computational states as an objective property of a physical system requires that the mapping from physics to computation must be specific and exact for all possible physical states, even the edge cases, but in a physics based on continua, it's just not possible to specify an exact mapping in a way that isn't arbitrary in its details.
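To make the worry concrete, here is a minimal sketch of the read-out step, with an invented cutoff angle (the arbitrariness of that number being exactly the point):

```python
# Toy sketch of reading a "bit" off the orrery-computer. The cutoff is made up;
# nothing in the dynamics singles out one value over another.
RETURN_CUTOFF_DEG = 135.0   # why not 134.7, or 170.2?

def read_bit(return_angle_deg):
    """Classify a probe trajectory as a '1' (occupied orbit) or a '0'.
    180 degrees is a clean return-to-sender; 0 degrees is sailing straight
    through; everything in between has to be split by fiat."""
    return 1 if return_angle_deg >= RETURN_CUTOFF_DEG else 0

for angle in (180.0, 179.999999, 135.0, 134.999999, 90.0, 0.0):
    print(angle, "->", read_bit(angle))
# The computer behaves identically for a wide range of cutoffs, which is why
# any exact boundary looks arbitrary at the finest level.
```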

comment by Tyrrell_McAllister · 2012-08-15T23:26:48.970Z · LW(p) · GW(p)

We don't need to bother ourselves over whether a transistor halfway between a 0 state and a 1 state is "really" in one state or the other, because the ultimate criterion of semantics here is behavior...

I don't think that this is why we don't bother ourselves with intermediate states in computers.

To say that we can model a physical system as a computer is not to say that we have a many-to-one map sending every possible microphysical state to a computational state. Rather, we are saying that there is a subset Σ′ of the entire space Σ of microstates for the physical system, and a state machine M, such that,

(1) as the system evolves according to physical laws under the conditions where we wish to apply our computational model, states in Σ′ will only evolve into other states in Σ′, but never into states in the complement of Σ′;

(2) there is a many-to-one map f sending states in Σ′ to computational states of M (i.e., states in Σ′ correspond to unambiguous states of M); and

(3) if the laws of physics say that the microphysical state σ ∈ Σ′ evolves into the state σ′ ∈ Σ′, then the definition of the state machine M says that the state f(σ) transitions to the state f(σ′).

But, in general, Σ′ is a proper subset of Σ. If a physical system, under the operating conditions that we care about, could really evolve into any arbitrary state in Σ, then most of the states that the system reached would be homogeneous blobs. In that case, we probably wouldn't be tempted to model the physical system as a computer.

I propose that physical systems are properly modeled as computers only when the proper subset Σ′ is a union of "isolated islands" in the larger state-space Σ, with each isolated island mapping to a distinct computational state. The isolated islands are separated by "broad channels" of states in the complement of Σ′. To the extent that states in the "islands" could evolve into states in the "channels", then, to that extent, the system shouldn't be modeled as a computer. Conversely, insofar as a system is validly modeled as a computer, that system never enters "vague" computational states.
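Here is a minimal sketch of conditions (1)-(3) and the "isolated islands" picture, using an invented toy dynamics (none of this is meant as a model of a brain, just an illustration that the three conditions can be checked mechanically):

```python
import itertools

# Toy microstate: (x, j), where x plays the role of the coarse "island" and
# j is irrelevant microphysical detail ("which electrons, exactly").
SIGMA = set(itertools.product(range(4), range(20)))
SIGMA_PRIME = {(x, j) for (x, j) in SIGMA if j < 10}   # the isolated islands

def evolve(state):
    """Deterministic toy dynamics on microstates."""
    x, j = state
    return ((x + 1) % 4, (3 * j + 1) % 10 if j < 10 else (j + 1) % 20)

def f(state):
    """Many-to-one map from microstates in SIGMA_PRIME to machine states."""
    return state[0]

def machine_next(m):
    """The state machine M: a simple mod-4 counter."""
    return (m + 1) % 4

# Condition (1): SIGMA_PRIME is closed under the dynamics.
assert all(evolve(s) in SIGMA_PRIME for s in SIGMA_PRIME)
# Conditions (2) and (3): f is defined on all of SIGMA_PRIME and commutes
# with the dynamics, i.e. f(evolve(s)) == machine_next(f(s)).
assert all(f(evolve(s)) == machine_next(f(s)) for s in SIGMA_PRIME)
print("The toy system realizes the mod-4 counter on SIGMA_PRIME.")
```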

The computational theory of mind amounts to the claim that the brain can be modeled as a state machine in the above sense.

But suppose that a confluence of cosmic rays knocked your brain into some random state in the "channels". Well, most such states correspond to no qualia at all. Your brain would just be an inert mush. But some of the states in the channels do correspond to qualia. So long as this is possible, why doesn't your vagueness problem reappear here?

If this were something that we expected would ever really happen, then we would be in a world where we shouldn't be modeling the brain as a computer, except perhaps as a computer where many qualia states correspond to unique microphysical states, so that a single microphysical change sometimes makes for a different qualia state. In practice, that would probably mean that we should think of our brains as more like a bowl of soup than a computer. But insofar as this just doesn't happen, we don't need to worry about the vagueness problem you propose.

comment by JenniferRM · 2012-08-15T16:51:49.374Z · LW(p) · GW(p)

This is not working. I keep trying to get you to think in E-Prime for simplicity's sake and you keep emitting words that seem to me to lack any implication for what I should expect to experience. I can think of a few ways to proceed from this state of affairs that might work.

One idea is for you to restate the bit I'm about to quote while tabooing the words "attribute", "property", "trait", "state", "intrinsic", "objective", "subjective", and similar words.

Someone with no hair is bald, someone with a head full of hair is not bald, yet we don't have a non-arbitrary criterion for where the exact boundary between bald and not-bald lies. This doesn't matter because baldness is a rough judgment and not an objective property. But states of consciousness are objective, intrinsic attributes of the conscious being. So objective vagueness isn't allowed, and there must be a definite fact about which conscious state, if any, is present, for every possible physical state.

...states of consciousness are possessed intrinsically, and not just by ascription on the basis of behavior. Therefore I deduce that the properties defining the physical correlate of a state of consciousness, are not fuzzy ones like "number of neurons firing in a particular ganglion", but are instead properties that are microphysically exact.

If I translate this I hear this statement as being confused about the way to properly use abstraction in the course of reasoning, and insisting on pedantic precision whenever logical abstractions come up. Pushing all the squirrelly words into similar form for clarity, it sounds roughly like this:

Someone with no hair is bald, someone with a head full of hair is not bald and we don't have a non-arbitrary criterion for where the exact boundary between bald and not-bald lies. This doesn't matter because baldness is a rough judgment and not an ethereal feature. But each way of being conscious is an ethereal aspect of a conscious being. Since ethereal vagueness isn't allowed, there must be ethereal precision for each way of being conscious that is distinct for every possible brain state.

Repeating for emphasis: ways of being conscious are ethereal, and not just inferred by rough judgment on the basis of behavior. Therefore I deduce that the ether relating brain states to ways of being conscious are not fuzzy ones like "number of neurons firing in a particular ganglion", but are instead ethereally exact.

Do you see how this is a plausible interpretation of what you said? Do you see how the heart of our contention seems to me to have nothing to do with consciousness and everything to do with the language and methods of abstract reasoning?

We don't have to play taboo. A second way that we might resolve our lack of linguistic/conceptual agreement is by working with the concepts that we don't seem to use the same way in a much simpler place where all the trivial facts are settled and only the difficult concepts are at stake.

Consider the way that area, width, and height are all "intrinsic properties" of a rectangle in Euclidean geometry. For me, this is another way of saying that if a construct defined in Euclidean geometry lacks one of these features then it is not a rectangle. Consider another property of rectangles, the "tallness" of the rectangle, defined as the ratio of the height to the width. This is not intrinsic, and other than zero and infinity it could be anything, and where you put the cutoff is mostly arbitrary. However, I also know that within the intrinsic properties of {width, height, area} any two of them are sufficient for defining a Euclidean rectangle and thereby exactly constraining the third property to have some specific value. From this abstract reasoning, I infer that I could measure a rectangle on a table using a ruler for the width and height, and cutting out felt of known density and thickness to cover the shape and weighing that felt to get the area. This would give me three numbers that agreed with each other, modulo some measurement error and unit conversions.

On the other hand, with a Euclidean square the width, height, and area are also intrinsic in the sense of being properties of everything I care to call a square, but I additionally know that the length and width of squares are intrinsically equal. Thus, the tallness of a square is exactly 1, as an intrinsically unvarying property. Given this as background, I know that I only need one of the three "variable but intrinsic properties" to exactly specify the other two "variable but intrinsic properties", which has implications for any measurements of actual square objects that I make with rulers and felt.

Getting more advanced, I know that I can use these properties in pragmatic ways. For example, if I'm trying to build a square out of lumber, I can measure the lengths of wood to be as equal as possible, cut them, and connect them with glue or nails with angles as close to 90 degrees as I can manage, and then I can check the quality of my work by measuring the two diagonals from one corner to another because these are "intrinsically equal" in Euclidean squares and the closer the diagonal measurements are to each other the more I can consider my lumber construct to be "like a Euclidean square" for other purposes (such as serving as the face of a cube). The diagonals aren't a perfect proxy (because if my construct is grossly non-planar the diagonals could be perfectly equal even as my construct was not square-like) but they are useful.
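As a minimal sketch of that bookkeeping (all numbers made up): any two of {width, height, area} fix the third for a rectangle, and comparing the diagonals gives a rough, imperfect check of how rectangle-like (and, with equal sides, square-like) a measured construct is.

```python
import math

def third_property(width=None, height=None, area=None):
    """Given any two of a rectangle's width, height, area, return the third."""
    if area is None:
        return width * height
    if width is None:
        return area / height
    return area / width

def diagonal_mismatch(corners):
    """For four corner points (in order), compare the two diagonals; a near-zero
    mismatch is evidence the construct is close to rectangular (with equal
    sides, close to square), though a non-planar construct can fool the test."""
    (ax, ay), (bx, by), (cx, cy), (dx, dy) = corners
    return abs(math.dist((ax, ay), (cx, cy)) - math.dist((bx, by), (dx, dy)))

print(third_property(width=2.0, height=3.0))   # area   -> 6.0
print(third_property(width=2.0, area=6.0))     # height -> 3.0
print(diagonal_mismatch([(0, 0), (2, 0.01), (2.01, 2), (0, 2)]))  # small but nonzero
```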

Perhaps you could talk about how the properties of Euclidean rectangles and squares relate to the properties of "indeterminate rectangles and squares", and how the status of their properties as "intrinsic" and/or "varying" would relate to issues of measurement and construction in the presence of indeterminacy?

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2012-08-15T18:19:53.389Z · LW(p) · GW(p)

I will try to get across what I mean by calling states of consciousness "intrinsic", "objectively existing", and so forth; by describing what it would mean for them to not have these attributes.

It would mean that you only exist by convention or by definition. It would mean that there is no definite fact about whether your life is part of reality. It wouldn't just be that some models of reality acknowledge your existence and others don't; it would mean that you are nothing more than a fuzzy heuristic concept in someone else's model, and that if they switched models, you would no longer exist even in that limited sense.

I would like to think that you personally have a robust enough sense of your own reality to decisively reject such propositions. But by now, nothing would surprise me, coming from a materialist. It's been amply demonstrated that people can be willing to profess disbelief in anything and everything, if they think that's the price of believing in science. So I won't presume that you believe that you exist, I'll just hope that you do, because if you don't, it will be hard to have a sensible conversation about these topics.

But... if you do agree that you definitely exist, independently of any "model" that actual or hypothetical observers have, then it's a short step to saying that you must also have some of your properties intrinsically, rather than through model-dependent attribution. The alternative would be to say that you exist, you're a "thing", but not any particular thing; which is the sort of untenable objective vagueness that I was talking about.

The concept of an intrinsic property is arising somewhat differently here, than it does in your discussion of squares and rectangles. The idealized geometrical figures have their intrinsic properties by definition, or by logical implication from the definition. But I can say that you have intrinsic properties, not by definition (or not just by definition), but because you exist, and to be is to be something. (Also known as the "law of identity".) It would make no sense to say that you are real, but otherwise devoid of ontological definiteness.

For exactly the same reason, it would make no sense to have a fundamentally vague "physical theory of you". Here I want to define "you" as narrowly as possible - this you, in this world, even just in this moment if necessary. I don't want the identity issues of a broadly defined "you" to interfere. I hope we have agreed that you-here-now exist, that you exist objectively, that you must have some identifying or individuating properties which are also held objectively and intrinsically; the properties which make you what you are.

If we are going to be ontological materialists about you-here-now, and we are also going to acknowledge you-here-now as completely and independently real, then there also can't be any vagueness or arbitrariness about which physical object is you-here-now. For every particle - if we have particles in our physical ontology - either it is definitely a part of you-here-now, or it definitely isn't.

At this point I'm already departing radically from the standard materialist account of personhood, which would say that we can be vague about whether a few atoms are a part of you or not. The reason we can't do that, is precisely the objectivity of your existence. If you are an objectively existing entity, I can't at the same time say that you are an entity whose boundaries aren't objectively defined. For some broader notion, like "your body", sure, we can be vague about where its boundaries are. But there has to be a core notion of what you are that is correct, exact, fully objective; and the partially objective definitions of "you" come from watering down this core notion by adding inessential extra properties.

Now let's contrast this situation with the piece of lumber that is close to being a square but isn't a perfect square. My arguments against fundamental vagueness are not about insisting that the piece of lumber is a perfect square. I am merely insisting that it is what it is, and whatever it is, it is that, exactly and definitely.

The main difference between "you-here-now" and the piece of lumber, is that we don't have the same reason to think that the lumber has a hard ontological core. It's an aggregate of atoms, electrons will be streaming off it, and there will be some arbitrariness about when such an electron stops being "part of the lumber". To find indisputably objective physical facts in this situation, you probably need to talk in terms of immediate relations between elementary particles.

The evidence for a hard core in you-here-now is primarily phenomenological and secondarily logical. The phenomenological evidence is what we call the unity of experience: what's happening to you in any moment is a gestalt; it's one thing happening to one person. Your experience of the world may have fuzzy edges to it, but it's still a whole and hence objectively a unity. The logical "evidence" is just the incoherence of supposing there can be a phenomenological unity without there being an ontological unity at any level. This experiential whole may have parts, but you can't use the existence of the parts to then turn around and deny the existence of the whole.

The evidence for an ontological hard core to you-here-now does not come from physics. Physically the brain looks like it should be just like the piece of lumber, an aggregate of very many very small things. This presumption is obviously why materialists often end up regarding their own existence as something less than objective, or why the search for a microphysically exact theory of the self sounds like a mistake. Instead we are to be content with the approximations of functionalism, because that's the most you could hope to do with such an entity.

I hope it's now very clear where I'm coming from. The phenomenological and ontological arguments for a "hard core" to the self are enough to override any counterargument from physics. They tell us that a mesoscopic theory of what's going on, like functionalism, is at best incomplete; it cannot be the final word. The task is to understand the conscious brain as a biophysical system, in terms of a physical ontology that can contain "real selves". And fortunately, it's no longer the 19th century; we have quantum mechanics and the ingredients for something more sophisticated than classical atomism.

Replies from: JenniferRM, Steve_Rayhawk, Tyrrell_McAllister
comment by JenniferRM · 2012-08-17T06:30:26.787Z · LW(p) · GW(p)

I'm going back and forth on whether to tap out here. On the one hand I feel like I'm making progress in understanding your perspective. On the other hand the progress is clarifying that it would take a large amount of time and energy to derive a vocabulary to converse in a mutually transparent way about material truth claims in this area. It had not occurred to me that pulling on the word "intrinsic" would flip the conversation into a solipsistic zone by way of Cartesian skepticism. Ooof.

Perhaps we could schedule a few hours of IM or IRC to try a bit of very low latency mutual vocabulary development, and then maybe post the logs back here for posterity (raw or edited) if that seems worthwhile to us. (See private message for logistics.) If you want to stick to public essays I recommend taking things up with Tyrrell; he's a more careful thinker than I am and I generally agree with what he says. He noticed and extended a more generous and more interesting parsing of your claims than I did when I thought you were trying to make a pigeonhole argument in favor of magical entities, and he seems to be interested. Either public essays with Tyrrell, IM with me, or both, or neither... as you like :-)

(And/or Steve of course, but he generally requires a lot of unpacking, and I frequently only really understand why his concepts were better starting places than my own between 6 and 18 months after talking with him.)

comment by Steve_Rayhawk · 2012-08-16T20:04:05.148Z · LW(p) · GW(p)

It wouldn't just be that some models of reality acknowledge your existence and others don't; it would mean that you are nothing more than a fuzzy heuristic concept in someone else's model, and that if they switched models, you would no longer exist even in that limited sense.

Or in a cascade of your own successive models, including of the cascade.

Or an incentive to keep using that model rather than to switch to another one. The models are made up, but the incentives are real. (To whatever extent the thing subject to the incentives is.)

Not that I'm agreeing, but some clever ways to formulate almost your objection could be built around the wording "The mind is in the mind, not in reality".

Replies from: JenniferRM
comment by JenniferRM · 2012-08-17T06:59:42.785Z · LW(p) · GW(p)

Crap. I had not thought of quines in reference to simulationist metaphysics before.

comment by Tyrrell_McAllister · 2012-08-15T23:54:35.559Z · LW(p) · GW(p)

At this point I'm already departing radically from the standard materialist account of personhood, which would say that we can be vague about whether a few atoms are a part of you or not. The reason we can't do that, is precisely the objectivity of your existence. If you are an objectively existing entity, I can't at the same time say that you are an entity whose boundaries aren't objectively defined.

I have some sympathy for the view that my-here-now qualia are determinate and objective. But I don't see why that implies that there must be a determinate, objective, unique collection of particles that is experiencing the qualia. Why not say that there are various different boundaries that I could draw, but, no matter which of these boundaries I draw, the qualia being experienced by the contained system of particles would be the same? For example, adding or removing the table in front of me doesn't change the qualia experienced by the system.

(Here I am supposing that I can map the relevant physical systems to qualia in the manner that I describe in this comment.)

comment by Richard_Kennaway · 2012-08-15T12:39:36.408Z · LW(p) · GW(p)

Therefore I deduce that the properties defining the physical correlate of a state of consciousness, are not fuzzy ones like "number of neurons firing in a particular ganglion", but are instead properties that are microphysically exact.

My subjective conscious experience seems no more exact a thing to me than my experience of distinctions of colours. States of consciousness seem to be a continuous space, and there isn't even a hard boundary (again, as I perceive things subjectively) between what is conscious and what is not.

But perhaps people vary in this; perhaps it is different for you?

comment by haig · 2012-08-10T04:44:25.682Z · LW(p) · GW(p)

To summarize (mostly for my sake so I know I haven't misunderstood the OP):

  • 1.) Subjective conscious experience or qualia play a non-negligible role in how we behave and how we form our beliefs, especially of the mushy (technical term) variety that ethical reasoning is so bound up in.
  • 2.) The current popular computational flavor of philosophy of mind has inadequately addressed qualia in your eyes because the universality of the extended Church-Turing thesis, though satisfactorily covering the mechanistic descriptions of matter in a way that provides for emulation of the physical dynamics, does not tell us anything about which things would have subjective conscious experiences.
  • 3.) Features of quantum mechanics such as entanglement and topological structures in a relativistic quantum field provide a better ontological foundation for your speculative theories of consciousness, which take as inspiration phenomenology and a quantum monadology.

EDIT: I guess the shortest synopsis of this whole argument is: we need to build qualia machines, not just intelligent machines, and we don't have any theories yet to help us do that (other than the normal, but delightful, 9-month process we currently use). I can very much agree with #1. Now, with #2, it is true that the explanatory gap of qualia does not yield to the computational descriptions of physical processes, but it is also true that the universe may just be constructed such that this computational description is the best we can get, and we will just have to accept that qualia will be experienced by those computational systems that are organized in particular ways, the brain being one arrangement of such systems. And, for #3, without more information about your theory, I don't see how appealing to ontologically deeper physical processes would get you any further in explaining qualia; you need to give us more.

comment by JQuinton · 2012-08-09T22:19:47.743Z · LW(p) · GW(p)

One question I had reading this is: What does it matter if our model of human consciousness is wrong? If we create FAI that has all of the outward functionality of consciousness, I would still consider that a win. Not all eyes that have evolved are human eyes; the same could happen with consciousness. If we manufactured some mechanical "eye" that didn't model exactly the interior bits of a human eye but was still able to do what eyes do, shouldn't we still consider this an eye? It would seem nonsensical to me to question whether this mechanical eye "really" sees because the act of seeing is a subjective experience that can't be truly modeled, or to say that it's not "real" seeing because it's computational seeing.

comment by Manfred · 2012-08-08T19:30:27.608Z · LW(p) · GW(p)

which in its biophysical context would appear to be describing a mass of entangled electrons in that hypothetical sweet spot, somewhere in the brain, where there's a mechanism to protect against decoherence.

Which activity, totally inaccessible to state machines, do you think these electrons perform?

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2012-08-09T04:13:44.892Z · LW(p) · GW(p)

The idea is not that state machines can't have qualia. Something with qualia will still be a state machine. But you couldn't know that something had qualia, if you just had the state machine description and no preexisting concept of qualia.

If a certain bunch of electrons are what's conscious in the brain, my point is that the "electrons" are actually qualia and that this isn't part of our physics concept of what an electron is; and that you - or a Friendly AI - couldn't arrive at this "discovery" by reasoning just within physical and computational ontologies.

Replies from: Manfred, David_Gerard
comment by Manfred · 2012-08-09T13:11:02.934Z · LW(p) · GW(p)

you - or a Friendly AI - couldn't arrive at this "discovery" by reasoning just within physical and computational ontologies.

Could an AI just look at the physical causes of humans saying "I think I have qualia"? Why wouldn't these electrons be a central cause, if they're the key to qualia?

comment by David_Gerard · 2012-08-09T20:36:00.789Z · LW(p) · GW(p)

Please expand the word "qualia", and please explain how you see that the presence or absence of these phenomena will make an observable difference in the problem you are addressing.

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2012-08-10T06:45:24.964Z · LW(p) · GW(p)

See this discussion. Physical theories of human identity must equate the world of appearances, which is the only world that we actually know about, with some part of a posited world of "physical entities". Everything from the world of appearances is a quale, but an AI with a computational-materialist philosophy only "knows" various hypotheses about what the physical entities are. The most it could do is develop a concept like "the type of physical entity which causes a human to talk about appearances", but it still won't spontaneously attach the right significance to such concepts (e.g. to a concept of pain).

I have agreed elsewhere that it is - remotely! - possible that an appropriately guided AI could solve the hard problems of consciousness and ethics before humans did, e.g. by establishing a fantastically detailed causal model of human thought, and contemplating the deliberations of a philosophical sim-human. But when even the humans guiding the AI abandon their privileged epistemic access to phenomenological facts, and personally imitate the AI's limitations by restricting themselves to computational epistemology, then the project is doomed.

comment by OrphanWilde · 2012-08-08T14:22:51.878Z · LW(p) · GW(p)

I might be mistaken, but it seems like you're putting forward a theory of consciousness, as opposed to a theory of intelligence.

Two issues with that - first, that's not necessarily the goal of AI research. Second, you're evaluating consciousness, or possibly intelligence, from the inside, rather than the outside.

Replies from: dbc
comment by dbc · 2012-08-08T15:51:18.953Z · LW(p) · GW(p)

I think consciousness is relevant here because it may be an important component of our preferences. For instance, all else being equal, I would prefer a universe filled with conscious beings to one filled with paper clips. If an AI cannot figure out what consciousness is, then it could have a hard time enacting human preferences.

Replies from: OrphanWilde
comment by OrphanWilde · 2012-08-08T15:55:42.494Z · LW(p) · GW(p)

That presumes consciousness can only be understood or recognized from the inside. An AI doesn't have to know what consciousness feels like (or more particularly, what "feels like" even means) in order to recognize it.

Replies from: torekp
comment by torekp · 2012-08-11T17:23:29.447Z · LW(p) · GW(p)

True, but it does need to recognize it, and if it is somehow irreversibly committed to computationalism and that is a mistake, it will fail to promote consciousness correctly.

For what it's worth, I strongly doubt Mitchell's argument for the "irreversibly committed" step. Even an AI lacking all human-like sensation and feeling might reject computationalism, I suspect, provided that it's false.

comment by shminux · 2012-08-08T18:06:25.886Z · LW(p) · GW(p)

If everything comes together, then it will now be a straight line from here to the end.

To the end of what? The sequence? Or humanity as we know it?

You need to find the true ontology, find the true morality, and win the intelligence race. For example, if your Friendly AI was to be an expected utility maximizer, it would need to model the world correctly ("true ontology"), value the world correctly ("true morality"), and it would need to outsmart its opponents ("win the intelligence race").

Is there one "true" ontology/morality? Most likely there are many, leading in different directions, not necessarily easy to rank based on our current morality.

Personally, I am not overly worried that SI will produce anything resembling a working AGI, let alone a self-improving one, on any kind of reasonable time frame, so there is little cause for concern at this point, and definitely no rush to fix any deep conceptual issues you think SI might have. The best outcome one can hope for is that SI's research will produce some interesting results and get a certain amount of respect from the CS/AI crowd in general. Formulating and proving a theorem or two could be one of those results. Maybe an insight into machine learning.

I am not at all concerned that they would lock a "wrong" epistemology (MWI/Tegmark, whatever) into a self-improving AGI, partly because I think that algorithms for unnecessarily complicated models will be simplified, either externally by the programmers or internally by the machine, into something that better reflects the interaction with the external world.

A more likely outcome is that they hit an unexpected road block fairly quickly, one that makes them go back to the drawing board and reevaluate the basics.

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2012-08-10T02:22:55.254Z · LW(p) · GW(p)

To the end of what? The sequence? Or humanity as we know it?

The end of SI's mission, in success, failure, or change of paradigm.

Is there one "true" ontology/morality?

There's one reality, so all "true ontologies" ought to be specializations of the same truth. One true morality is a shakier proposition, given that morality is the judgment of an agent and there's more than one agent. It's not even clear that just picking out the moral component of the human decision procedure is enough for SI's purposes. What FAI research is really after is "the decision procedure that a sober-minded and fully-informed human being would prefer to be employed by an AGI".

comment by lukstafi · 2012-08-11T20:01:51.874Z · LW(p) · GW(p)

Phenomenal experience does not give non-theoretical access to existence claims. Qualia are theoretical tools of theories implemented as (parts of) minds. I do not (go so far as to) posit "computational epistemology", just provide a constraint on ontology.

comment by Wei Dai (Wei_Dai) · 2012-08-08T18:33:24.706Z · LW(p) · GW(p)

First, "ontology".

This makes me think you're going to next talk about your second objection, to "outsourcing the methodological and design problems of FAI research to uploaded researchers and/or a proto-FAI which is simulating or modeling human researchers", but you never mention that again. Was that intentional or did you forget? Do you have any new thoughts on that since the discussions in Extrapolating values without outsourcing?

comment by Kawoomba · 2012-08-08T14:05:51.785Z · LW(p) · GW(p)

OK.

In all seriousness, there's a lot you're saying that seems contradictory at first glance. A few snippets:

My message for friendly AI researchers is not that computational epistemology is invalid, or that it's wrong to think about the mind as a state machine, just that all that isn't the full story.

If computational epistemology is not the full story, if true epistemology for a conscious being is "something more", then you are saying that it is so incomplete as to be invalid. (Doesn't Searle hold similar beliefs, along the lines of "consciousness is something that brain matter does"? No uploading for you two!)

I do not think that SI intends to just program its AI with an apriori belief in the Everett multiverse

I'm not sure you appreciate the distance to go "just" in regard to a provable friendliness theory, let alone a workable foundation of strong AI, a large scientific field in its own right.

The question of which "apriori beliefs" are supposed to be programmed or not programmed in the AI is so far off as to be irrelevant.

Also note that if those beliefs turn out not to be invariant with respect to friendliness (and why should they be?), they are going to be updated until they converge towards more accurate beliefs anyway.

For example, if your Friendly AI was to be an expected utility maximizer, it would need to model the world correctly ("true ontology"), value the world correctly ("true morality"), and it would need to outsmart its opponents ("win the intelligence race").

"Ontology + morals" corresponds to "model of the current state of the world + actions to change it", and the efficiency of those actions equals "intelligence". An agent's intelligence is thus an emergent property of being able to manipulate the world in accordance to your morals, i.e. it is not an additional property but is inherent in your so-called "true ontology".

Still upvoted.

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2012-08-09T04:26:51.657Z · LW(p) · GW(p)

If computational epistemology is not the full story, if true epistemology for a conscious being is "something more", then you are saying that it is so incomplete as to be invalid.

It's a valid way to arrive at a state-machine model of something. It just won't tell you what the states are like on the inside, or even whether they have an inside. The true ontology is richer than state-machine ontology, and the true epistemology is richer than computational epistemology.

I'm not sure you appreciate the distance to go "just" in regard to a provable friendliness theory, let alone a workable foundation of strong AI, a large scientific field in its own right.

I do know that there's lots of work to be done. But this is what Eliezer's sequence will be about.

An agent's intelligence is thus an emergent property of being able to manipulate the world in accordance to your morals, i.e. it is not an additional property

I agree with the Legg-Hutter idea that quantifiable definitions of general intelligence for programs should exist, e.g. by ranking them using some combination of stored mathematical knowledge and quality of general heuristics. You have to worry about no-free-lunch theorems and so forth (i.e. that a program's IQ depends on the domain being tested), but on a practical level, there's no question that the efficiency of algorithms and the quality of heuristics available to an AI are at least semi-independent of what the AI's goals are. Otherwise all chess programs would be equally good.
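
(For concreteness, a sketch of the standard Legg-Hutter "universal intelligence" measure, stated from memory: a policy $\pi$ is scored by its expected reward across all computable environments, weighted by their simplicity,

$$\Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu},$$

where $E$ is the class of computable reward-summable environments, $K(\mu)$ is the Kolmogorov complexity of $\mu$, and $V^{\pi}_{\mu}$ is the expected total reward $\pi$ obtains in $\mu$. The goals live in the reward structure of each environment, while the score measures how well the agent exploits regularities across environments, which is one way to make the "semi-independent" point precise.)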

comment by [deleted] · 2012-08-08T13:33:21.637Z · LW(p) · GW(p)

Upvoted even though I only made it to the part about MWI dogmatism.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2012-08-08T14:10:04.619Z · LW(p) · GW(p)

Do look at the last two paragraphs...

Replies from: None
comment by [deleted] · 2012-08-08T14:38:53.298Z · LW(p) · GW(p)

I did, but I can make neither head nor tail of them.

Replies from: Vladimir_Nesov, David_Gerard
comment by Vladimir_Nesov · 2012-08-08T16:27:57.596Z · LW(p) · GW(p)

That was my point.

Replies from: None
comment by [deleted] · 2012-08-08T16:37:50.257Z · LW(p) · GW(p)

Then I still don't get your point.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2012-08-08T16:40:51.158Z · LW(p) · GW(p)

(Edited) The point is that the value of the post may seem very different if you take more than its opening into account.

Replies from: None
comment by [deleted] · 2012-08-08T16:47:52.502Z · LW(p) · GW(p)

Then presumably you have your own downvote to apply; need you also require mine?

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2012-08-08T17:58:46.594Z · LW(p) · GW(p)

(I'm sorry; I've reformulated the grandparent.)

comment by David_Gerard · 2012-08-08T16:07:31.349Z · LW(p) · GW(p)

They didn't make sense last time either. To be generous, there appears to be considerable inferential distance to cover.

comment by hankx7787 · 2012-08-08T15:29:15.340Z · LW(p) · GW(p)

I hate to be that guy, but have you read the reductionism sequence? snerk

In all seriousness though, I think you're exactly right to question the underlying philosophical premises upon which SI is proceeding, especially regarding epistemology.

I just think the particular parts that you're questioning are mostly the wrong parts, and that, at best, you're only vaguely poking in the direction of where some actual issues may be.

Replies from: CarlShulman
comment by CarlShulman · 2012-08-08T19:21:48.551Z · LW(p) · GW(p)

He has.

Replies from: hankx7787
comment by hankx7787 · 2012-08-09T12:44:24.212Z · LW(p) · GW(p)

I'm kind of amazed this is getting upvoted. Are there really so many people this clueless about when someone is joking?

Replies from: Kaj_Sotala, CarlShulman
comment by CarlShulman · 2012-08-09T17:54:14.022Z · LW(p) · GW(p)

Edited to reduce snark.