Personal research update
post by Mitchell_Porter · 2012-01-29T09:32:30.423Z · LW · GW · Legacy · 32 comments
Synopsis: The brain is a quantum computer and the self is a tensor factor in it - or at least, the truth lies more in that direction than in the classical direction - and we won't get Friendly AI right unless we get the ontology of consciousness right.
Followed by: Does functionalism imply dualism?
Sixteen months ago, I made a post seeking funding for personal research. There was no separate Discussion forum then, and the post was comprehensively downvoted. I did manage to keep going at it, full-time, for the next sixteen months. Perhaps I'll get to continue; it's for the sake of that possibility that I'll risk another breach of etiquette. You never know who's reading these words and what resources they have. Also, there has been progress.
I think the best place to start is with what orthonormal said in response to the original post: "I don't think anyone should be funding a Penrose-esque qualia mysterian to study string theory." If I now took my full agenda to someone out in the real world, they might say: "I don't think it's worth funding a study of 'the ontological problem of consciousness in the context of Friendly AI'." That's my dilemma. The pure scientists who might be interested in basic conceptual progress are not engaged with the race towards technological singularity, and the apocalyptic AI activists gathered in this place are trying to fit consciousness into an ontology that doesn't have room for it. In the end, if I have to choose between working on conventional topics in Friendly AI, and on the ontology of quantum mind theories, then I have to choose the latter, because we need to get the ontology of consciousness right, and it's possible that a breakthrough could occur in the world outside the FAI-aware subculture and filter through; but as things stand, the truth about consciousness would never be discovered by employing the methods and assumptions that prevail inside the FAI subculture.
Perhaps I should pause to spell out why the nature of consciousness matters for Friendly AI. The reason is that the value system of a Friendly AI must make reference to certain states of conscious beings - e.g. "pain is bad" - so, in order to make correct judgments in real life, at a minimum it must be able to tell which entities are people and which are not. Is an AI a person? Is a digital copy of a human person, itself a person? Is a human body with a completely prosthetic brain still a person?
I see two ways in which people concerned with FAI hope to answer such questions. One is simply to arrive at the right computational, functionalist definition of personhood. That is, we assume the paradigm according to which the mind is a computational state machine inhabiting the brain, with states that are coarse-grainings (equivalence classes) of exact microphysical states. Another physical system which admits the same coarse-graining - which embodies the same state machine at some macroscopic level, even though the microscopic details of its causality are different - is said to embody another instance of the same mind.
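To make the coarse-graining idea concrete, here is a minimal toy sketch of my own (not from the post; every name, threshold and dynamics in it is an illustrative assumption): two systems with entirely different micro-dynamics can induce the same macro-level state machine once their microstates are binned into equivalence classes.

```python
# Toy illustration of functionalist coarse-graining (illustrative assumptions only).

def macro_state(microstate, threshold):
    """Coarse-grain an exact microstate (here just a number) into a macrostate label."""
    return "HIGH" if microstate >= threshold else "LOW"

def induced_machine(micro_dynamics, microstates, threshold):
    """Collect the macro-level transitions implied by a micro-level dynamics."""
    return {(macro_state(s, threshold), macro_state(micro_dynamics(s), threshold))
            for s in microstates}

# Two "physical systems" with different microscopic causality...
system_a = lambda s: s + 3   # one mechanism
system_b = lambda s: s * 2   # a completely different mechanism

# ...induce the same coarse-grained state machine over their respective microstates.
same = induced_machine(system_a, range(10), threshold=5) == \
       induced_machine(system_b, range(8), threshold=5)
print(same)  # True: on the functionalist reading, they "embody the same machine"
```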
An example of the other way to approach this question is the idea of simulating a group of consciousness theorists for 500 subjective years, until they arrive at a consensus on the nature of consciousness. I think it's rather unlikely that anyone will ever get to solve FAI-relevant problems in that way. The level of software and hardware power implied by the capacity to do reliable whole-brain simulations means you're already on the threshold of singularity: if you can simulate whole brains, you can simulate part brains, and you can also modify the parts, optimize them with genetic algorithms, and put them together into nonhuman AI. Uploads won't come first.
But the idea of explaining consciousness this way, by simulating Daniel Dennett and David Chalmers until they agree, is just a cartoon version of similar but more subtle methods. What these methods have in common is that they propose to outsource the problem to a computational process using input from cognitive neuroscience. Simulating a whole human being and asking it questions is an extreme example of this (the simulation is the "computational process", and the brain scan it uses as a model is the "input from cognitive neuroscience"). A more subtle method is to have your baby AI act as an artificial neuroscientist, use its streamlined general-purpose problem-solving algorithms to make a causal model of a generic human brain, and then somehow extract from that model the criteria which the human brain uses to identify the correct scope of the concept "person". It's similar to the idea of extrapolated volition, except that we're just extrapolating concepts.
It might sound a lot simpler to just get human neuroscientists to solve these questions. Humans may be individually unreliable, but they have lots of cognitive tricks - heuristics - and they are capable of agreeing that something is verifiably true, once one of them does stumble on the truth. The main reason one would even consider the extra complication involved in figuring out how to turn a general-purpose seed AI into an artificial neuroscientist, capable of extracting the essence of the human decision-making cognitive architecture and then reflectively idealizing it according to its own inherent criteria, is shortage of time: one wishes to develop friendly AI before someone else inadvertently develops unfriendly AI. If we stumble into a situation where a powerful self-enhancing algorithm with arbitrary utility function has been discovered, it would be desirable to have, ready to go, a schema for the discovery of a friendly utility function via such computational outsourcing.
Now, jumping ahead to a later stage of the argument, I argue that it is extremely likely that distinctively quantum processes play a fundamental role in conscious cognition, because the model of thought as distributed classical computation actually leads to an outlandish sort of dualism. If we don't concern ourselves with the merits of my argument for the moment, and just ask whether an AI neuroscientist might somehow overlook the existence of this alleged secret ingredient of the mind, in the course of its studies, I do think it's possible. The obvious noninvasive way to form state-machine models of human brains is to repeatedly scan them at maximum resolution using fMRI, and to form state-machine models of the individual voxels on the basis of this data, and then to couple these voxel-models to produce a state-machine model of the whole brain. This is a modeling protocol which assumes that everything which matters is physically localized at the voxel scale or smaller. Essentially we are asking, is it possible to mistake a quantum computer for a classical computer by performing this sort of analysis? The answer is definitely yes if the analytic process intrinsically assumes that the object under study is a classical computer. If I try to fit a set of points with a line, there will always be a line of best fit, even if the fit is absolutely terrible. So yes, one really can describe a protocol for AI neuroscience which would be unable to discover that the brain is quantum in its workings, and which would even produce a specific classical model on the basis of which it could then attempt conceptual and volitional extrapolation.
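As a minimal illustration of that last point (my own sketch, not anything from the post; it assumes numpy is available): least-squares fitting always returns a "best" line, and nothing in the procedure itself flags that the data were never linear to begin with.

```python
# Fitting a line to obviously non-linear data: a best fit always exists (illustrative sketch).
import numpy as np

x = np.linspace(0, 4 * np.pi, 200)
y = np.sin(x)                              # plainly non-linear "data"

slope, intercept = np.polyfit(x, y, 1)     # least squares never refuses to answer
residuals = y - (slope * x + intercept)
r_squared = 1 - residuals.var() / y.var()

print(f"best-fit line: y = {slope:.3f}x + {intercept:.3f}, R^2 = {r_squared:.3f}")
# R^2 comes out near zero: the fit is terrible, but the procedure that assumes a line
# (or a classical state machine) will still hand you one without complaint.
```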
Clearly you can try to circumvent comparably wrong outcomes by adding reality checks and second opinions to your protocol for FAI development. At a more down-to-earth level, these exact mistakes could also be made by human neuroscientists, for the exact same reasons, so it's not as if we're talking about flaws peculiar to a hypothetical "automated neuroscientist". But I don't want to go on about this forever. I think I've made the point that wrong assumptions and lax verification can lead to FAI failure. The example of mistaking a quantum computer for a classical computer may even have a neat illustrative value. But is it plausible that the brain is actually quantum in any significant way? Even more incredibly, is there really a valid a priori argument against functionalism regarding consciousness - the identification of consciousness with a class of computational process?
I have previously posted (here) about the way that an abstracted conception of reality, coming from scientific theory, can motivate denial that some basic appearance corresponds to reality. A perennial example is time. I hope we all agree that there is such a thing as the appearance of time, the appearance of change, the appearance of time flowing... But on this very site, there are many people who believe that reality is actually timeless, and that all these appearances are only appearances; that reality is fundamentally static, but that some of its fixed moments contain an illusion of dynamism.
The case against functionalism with respect to conscious states is a little more subtle, because it's not being said that consciousness is an illusion; it's just being said that consciousness is some sort of property of computational states. I argue first that this requires dualism, at least with our current physical ontology, because conscious states are replete with constituents not present in physical ontology - for example, the "qualia", an exotic name for very straightforward realities like: the shade of green appearing in the banner of this site, the feeling of the wind on your skin, really every sensation or feeling you ever had. In a world made solely of quantum fields in space, there are no such things; there are just particles and arrangements of particles. The truth of this ought to be especially clear for color, but it applies equally to everything else.
In order that this post should not be overlong, I will not argue at length here for the proposition that functionalism implies dualism, but shall proceed to the second stage of the argument, which does not seem to have appeared even in the philosophy literature. If we are going to suppose that minds and their states correspond solely to combinations of mesoscopic information-processing events like chemical and electrical signals in the brain, then there must be a mapping from possible exact microphysical states of the brain, to the corresponding mental states. Supposing we have a mapping from mental states to coarse-grained computational states, we now need a further mapping from computational states to exact microphysical states. There will of course be borderline cases. Functional states are identified by their causal roles, and there will be microphysical states which do not stably and reliably produce one output behavior or the other.
Physicists are used to talking about thermodynamic quantities like pressure and temperature as if they have an independent reality, but objectively they are just nicely behaved averages. The fundamental reality consists of innumerable particles bouncing off each other; one does not need, and one has no evidence for, the existence of a separate entity, "pressure", which exists in parallel to the detailed microphysical reality. The idea is somewhat absurd.
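For concreteness, the standard kinetic-theory relation for an ideal gas (a textbook identity, not something from the post) makes this explicit: the pressure is nothing over and above an average of the particles' motions,

```latex
P = \tfrac{1}{3}\, n\, m\, \langle v^{2} \rangle ,
```

where $n$ is the particle number density, $m$ the particle mass, and $\langle v^{2}\rangle$ the mean squared speed; there is no further "pressure entity" on the right-hand side, only particles.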
Yet this is analogous to the picture implied by a computational philosophy of mind (such as functionalism) applied to an atomistic physical ontology. We do know that the entities which constitute consciousness - the perceptions, thoughts, memories... which make up an experience - actually exist, and I claim it is also clear that they do not exist in any standard physical ontology. So, unless we get a very different physical ontology, we must resort to dualism. The mental entities become, inescapably, a new category of beings, distinct from those in physics, but systematically correlated with them. Except that, if they are being correlated with coarse-grained neurocomputational states which do not have an exact microphysical definition, only a functional definition, then the mental part of the new combined ontology is fatally vague. It is impossible for fundamental reality to be objectively vague; vagueness is a property of a concept or a definition, a sign that it is incomplete or that it does not need to be exact. But reality itself is necessarily exact - it is something - and so functionalist dualism cannot be true unless the underdetermination of the psychophysical correspondence is replaced by something which says for all possible physical states, exactly what mental states (if any) should also exist. And that inherently runs against the functionalist approach to mind.
Very few people consider themselves functionalists and dualists. Most functionalists think of themselves as materialists, and materialism is a monism. What I have argued is that functionalism, the existence of consciousness, and the existence of microphysical details as the fundamental physical reality, together imply a peculiar form of dualism in which microphysical states which are borderline cases with respect to functional roles must all nonetheless be assigned to precisely one computational state or the other, even if no principle tells you how to perform such an assignment. The dualist will have to suppose that an exact but arbitrary border exists in state space, between the equivalence classes.
This - not just dualism, but a dualism that is necessarily arbitrary in its fine details - is too much for me. If you want to go all Occam-Kolmogorov-Solomonoff about it, you can say that the information needed to specify those boundaries in state space is so great as to render this whole class of theories of consciousness not worth considering. Fortunately there is an alternative.
Here, in addressing this audience, I may need to undo a little of what you may think you know about quantum mechanics. Of course, the local preference is for the Many Worlds interpretation, and we've had that discussion many times. One reason Many Worlds has a grip on the imagination is that it looks easy to imagine. Back when there was just one world, we thought of it as particles arranged in space; now we have many worlds, dizzying in their number and diversity, but each individual world still consists of just particles arranged in space. I'm sure that's how many people think of it.
Among physicists it will be different. Physicists will have some idea of what a wavefunction is, what an operator algebra of observables is, they may even know about path integrals and the various arcane constructions employed in quantum field theory. Possibly they will understand that the Copenhagen interpretation is not about consciousness collapsing an actually existing wavefunction; it is a positivistic rationale for focusing only on measurements and not worrying about what happens in between. And perhaps we can all agree that this is inadequate, as a final description of reality. What I want to say is that Many Worlds serves the same purpose in many physicists' minds, but is equally inadequate, though from the opposite direction. Copenhagen says the observables are real but goes misty about unmeasured reality. Many Worlds says the wavefunction is real, but goes misty about exactly how it connects to observed reality. My most frustrating discussions on this topic are with physicists who are happy to be vague about what a "world" is. It's really not so different to Copenhagen positivism, except that where Copenhagen says "we only ever see measurements, what's the problem?", Many Worlds says "I say there's an independent reality, what else is left to do?". It is very rare for a Many Worlds theorist to seek an exact idea of what a world is, as you see Robin Hanson and maybe Eliezer Yudkowsky doing; in that regard, reading the Sequences on this site will give you an unrepresentative idea of the interpretation's status.
One of the characteristic features of quantum mechanics is entanglement. But both Copenhagen, and a Many Worlds which ontologically privileges the position basis (arrangements of particles in space), still have atomistic ontologies of the sort which will produce the "arbitrary dualism" I just described. Why not seek a quantum ontology in which there are complex natural unities - fundamental objects which aren't simple - in the form of what we would presently call entangled states? That was the motivation for the quantum monadology described in my other really unpopular post. :-) [Edit: Go there for a discussion of "the mind as tensor factor", mentioned at the start of this post.] Instead of saying that physical reality is a series of transitions from one arrangement of particles to the next, say it's a series of transitions from one set of entangled states to the next. Quantum mechanics does not tell us which basis, if any, is ontologically preferred. Reality as a series of transitions between overall wavefunctions which are partly factorized and partly still entangled is a possible ontology; hopefully readers who really are quantum physicists will get the gist of what I'm talking about.
I'm going to double back here and revisit the topic of how the world seems to look. Hopefully we agree, not just that there is an appearance of time flowing, but also an appearance of a self. Here I want to argue just for the bare minimum - that a moment's conscious experience consists of a set of things, events, situations... which are simultaneously "present to" or "in the awareness of" something - a conscious being - you. I'll argue for this because even this bare minimum is not acknowledged by existing materialist attempts to explain consciousness. I was recently directed to this brief talk about the idea that there's no "real you". We are given a picture of a graph whose nodes are memories, dispositions, etc., and we are told that the self is like that graph: nodes can be added, nodes can be removed, it's a purely relational composite without any persistent part. What's missing in that description is that bare minimum notion of a perceiving self. Conscious experience consists of a subject perceiving objects in certain aspects. Philosophers have discussed for centuries how best to characterize the details of this phenomenological ontology; I think the best was Edmund Husserl, and I expect his work to be extremely important in interpreting consciousness in terms of a new physical ontology. But if you can't even notice that there's an observer there, observing all those parts, then you won't get very far.
My favorite slogan for this is due to the other Jaynes, Julian Jaynes. I don't endorse his theory of consciousness at all; but while in a daydream he once said to himself, "Include the knower in the known". That sums it up perfectly. We know there is a "knower", an experiencing subject. We know this, just as well as we know that reality exists and that time passes. The adoption of ontologies in which these aspects of reality are regarded as unreal, as only appearances, may be motivated by science, but it's false to the most basic facts there are, and one should show a little more imagination about what science will say when it's more advanced.
I think I've said almost all of this before. The high point of the argument is that we should look for a physical ontology in which a self exists and is a natural yet complex unity, rather than a vaguely bounded conglomerate of distinct information-processing events, because the latter leads to one of those unacceptably arbitrary dualisms. If we can find a physical ontology in which the conscious self can be identified directly with a class of object posited by the theory, we can even get away from dualism, because physical theories are mathematical and formal and make few commitments about the "inherent qualities" of things, just about their causal interactions. If we can find a physical object which is absolutely isomorphic to a conscious self, then we can turn the isomorphism into an identity, and the dualism goes away. We can't do that with a functionalist theory of consciousness, because it's a many-to-one mapping between physical and mental, not an isomorphism.
So, I've said it all before; what's new? What have I accomplished during these last sixteen months? Mostly, I learned a lot of physics. I did not originally intend to get into the details of particle physics - I thought I'd just study the ontology of, say, string theory, and then use that to think about the problem. But one thing led to another, and in particular I made progress by taking ideas that were slightly on the fringe, and trying to embed them within an orthodox framework. It was a great way to learn, and some of those fringe ideas may even turn out to be correct. It's now abundantly clear to me that I really could become a career physicist, working specifically on fundamental theory. I might even have to do that; it may be the best option for a day job. But what it means for the investigations detailed in this essay is that I don't need to skip over any details of the fundamental physics. I'll be concerned with many-body interactions of biopolymer electrons in vivo, not particles in a collider, but an electron is still an electron, an elementary particle, and if I hope to identify the conscious state of the quantum self with certain special states from a many-electron Hilbert space, I should want to understand that Hilbert space in the deepest way available.
My only peer-reviewed publication, from many years ago, picked out pathways in the microtubule which, we speculated, might be suitable for mobile electrons. I had nothing to do with noticing those pathways; my contribution was the speculation about what sort of physical processes such pathways might underpin. Something I did notice, but never wrote about, was the unusual similarity (so I thought) between the microtubule's structure, and a model of quantum computation due to the topologist Michael Freedman: a hexagonal lattice of qubits, in which entanglement is protected against decoherence by being encoded in topological degrees of freedom. It seems clear that performing an ontological analysis of a topologically protected coherent quantum system, in the context of some comprehensive ontology ("interpretation") of quantum mechanics, is a good idea. I'm not claiming to know, by the way, that the microtubule is the locus of quantum consciousness; there are a number of possibilities; but the microtubule has been studied for many years now and there's a big literature of models... a few of which might even have biophysical plausibility.
As for the interpretation of quantum mechanics itself, these developments are highly technical, but revolutionary. A well-known, well-studied quantum field theory turns out to have a bizarre new nonlocal formulation in which collections of particles seem to be replaced by polytopes in twistor space. Methods pioneered via purely mathematical studies of this theory are already being used for real-world calculations in QCD (the theory of quarks and gluons), and I expect this new ontology of "reality as a complex of twistor polytopes" to carry across as well. I don't know which quantum interpretation will win the battle now, but this is new information, of utterly fundamental significance. It is precisely the sort of altered holistic viewpoint that I was groping towards when I spoke about quantum monads constituted by entanglement. So I think things are looking good, just on the pure physics side. The real job remains to show that there's such a thing as quantum neurobiology, and to connect it to something like Husserlian transcendental phenomenology of the self via the new quantum formalism.
It's when we reach a level of understanding like that, that we will truly be ready to tackle the relationship between consciousness and the new world of intelligent autonomous computation. I don't deny the enormous helpfulness of the computational perspective in understanding unconscious "thought" and information processing. And even conscious states are still states, so you can surely make a state-machine model of the causality of a conscious being. It's just that the reality of how consciousness, computation, and fundamental ontology are connected, is bound to be a whole lot deeper than just a stack of virtual machines in the brain. We will have to fight our way to a new perspective which subsumes and transcends the computational picture of reality as a set of causally coupled black-box state machines. It should still be possible to "port" most of the thinking about Friendly AI to this new ontology; but the differences, what's new, are liable to be crucial to success. Fortunately, it seems that new perspectives are still possible; we haven't reached Kantian cognitive closure, with no more ontological progress open to us. On the contrary, there are still lines of investigation that we've hardly begun to follow.
32 comments
comment by lavalamp · 2012-01-29T15:39:54.246Z · LW(p) · GW(p)
Everything computable by a quantum computer is computable by a classical computer (only slower, in some cases). Even if the human brain does in fact do some quantum calculations, a corresponding classical brain could be made. If you really believe that functionalism requires dualism, then I do not see how quantum mechanics can possibly help.
I'm bothered by the fact that you speak of modeling brains with fMRI. fMRI tracks blood flow, not neural activity (they are correlated). It will not be useful AFAIK for scanning a brain at the neuronal level, and we will (most likely) have to map every neural connection before we'd be able to emulate a brain. Speaking of "coarse-grained neurocomputational states" may be nonsensical; we don't know how much of the brain we'll have to emulate to get it right.
Lastly, my recollection from back when I went searching for evidence that the brain was a quantum computer in a feeble and ultimately doomed attempt to maintain my belief in dualism, is that it was very unlikely that the brain used quantum computation.
↑ comment by Mitchell_Porter · 2012-01-30T04:56:34.249Z · LW(p) · GW(p)
The role of quantum mechanics in this argument is not to transcend Turing-equivalence. The role of quantum mechanics is to provide a rationale for an ontology containing entities which are fundamental yet have complex states, something which is necessary if you don't want to think of the mind as a non-fundamental state machine. Entangled states as actual states is not absolutely the only way to do this - e.g. there are plenty of topological structures, potentially playing a role in physics, which are complex unities - but it's a natural candidate.
I see I way overestimated the resolution of fMRI: it's of the order of a cubic millimeter, and a cubic millimeter contains about a billion synapses! So even with a really long time series, any model you make is probably going to be pretty crappy - it'll reproduce what your experimental subjects did in the precise situations under which they were scanned, but any other situation is liable to produce something bad.
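A rough back-of-envelope using commonly cited cortical figures (my numbers, not anything stated in this thread) gives the same order of magnitude:

```latex
\sim 10^{5}\ \tfrac{\text{neurons}}{\text{mm}^{3}} \times \sim 10^{4}\ \tfrac{\text{synapses}}{\text{neuron}} \;\approx\; 10^{9}\ \tfrac{\text{synapses}}{\text{mm}^{3}} ,
```

i.e. about a billion synapses per cubic millimeter.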
I meant neuronal states described in a way that is coarse-grained with respect to fundamental physical degrees of freedom.
The standard argument against quantum biology of any sort is that living matter is at room temperature and so everything decoheres. One of the reasons that microtubules are attractive, for someone interested in quantum biology, is that they have some of the right properties to be storing topological quantum entanglement, which is especially robust. It would still be a huge leap from that to what I'm talking about, because I'm saying the whole conscious mind is a single quantum entity, so if it's based on collective excitations of electrons in microtubules (for example), we would still require some way for electrons in different microtubules to be coherently coupled. Any serious attempt in this direction will also have to study the cellular and intercellular medium from a condensed-matter perspective, to see if there are any collective quantum effects, e.g. in the ambient electromagnetic field on subcellular scales, or in the form of phonons in the fibrous intra- and intercellular matrix, which could help to mediate such a coupling.
↑ comment by [deleted] · 2012-01-30T21:46:59.004Z · LW(p) · GW(p)
The role of quantum mechanics is to provide a rationale for an ontology containing entities which are fundamental yet have complex states, something which is necessary if you don't want to think of the mind as a non-fundamental state machine.
Why don't you want to think of the mind as non-fundamental? Sounds like rationalization.
↑ comment by lavalamp · 2012-01-30T14:56:55.001Z · LW(p) · GW(p)
The role of quantum mechanics is to provide a rationale for an ontology containing entities which are fundamental yet have complex states, something which is necessary if you don't want to think of the mind as a non-fundamental state machine.
I don't understand how a quantum computer satisfies this requirement, but not a classical computer.
comment by lavalamp · 2012-01-29T15:09:10.866Z · LW(p) · GW(p)
In order that this post should not be overlong, I will not argue at length here for the proposition that functionalism implies dualism...
This seems like such a necessary component of your argument that I think it was a bad place to skimp on the explanation. The outline you gave did little to convince me, I'm afraid. I could be wrong, but my perception is that I won't be alone here in that position. Split the post in two if it makes it too long...
comment by kilobug · 2012-01-29T10:57:54.464Z · LW(p) · GW(p)
Maybe it's me, but I really didn't get the "why" of your "functionalism implies dualism" thesis. The qualia issue has been addressed at length by people like Douglas Hofstadter in a (to me) quite convincing way, or indirectly by Eliezer Yudkowsky in the Sequences (in How An Algorithm Feels From Inside for example), and the "borderline cases" issue is just a very classical issue in a reductionist view of "multi-layered map of a single-layered reality". It's the same kind of "borderline cases" you've got with not being able to say at the quark level exactly what is part of a given object and what is not, and I really don't see how it implies dualism.
↑ comment by Mitchell_Porter · 2012-01-30T05:26:37.530Z · LW(p) · GW(p)
I really didn't get the "why" of your "functionalism implies dualism" thesis.
Well, let's review what it is that we're trying to explain. Consider what you are, from the subjective perspective. You're a locus of awareness, experiencing some texture of sensations organized into forms. Then, we have what is supposed to be the physical reality of you: about O(10^26) atoms, executing an intricate nested dance of systems within systems, inside a skull somewhere. An individual sensation - the finest grain of that texture of sensation you're always experiencing, e.g. some pixel of red in your visual field - is supposed to be the very same thing as a particular massed movement of billions of atoms somewhere in your visual cortex. Even if you say that the redness is "how this movement feels" or "how it feels to be this movement" or some similar turn of phrase, you're still tending towards a type of dualism, property dualism, because you're saying that along with its physically recognizable properties, this flow of atoms has a property that otherwise plays no role in physics, the property of "how it feels".
the "borderline cases" issue is just a very classical issue in a reductionist view of "multi-layered map of a single-layered reality". It's the same kind of "borderline cases" you've [got] with not being able to take at quark-level what is exactly part of a given object and what is not, and I really don't see how it implies dualism.
For macroscopic concepts like chair, we can get away with vagueness about borderline cases, because there's no reason to believe that "chair" is anything more than a heuristic concept for talking about certain large clusters of atoms. The experience of a chair is a collaboration between a world of atoms and a mind of perceptions and concepts. But you can't reduce the mind itself in this way, because of the circularity involved.
↑ comment by hairyfigment · 2012-01-30T08:37:28.567Z · LW(p) · GW(p)
So what part of the answer do you disagree with?
As near as I can tell, you say our models look too imprecise to explain consciousness. You must know the argument that consciousness ain't that precise - how do you respond? Because when I put this together with the first link, I don't see what you have left to explain. (But I may be slightly drunk.)
↑ comment by kilobug · 2012-01-30T09:37:36.849Z · LW(p) · GW(p)
An individual sensation - the finest grain of that texture of sensation you're always experiencing, e.g. some pixel of red in your visual field - is supposed to be the very same thing as a particular massed movement of billions of atoms somewhere in your visual cortex.
No, not the very same thing. Many kinds of "massed movement of billions of atoms" can generate the same sensation. Sure, exactly the same movement of the whole brain will always generate the same sensation, but in real life that won't just happen; a brain will never be in exactly the same state twice.
you're still tending towards a type of dualism, property dualism, because you're saying that along with its physically recognizable properties, this flow of atoms has a property that otherwise plays no role in physics, the property of "how it feels".
The configuration of atoms on my hard disk has the property of being an ext4 filesystem, while being an ext4 filesystem plays no role in physics - so do I believe in property dualism? A property is part of the map, not of the territory. The property of that hard disk is that it holds that movie file. The same movie file (for me, at the level of the map which is useful to me) exists on my USB key, and on that DVD. The physical configuration of the two is totally different; for me it's the same file.
It's exactly the same with "feeling" or "seeing red". And it doesn't matter that my DVD is slightly damaged so some DVD players will be able to read it, but others won't, making it a "borderline case".
But you can't reduce the mind itself in this way, because of the circularity involved.
I don't see the problem with that kind of circularity (but maybe I did read too much Hofstadter, so "strange loops" have become a normal fundamental concept to me). Also, you seem to forget that perception involves vagueness. Our perceptions aren't binary "red" and "orange". When required to classify something between "red" and "orange", we'll end up with one (one will get slightly higher activation), but overall the "red" or "orange" symbols in our brain are more or less strongly activated and can be activated at the same time for borderline cases. So the borderline cases aren't even that problematic.
comment by Shmi (shminux) · 2012-01-29T19:51:03.958Z · LW(p) · GW(p)
Downvoted for many wishy-washy unfounded statements, but mostly for mentioning the word tensor in the synopsis and never again in the rest of the post. Have you considered taking a basic technical writing course?
↑ comment by Mitchell_Porter · 2012-01-30T03:43:21.668Z · LW(p) · GW(p)
That's a genuine oversight; I added the synopsis at the last minute. The mind as tensor factor was discussed in a previous post. I've added a comment to the essay, but maybe I should just change the synopsis.
comment by Daniel_Burfoot · 2012-02-05T04:28:48.207Z · LW(p) · GW(p)
You are obviously smart and dedicated; the fact that I can't perfectly understand what you are talking about is just as likely to be my fault as it is to be yours. I want there to be more smart people pursuing eccentric philosophical quests. I would fund you if I had substantial excess financial capacity; I hope someone else does.
↑ comment by Mitchell_Porter · 2012-02-05T05:00:34.845Z · LW(p) · GW(p)
Thanks for saying this.
comment by Jack · 2012-01-29T23:52:39.527Z · LW(p) · GW(p)
I think if you did become a professional physicist your chances of finding funding for this project would increase significantly. Even people who are sympathetic to your approach have no good reason for thinking you have the ability to make significant progress in this field. If you do have some qualifications or experience that would lead your readers to see that you aren't just a wackjob, you should really include it in your pitches.
↑ comment by Mitchell_Porter · 2012-01-30T04:23:04.904Z · LW(p) · GW(p)
Certainly, if I had a physics job already, this post would not exist; I would just be invisibly getting on with the project. Then there's the option of writing a paper. Last time around, the plan was to do something small but solid, like prove a minor conjecture. But I ended up taking the ambitious path, and now I have some ideas to develop which are very cool, but not yet validated (or invalidated). I might talk about them on a physics forum, but talking about them here would prove nothing, since the audience isn't qualified to judge them.
So yes, in terms of obtaining practical assistance, this post was a long shot. I had to actually run out of money before I was willing to sit down and write it. Afterwards I felt this immense relief at making a relatively uncompromising statement of my agenda. Hopefully a few other people got something from it as well.
comment by Risto_Saarelma · 2012-01-30T05:54:59.520Z · LW(p) · GW(p)
So is it possible you'll end up with a conclusion that whenever people go unconscious, the quantum state process gets scrambled and they die for real, and then later a new process gets started and some brand new person with their memories wakes up?
comment by Will_Newsome · 2012-02-10T08:45:23.770Z · LW(p) · GW(p)
Your agenda strikes me as potentially fruitful but insufficiently meta. There are many philosophical problems an FAI would need to be able to solve, and I certainly agree that consciousness is a huge one. But this would seem to me to indicate that we need to find a way to automate philosophical progress generally, rather than find a way to algorithmicize our human-derived intuitions about consciousness. Non? Are you of the opinion that we need to understand how brains do their magic if we're to be sure that our seed AI will be able to figure out how to do similar magic?
Wheeler talks about quantum mechanics as statistically describing the behavior of masses of logical operations. Goertzel is like 'well logical operations are just a rather rigid and unsatisfying form of thought, maybe you get quantum from masses of Mind operations'. As far as crackpot theories go it seems okay, and superficially looks like what you're trying to do in a much more technical way by unifying physics and experience.
Anyway, I wish you good luck on your journey.
(I apologize if this comment is unclear, I am highly distracted.)
comment by Ghostly · 2012-02-03T04:16:18.652Z · LW(p) · GW(p)
An example of the other way to approach this question is the idea of simulating a group of consciousness theorists for 500 subjective years, until they arrive at a consensus on the nature of consciousness. I think it's rather unlikely that anyone will ever get to solve FAI-relevant problems in that way.
The CEV idea there would be to create an AI which is optimizing for expected satisfaction of the utility function that would be output by such a process. If the AI's other functionality is good, it will start with reasonable guesses about what such a process would output, and rapidly improve those guesses. As it further improved, gathered more data, etc, it would better and better approximate that output.
comment by orthonormal · 2012-01-29T20:45:44.424Z · LW(p) · GW(p)
[pointless insult, later reconsidered]
↑ comment by Nick_Tarleton · 2012-01-30T21:10:33.597Z · LW(p) · GW(p)
[criticism of pointless insult]
↑ comment by orthonormal · 2012-01-31T04:18:37.411Z · LW(p) · GW(p)
...you're right. I'll edit it.
↑ comment by Nick_Tarleton · 2012-01-30T20:59:42.391Z · LW(p) · GW(p)
Unlike David, I don't think this direction is particularly worth exploring, but I think this kind of mockery (of people who are themselves civil) worsens the atmosphere without creating any benefit, and, while it isn't exactly an epistemic punishment-of-nonpunishers dynamic, it contributes to one, which is really undesirable. Downvoted.
↑ comment by [deleted] · 2012-01-30T06:04:57.603Z · LW(p) · GW(p)
While I am also skeptical of this proposed direction, it is definitely worth exploring, as the questions Mitchell is trying to answer are very much unresolved. Personally my views are very similar to Dennett's, but I still applaud Mitchell for this work, if for no other reason than that Mitchell Porter is unquestionably the most knowledgeable person on this site when it comes to quantum physics, and all theoretical physics in general really, and the philosophy surrounding it.
If you disagree with him, fine, point out his errors, but don't call it "bullshit" just because you dislike it and please, consider showing some respect to someone who has dedicated more time in search of truth than you ever have.
It's not like he is asking "hey, can I have some money to dig deeper into this rabbit hole?" He outlines what his goals are very clearly, and then people should decide for themselves.
If I remember correctly, you were the same person who criticized his criticism of MWI without being able to point out any errors...
↑ comment by wedrifid · 2012-01-30T16:46:35.426Z · LW(p) · GW(p)
Mitchell Porter is unquestionably the most knowledgeable person on this site when it comes to quantum physics, and all theoretical physics in general really, and the philosophy surrounding it.
I don't believe you.
If you disagree with him, fine, point out his errors, but don't call it "bullshit" just because you dislike it
He called the non-reductionist qualia bullshit "bullshit" because it is non-reductionist qualia bullshit.
↑ comment by [deleted] · 2012-01-30T21:55:08.636Z · LW(p) · GW(p)
Ok, if you don't believe me, then let me know who this other person is. I won't have a lengthy discussion about who's more knowledgeable in a comment section, but I'd really like to see who you think knows more physics than Porter. Please don't say Yudkowsky; that'd be a joke.
I know Porter mainly from other places where physics is discussed, not AI.
↑ comment by orthonormal · 2012-01-30T16:22:59.724Z · LW(p) · GW(p)
Mitchell Porter is unquestionably the most knowledgeable person on this site when it comes to quantum physics, and all theoretical physics in general really, and the philosophy surrounding it.
On what basis do you claim this? I haven't heard of any credentials he has in physics, for one thing. (For the record, I'm not a physicist but I have a PhD in a relevant area of math.)
If I remember correctly, you were the same person who criticized his criticism of MWI without being able to point out any errors...
Could you point me to that discussion, then?
↑ comment by [deleted] · 2012-01-30T23:29:58.099Z · LW(p) · GW(p)
I don't know which official credentials he has, but if you take a look at physicsforums.com and physics.stackexchange and various other blogs, you will quickly realize the vast amount of knowledge he has. He can do work in QM, QFT and string theory - not just read and understand it, but actual work. Can you?
Here is the discussion I was referring to: http://lesswrong.com/r/discussion/lw/7jw/probablyincomprehensible_decision_theory/4sud
↑ comment by orthonormal · 2012-01-31T04:39:43.247Z · LW(p) · GW(p)
Thanks.
if you take a look at physicsforums.com and physics.stackexchange and various other blogs, you will quickly realize the vast amount of knowledge he has.
Huh. I'm surprised, and I'll update.
Here is the discussion I was referring to:
Let's see. He was saying that MWI made no ontological sense. I said something insulting (that's an ugly pattern - I don't do that to very many people on here), then Manfred started explaining MWI pretty well. I jumped back in to explain the Born probabilities as a consequence of it being a Hilbert space.
Then Mitchell correctly identified my explanation with a variant of Barbour's Platonia (I may have missed that on first reading) before launching into something like infinite-set atheism (that countable duplication of a configuration would count as an explanation of the Born probabilities, but that a vector in L^2 with a certain length would not). He also mentions that MWI has issues with relativity and preferred somehow; I admit I'm not qualified to talk about relativistic QM, so I can't counter the objection- but if it wasn't a problem for Feynman, I'm not especially worried.
...I should update again, because he did offer a decent restatement of my view before offering one criticism I understand (and reject as mathematical Ludditism) along with two that I don't have the background for. I think I've probably let my disgust over his philosophical stance on qualia bleed over into my estimation of his physics knowledge. Drat.
↑ comment by Douglas_Knight · 2012-01-31T05:14:04.096Z · LW(p) · GW(p)
What do physics credentials have to do with this? Note MP's quote of Orthonormal's complaint in which he compared MP to Penrose!
I looked at physics stackexchange and quickly failed to change my impression of MP's grasp of physics. It's not terrible, but I don't see evidence that he can "do work." Given your interpretation of that thread you link, I shouldn't have bothered.
↑ comment by [deleted] · 2012-01-31T05:39:38.100Z · LW(p) · GW(p)
His credentials have to do with the fact that one should not dismiss everything out of hand just because you disagree with some positions he holds; if you knew his credentials, you might have another look at what he writes and not shrug it off right away.
I'm surprised you feel that way, really; he has done quite a lot of work, but he is a bit "all over the place" in terms of finishing projects. I would like to understand what you base your skepticism of his competence on, but maybe you should do that in a PM so we don't hijack MP's post.
Regarding my position on MWI, it is in accordance with the current state of physics: the Born Rule issue and the preferred basis problem are still not solved. Dugic et al. 2011 also shows a deep problem for ever being able to solve the preferred basis problem. But again, this should be discussed in its own thread. Maybe someone impartial should create a "The big MWI debate" post where we could discuss all of these issues. Because I think that frankly over 90% of those who subscribe to MWI on this site have no clue about QM and have only read Yudkowsky's sequence. And EY is definitely no physicist.
↑ comment by Mitchell_Porter · 2012-01-31T05:52:56.332Z · LW(p) · GW(p)
Before this goes any further, I have to say that I appreciate the defense, but let's not overstate my demonstrated competencies. I would like to construct a field theory which applies this to this, and I am studying a well-known model system from string theory with a view to modifying it to match the real world, and I have various other conceptual projects. So I'm able to have ideas, but hardly any of these projects are at the stage where I have calculations to perform, and in that area I'll be leaning on references for a long time to come.