One last roll of the dice

post by Mitchell_Porter · 2012-02-03T01:59:56.996Z · LW · GW · Legacy · 107 comments

Previous articles: Personal research update, Does functionalism imply dualism?, State your physical account of experienced color.

 

In phenomenology, there is a name for the world of experience, the "lifeworld". The lifeworld is the place where you exist, where time flows, and where things are actually green. One of the themes of the later work of Edmund Husserl is that a scientific image of the real world has been constructed, on the basis of which it is denied that various phenomena of the lifeworld exist anywhere, at any level of reality.

When I asked, in the previous post, for a few opinions about what color is and how it relates to the world according to current science, I was trying to gauge just how bad the eclipse of the lifeworld by theoretical conceptions is, among the readers of this site. I'd say there is a problem, but it's a problem that might be solved by patient discussion.

Someone called Automaton has given us a clear statement of the extreme position: nothing is actually green at any level of reality; even green experiences don't involve the existence of anything that is actually green; there is no green in reality, there is only "experience of green" which is not itself green. I see other responses which are just a step or two away from this extreme, but they don't deny the existence of actual color with that degree of unambiguity.

A few people talk about wavelengths of light, but I doubt that they want to assert that the light in question, as it traverses space, is actually colored green. Which returns us to the dilemma: either "experiences" exist and part of them is actually green, or you have to say that nothing exists, in any sense, at any level of reality, that is actually green. Either the lifeworld exists somewhere in reality, or you must assert, as does the philosopher quoted by Automaton, that all that exists are brain processes and words. Your color sensations aren't really there, you're "having a sensation" without there being a sensation in reality.

What about the other responses? kilobug seems to think that pi actually exists inside a computer calculating the digits of pi, and that this isn't dualist. Manfred thinks that "keeping definitions and referents distinct" would somehow answer the question of where in reality the actual shades of green are. drethelin says "The universe does not work how it feels to us it works" without explaining in physical terms what these feelings about reality are, and whether any of them is actually green. pedanterrific asks why wrangle about color rather than some other property (the answer is that the case of color makes this sort of problem as obvious as it ever gets). RomeoStevens suggests I look into Jeff Hawkins. Hawkins mentions qualia once in his book "On Intelligence", where he speculates about what sort of neural encoding might be the physical correlate of a color experience; but he doesn't say how or whether anything manages to be actually colored.

amcknight asks which of 9 theories of color listed in the SEP article on that subject I'm talking about. If you go a few paragraphs back from the list of 9 theories, you will see references to "color as it is in experience" or "color as a subjective quality". That's the type of color I'm talking about. The 9 theories are all ways of talking about "color as in physical objects", and focus on the properties of the external stimuli which cause a color sensation. The article gets around to talking about actual color, subjective or "phenomenal" color, only at the end.

Richard Kennaway comes closest to my position; he calls it an apparently impossible situation which we are actually living. I wouldn't put it quite like that; the only reason to call it impossible is if you are completely invested in an ontology lacking the so-called secondary qualities; if you aren't, it's just a problem to solve, not a paradox. But Richard comes closest (though who knows what Will Newsome is thinking). LW user "scientism" bites a different bullet to the eliminativists, and says colors are real and are properties of the external objects. That gets a point for realism, but it doesn't explain color in a dream or a hallucination.

Changing people's minds on this subject is an uphill battle, but people here are willing to talk, and most of these subjects have already been discussed for decades. There's ample opportunity, by drawing on the work of others, to dissolve not the problem itself but the false solutions which only obscure it; preferably before the future Rationality Institute starts mass-producing people who have the vice of quale-blindness as well as the virtues of rationality. Some of those people will go on to work on Friendly AI. So it's highly desirable that someone should do this. However, that would require time that I no longer have.

 

In this series of posts, I certainly didn't set out to focus on the issue of color. The first post is all about Friendly AI, the ontology of consciousness, and a hypothetical future discipline of quantum neurobiology. It may still be unclear why I think evidence for quantum computing in the brain could help with the ontological problems of consciousness. I feel that the brief discussion this week has produced some minor progress in explaining myself, which needs to be consolidated into something better. But see my remarks here about being able to collapse the dualistic distinction between mental and physical ontology in a tensor network ontology; also earlier remarks here about mathematically representing the phenomenological ontology of consciousness. I don't consider myself dogmatic about what the answer is, just about the inadequacy of all existing solutions, though I respect my own ideas enough to want to pursue them, and to believe that doing so will be usefully instructive, even if they are wrong.

However, my time is up. In real life, my ability to continue even at this inadequate level hangs by a thread. I don't mean that I'm suicidal, I mean that I can't eat air. I spent a year getting to this level in physics, so I could perform this task. I have considerable momentum now, but it will go to waste unless I can keep going for a little longer - a few weeks, maybe a few months. That should be enough time to write something up that contains a result of genuine substance, and/or enough time to secure an economic basis for my existence in real life that permits me to keep going. I won't go into detail here about how slim my resources really are, or how adverse my conditions, but it has been the effort that you would want from someone who has important contributions to make, and nowhere to turn for direct assistance.[*] I've done what I can, these posts are the end of it, and the next few days will decide whether I can keep going, or whether I have to shut down my brain once again.

So, one final remark. Asking for donations doesn't seem to work yet. So what if I promise to pay you back? Then the only cost you bear is the opportunity cost and the slight risk of default. Ten years ago, Eliezer lent me the airfare to Atlanta for a few days of brainstorming. It took a while, but he did get that money back. I honor my commitments and this one is highly public. This really is the biggest bargain in existential risk mitigation and conceptual boundary-breaking that you'll ever get: not even a gift, just a loan is required. If you want to discuss a deal, don't do it here, but mail me at mitchtemporarily@hotmail.com. One person might be enough to make the difference.

[*]Really, I can't say that, that's an emotional statement. There has been lots of assistance, large and small, from people in my life. But it's been a struggle conducted at subsistence level the whole way.

 

ETA 6 Feb: I get to keep going.


comment by WrongBot · 2012-02-03T02:44:50.429Z · LW(p) · GW(p)

What would it take to convince you that this entire line of inquiry is confused? Not just the quantum stuff, but the general idea that qualia are ontologically basic? Not just arguments, necessarily; experiments would be good, too.

If Mitchell is unable or unwilling to answer this question, no one should give him any amount of money no matter the terms.

Replies from: Nisan, Mitchell_Porter
comment by Nisan · 2012-02-03T03:23:24.818Z · LW(p) · GW(p)

Is this a reasonable request? What would convince you that this line of inquiry is not confused?

Replies from: WrongBot
comment by WrongBot · 2012-02-03T04:23:03.608Z · LW(p) · GW(p)

If we discover laws of physics that only seem to be active in the brain, that would convince me. If we discover that the brain sometimes violates the laws of physics as we know them, that would convince me. If we build a complete classical simulation of the brain and it doesn't work, that would convince me. If we build a complete classical simulation of the brain and it works differently from organic brains, that would convince me. Ditto for quantum versions, even, I guess.

And there are loads of other things that would be strong evidence on this issue. Maybe we'll find the XML tags that encode greenness in objects. I don't expect any of these things to be true, because if I did then I would have updated already. But if any of these things did happen, of course I would change my mind. It probably wouldn't even take evidence that strong. Hell, any evidence stronger than intuition would be nice.

comment by Mitchell_Porter · 2012-02-03T03:54:22.605Z · LW(p) · GW(p)

Mr WrongBot, the very name under which you post appears on my screen in green. Every account of color that has been presented here is either eliminativist or dualist. I would, in extreme conditions, be willing to think about dualism, but I'm not willing to deny that green is green and that it's right in front of me. I don't care if green is basic or not in your theory of reality, but it has to be there, OK? Not just a word or a gensym labeling an information-processing event.

You will not solve the hard problem of consciousness by saying that you are "confused". You will solve it by being clear about the facts, beginning with the facts that consciousness is real, it contains color, and our physical ontologies do not.

If someone wants a completely different reason to help tide me over, I can tell them all about a newly discovered relationship in particle physics which is just waiting for someone to embed it in a model. Every day I check the arxiv and expect to see that someone has done it, but not yet. It's an amazing generalization of another relationship which has been known for 30 years but which has been neglected (not completely), because it runs against a bit of particle-physics "common sense". A way around that was figured out a few years ago, and the logical next step is to extend this workaround to the new, extended relationship. This is something I noticed on my long trek towards fundamental-physics competence. I am actually trying to get a research job out of a side aspect of a spinoff of this, but there are no particle theorists where I am, so that's a challenge.

Replies from: JoshuaZ, WrongBot
comment by JoshuaZ · 2012-02-03T05:16:00.973Z · LW(p) · GW(p)

If someone wants a completely different reason to help tide me over, I can tell them all about a newly discovered relationship in particle physics which is just waiting for someone to embed it in a model. Every day I check the arxiv and expect to see that someone has done it, but not yet. It's an amazing generalization of another relationship which has been known for 30 years but which has been neglected (not completely), because it runs against a bit of particle-physics "common sense".

There are a fair number of math and physics people here on LW. Simply describing such an idea to some of them, either via email or through another post, would be doable. And this isn't the only place where there are physicists. I know professional physicists who might be willing to look at that sort of thing if I told them in advance that the person in question seemed to have an idea that wasn't obviously wrong, and knows far more physics than almost any crank. This sort of thing, if it actually works, would by itself be far more interesting to many here than anything related to qualia. The attitude would probably be something like pleasant surprise, if it helps in that regard in some way.

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2012-02-03T07:04:49.977Z · LW(p) · GW(p)

It's discussed in this thread. The Koide formula is the prototype; Alejandro Rivero, who opened the thread, found its generalization. The barrier to the original formula's credibility, as explained in the Wikipedia article (I added this section just yesterday), and also in Motl's post, is that that sort of relation "shouldn't" hold for that sort of quantity. The physicist Yukinari Sumino devised a mechanism to protect the original Koide relation from quantum corrections, but his paper has received very little attention, even from the people who work on generalizing the Koide relation. We don't know that the Sumino mechanism is the only way to protect a Koide relation, but it's natural to think about extending it to Rivero's new relations. That's actually a highly conservative response; one of the peculiarities of these formulas is the appearance of mass scales also found in QCD, which is suggestive of something really deep going on, like a supersymmetric QCD dual to the whole standard model. As far as Sumino's paper goes, later in the thread "fzero", who is evidently a working particle physicist, gives it some critical scrutiny, but we didn't yet come to any conclusions.
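For concreteness, the original relation is easy to check numerically. A minimal Python sketch (the charged-lepton masses are approximate PDG values, included only for illustration):

```python
# Koide's relation for the charged leptons:
#   Q = (m_e + m_mu + m_tau) / (sqrt(m_e) + sqrt(m_mu) + sqrt(m_tau))**2
# The observation is that Q comes out almost exactly 2/3, far more precisely
# than naive expectations about running masses would suggest.
from math import sqrt

m_e, m_mu, m_tau = 0.511, 105.658, 1776.86  # approximate pole masses in MeV

Q = (m_e + m_mu + m_tau) / (sqrt(m_e) + sqrt(m_mu) + sqrt(m_tau)) ** 2
print(Q)      # ~0.66666
print(2 / 3)  # 0.66667
```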

Replies from: JoshuaZ
comment by JoshuaZ · 2012-02-03T16:51:52.368Z · LW(p) · GW(p)

So other people are paying serious attention to this already? Have you considered seeing if Rivero considers your ideas strong enough to help you put a paper on the arxiv?

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2012-02-04T04:30:12.823Z · LW(p) · GW(p)

So other people are paying serious attention to this already?

To explain the situation I have to describe the sociology of theoretical attention to Koide's relation. Koide first discovered it in a model 30 years ago. Dozens of people have written papers about it. But as Lubos mentioned, the relation "shouldn't" be as accurate as it is, because of RG flow (which is why Lubos dismisses it as a coincidence). A much smaller number of papers have been written about the behavior of Koide relations under RG flow. Sumino's papers are the only ones describing a mechanism capable of making the Koide relation exact, and they haven't received much attention.

Alejandro's generalization of the Koide relation was found in a theoretical milieu where the RG issue hasn't been discussed very much. The bare facts are a set of numbers, the fermion masses. The people looking for Koide-type relations have included mainstream physicists, "alternative" physicists, and ex-physicists. (Alejandro is in the last category; he has a physics PhD but works in IT now, and participates vigorously in these online discussions.) The mainstream theorists who have in recent years proposed extended Koide relations do sometimes consider how they would be affected by RG flow, but it's a rather perfunctory discussion, and you don't even get that much from the others.

So I'm the only one publicly talking about "Rivero + Sumino", but who knows what's going on in Germany, Japan, and elsewhere. There are a few things which might inhibit consideration of Alejandro's paper: it involves an unfamiliar parametrization of the Koide relation, and it may sound odd to have quarks from different weak doublets in such a relationship. However, the paper actually describes a chain of Koide relations encompassing all the quark masses... Anyway, without going into the details, the relationships he found are a little peculiar from the vantage of theoretical expectations, which is an extra issue on top of the RG issue for physicists not already involved in these investigations; but in my opinion these peculiarities are actually mighty clues to the cause of the relationship.

If I ever do manage to make a Rivero-Sumino model, getting on the arxiv won't be a problem. The challenge is just to make the model.

comment by WrongBot · 2012-02-03T04:25:03.332Z · LW(p) · GW(p)

So there is no evidence, ever, even in principle, that could convince you that eliminativism is true? Well, I guess I have my answer then.

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2012-02-03T04:34:54.406Z · LW(p) · GW(p)

Does eliminativism imply that the green word in the banner of this website isn't green?

Replies from: WrongBot
comment by WrongBot · 2012-02-03T04:59:16.935Z · LW(p) · GW(p)

Forget eliminativism. Let's just talk about the proposition that:

We don't need new fundamental entities, currently unknown to science, to explain whatever it is that causes people to talk about qualia.

Or alternatively, the proposition that:

The experience of the color green can be completely and satisfactorily explained by understanding the computational processes performed by the human brain.

Or any similar proposition that gets at that general idea. Is there any evidence that could convince you of such a proposition? Whether you intend to or not, you are coming off (to me) as evasive on this subject, which is why I'm trying so hard to nail down your actual position.

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2012-02-03T06:06:56.771Z · LW(p) · GW(p)

you are coming off (to me) as evasive on this subject, which is why I'm trying so hard to nail down your actual position.

My actual position is that green exists and that it does not exist in standard physical ontology. Particles in standard physical ontology are not green, and nothing made out of them would be green. Is that clear?

One of your propositions talks about explaining "whatever it is that causes people to talk about qualia". The other talks about explaining "the experience of the color green". In both cases we talk around the actual existence of colors in an unacceptable way - the first one focuses on words, the second one focuses on "experience of green", which can provide an opportunity to deny that green per se exists, as seen in the quote produced by Automaton.

Replies from: WrongBot
comment by WrongBot · 2012-02-03T06:16:36.056Z · LW(p) · GW(p)

First requested clarification: is it your belief that green is an ontologically basic primitive? Or is green composed of other ontologically basic primitives that are outside the standard model?

Second requested clarification: since my propositions are unacceptable (and that strikes me as a fair criticism, given your arguments), is there any experiment or argument or experience or anything that could convince you that green is not real in the way that you currently believe that it is?

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2012-02-03T06:36:48.045Z · LW(p) · GW(p)

I'll start with the second question.

is there any experiment or argument or experience or anything that could convince you that green is not real in the way that you currently believe that it is?

That would amount to convincing me that the experience which is currently happening, is not currently happening; or that an experience which previously happened, did not actually happen.

is it your belief that green is an ontologically basic primitive? Or is green composed of other ontologically basic primitives that are outside the standard model?

The analysis of color as abstractly three-dimensional (e.g. hue, saturation, brightness) seems phenomenologically accurate to me. So as a first approximation to a phenomenological ontology of color, I say that there are regions of color, present to a conscious subject, within which these attributes vary.
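To be concrete about the three-dimensionality, and about nothing more than that: the hue/saturation/value parametrization is in Python's standard library. A minimal sketch; the RGB triple is just an illustrative stimulus encoding, not a claim about what phenomenal green is:

```python
# Color as an abstractly three-dimensional quantity: every RGB triple maps
# to a (hue, saturation, value) triple, i.e. a point in a 3-D color space.
import colorsys

h, s, v = colorsys.rgb_to_hsv(0.0, 1.0, 0.0)  # a "pure green" stimulus
print(h, s, v)  # 0.333... 1.0 1.0 -> hue 120 degrees, fully saturated, full value
```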

If we want to go further, we have to tackle further questions, like what ontological category color belongs to. Is it a property of the conscious subject? Is it a property of a part of the conscious subject? Should we regard sensory continua as part of the perceiving subject, or as separate entities to which the subject has a relation? Should we regard color as a property of a visual region, or should we look at some of the philosophical attempts to collapse the distinction between object and property, and view "colored region" as the basic entity?

What I believe is that consciousness is not a collection of spatial parts. It has parts, but the binding relations are some other sort of relation, like "co-presence to the perceiving subject" or "proximity in the sensory manifolds". Since color only occurs within the context of some conscious experience, whether or not it's "ontologically basic" is going to require a lot more clarity about what's foundational and what's derivative in ontology, and about the ontological nature of the overall complex or unity or whatever it is, that is the experience as a whole (which in turn is still only going to be part of or an aspect of the total state of a self).

Clearly this isn't in the standard model of physics in any standard sense. But if the standard model can be expressed - for example, and only as an example, as an evolving tensor network of a particular sort - then it may be possible to specify an ontology beneath the tensor formalism, in which this complex ontological object, the conscious being, can be identified with one of the very high-dimensional tensor factors appearing in the theory.

The correct way to state the nature of color and its relationship to its context in consciousness is a very delicate question. We may need entirely new categories - by categories I mean classic ontological categories like substance, property, relation. Though there has already been, in the history of human thought, a lot of underrated conceptual invention which might turn out to be relevant.

Replies from: Torben, WrongBot
comment by Torben · 2012-02-03T18:07:03.013Z · LW(p) · GW(p)

That would amount to convincing me that the experience which is currently happening, is not currently happening; or that an experience which previously happened, did not actually happen.

Why? What's wrong with an experience happening in another way than you imagine? This more than anything cries "crackpot" to me; the uncompromising attitude that your opponents' view must lead to absurdities. Like Christians arguing that without souls, atheists should go on killing sprees all the time.

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2012-02-04T03:37:25.518Z · LW(p) · GW(p)

What's wrong with an experience happening in another way than you imagine?

You could be talking about ontology here, or you could be talking about phenomenology (and then there is the small overlap where we talk about phenomenological ontology, the ontology of appearances).

An example of an experience happening ontologically in a different way than you imagine, might be a schizophrenic who thinks voices are being beamed into their head by the CIA, when in fact they are an endogenously created hallucination.

An example of an experience happening phenomenologically in a different way than you imagine, might be a court witness who insists quite honestly that they saw the defendant driving the stolen car, but in fact they never really had that experience.

We are talking here about the nature of color experience. I interpret WrongBot to be making a phenomenological claim, that there aren't actually colors even at the level of experience. Possibly you think the argument is about the causes or "underlying nature" of color experience, e.g. the idea that a color perception is really a neural firing pattern.

If the argument is solely at the level of phenomenology, then there is no need to take seriously the idea that the colors aren't there. This isn't a judgment call about an elusive distant event. Colors are right in front of me, every second of my waking life; it would be a sort of madness to deny that.

If the argument is at the level of ontology, then I presume that color perception does indeed have something to do with neural activity. But the colors themselves cannot be identified with movements of ions through neural membranes, or whatever the neurophysical correlate of color is supposed to be, because we already have a physical ontology and it doesn't contain any such extra property. So either we head in the direction of functionalist dualism, like David Chalmers, or we look for an alternative. My alternative is a monism in which the "Cartesian theater" does exist and can be identified with a single large quantum tensor factor somewhere in the brain. I am not dogmatic about this, there are surely other possibilities, but I do insist that colors exist and that they cannot be monistically identified with collective motions of ions or averages of neural firing rates.

(ETA: Incidentally, I don't deny the existence and relevance of classical encodings of color properties in the nervous system. It's just that, on a monistic quantum theory of mind, this isn't the physical correlate of consciousness; it's just part of the causal path leading towards the Cartesian theater, where the experience itself is located.)

Replies from: drethelin
comment by drethelin · 2012-02-04T03:50:18.492Z · LW(p) · GW(p)

Do we need a separate understanding for the feeling you get when you see a loved one? Is there a thing separate from the neurons and from the particles of scent that constitutes the true REAL smell? What about the effects of caffeine? There is nothing inherent to that molecule that equates to "alertness" any more than there are "green" atoms. Do you think there is a separate "alertness" mind-object that interacts with a nonphysical part of coffee? Do you think these things are also unable to be explained by neurons, or do you think colors are different?

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2012-02-04T05:07:05.634Z · LW(p) · GW(p)

Colors are just the most vivid example. Smells and feelings are definitely part of consciousness - that is, part of the same phenomenal gestalt as color - so they are definitely on the same ontological level. A few comments up the thread, I talked about color as a three-dimensional property associated with visual regions. Smell is similarly a sensory quale embedded in a certain way in the overall multimodal sensory gestalt. Feelings are even harder to pin down, they seem to be a complex of bodily sensation, sensations called "moods" that aren't phenomenally associated with a body region, and even some element of willed intentionality. Alertness itself isn't a quale, it's a condition of hyperattentiveness, but it is possible to notice that you are attending intently to things, so alertness is a possible predicate of a reflective judgment made about oneself on the basis of phenomenal evidence. In other words, it's a conceptual posit made as part of a high-order intentional state.

These discussions are bringing back to me the days when I made a serious attempt to develop a phenomenological ontology. All the zeroth-order objects of an experience were supposed to be part of a "total instantaneous phenomenal state of affairs", and then you had high-order reflective judgments made on top of that, which themselves could become parts of higher-order judgments. Cognitive scientists and AI theorists do talk about intentionality, but only functionally, not phenomenologically. Even philosophers of consciousness sometimes hesitate to say that intentional states are part of consciousness - they're happier to focus on sensation, because it's so obvious, not just that it's there, but that you know it's there.

However, it's also clear, not only that we think, but that we know we are thinking - even if this awareness is partly mediated by a perceptual presentation to oneself of a stream of symbols encoding the thought, such as a subvocalization - and so I definitely say intentionality is part of consciousness, not just sensation. Another way to see this is to notice that we see things as something. There's a "semantics" to perception, the conceptual ingredient in the phenomenal gestalt. Therefore, it's not enough to characterize conscious states as simply a blob of sensory qualia - colors varying across the visual field, other sense-data varying across the other sensory modalities. The whole thing is infused, even at the level of consciousness, with interpretation and conceptual content. How to express this properly - how to state accurately the ontology of this conceptual infusion into the phenomenal - is another delicate issue, though plenty has been written about it, for example in Kant and Husserl.

So everything that is a part of experience is part of the problem. Experiences have structure (for example, the planar structure of a depthless visual field), concepts have logical structure and conditions of application, thoughts also have a combinatorial structure. The key to computational materialism is a structural and causal isomorphism between the structure of conscious and cognitive states, and the structure of physical and computational states. The problem is that the isomorphism can't be an identity if we use ordinary physical ontology or even physically coarse-grained computational states in any ontology.

Empirically, we do not know in any very precise way what the brain locus of consciousness is. It's sort of spread around; the brain contains multiple copies of data... One of the strong reasons for presuming that speculations about the physical correlate of consciousness being an "exact quantum-tensor-factor state machine" rather than a "coarse-grained synapse-and-ion-gate state machine" are bogus and irrelevant is the presumption that the physical locus of consciousness is already known to be something like the latter. But it isn't; that is just a level of analysis that we happen to be comfortable with. The question is still empirically open, which is one reason why I would hold out hope for a quantum monism, rather than a functionalist dualism, being the answer.

comment by WrongBot · 2012-02-03T18:31:15.643Z · LW(p) · GW(p)

Alright, I got my answer, I'm done. So far as I can tell, you're smart, willing to put the work in, and your intentions are good. It's really too bad to hear that you've accepted the crackpot offer. I wish it were otherwise.

comment by antigonus · 2012-02-03T04:08:48.678Z · LW(p) · GW(p)

Something has gone horribly wrong here.

Replies from: David_Gerard, JenniferRM
comment by JenniferRM · 2012-02-03T19:00:35.505Z · LW(p) · GW(p)

Is the apparent reference to David Stove's "What is Wrong with Our Thoughts?" intentional?

Replies from: antigonus
comment by antigonus · 2012-02-04T01:37:36.035Z · LW(p) · GW(p)

No, never seen that before.

comment by JoshuaZ · 2012-02-03T04:50:17.766Z · LW(p) · GW(p)

There are a lot of reasons that people aren't responding positively to your comments. One which I think hasn't been addressed is that this, to a large extent, pattern-matches to a bad set of metapatterns in history. In general, our understanding of the mind has advanced by having to reject our strong intuitions that our minds are dualist and that aspects of our minds (or our minds as a whole) are fundamentally irreducible. So people look at this and think that it isn't a promising line of inquiry. Now, this may be unfair, but I don't think it really is very unfair. The notion that there are irreducible, or even reducible but strongly dualist, aspects of our universe belongs to a class of hypotheses which has been repeatedly falsified. So it is fair for someone, by default, to assign a low probability to similar hypotheses.

You have other bits that send worrying signals about your rationality or your intentions, like when you write things like:

I don't mean that I'm suicidal, I mean that I can't eat air. I spent a year getting to this level in physics, so I could perform this task.

This bit not only made me sit up in alarm, it substantially reduced how seriously I should take your ideas. Previously, my thought process was "This seems wrong, but Porter seems to know a decent amount of physics, more than I do in some aspects; maybe I should update toward taking this sort of hypothesis more seriously?" Although Penrose has already done that, so you wouldn't cause that big an update, this shows that much of your physics knowledge was acquired after you reached certain conclusions. This feels a lot like learning about a subject to write the bottom line. This isn't as extreme as, say, Jonathan Wells, who got a PhD in biology so he could "destroy Darwinism", but it does seem similar. The primary difference is that Wells seemed interested in the degree for its rhetorical power, whereas you seem genuinely interested in actually working out the truth. But to a casual observer who just read this post, they'd see a very strong match here.

I also think that you are being downvoted in part because you are asking for money in a fairly crass fashion and you don't have the social capital/status here to get away with it. Eliezer gets away with it even from the people here who don't consider the Singularity Institute to be a great way of fighting existential risk, because it is hard to have higher status than being the website's founder (although lukeprog and Yvain might be managing to beat that in some respects). In this context, making a point about how you just want loans at some level reduces status even further. One thing that you may want to consider is looking for other similar sources of funding that are broader and don't have the same underlying status system. Kickstarter would be an obvious one.

Replies from: Mitchell_Porter, HoverHell
comment by Mitchell_Porter · 2012-02-03T07:33:28.300Z · LW(p) · GW(p)

In general, our understanding of the mind has advanced by having to reject our strong intuitions that our minds are dualist and that aspects of our minds (or our minds as a whole) are fundamentally irreducible.

Sometimes progress consists of doubling back to an older attitude, but at a higher level. Revolutions have excesses. The ghost in the machine haunts us, the more we take the machine apart. I see the holism of quantum states as the first historical sign of an ontological synthesis transcending the clash between reductionism and subjectivity, which has hitherto been resolved by rejecting one or the other, or by uneasy dualistic coexistence.

this shows that much of your physics knowledge was acquired after you reached certain conclusions. This feels a lot like learning about a subject to write the bottom line.

Or it's like learning anatomy, physiology, and genetics, so you can cure a disease. Certainly my thinking about physics has a much higher level of concreteness now, because I have much more to work with, and I have new ideas about details - maybe it's complexes of twistor polytopes, rather than evolving tensor networks. But I've found no reason to question the original impetus.

I also think that you are being downvoted in part because you are asking for money in a fairly crass fashion and you don't have the social capital/status here to get away with it.

I believe most of the downvotes are coming because of the claims I make (about what might be true and what can't be true) - I get downvotes whenever I say this stuff. Also because it's written informally rather than like a scholarly argumentative article (that's due to writing it all in a rush), and it contains statements to the effect that "too many of you just don't get it". Talking about money is just the final straw, I think.

But actually I think it's going OK. There's communication happening, issues are being aired and resolved, and there will have been progress, one way or another, by the time the smoke clears.

However, I do want to say that this comment of yours was not bad as an exercise in dispassionate analysis of what causes might be at work in the situation.

Replies from: bryjnar, David_Gerard
comment by bryjnar · 2012-02-03T15:39:19.355Z · LW(p) · GW(p)

One other bit of (hopefully) constructive criticism: you do seem to have a bit of a case of philosophical jargon-itis. I mean sentences like this:

I see the holism of quantum states as the first historical sign of an ontological synthesis transcending the clash between reductionism and subjectivity, which has hitherto been resolved by rejecting one or the other, or by uneasy dualistic coexistence.

As a philosopher myself, I appreciate the usefulness of jargon from time to time, but you sometimes have the air of throwing it around for the sheer joy of it. Furthermore, I (at least) find that that sort of style can sometimes feel like you're deliberately trying to obscure your point, or that it's camouflage to conceal any dubious parts.

Replies from: David_Gerard, CronoDAS
comment by David_Gerard · 2012-02-03T22:36:45.918Z · LW(p) · GW(p)

When someone's spent years on a personal esoteric search for meaning, word salad is a really bad sign.

comment by CronoDAS · 2012-02-06T05:06:18.681Z · LW(p) · GW(p)

One other bit of (hopefully) constructive criticism: you do seem to have a bit of a case of philosophical jargon-itis.

What he said.

I have difficulty understanding what Mitchell Porter is trying to say when he talks about this topic. When I run into something that is difficult to understand in this manner, I usually find, upon closer examination, that I didn't understand it because it doesn't make any sense in the first place. And, as far as I can tell, this is true of what Mitchell Porter is saying, too.

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2012-02-06T06:45:38.010Z · LW(p) · GW(p)

I claim that colors obviously exist, because they are all around us, and I also claim that they do not exist in standard physical ontology. Is that much clear?

Replies from: CronoDAS
comment by CronoDAS · 2012-02-06T07:07:25.611Z · LW(p) · GW(p)

Now it is.

I disagree that colors do not exist in standard physical ontology, and find the claim rather absurd on its face. (I'm not entirely sure what ontology is, but I think I've picked up the meaning from context.)

See:
Brain Breakthrough! It's Made of Neurons!
Hand vs. Fingers
Angry Atoms

I don't know every last detail of how the experience of color is created by the interaction of light waves, eyes, and neurons, but I know that that's where it comes from.

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2012-02-06T08:33:24.184Z · LW(p) · GW(p)

An ontology is a theory about what it is that exists. I have to speak of "physical ontology" and not just of physics, because so many physicists take an anti-ontological or positivistic attitude, and say that physical theory just has to produce numbers which match the numbers coming from experiment; it doesn't have to be a theory about what it is that exists. And by standard physical ontology I mean one which is based on what Galileo called primary properties, possibly with some admixture of new concepts from contemporary mathematics, but definitely excluding the so-called secondary properties.

So a standard physical ontology may include time, space, and objects in space, and the objects will have size, shape, and location, and then they may have a variety of abstract quantitative properties on top of that, but they don't have color, sound, or any of those "feels" which get filed under qualia.

I don't know every last detail of how the experience of color is created by the interaction of light waves, eyes, and neurons, but I know that that's where it comes from.

Asking "where is the experienced color in the physical brain?" shows the hidden problem here . We know from experience that reality includes things that are actually green, namely certain parts of experiences. If we insist that everything is physical, then that means that experiences and their parts are also physical entities of some kind. If the actually green part of an experience is a physical entity, then there must be a physical entity which is actually green.

For the sake of further discussion, let us assume a physical ontology based on point-particles. These particles have the property of location - the property of always being at some point in space - and maybe they have a few other properties, like velocity, spin, and charge. An individual particle isn't actually green. What about two of them? The properties possessed by two of them are quantitative and logical conjunctions of the properties of individual particles - e.g. "location of center of mass" or "having a part at location x0 and another part at x1". We can even extend to counterfactual properties, e.g. "the property of flying apart if a heavy third particle were to fly past on a certain trajectory".

To accept that actual greenness still exists, but to argue against dualism, you need to show that actual greenness can be identified with some property like these. The problem is that that's a little absurd. It is exactly like saying that if you count through the natural numbers, all of the numbers after 5 x 10^37 are blue. The properties that are intrinsically available in standard physical ontology are much like arithmetic properties, but with a few additional "physical" predicates that can also enter into the definition.

I presume that most modern people don't consider linguistic behaviorism an adequate account of anything to do with consciousness. Linguistic behaviorism is where you say there are no "minds" or "psychological states", there are just bodies that speak. It's the classic case of accounting for experience by only accounting for what people say about experience.

Cognitive theories of consciousness are considered an advance on this because they introduce a causal model with highly structured internal states which have a structural similarity to conscious states. We see the capacity of neurons to encode information e.g. in spiking rates, we see that there are regions of cortex to which visual input is mapped point by point, and so we say, maybe the visual experience of a field of color is the same thing as a sheet of visual neurons spiking at different rates.

But I claim they can't be the same thing because of the ontological mismatch. A visual experience contains actual green, a sheet of neurons is a complicated bound state of a quadrillion atoms which nowhere contains actual green, though it may contain neurons exhibiting an averaged behavior which has a structural and causal role rather close to the structural and causal role played by actual greenness, as inferred from psychology and phenomenology.

Here I say there are two choices. Either you say that on top of the primary properties out of which standard physical ontology is built, there are secondary properties, like actual green, which are the building blocks of conscious experiences, and you say that the experiences dualistically accompany the causally isomorphic physical processes. Or you say that somewhere there is a physical object which is genuinely identical to the conscious experience - it is the experience - and you say that these neuronal sheets which behave like the parts of an experience still aren't the thing itself, they are just another stage in the processing of input (think of the many anatomical stages to the pathways that begin at the optic nerve and lead onward into the brain).

There are two peculiarities to this second option. First, haven't we already argued that the base properties available in physical ontology, considered either singly or in conjunction, just can't be identified with the constituent properties of conscious states? How does positing this new object help, if it is indeed a physical object? And second, doesn't it sound like a soul - something that's not a network of neurons, but a single thing; the single place where the whole experience is localized?

I propose to deal with the second peculiarity by employing a quantum ontology in which entanglement is seen as creating complex single objects (and not just correlated behaviors in several objects which remain ontologically distinct), and with the first peculiarity by saying that, yes, the properties which make up a conscious state are elementary physical properties, and noting that we know nothing about the intrinsic character of elementary physical properties, only their causal and structural relations to each other (so there's no reason why the elementary internal properties of an entangled system can't literally and directly be the qualia). I take the structure of a conscious state and say, that is the structure of some complex but elementary entity - not the structure of a collective behavior (as when we talk about the state of a neuron as "firing" or "not firing", a description which passes over the intricate microscopic detail of the exact detailed state).
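To give the formal core of that claim in the simplest case: a two-qubit Bell state is a single vector in the tensor-product space that cannot be factored into separate one-qubit states, and this is mechanically checkable. A minimal numpy sketch, purely illustrative - nothing in it is specific to the brain:

```python
# Schmidt rank test: reshape a two-qubit state into a 2x2 amplitude matrix
# and count its nonzero singular values. Rank 1 means a product state
# (two ontologically separate objects); rank > 1 means entanglement,
# i.e. one irreducible joint object.
import numpy as np

bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)   # (|00> + |11>)/sqrt(2)
print(np.linalg.svd(bell.reshape(2, 2), compute_uv=False))     # [0.707 0.707] -> rank 2

product = np.kron([1.0, 0.0], [0.0, 1.0])            # |0> tensor |1>
print(np.linalg.svd(product.reshape(2, 2), compute_uv=False))  # [1. 0.] -> rank 1
```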

The rationale of this move is that identifying the conscious state machine with a state machine based on averaged collective behaviors is really what leads to dualism. If we are instead dealing with the states of an entity which is complex but "fundamental", in the sense of being defined in terms of the bottom level of physical description (e.g. the Hilbert spaces of these entangled systems), then it's not a virtual machine.

Maybe that's the key concept in order to get this across to computer scientists: the idea is that consciousness is not a virtual state machine, it's a state machine at the "bottom level of implementation". If consciousness is a virtual state machine - so I argue - then you have dualism, because the states of the state machine of consciousness have to have a reality which the states of a virtual machine don't intrinsically have.

If you are just making a causal model of something, there's no necessity for the states of your model to correspond to anything more than averaged behaviors and averaged properties of the real system you're modeling. But consciousness isn't just a model or a posited concept, it is a thing in itself, a definite reality. States of consciousness must exist in the true ontology, they can't just be heuristic approximate concepts. So the choice comes down to: conscious states are dualistically correlated with the states of a virtual state machine, or conscious states are the physical states of some complex but elementary physical entity. I take the latter option and posit that it is some entangled subsystem of the brain with a large but finite number of elementary degrees of freedom. This would be the real physical locus of consciousness, the self, and you; it's the "Cartesian theater" where diverse sensory information all shows up within the same conscious experience, and it is the locus of conscious agency, the internally generated aspect of its state transitions being what we experience as will.

(That is, the experience of willing is awareness of a certain type of causality taking place. I'm not saying that the will is a quale; the will is just the self in its causal role, and there are "qualia of the will" which constitute the experience of having a will, and they result from reflective awareness of the self's causal role and causal power... Or at least, these are my private speculations.)

I'll guess that my prose got a little difficult again towards the end, but that's how it will be when we try to discuss consciousness in itself as an ontological entity. But hopefully the road towards the dilemma between dualism and quantum monism is a little clearer now.

Replies from: CronoDAS
comment by CronoDAS · 2012-02-06T12:06:52.058Z · LW(p) · GW(p)

For the sake of further discussion, let us assume a physical ontology based on point-particles. These particles have the property of location - the property of always being at some point in space - and maybe they have a few other properties, like velocity, spin, and charge. An individual particle isn't actually green. What about two of them? The properties possessed by two of them are quantitative and logical conjunctions of the properties of individual particles - e.g. "location of center of mass" or "having a part at location x0 and another part at x1". We can even extend to counterfactual properties, e.g. "the property of flying apart if a heavy third particle were to fly past on a certain trajectory".

To accept that actual greenness still exists, but to argue against dualism, you need to show that actual greenness can be identified with some property like these. The problem is that that's a little absurd.

Well, it sounds quite reasonable to me to say that if you arrange elementary particles in a certain, complicated way, you get an instance of something that experiences greenness. To me, this is no different than saying that if you arrange particles in a certain, complicated way, you get a diamond. We just happen to know a lot more about what particle configurations create "diamondness" than "experience of green"ness. (As a matter of fact, we know exactly how to define "diamondness" as a function of particle type and arrangement.)

So, at this point I apply the Socratic method...

Are we in agreement that a "diamond" is a thing that exists? (My answer: Yes - we can recognize diamonds when we see them.)

Is the property "is a diamond" one that can be defined in terms of "quantitative and logical conjunctions of the properties of individual particles"? (My answer: Yes, because we know that diamonds are made of carbon atoms arranged in a specific pattern.)

Hopefully we agree on these answers! And if we do, can you tell me what the difference is between the predicate "is experiencing greenness" and "is a diamond" such that we can tell, in the real world, if something is a diamond by looking at the particles that make it up, and that it is impossible, in principle, to do the same for "is experiencing greenness"?

What I think your mistake is, is that you underestimate the scope of just what "quantitative and logical conjunctions of the properties of individual particles" can actually describe. Which is, literally, anything at all that can be described with mathematics, assuming you're allowing all the standard operators of predicate logic and of arithmetic. And that would include the function that takes "arrangements of particles" as an input and returns "true" if the arrangement of particles included a brain that was experiencing green and "false" otherwise - even though we humans don't actually know what that function is!

But I claim they can't be the same thing because of the ontological mismatch. A visual experience contains actual green, a sheet of neurons is a complicated bound state of a quadrillion atoms which nowhere contains actual green, though it may contain neurons exhibiting an averaged behavior which has a structural and causal role rather close to the structural and causal role played by actual greenness, as inferred from psychology and phenomenology.

To sum up, I assert that you are mistaken when you say that there is an ontological mismatch - the sheet of neurons does indeed contain the experience of green. You are literally making the exact same error that Eliezer's strawman makes in Angry Atoms.

Replies from: whowhowho, Mitchell_Porter
comment by whowhowho · 2013-02-01T15:34:37.054Z · LW(p) · GW(p)

We just happen to know a lot more about what particle configurations create "diamondness" than "experience of green"ness.

And if you don't know how to create greenness, it is an act of faith on your part that it is done by physics as you understand it at all.

Replies from: CronoDAS
comment by CronoDAS · 2013-02-01T20:08:21.965Z · LW(p) · GW(p)

Perhaps, but physics has had a pretty good run so far...

Replies from: whowhowho
comment by whowhowho · 2013-02-02T01:27:20.226Z · LW(p) · GW(p)

The key phrase is "as you understand it". 19th century physics doesn't explain whatever device you wrote that on.

comment by Mitchell_Porter · 2012-02-07T02:44:10.954Z · LW(p) · GW(p)

By talking about "experience of green", "experiencing greenness", etc, you get to dodge the question of whether greenness itself is there or not. Do you agree that there is something in reality that is actually green, namely, certain parts of experiences? Do you agree that if these parts of experiences can be identified with particular physical entities, then those physical entities must be actually green?

Replies from: metaphysicist, CronoDAS
comment by metaphysicist · 2012-02-21T06:31:40.200Z · LW(p) · GW(p)

Do you agree that there is something in reality that is actually green, namely, certain parts of experiences?

No. Why do you believe there is? Because you seem to experience green? Since greenness is ontologically anomalous, what reason is there to think the experience isn't illusion?

comment by CronoDAS · 2012-02-07T04:37:22.847Z · LW(p) · GW(p)

Well, I'm used to using the word "green" to describe objects that reflect certain wavelengths of light (which are interpreted in a certain way by the human visual system) and not experiences. As in, "This apple is green" or "I see something that looks green." Which is why I used the expression "experience of greenness", because that's the best translation I can think of for what you're saying into CronoDAS-English.

So the question

Do you agree that if these parts of experiences can be identified with particular physical entities, then those physical entities must be actually green?

seems like a fallacy of equivocation to me, or possibly a fallacy of composition. It feels odd to me to say that a brain is green - after all, brains don't look green when you're cutting open a skull to see what's inside. If "green" in Mitchell-Porter-English means the same thing as "experiences the sensation of greenness" does in CronoDAS-English, then yes, I'll definitely say that the set of particular physical entities in question possesses the property "green", even though the same can't be said of the individual point-particles which make up that collection.

(This kind of word-wrangling is another reason why I tried to stay out of this discussion in the past... trying to make sure we mean the same thing when we talk to each other can take a lot of effort.)

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2012-02-07T05:38:00.646Z · LW(p) · GW(p)

I'm used to using the word "green" to describe objects that reflect certain wavelengths of light (which are interpreted in a certain way by the human visual system)

But you would have been using the word "green" before you knew about wavelengths of light, or had the idea that your experiences were somehow the product of your brain. Green originally denotes a very basic phenomenon, a type of color. As a child you may have been a "naive realist", thinking that what you see is the world itself. Now you think of your experience as something in your brain, with causes outside the brain. But the experience itself has not changed. In particular, green things are still actually green, even if they are now understood as "part of an experience that is inside one's brain" rather than "part of the world outside one's body".

"Interpretation" is too abstract a word to describe something as concrete as color. It provides yet another way to dodge the reality of color itself. You don't say that the act of falling over is an "interpretation" of being in the Earth's gravitation field. The green experiences are green, they're not just "interpreted as green".

It feels odd to me to say that a brain is green - after all, they don't look green when you're cutting open a skull to see what's inside of it.

Since we are assuming that our experiences are parts of our brains, this would be the wrong way to think about it anyway. Your experience of anything, including cutting open someone else's skull, is supposed to be an object inside your own brain, and any properties of that experience are properties of part of your own brain. You won't see the color in another brain by looking at it. But somehow, you see the color in your own brain by being it.

If "green" in Mitchell-Porter-English means the same thing as "experiences the sensation of greenness" does in CronoDAS-English

The latter expression again pushes away the real issue - is there such a thing as actual greenness or not. We earlier had some quotes from an Australian philosopher, JJC Smart, who would say there are "experiences of green", but there's no actual green. He says this because he's a materialist, so he believes that all there is in reality is just neurons doing their thing, and he knows that standard physical ontology doesn't contain anything like actual green. He has to deny the reality of one of the most obviously real things there is, but, at least he takes a stand.

On the other hand, someone else who talks about "experiences of green" might decide that what they mean is exactly the same thing as they would have meant by green, when they were a child and a direct realist. Talking about experience in this case is just a way to emphasize the adult understanding of what it is that one directly experiences - parts of your own brain, rather than objects outside it. But independent of this attitude, you still face a choice: will you say that yes, green is there in the same way it ever was, or will you say that it just can't be, because physics is true and physics contains no such thing as "actual green"?

Replies from: CronoDAS
comment by CronoDAS · 2012-02-07T06:46:37.809Z · LW(p) · GW(p)

Lot of words there... I hope I'm understanding better.

But independent of this attitude, you still face a choice: will you say that yes, green is there in the same way it ever was, or will you say that it just can't be, because physics is true and physics contains no such thing as "actual green"?

This is what I've been trying to say: "Green" exists, and "green" is also present (indirectly) in physics. (I think.)

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2012-02-08T01:06:46.148Z · LW(p) · GW(p)

What does "present indirectly" mean?

Replies from: CronoDAS
comment by CronoDAS · 2012-02-08T02:15:48.988Z · LW(p) · GW(p)

Not one of the fundamental properties, but definable in terms of them.

In other words, present in the same way "diamond" is - there's no property "green" in the fundamental equations of physics, but it "emerges" from them, or can (in principle) be defined in terms of them. (I'm embarrassed to use the word "emergent", but, well...)

To use an analogy, there's no mention of "even numbers" in the axioms of Peano Arithmetic or in first order logic, but S(S(0)) is still even; evenness is present indirectly within Peano Arithmetic. You can talk about even numbers within Peano Arithmetic by writing a formula fragment that is true of all even numbers and false for all other numbers, and using that as your "definition" of even. (It would be something like "∃y(S(S(0)) · y = x)".) If I understand correctly, "standard physical ontology" is also a formal system, so the exact same trick should work for talking about concepts such as "diamond" or "green" - we just don't happen to know (yet) how to define "green" the same way we can define "diamond" or "even", but I'm pretty sure that, in principle, there is a way to do it.
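To make the definability idea concrete, here is a minimal sketch in Python, assuming we read the formula as "there exists a y with 2·y = x". The bounded search is purely illustrative; in Peano Arithmetic the quantifier is unbounded.

```python
# Hypothetical illustration: evenness as a defined predicate, not a new primitive.
def is_even(x: int) -> bool:
    # "∃y. S(S(0))·y = x", with the search bounded at x so it terminates.
    return any(2 * y == x for y in range(x + 1))

print([n for n in range(10) if is_even(n)])  # [0, 2, 4, 6, 8]
```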

(I hope that made sense...)

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2012-02-08T02:51:24.933Z · LW(p) · GW(p)

Here I fall back on my earlier statement that this

is exactly like saying that if you count through the natural numbers, all of the numbers after 5 x 10^37 are blue.

Let's compare the plausibility of getting colors out of combinations of the elementary properties in standard physical ontology, and the plausibility of getting colors out of Peano Arithmetic. I think the two cases are quite similar. In both cases you have an infinite tower of increasingly complex conjunctive (etc) properties that can be defined in terms of an ontological base, but getting to color just from arithmetic or just from points arranged in space is asking for magic. (Whereas getting a diamond from points arranged in space is not problematic.)

There are quantifiable things you can say about subjective color, for example its three-dimensionality (hue, saturation, brightness). The color state of a visual region can be represented by a mapping from the region (as a two-dimensional set of points) into three-dimensional color space. So there ought to be a sense in which the actually colored parts of experience are instances of certain maps which are roughly of the form R^2 -> R^3. (To be more precise, the domain and range will be certain subsets of R^2 and R^3.) But this doesn't mean that a color experience can be identified with this mathematical object, or with a structurally isomorphic computational state.
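As a minimal sketch of that representation, with hypothetical hue-saturation-brightness coordinates (the numbers are illustrative, and nothing here bears on whether such a map is the experience):

```python
from typing import Callable, Tuple

Point = Tuple[float, float]        # a location in the 2D visual region
HSB = Tuple[float, float, float]   # hue, saturation, brightness

# The "color state" of a region as a map from points into 3D color space.
ColorField = Callable[[Point], HSB]

def uniform_green(p: Point) -> HSB:
    # Illustrative values: hue ~120 degrees (green), full saturation, mid brightness.
    return (120.0, 1.0, 0.5)

field: ColorField = uniform_green
print(field((0.0, 0.0)))  # (120.0, 1.0, 0.5)
```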

You could say that my "methodology", in attempting to construct a physical ontology that contains consciousness, is to discover as much as I can about the structure and constituent relations of a conscious experience, and then to insist that these are realized in the states of a physically elementary "state machine" rather than a virtual machine, because that allows me to be a realist about the "parts" of consciousness, and their properties.

Replies from: CronoDAS
comment by CronoDAS · 2012-02-08T02:59:24.176Z · LW(p) · GW(p)

Let's compare the plausibility of getting colors out of combinations of the elementary properties in standard physical ontology, and the plausibility of getting colors out of Peano Arithmetic.

In one sense, there already is a demonstration that you can get colors from the combinations of the elementary properties in standard physical ontology: you can specify a brain in standard physical ontology. And, heck, maybe you can get colors out of Peano Arithmetic, too! ;)

At this point we have at least identified what we disagree on. I suspect that there is nothing more we can say about the topic that will affect each other's opinion, so I'm going to withdraw from the discussion.

comment by David_Gerard · 2012-02-03T22:32:46.658Z · LW(p) · GW(p)

Dualism is a confused notion. If, in a long journey through gathering a tremendous degree of knowledge, you arrive at dualism, you've made a mistake somewhere and need to go back and see where you divided by zero. If your logical chain is in fact sound to a mathematical degree of certainty, then arriving at dualism is a reductio ad absurdum of your starting point.

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2012-02-04T03:01:47.075Z · LW(p) · GW(p)

Perhaps you missed that I have argued against functionalism because it implies dualism.

Replies from: David_Gerard
comment by David_Gerard · 2012-02-04T08:36:28.322Z · LW(p) · GW(p)

Then you need to do the same for ontologically basic qualia.

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2012-02-04T09:18:29.484Z · LW(p) · GW(p)

I fail to see what your actual position is. Mine is, first, that colors exist, and second, that they don't exist in standard physical ontology. Please make a comparably clear statement about what you believe the truth to be.

Replies from: David_Gerard
comment by David_Gerard · 2012-02-04T11:21:44.701Z · LW(p) · GW(p)

Colours "exist" as a fact of perception. If you're looking for colours without perception, you've missed what normative usage of "colour" means. You've also committed a ton of compression fallacy, assuming that all possible definitions of "colour" do or should refer to the same ontological entity.

You've then covered your views in word salad; I would not attempt to write with such an appalling lack of clarity as you've wrapped your views in, in this sequence, except for strictly literary purposes; certainly not if my intent were to inform.

You need to seriously consider the possibility that this sequence is getting such an overwhelmingly negative reaction because you're talking rubbish.

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2012-02-04T12:46:08.033Z · LW(p) · GW(p)

Colours "exist" as a fact of perception.

Why do you put "exist" in quotation marks? What does that accomplish? If I chopped off your hand, would you say that the pain does not exist, it only "exists"?

If you're looking for colours without perception, you've missed what normative usage of "colour" means.

I'm not looking for colors without perception; I'm looking for the colors of perception somewhere in physical reality, since colors are real and physical reality is supposed to be the only sort of reality there is.

You've then covered your views in word salad; I would not attempt to write with such an appalling lack of clarity as you've wrapped your views in, in this sequence, except for strictly literary purposes; certainly not if my intent were to inform.

It's not so easy to describe conscious states accurately, and a serious alternative to dualism isn't so easy to invent or convey either. I'm improvising a lot. If you make an effort to understand it, it may make more sense.

But let us return to your views. Colors only exist as part of perceptions; fine. Presumably you believe that a perception is a type of physical process, a brain process. Do you believe that some part of these brain processes is colored? If someone is seeing green, is there a flicker of actual greenness somewhere in or around the relevant brain process? I doubt that you think this. But then, at this point, nothing in your model of reality is actually green, neither the world outside the brain, nor the world inside the brain. Yet greenness is manifestly there in reality: perceptions contain actual greenness. Therefore your model is incomplete. Therefore, if you wish to include actual conscious experiences in your model, they'll have to go in alongside but distinct from the physical processes. Therefore, you will have to be a dualist.

I am not advocating dualism, I'm just telling you that if you don't want to deny the phenomenology of color, and you want to retain your physical ontology, you will have to be a dualist.

comment by HoverHell · 2012-02-03T16:13:35.842Z · LW(p) · GW(p)

-

Replies from: JoshuaZ, David_Gerard
comment by JoshuaZ · 2012-02-03T16:28:25.723Z · LW(p) · GW(p)

Mostly irrelevant to the OP, a question: how implausible do you find the claim that dualism is false (there's nothing irreducible in material models of our minds) and (at the same time) qualia (or phenomena as in constructs from qualia) are ontologically basic? (and, ergo, materialism, i.e. the material model, is not ontologically basic)

I don't know. Probably very low, certainly less than 1%.

Replies from: HoverHell
comment by HoverHell · 2012-02-03T22:33:53.643Z · LW(p) · GW(p)

-

comment by David_Gerard · 2012-02-03T22:31:35.456Z · LW(p) · GW(p)

Asserting that qualia are ontologically basic appears to be assuming that an aspect of mind is ontologically basic, i.e. dualism. So it's only not having done the logical chain myself that would let me set a probability (a statement of my uncertainty) on it at all, rather than just saying "contradiction".

Replies from: HoverHell
comment by HoverHell · 2012-02-03T22:52:39.756Z · LW(p) · GW(p)

-

comment by Ghostly · 2012-02-03T04:41:49.854Z · LW(p) · GW(p)

Some questions:

  1. How will you make money in the future to pay back the loan?
  2. Why aren't you doing that now, even on a part-time basis?
  3. Is there one academic physicist who will endorse your specific research agenda as worthwhile?
  4. Likewise for an academic philosopher?
  5. Likewise for anyone other than yourself?
  6. Why won't physicists doing ordinary physics (who are more numerous, have higher ability, and have a better track record of productivity) solve your problems in the course of making better predictive models?
  7. How would this particular piece of work help with your larger interests? Would it cause physicists to work on this topic? Provide a basis for assessing your productivity or lack thereof?
  8. Why not spend some time programming or tutoring math? If you work at Google for a year you can then live off the proceeds for several years in Bali or the like. A moderate amount of tutoring work could pay the rent.
Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2012-02-03T05:57:14.614Z · LW(p) · GW(p)

How will you make money in the future to pay back the loan?

Because I'll have a job again? I have actually had paid employment before, and I don't anticipate that the need to earn money will vanish from my life. The question is whether I'll move a few steps up the economic food chain, either because I find a niche where I can do my thing, or because I can't stand poverty any more and decide to do something that pays well. If I move up, repayment will be faster, if I don't, it will be slower, but either way it will happen.

Why aren't you doing that now, even on a part-time basis?

This is the culmination of a period in which I stopped trying to find compromise work and just went for broke. I've crossed all sorts of boundaries in the past few weeks, as the end neared and I forced more from myself. That will have been worth it, no matter what happens now.

Is there one academic physicist who will endorse your specific research agenda as worthwhile?

Well, let's start with the opposite: go here for an ex-academic telling me that one part of my research agenda is not worthwhile. Against that, you might want to look at the reception of my "questions" on the Stack Exchange sites - most of those questions are actually statements of ideas, and they generally (though not always) get a positive reception.

Now if you're talking about the big agenda I outlined in my first post in this series, there are clear resemblances between elements of it and work that is already out there. You don't have to look far to find physicists who are interested in physical ontology or in quantum brain theories - though I think most of them are on the wrong track, and the feeling might be mutual. But yes, I can think of various people whose work is similar enough to what I propose, that they might endorse that part of the picture.

Likewise for an academic philosopher?

David Chalmers is best known for talking about dualism, but he's flagged a new monism as an option worth exploring. We have an exchange of views on his blog here.

Likewise for anyone other than yourself?

Let's see if anyone else has surfaced, by the time we're done here.

Why won't physicists doing ordinary physics (who are more numerous, have higher ability, and have a better track record of productivity) solve your problems in the course of making better predictive models?

Well, let's see what subproblems I have, which might be solved by a physicist. There's the search for the right quantum ontology, and then there's the search for quantum effects in the brain. Although most physicists take a positivist attitude and dismiss concerns about what's really there, there are plenty of people working on quantum ontology. In my opinion, there are new technical developments in QFT, mentioned at the end of my first post, which make the whole work of quantum ontology to date a sort of "prehistory", conducted in ignorance of very important new facts about alternative representations of QFT that are now being worked out by specialists. Just being aware of these new developments gives me a slight technical edge, though of course the word will spread.

As for the quantum brain stuff, there's lots of room for independent new inquiries there. I gave a talk 9 years ago on the possibility of topological quantum computation in the microtubule, and no-one else seems to have explored the theoretical viability of that yet. Ideas can sit unexamined for many years at a time.

How would this particular piece of work help with your larger interests? Would it cause physicists to work on this topic? Provide a basis for assessing your productivity or lack thereof?

OK, I no longer know what you're referring to by "particular piece of work" and "larger interests". Do you mean, how would the discovery of a consciousness-friendly quantum ontology be relevant to Friendly AI?

Why not spend some time programming or tutoring math? If you work at Google for a year you can then live off the proceeds for several years in Bali or the like. A moderate amount of tutoring work could pay the rent.

If I ever go to work at Google it won't be to live off the proceeds afterwards, it will be because it's relevant to artificial intelligence. Of course you're right that some forms of work pay well. Part of what keeps me down is impatience and the attempt to do the most important thing right now.

ETA: The indent turned every question into question "1", so I removed the numbers.

Replies from: Ghostly, Nick_Tarleton, JoshuaZ
comment by Ghostly · 2012-02-03T07:47:41.794Z · LW(p) · GW(p)

Chalmers' short comment in your link amounts to just Chalmers expressing enthusiasm for ontologically basic mental properties, not any kind of recommendation for your specific research program.

Of course you're right that some forms of work pay well. Part of what keeps me down is impatience and the attempt to do the most important thing right now.

To be frank, the Outside View says that most people who have achieved little over many years of work will achieve little in the next few months. Many of them have trouble with time horizons, lack of willpower, or other problems that sabotage their efforts systematically, or prefer to indulge other desires rather than work hard. These things would hinder both scientific research and paid work. Refusing to self-finance with a lucrative job, combined with the absence of any impressive work history (that you have made clear in the post I have seen), is a bad signal about your productivity, your reasons for asking us for money, and your ability to eventually pay it back.

the attempt to do the most important thing right now

No one else seems to buy your picture of what is most important (qualia+safe AI). Have you actually thought through and articulated a model, with a chain of cause and effect, between your course of research and your stated aims of affecting AI? Which came first, your desire to think about quantum consciousness theories or an interest in safe AI? It seems like a huge stretch.

I'm sorry to be so blunt, but if you're going to be asking for money on Less Wrong you should be able to answer such questions.

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2012-02-03T08:57:59.335Z · LW(p) · GW(p)

Chalmers' short comment in your link amounts to just Chalmers expressing enthusiasm for ontologically basic mental properties, not any kind of recommendation for your specific research program.

There is no existing recommendation for my specific research program because I haven't gone looking for one. I thought I would just work on it myself, finish a portion of it myself, and present that to the world, along with the outline of the rest of the program.

Refusing to self-finance with a lucrative job ... is a bad signal

"Lucrative" is a weakness in your critique. I'm having trouble thinking of anyone who decided they should have been a scientist, then went and made lots of money, then did something of consequence in science. People who really want to do something tend to have trouble doing something else in its place.

Of course you're correct that if someone wants to achieve big things, but has failed to do so thus far, there are reasons. One of my favorite lines from Bruce Sterling's Schismatrix talks about the Superbrights, who were the product of an experiment in genetically engineering IQs above 200, as favoring "wild schemes, huge lunacies that in the end boiled down to nothing". I spent my 20s trying to create a global transhumanist utopia in completely ineffective ways (which is why I can now write about the perils of utopianism with some conviction), and my 30s catching up on a lot of facts about the world. I am surely a case study in something, some type of failed potential, though I don't know what exactly. I would even think about trying to extract the lessons from my own experience, and that of similar people like Celia Green and Marni Sheppeard, so that others don't repeat my mistakes.

But throughout those years I also thought a great deal about the ontology of quantum mechanics and the ontology of consciousness. I could certainly have written philosophical monographs on those subjects. I could do so now, except that I now believe that the explanation of quantum mechanics will be found through the study of our most advanced theories, and not in reasoning about simple models, so the quick path to enlightenment turns out to lead up the slope of genuine particle physics. Anyway, perhaps the main reason I'm trying to do this now is that I have something to contribute and I don't see anyone else doing it.

No one else seems to buy your picture of what is most important (qualia+safe AI). Have you actually thought through and articulated a model, with a chain of cause and effect, between your course of research and your stated aims of affecting AI?

See the first post in this series. You need to know how consciousness works if you are going to correctly act on values that refer to conscious beings. If you were creating a transhuman AI, and couldn't even see that colors actually exist, you would clearly be a menace on account of having no clue about the reality of consciousness. Your theoretical a priori, your intellectual constructs, would have eclipsed any sensitivity to the phenomenological and ontological facts.

The issue is broader than just the viability of whatever ideas SIAI has about outsourcing the discovery of the true ontology of consciousness to an AI. We live in a culture possessing powerful computational devices that are interfacing with, and substituting for, human beings, in a plethora of ways. Human subjectivity has always understood itself through metaphor, and a lot of the metaphors are now coming from I.T. There is a synergy between the advance of computer power and the advance of this "computerized" subjectivity, which has trained itself to see itself as a computer. Perhaps the ultimate wrong turn would be a civilization which uploaded itself, thinking that it had thereby obtained immortality, when in fact they had just killed themselves, to be replaced by a society of unconscious simulations. That's an extreme, science-fictional example, but there are many lesser forms of the problem that could come to pass, which are merely pathologies rather than disasters.

I don't want to declare the impact of computers on human self-understanding as unconditionally negative, not at all; but it has led to a whole new type of false consciousness, a new way of "eclipsing the lifeworld", and the only way to overcome that problem rationally and knowingly (it could instead be overcome by violent emotional luddism), is to transcend these incomplete visions of reality - find a deeper one which makes their incompleteness manifest.

Which came first, your desire to think about quantum consciousness theories or an interest in safe AI?

Quantum consciousness. I was thinking about that a few years before Eliezer showed up in the mid-1990s. There have been many twists and turns since then.

Replies from: moridinamael, katydee
comment by moridinamael · 2012-02-03T21:35:30.848Z · LW(p) · GW(p)

Perhaps the ultimate wrong turn would be a civilization which uploaded itself, thinking that it had thereby obtained immortality, when in fact they had just killed themselves, to be replaced by a society of unconscious simulations. That's an extreme, science-fictional example, but there are many lesser forms of the problem that could come to pass, which are merely pathologies rather than disasters.

Imagine you have signed up to have your brain scanned by the most advanced equipment available in 2045. You sit in a tube and close your eyes while the machine recreates all the details of your brain: its connectivity, electromagnetic fields and electrochemical gradients, and transient firing patterns.

The technician says, "Okay, you've been uploaded, the simulation is running."

"Excellent," you respond. "Now I can interact with he simulation and prove that it doesn't have qualia."

"Hold on, there," said the technician. "You can't interact with it yet. The nerve impulses from your sensory organs and non-cranial nerves are still being recorded and used as input for the simulation, so that we can make sure it's a stable duplicate. Observe the screen."

You turn and look at the screen, where you see an image of yourself, seen from a camera floating above you, turned to face the screen. The screen is hanging from a bright green wall.

"That's me," you say. "Where's the simulation?" As you watch, you verify this, because the image of you on the screen says those words along with you.

"That is the simulation, on the monitor," reasserts the technician.

You are somewhat taken aback, but not entirely surprised. It is a high-fidelity copy of you and it's being fed the same inputs as you. You reach up to scratch your ear and notice that the you on the monitor mirrors this action perfectly. He even has the same bemused expression on his face as he examines the monitor, in his simulated world, which he doesn't realize is just showing him an image of himself, in a dizzying hall-of-mirrors effect.

"The wall is green," you say. The copy says this along with you in perfect unison. "No, really. It's not just how the algorithm feels from the inside. Of course you would say it's green," you say pointing at the screen, just as the simulation points at the screen and says what you are saying in the same tone, "But you're saying it for entirely different reasons than I am.

The technician tsks. "Your brain state matches that of the simulation to within an inordinately precise tolerance. He's not thinking anything different than what you're thinking. Whatever internal mental representation leads you to insist that the wall is green is identical to the internal mental representation that leads him to say the same."

"It can't be," you insist. "It doesn't have qualia, it's just saying that because its brain is wired up the same way as mine. No matter what he says, his experience of color is distinct from mine."

"Actually, we de-synchronized the inputs thirty seconds ago. You're the simulation, that's the real you on the monitor."

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2012-02-04T03:14:02.300Z · LW(p) · GW(p)

Yes, I've read this in science fiction too. Do you want me to write my own science fiction in opposition to yours, about the monadic physics of the soul and the sinister society of the zombie uploads? It would be a stirring tale about the rise of a culture brainwashed to believe that simulations of mind are the same as the real thing, and the race against time to prevent it from implementing its deluded utopia by force.

Telling a vivid little story about how things would be if so-and-so were true, is not actually an argument in favor of the proposition. The relevant LW buzzword is fictional evidence.

Replies from: moridinamael
comment by moridinamael · 2012-02-04T03:48:37.751Z · LW(p) · GW(p)

Yes, I think your writing a "counter-fiction" would be a very useful exercise and might clarify to me how you can continue to hold the position that you do. I honestly do not fathom it. I admit this is a fact about my own state of knowledge, and I would like it if you could at least show me an example of a fictional universe where you were proven right, as I have shown an account of a fictional universe where you are proven wrong.

I don't intend for the story to serve as any kind of evidence, but I did intend for it to serve as an argument. If you found yourself in the position described in the story, would you be forced to admit that there was not, in fact, any information that makes up a "mind" outside of the mechanistic brain? If it turns out that humans and their simulations both behave and think in exactly the same fashion?

Again, it's not fictional evidence, it's me asking what your true rejection would need to be for you to accept that the universe is turtles all the way down.

Replies from: Mitchell_Porter, Eugine_Nier
comment by Mitchell_Porter · 2012-02-04T05:44:59.138Z · LW(p) · GW(p)

it's not fictional evidence, it's me asking what your true rejection would need to be for you to accept that the universe is turtles all the way down.

If it's a question of why I believe what I do, the starting point is described here, in the section on being wrong about one's phenomenology. Telling me that colors aren't there at any level is telling me that my color phenomenology doesn't exist. That's like telling you, not just that you're not having a conversation on lesswrong, but that you are not even hallucinating the occurrence of such a conversation. There are hard limits to the sort of doubt one can credibly engage in about what is happening to oneself at the level of appearance, and the abolition of color lies way beyond those limits, out in the land of "what if 2+2 actually equals 5?"

The next step is the insistence that such colors are not contained in physical ontology, and so a standard materialism is really a dualism, which will associate colors (and other ingredients of experience) with some material entity or property, but which cannot legitimately identify them with it. I think that ultimately this is straightforward - the mathematical ontologies of standard physics are completely explicit, it's obvious what they're made of, and you just won't get color out of something like a big logical conjunction of positional properties - but the arguments are intricate because every conceivable attempt to avoid that conclusion is deployed. So if you want arguments for this step, I'm sure you can find them in Chalmers and other philosophers.

Then there's my personal alternative to dualism. The existence of an alternative, as a palpable possibility if not a demonstrated reality, certainly helps me in my stubbornness. Otherwise I would just be left insisting that phenomenal ontology is definitely different to physical ontology, and historically that usually leads to advocacy of dualism, though starting in the late 19th century you had people talking about a monistic alternative - "panpsychism", later Russell's "neutral monism". There's surely an important issue buried here, something about the capacity of people to see that something is true, though it runs against their other beliefs, in the absence of an alternative set of beliefs that would explain the problematic truth. It's definitely easier to insist on the reality of an inconvenient phenomenon when you have a candidate explanation; but one would think that, ideally, this shouldn't be necessary.

your writing a "counter-fiction" would be a very useful exercise and might clarify to me how you can continue to hold the position that you do. I honestly do not fathom it.

It shouldn't require a story to convey the idea. Or rather, a story would not be the best vehicle, because it's actually a mathematical idea. You would know that when we look at the world, we see individuated particles, but at the level of quantum wavefunctions, we have, not just a wavefunction per particle, but entangled wavefunctions for several particles at once, that can't be factorized into a "product" of single-particle wavefunctions (instead, such entangled wavefunctions are sums, superpositions, of distinct product wavefunctions). One aspect of the dispute about quantum reality is whether the supreme entangled wavefunction of the universe is the reality, whether it's just the particles, or whether it's some combination. But we could also speculate that the reality is something in between - that reality consists of lots of single particles, and then occasional complex entities which we would currently call entangled sets of particles. You could write down an exact specification of such an ontology; it would be a bit like what they call an objective-collapse ontology, except that the "collapses" or "quantum jumps" are potentially between entangled multiparticle states, not just localized single-particle states.
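The non-factorizability itself is standard textbook material, not specific to this proposal, and a minimal numerical sketch may help. The two-qubit Bell state is the usual example:

```python
import numpy as np

# Bell state (|00> + |11>)/sqrt(2), written in the two-qubit product basis.
bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)

# A product state psi (x) phi would give a rank-1 coefficient matrix after
# reshaping into 2x2; the Bell state's matrix has rank 2, so it cannot be
# factorized into single-particle wavefunctions.
print(np.linalg.matrix_rank(bell.reshape(2, 2)))  # 2 -> entangled
```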

My concept is that the self is a single humongous "multi-particle state" somewhere in the brain, and the "lifeworld" (mentioned at the start of this post) is wholly contained within that state. This way we avoid the dualistic association between conscious state and computational state, in favor of an exact identity between conscious state and physical state. The isomorphism does not involve, on the physical side, a coarse-grained state machine, so here it can be an identity. When I'm not just defending the reality of color (etc) and the impossibility of identifying it with functional states, this is the model that I'm elaborating.

So if you want a counter-fiction, it would be one in which the brain is a quantum computer and consciousness is the big tensor factor in its quantum state, and in which classical uploading destroys consciousness because it merely simulates the big tensor factor's dynamics in an entity which ontologically consists of a zillion of the simple tensor factors (a classical computer). In other words, whether the state machine is realized within a single tensor factor or a distributed causal network of them, is what determines whether it is conscious or not.

If you found yourself in the position described in the story, would you be forced to admit that there was not, in fact, any information that makes up a "mind" outside of the mechanistic brain?

I was asked this some time ago - if I found myself to be an upload, as in your scenario, how would that affect my beliefs? To the extent that I believed what was going on, I would have to start considering Chalmers-style dualism.

Replies from: Risto_Saarelma
comment by Risto_Saarelma · 2012-02-04T11:49:06.575Z · LW(p) · GW(p)

So if you want a counter-fiction, it would be one in which the brain is a quantum computer and consciousness is the big tensor factor in its quantum state, and in which classical uploading destroys consciousness because it merely simulates the big tensor factor's dynamics in an entity which ontologically consists of a zillion of the simple tensor factors

I'm more interested in the part in the fiction where the heroes realize that the people they've lived with their whole lives in their revealed-to-be-dystopian future, who've had an upload brain prosthesis after some traumatic injury or disease, are actually p-zombies. How do they find this out, exactly? And how do they deal with there being all these people, who might be their friends, parents or children, who are on all social and cognitive accounts exactly like them, who they are now convinced lack a subjective experience?

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2012-02-05T04:55:40.194Z · LW(p) · GW(p)

Let's see what we need to assume for such a fictional scenario. First, we have (1) functionally successful brain emulation exists, at a level where the emulation includes memory and personality. Then I see a choice between (2a) the world is still run by human beings, and (2b) the world has powerful AI. Finally, we have a choice between (3a) there has been no discovery of a need for quantum neuroscience yet, and (3b) a quantum neuroscience exists, but a quantum implementation of the personal state machine is not thought necessary to preserve consciousness.

In my opinion, (1) is in tension with (3a) and even with (2a). Given that we are assuming some form of quantum-mind theory to be correct, it seems unlikely that you could have functionally adequate uploads of whole human beings, without this having already been discovered. And having the hardware and the models and the brain data needed to run a whole human sim, should imply that you are well past the threshold of being able to create AI that is nonhuman but with human intellectual potential.

So by my standards, the best chance to make the story work is the combination of (1) with (3b), and possibly with (2b) also. The (2b) scenario might be set after a "semi-friendly" singularity, in which an Iain M. Banks Culture-like existence for humanity has been created, and the science and technology for brain prostheses has been developed by AIs. Since the existence of a world-ruling friendly super-AI (a "Sysop") raises so many other issues, it might be better to think in terms of an "Aristoi"-like world where there's a benevolent human ruling class who have used powerful narrow AI to produce brain emulation technology and other boons to humanity, and who keep a very tight control on its spread. The model here might be Vinge's Peace Authority, a dictatorship under which the masses have a medieval existence and the rulers have the advanced technology, which they monopolize for the sake of human survival.

However it works, I think we have to suppose a technocratic elite who somehow know enough to produce working brain prostheses, but not enough to realize the full truth about consciousness. They should be heavily reliant on AI to do the R&D for them, but they've also managed to keep the genie of transhuman AI trapped in its box so far. I still have trouble seeing this as a stable situation - e.g. a society that lasts for several generations, long enough for a significant subpopulation to consist of "ems". It might help if we are only dealing with a small population, either because most of humanity is dead or most of them are long-term excluded from the society of uploads.

And even after all this world-building effort, I still have trouble just accepting the scenario. Whole brain emulation good enough to provide a functionally viable copy of the original person implies enormously destabilizing computational and neuroscientific advances. It's also not something that is achieved in a single leap; to get there, you would have to traverse a whole "uncanny valley" of bad and failed emulations.

Long before you faced the issue of whether a given implementation of a perfect emulation produced consciousness, you would have to deal with subcultures who believed that highly imperfect emulations are good enough. Consider all the forms of wishful thinking that afflict parents regarding their children, and people who are dying regarding their prospects of a cure, and on and on; and imagine how those tendencies would interact with a world in which a dozen forms of brain-simulation snake-oil are in the market.

Look at the sort of artificial systems which are already regarded by some people as close to human. We already have people marrying video game characters, and aiming for immortality via "lifebox". To the extent that society wants the new possibilities that copies and backups are supposed to provide, it will not wait around while technicians try to chase down the remaining bugs in the emulation process. And what if some of your sims, or the users of brain prostheses, decide that what the technicians call bugs are actually features?

So this issue - autonomous personlike entities in society, which may or may not have subjective experience - is going to be upon us before we have ems to worry about. A child with a toy or an imaginary friend may speak very earnestly about what its companion is thinking or feeling. Strongly religious people may also have an intense imaginative involvement, a personal relationship, with God, angels, spirits. These animistic, anthropomorphizing tendencies are immediately at work whenever there is another step forward in the simulation of humanity.

At the same time, contemporary humans now spend so much time interacting via computer, that they have begun to internalize many of the concepts and properties of software and computer networks. It therefore becomes increasingly easy to create a nonhuman intelligent agent which passes for an Internet-using human. A similar consideration will apply to neurological prostheses: before we have cortical prostheses based on a backup of the old natural brain, we will have cortical prostheses which are meant to be augmentations, and so the criterion of whether even a purely restorative cortical prosthesis is adequate, will increasingly be based on the cultural habits and practices of people who were using cortical prostheses for augmentation.

Replies from: Risto_Saarelma
comment by Risto_Saarelma · 2012-02-05T06:55:27.156Z · LW(p) · GW(p)

There's quite a bit of the least convenient possible world intention in the thought experiment. Yes, assuming that things are run by humans and transhuman or nonhuman AIs are either successfully not pursued or not achievable with anything close to the effort of ems and therefore still in the future. Assuming that the ems are made using advanced brain scanning and extensive brute force reverse engineering with narrow AIs, with people in charge and not actually understanding the brain well enough to build one from scratch themselves. Assuming that strong social taboos of the least convenient possible world prevent running ems in anything but biological or extremely lifelike artificial bodies at the same subjective speed as biological humans, and no rampant copying of them that would destabilize society and lead to a whole new set of thought-experimental problems.

The wishful thinking not-really-there ems are a good point, but again, the least convenient possible world would probably be really fixated on the idea that the brain is the self, so their culture just rejects stuff like lifeboxes as cute art projects, and goes straight for the whole brain emulation, with any helpful attempts to fudge broken output with a pattern matching chatbot AI being actively looked out for, quickly discovered and leading to swift disgrace for the shortcut-using researchers. It is possible that things would still lead to some kind of wishful thinking outcome, but getting to the stage where the brain emulation is producing actions recognizable as resulting from the personality and memories of the person it was taken from, without using any cheating obvious to the researcher, such as a lifebox pattern matcher, sounds like it should be pretty far along toward being the real thing, given that the in-brain encoding of the personality and memories would be pretty much a complete black box.

There's still a whole load of unknown unknowns with ways things could go wrong at this point, but it looks a lot more like the "NOW what do we do?" situation I was after in the grandparent post than the admittedly likely-in-our-world scenario of people treating lifebox chatbots as their dead friends.

comment by Eugine_Nier · 2012-02-06T01:14:11.030Z · LW(p) · GW(p)

One possible counter-fiction would have an ending similar to the bad ending of Three Worlds Collide.

comment by katydee · 2012-03-12T05:51:22.587Z · LW(p) · GW(p)

I'm having trouble thinking of anyone who decided they should have been a scientist, then went and made lots of money, then did something of consequence in science.

Jeff Hawkins.

comment by Nick_Tarleton · 2012-02-03T07:23:24.099Z · LW(p) · GW(p)

Why not spend some time programming or tutoring math? If you work at Google for a year you can then live off the proceeds for several years in Bali or the like. A moderate amount of tutoring work could pay the rent.

If I ever go to work at Google it won't be to live off the proceeds afterwards, it will be because it's relevant to artificial intelligence.

Supposing you can work at Google, why not? Beggaring yourself looks unlikely to maximize total productivity over the next few years, which seems like the timescale that counts.

comment by JoshuaZ · 2012-02-03T06:02:09.253Z · LW(p) · GW(p)

Well, let's start with the opposite: go here for an ex-academic telling me that one part of my research agenda is not worthwhile.

Oh. Lubos Motl. Considering he seems to spend his time doing things like telling Scott Aaronson that Scott doesn't understand quantum mechanics, I don't think Motl's opinion should actually have much weight in this sort of context.

I gave a talk 9 years ago on the possibility of topological quantum computation in the microtubule, and no-one else seems to have explored the theoretical viability of that yet. Ideas can sit unexamined for many years at a time.

Topological quantum computers are more robust than old-fashioned quantum computers, but they aren't that much more robust, and they have their own host of issues. People likely aren't looking into this because a) it seems only marginally more reasonable than the more common version of microtubules doing quantum computation and b) it isn't clear how one would go about testing any such idea.

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2012-02-03T06:18:09.374Z · LW(p) · GW(p)

In the article I linked, Lubos is expressing a common opinion about the merit of such formulas. It's a deeper issue than just Lubos being opinionated.

If I had bothered to write a paper, back when I was thinking about anyons in microtubules, we would know a lot more by now about the merits of that idea and how to test it. There would have been a response and a dialogue. But I let it go and no-one else took it up on the theoretical level.

Replies from: JoshuaZ
comment by JoshuaZ · 2012-02-03T06:20:54.332Z · LW(p) · GW(p)

Do you have any suggested method for testing the microtubule claim?

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2012-02-03T06:42:41.324Z · LW(p) · GW(p)

From my perspective, first you should just try to find a biophysically plausible model which contains anyons. Then you can test it, e.g. by examining optical and electrical properties of the microtubule.

People measure such properties already, so we may already have relevant data, even evidence. There is a guy in Japan (Anirban Bandyopadhyay) who claims to have measured all sorts of striking electrical properties of the microtubule. When he talks about theory, I just shake my head, but that doesn't tell us whether his data is good or bad.

comment by amcknight · 2012-02-03T23:29:41.648Z · LW(p) · GW(p)

Which returns us to the dilemma: either "experiences" exist and part of them is actually green, or you have to say that nothing exists, in any sense, at any level of reality, that is actually green.

The third option is my favourite:
Good news everyone! There are all kinds of different things that you can permissibly call green! Classes of wavelengths, dispositions in retinas, experiences in brains, all kinds of things! Now we have the fun choice of deciding which one is most interesting and what we want to talk about! Yay!

Replies from: David_Gerard, Mitchell_Porter
comment by David_Gerard · 2012-02-03T23:50:13.598Z · LW(p) · GW(p)

And Fallacies of Compression was just in the sequence reruns a couple of days ago, too ...

comment by Mitchell_Porter · 2012-02-04T03:16:31.421Z · LW(p) · GW(p)

I'm talking about experiences in brains.

Replies from: Luke_A_Somers
comment by Luke_A_Somers · 2012-02-06T00:58:38.392Z · LW(p) · GW(p)

Well, then, you've just told us where to find green. When neuroscientists find the spot to poke that makes their subjects say 'Wow, that is so GREEN', what do you say then?

I haven't been following this closely, but unless you're taking the exact dualist stance you say below that you're denying, it really seems like that should be the answer.

comment by Manfred · 2012-02-03T04:58:55.477Z · LW(p) · GW(p)

"Green" refers to objects which disproportionately reflect or emit light of a wavelength between 520 and 570nm.

~(Solvent, from the previous thread.)

A few people talk about wavelengths of light, but I doubt that they want to assert that the light in question, as it traverses space, is actually colored green.

If your counterexample is already taken care of by the very second person in the previous thread, you should use a different counterexample. EDIT: I am not endorsing Solvent's definition in any "The Definition" sense - but I felt that you ignored what he wrote when making that counterexample. In a post that is basically all about responding to what people wrote, that's bad, and I think there were other similar lapses.

Anyhow, the question you're regarding as so mysterious isn't even Mary's Room level - it's "what words mean" level, i.e. it's already solved. Suggested reading: Dissolving questions about disease, How an algorithm feels from inside, Quotation and referent, The meaning of right.

Replies from: JoshuaZ
comment by JoshuaZ · 2012-02-03T05:29:12.287Z · LW(p) · GW(p)

This doesn't actually work. We speak of seeing green things. And we can say an object looks green even when it isn't emitting any such rays, due to various optical illusions and the like. If it turned out that there was some specific wavelength (say around 450 nm, in the otherwise blue range) that also triggered the same visual reaction in our systems as waves in the 520-570 nm range, I don't think we'd have trouble calling objects sending off that frequency green. And we actually do something similar with colors from objects that emit combinations of wavelengths. People with synesthesia are similarly a problem. Dissolving this does have some subtle issues. The real issue is that the difficulty of dissolving what we mean when we say a given color is not good evidence that colors cannot be dissolved.
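The point about combinations of wavelengths is essentially metamerism, and a hedged numerical sketch is easy to give. The cone sensitivity curves below are made-up Gaussians standing in for the real S, M, and L responses, and the primaries are arbitrary; a serious treatment would use measured CIE color matching functions.

```python
import numpy as np

wl = np.linspace(400, 700, 301)  # wavelength grid in nm

def band(center, width):
    # Toy Gaussian band, used both as a light spectrum and a sensitivity curve.
    return np.exp(-((wl - center) / width) ** 2)

# Made-up stand-ins for the S, M, L cone sensitivities.
cones = np.stack([band(445, 30), band(540, 40), band(565, 40)])

target = cones @ band(530, 5)  # cone response to narrow-band "green" light

# Solve for a mixture of three other primaries evoking the same response.
primaries = np.stack([band(460, 20), band(550, 20), band(610, 20)])
weights = np.linalg.solve(cones @ primaries.T, target)
metamer = weights @ primaries

# Same cone response, physically different spectrum: a metamer.
print(np.allclose(cones @ metamer, target))  # True
```

(As in real color matching, some solved weights can come out negative; the point survives either way: what the visual system classifies as "green" is a three-number response, not a single wavelength.)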

Replies from: Manfred
comment by Manfred · 2012-02-03T05:34:29.801Z · LW(p) · GW(p)

So, I think I can just say "mind projection fallacy" and you'll know what I mean about most of those things.

But yes, I am not endorsing Solvent's definition (I'll edit in a disclaimer to that effect, and explaining why I still quoted). "Green," as a human word, is a lot more like "disease" from Yvain's post than it is like "a featherless biped."

comment by FeepingCreature · 2012-02-04T18:02:20.479Z · LW(p) · GW(p)

Things that my brain tells me are green, are green. Things that your brain tells you are green, are green. In cases where we disagree, split the label into my!green and your!green.

Now can we move on? This post is a waste of time.

Replies from: Eugine_Nier, None
comment by Eugine_Nier · 2012-02-06T01:09:15.684Z · LW(p) · GW(p)

Things that my brain tells me are green, are green. Things that your brain tells you are green, are green. In cases where we disagree, split the label into my!green and your!green.

To see the problem with the above statement, try replacing the word "green" with "true".

Replies from: FeepingCreature
comment by FeepingCreature · 2012-02-06T01:52:37.033Z · LW(p) · GW(p)

You mean, "to see the problem with a wholly unrelated statement". Green is not the same kind of property as true.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2012-02-06T04:24:43.161Z · LW(p) · GW(p)

Green is not the same kind of property as true.

Could you expand on that?

Replies from: FeepingCreature
comment by FeepingCreature · 2012-02-06T08:04:33.662Z · LW(p) · GW(p)

Truth is an abstract, rationally defined property that has a meaning beyond my mind. To say that "things my brain tells me are true, are true" is a similar kind of claim would imply that green, like true, has a working definition beyond the perceptual. If this is the case, I'd like to know it. I'm fairly sure it's not actually possible to be wrong about a perceived color, excluding errors in memory. It's possible to consider a statement and be mistaken about its truthfulness, but is it possible to look at an object and be mistaken about the color one perceives it as? That seems nonsensical.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2012-02-07T05:31:43.328Z · LW(p) · GW(p)

To say that "things my brain tells me are true, are true" is a similar kind of claim would imply that green, like true, has a working definition beyond the perceptual.

So can you provide a working definition of "true"?

Replies from: FeepingCreature
comment by FeepingCreature · 2012-02-07T13:28:42.617Z · LW(p) · GW(p)

If there were definitely such a thing as an objective reality, my answer would be "a claim that is not in contradiction with objective reality". As it stands, I'll have to settle for "a claim that is never in contradiction with perceived reality". Note that, for instance, ludicrous claims about the distant past do in fact stand in contradiction with perceived reality, since "things like that seem to not happen now, and the behavior of the universe seems to be consistent over time" is a true claim which a ludicrous but unverifiable claim would contradict. Note that the degree to which you believe truth can be objective is exactly proportional to the degree to which you believe reality is objective and modelled by our observations.

comment by [deleted] · 2012-02-05T08:20:08.031Z · LW(p) · GW(p)

This comment is beyond stupid.

His entire point is that the very fact that we perceive colors needs to be explained. I can close my eyes and visualize any color I want, but how is that possible when colors do not exist objectively? So in order to avoid postulating that the mind is a separate entity with its own "reality" containing colors, Mitchell Porter is trying to get colors into our reality.

I am not convinced anything as drastic as quantum mechanics is needed to explain this, and I am very much a functionalist, but the issues he wants to investigate are definitely worth digging into.

But since you claim this post is a waste of time, please elaborate on exactly how colors arise in experience...

Replies from: FeepingCreature
comment by FeepingCreature · 2012-02-05T21:49:31.173Z · LW(p) · GW(p)

I have no idea; I'm not a neurobiologist. I'd guess that colors arise in experience by virtue of being fundamentally indexical; what a color "is" is merely a defined unique association in our brains that links sensory data to a bunch of learned responses. It's like the human soul - any property of it that you'd use to make it "unique", to differentiate your soul from another's, or to differentiate red from green, can be described as neurological activation of an associative pattern. Memories - neurological. Instinct, learning, feelings, hormones, habits - all biological or neurological. What is red? It's like fire, like roses, like blood. All associative. Could you build a brain that perceives red meaningfully differently from green while having no such associations, built-in or learned? I suspect that if I was such a being, I would not even be able to differentiate red from green, because my brain would never have been given occasion to treat a red thing with a different response than a green thing. How would you expect there to be nerves for that kind of differentiation if there was never a need for it?

Colors are associated responses and groupings for certain kinds of sensory data. They have no further identity.

That's my take.

[edit] The really stupid thing is that mysteriousness is a property of the question, not the answer! Even if we weren't able to put out a good guess as to how colors work, that wouldn't make it a reason to call the entirety of reductionism into question. The correct answer should then be "We don't know yet, but it's probably something in the brain and not magical and/or mysterious". Haven't we learned our lesson with consciousness?

Replies from: TrE
comment by TrE · 2012-02-05T22:01:02.777Z · LW(p) · GW(p)

And this view seems to be consistent with this BBC documentary excerpt; the relevant part starts at 03:00. The Himba have different and fewer color categories, probably because they don't need more or others.

comment by amcknight · 2012-02-03T23:30:54.201Z · LW(p) · GW(p)

The lifeworld is the place where you exist, where time flows, and where things are actually green.

What makes you think these all happen in the same place?

Replies from: Luke_A_Somers
comment by Luke_A_Somers · 2012-02-06T01:01:59.622Z · LW(p) · GW(p)

... they're all the naive interpretations of our sensations, so it really seems they ought to overlap at least.

comment by scientism · 2012-02-03T19:12:47.314Z · LW(p) · GW(p)

For what it's worth, I don't take dreams and hallucinations to involve seeing at all, so I don't believe I have anything to explain with regard to colour in dreams and hallucinations. I take the question "Do you dream in colour?" to be incoherent whereas the question "Have you dreamt of colour / coloured things?" is fine. The former question presupposes that perception involves seeing internal imagery rather than directly perceiving the world, which I deny, and that dreaming / hallucinating can therefore be said to be a form of perception also, something which obviously can't follow from my denial of mediating imagery.

comment by [deleted] · 2012-02-04T13:49:18.822Z · LW(p) · GW(p)

I have a question and I think maybe your answer will make it easier for other people to understand what you are arguing.

What about people who are color blind? They see, for instance, red where in objective reality the object's wavelength is "green". What happens here, in your view? In the person's experience he still sees red, but it should be green... And even though we know approximately the processes that make people color blind, this seems to be an interesting question for your model.

comment by Mitchell_Porter · 2012-02-06T02:34:50.692Z · LW(p) · GW(p)

As of a few minutes ago, my problems are solved for the next few months - which should be long enough for this situation never to recur. If I ever go fundraising again, I'll make sure I have something far more substantial ready to make my case.

Replies from: CarlShulman
comment by CarlShulman · 2012-02-10T00:47:44.679Z · LW(p) · GW(p)

By way of these posts, or due to some independent cause?

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2012-03-07T00:41:02.317Z · LW(p) · GW(p)

Independent cause. I did also get $100 back from "ITakeBets" as a result of the posts, but that was all.

comment by Automaton · 2012-02-04T06:25:52.815Z · LW(p) · GW(p)

Just to clarify, I don't really consider my position to be eliminative towards green, only that what we are talking about when we talk about green 'qualia' is nothing more than a certain type of sentient experience. This may eliminate what you think you are talking about when you say green, but not what I think I am talking about when I say green. I am willing to say that the part of a functional pattern of neural activity that is experienced as green qualia is identical to green in the sense that people generally mean when they talk about seeing something green. But there is no way to separate the green from the experience of seeing green.

Do you think that green is something separate from or independent of the experience of seeing green? Do you believe that seeing green is some sort of causal relationship between 'green' and a conscious mind similar to what a content externalist believes about the relationship between intentional states and external content? I don't understand why you believe so strongly that green has to be something beyond a conscious experience.

ETA: For a position that I believe to be more extreme/eliminativist than mine, see Frank Jackson's paper Mind and Illusion, where he argues that seeing red is being in a representational state that represents something non-physical (and also not real), in the same way that someone could be in a representational state representing a fairy.

From that paper:

Intensionalism means that no amount of tub-thumping assertion by dualists (including by me in the past) that the redness of seeing red cannot be accommodated in the austere physicalist picture carries any weight. That striking feature is a feature of how things are being represented to be, and if, as claimed by the tub thumpers, it is transparently a feature that has no place in the physicalist picture, what follows is that physicalists should deny that anything has that striking feature. And this is no argument against physicalism. Physicalists can allow that people are sometimes in states that represent that things have a non-physical property. Examples are people who believe that there are fairies. What physicalists must deny is that such properties are instantiated.

I think he is basically saying that if you can imagine the concept of something that isn't physically real (like a fairy), why couldn't you have a state representing redness, even though redness is not physically real? Or, for an example involving something ontologically unreal: one could have beliefs about the vitalist's élan vital even though it is not made of any ontological entities from the physical universe, so why not have conscious states about red even though red has no place in physical ontology? I'm not sure I agree with his belief that colors can't be considered to have physical ontology, but he seems to agree with you on that point.

comment by loveandthecoexistenc · 2012-02-04T01:46:30.206Z · LW(p) · GW(p)

The most obvious way to talk about such topics here is to completely overhaul the terminology and canonical examples.

And then do something with the resulting referential void.

Certainly not a task for a group of fewer than four people, and likely not a task for a group of fewer than 40.

Is your attempt to contribute single-handedly, with all the costs it imposes, likely enough to yield a significant positive result?

comment by Bruno_Coelho · 2012-02-03T22:37:54.162Z · LW(p) · GW(p)

Apparently there is no guaranteed return. Suppose your theoretical assumptions are correct; then why don't people get it? I mean, if the explanations have some power, other physicists will accept them.

Maybe future neurobiology will help us with the consciousness debate. FAI is another realm.

comment by buybuydandavis · 2012-02-03T04:02:04.536Z · LW(p) · GW(p)

A few people talk about wavelengths of light, but I doubt that they want to assert that the light in question, as it traverses space, is actually colored green.

Why not? If anything has color, it's light.

Replies from: Manfred, Nick_Tarleton
comment by Manfred · 2012-02-03T05:31:53.290Z · LW(p) · GW(p)

Careful of getting sucked into a Standard Dispute :P

"light is clearly colored."
"No it's not - imagine looking at it from the side!"
"By definition, looking at something means absorbing photons, so if we could look at a beam of light form the side it would look like the color it is."
And so on.

comment by Nick_Tarleton · 2012-02-03T07:25:15.039Z · LW(p) · GW(p)

"[S]aying anything whatsoever about [light], in answer to the question Mitchell is asking, is blatantly running away from the scary and hence interesting part of the problem, which will concern itself solely with matters in the interior part of the skull."

Replies from: buybuydandavis, Morendil
comment by buybuydandavis · 2012-02-03T19:08:09.175Z · LW(p) · GW(p)

Maybe I needed to add a space in my comment.

"If any thing has color, it is light." My point was that if you want to make color a property of things in themselves, and not the reaction of your nervous system to them, green light strikes me as about as green as a green thing can get.

As for the supposed scary and interesting part of the problem, while the science of color perception is no doubt full of interesting facts and concepts, it's hardly scary, and I don't think the perception of color is scary or interesting in philosophical terms at all.

I would call some subset of the possible states of your nervous system "you perceiving green". I can't enumerate those states, but I find nothing scary about the issue; it's completely unproblematic.

What do you find scary about this?

Replies from: Emile
comment by Emile · 2012-02-03T20:39:21.344Z · LW(p) · GW(p)

green light strikes me as about as green as a green thing can get.

How green a ray of light appears can depend on what's around it.

Replies from: buybuydandavis
comment by buybuydandavis · 2012-02-03T21:34:11.653Z · LW(p) · GW(p)

See the rest of my sentence. I was explicitly talking about things in themselves, and not how they appear to an observer.

The original comment I responded to:

A few people talk about wavelengths of light, but I doubt that they want to assert that the light in question, as it traverses space, is actually colored green.

Anyone talking about light in the optical range as it traverses space is likely to talk about the color of that light, and assert "that's green light". More generally, outside the optical range, they're likely to talk about the type of light in terms of frequency bands.
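
In that spirit, here's a rough illustrative classifier; the band boundaries are approximate conventions rather than sharp physical facts, and the function name and labels are just for the sake of the example:

    # Label light by wavelength, the way people informally do.
    # Boundaries are approximate (e.g. "green" is roughly 495-570 nm).
    def describe_light(wavelength_nm: float) -> str:
        if wavelength_nm < 380:
            return "ultraviolet (named by band, not color)"
        if wavelength_nm < 495:
            return "violet/blue light"
        if wavelength_nm < 570:
            return "green light"
        if wavelength_nm < 750:
            return "yellow/orange/red light"
        return "infrared (named by band, not color)"

    print(describe_light(532))   # "green light" - e.g. a green laser pointer
    print(describe_light(1064))  # "infrared (named by band, not color)"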

comment by Morendil · 2012-02-03T14:08:08.491Z · LW(p) · GW(p)

I still don't get that.