Book Review: The Root of Thought

post by Scott Alexander (Yvain) · 2010-07-22T08:58:18.873Z


Related to: Brain Breakthrough! It's Made of Neurons!

I can't really recommend Andrew Koob's The Root of Thought. It's poorly written, poorly proofread, contains little information beyond what is in the Scientific American review, and comes across as about one part neuroscience to three parts angry rant. But it does present an interesting hypothesis, and an interesting case study of a major failure of rationality.

Only about ten percent of the brain is made of neurons; the rest is a diverse group of cells called "glia". "Glia" is Greek for glue, because the scientists who discovered them decided that, since they were in the brain and they weren't neurons, they must just be there to glue the neurons together. Since then, new discoveries have assigned glial cells functions like myelination, injury repair, immune defense, and regulation of blood flow: all important, but mostly things only a biologist could love. The Root of Thought argues that glial cells, especially a kind called astrocytes, are also important in some of the higher functions of thought, including memory, cognition, and maybe even creativity. This is interesting to neuroscientists, and the story of how it was discovered is also interesting to us as aspiring rationalists.

Glial cells involved in processing

Koob's evidence is indirect but suggestive. He points out that more intelligent animals have a higher astrocyte-to-neuron ratio than less intelligent animals, all the way from worms with one astrocyte per thirty neurons to humans with an astrocyte:neuron ratio well above one. Within the human brain, the areas involved in higher thought, like the cortex, have the highest astrocyte:neuron ratios, while the most down-to-earth, like the cerebellum, have barely any astrocytes at all. Especially intelligent humans may have higher ratios still: one of the discoveries made from analyzing Einstein's brain was that he had an unusually large number of astrocytes in the part of his brain responsible for mathematical processing. And learning is a stimulus for astrocyte development: when canaries learn new songs, new astrocytes grow in the areas responsible for singing.

Astrocytes have a structure especially suited for learning and cognition. They have their own gliotransmitters, similar in function to neurotransmitters, and they communicate with one another, sparking waves of astrocyte activity across areas of the brain. Like neurons, they can enter an active state after calcium release, but unlike neurons, which get calcium only when externally activated, astrocytes can fill with calcium either because of external stimuli or when their own calcium stores randomly leak out into the cell, a process which resembles the random, unprovoked nature of thought during sensory deprivation and dreaming.

Astrocytes also affect and are affected by neurons. Each astrocyte "monitors" thousands of synapses, and releases calcium based on the input it receives. Output from astrocytes, in turn, affects the behavior of neurons. Astrocytes can take up or break down neurotransmitters, which changes the probability of nearby neurons activating, and they can alter synapses, promoting some and pruning others in a process likely linked to long-term potentiation in the brain.
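To make the picture concrete, here is a minimal toy simulation (my sketch, not Koob's model; every constant is invented for illustration) of an astrocyte that activates either from synaptic input or from a spontaneous internal calcium leak, and that raises the firing probability of the neurons it monitors while active:

```python
import random

LEAK_PROB = 0.01       # assumed chance per tick of a spontaneous calcium leak
BASE_FIRE_PROB = 0.10  # assumed baseline neuron firing probability
ASTRO_BOOST = 0.25     # assumed extra firing probability while the astrocyte is active

def step(synaptic_input: bool, neuron_count: int = 5) -> int:
    """Advance one time step; return how many monitored neurons fired."""
    # The astrocyte activates from external stimulus OR a random internal leak.
    astro_active = synaptic_input or (random.random() < LEAK_PROB)
    p = BASE_FIRE_PROB + (ASTRO_BOOST if astro_active else 0.0)
    return sum(random.random() < p for _ in range(neuron_count))

# Even with no external input, the random leak occasionally activates the
# astrocyte (the "unprovoked" activity described above).
quiet_run = [step(synaptic_input=False) for _ in range(1000)]
print(sum(quiet_run), "spikes over 1000 quiet ticks")
```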

Although it wasn't in the book, very recent research shows that a second type of glial cell, the immune-linked microglia, plays a role in behavior that may be linked to obsessive-compulsive disorder: a microglia-altering bone marrow transplant cures an OCD-like disease in mice.

By performing computations that influence the firing of neurons, glial cells at the very least play a strong supporting role in cognition. Koob goes way beyond that (and really beyond what he can support) and argues that it is actually the neurons that play a supporting role to glia, being little more than glorified wires that relay astroglial commands. His argument is very speculative and uses words like "could" a lot, but the evidence at least shows that glia are more important than a century of neurology has given them credit for.


We don't know how much we don't know about cognitive science

Previous Less Wrong articles, for example Artificial Addition, have warned against trying to replicate a process you don't understand by copying a few of its surface features. One of the most popular such ideas is to replicate the brain by copying the neurons and seeing what happens. For example, IBM's Blue Brain project hopes to create an entire human brain by modeling it neuron for neuron, without really understanding why brains work or why neurons do what they do.
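For a sense of what "modeling it neuron for neuron" means at the lowest level, here is a sketch of a leaky integrate-and-fire cell, one of the simplest standard neuron models. Blue Brain actually uses far more detailed compartmental models; this toy, with invented constants, only shows the shape of the approach:

```python
def simulate_lif(input_current, dt=1.0, tau=20.0, v_rest=-65.0,
                 v_threshold=-50.0, v_reset=-70.0):
    """Leaky integrate-and-fire neuron; returns the time steps of spikes."""
    v = v_rest
    spikes = []
    for t, i_in in enumerate(input_current):
        # Membrane potential leaks toward rest while integrating input.
        v += dt * ((v_rest - v) / tau + i_in)
        if v >= v_threshold:  # threshold crossed: emit a spike and reset
            spikes.append(t)
            v = v_reset
    return spikes

print(simulate_lif([0.8] * 200))  # constant drive produces regular spiking
```

The worry in the paragraph above is exactly that a brain built from millions of such units still omits everything the units don't model, glia included.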

We've made a lot of progress in cognitive science in the past century. We know where in the brain various activities take place, we know the mechanisms behind some of the more easily studied systems like movement and perception, and we've started researching the principles of intelligence that the brain must implement to do what it does. It's tempting to say that we more or less understand the brain, and the rest is just details. One of the take-home messages of this book is that, although cognitive scientists can justifiably be proud of their progress, our understanding still hasn't met the low bar of being entirely sure we're studying all the right kinds of cells; this calls into question our ability to meet the higher bar of throwing what we know into a simulator and hoping it works itself out.

A horrible warning about community irrationality

In the late 19th century, microscopy advanced enough to look closely at the cellular structure of the brain. The pioneers of neurology decided that neurons were interesting and glia were the things you had to look past to get to the neurons. This theory should have raised a big red flag: why would the brain be filled with mostly useless cells? But for about seventy-five years, from the late 19th century to the mid-to-late 20th, no one seriously challenged the assumption that glia played a minor role in the brain.

Koob attributes the glia's image problem to the historical circumstances of their discovery. Neurons are big, peripherally located, and produce electrical action potentials. This made them both easy to study and very interesting back in the days when electricity was the Hot New Thing. Scientists first studied neurons in the periphery, got very excited about them, and later followed them into the brain, which turned out to be a control center for all the body's neurons. This was interesting enough that neurologists, people who already had thriving careers in the study of neurons, were willing to overlook the inconvenient presence of several other types of cells in the brain, which they relegated to a supporting role.

The greatest of these early pioneers of neurology, Santiago Ramon y Cajal, was the brother of the neurologist who first proposed the idea that glial cells functioned as glue, and may have (Koob theorizes) let familial loyalty influence his thinking. The community took his words as dogma and ignored glia for years, a choice no doubt made easier by all the exciting discoveries going on around neurons. Koob describes the choice facing neuroscientists in the early 20th century: study the cell that seemed on the verge of yielding all the secrets of the human mind, or tell your advisor you wanted to study glue instead. Faced with that decision, virtually everyone chose to study the neurons.

There wasn't any sinister cabal preventing research into glia. People just didn't think of it. Everyone knew that neurons were the only interesting type of cell in the brain. They assumed that if there were some other cell that was much more common and also very important, somebody would have noticed. I've read neuroscience books; I read the couple of paragraphs where they mentioned glial cells, shrugged, and kept reading, because I assumed that if they were hugely important somebody would have noticed.

The heuristic that an entire community doesn't just miss low-hanging fruit is probably a good one; as many people have pointed out, the vast majority of people who think they've found something the scientific community has missed are somewhere between wrong and crackpot. Science is usually pretty good at finding and recognizing its mistakes, and even in the case of glial cells it did eventually find and recognize the mistake. It just took a century.

One common theme across Less Wrong and SIAI is that there are some relatively little-known issues that, upon a moderate amount of thought, seem vitally important. And one of the common arguments against this theme is that if this were true, surely somebody would have noticed. The lesson of glial cells is that sometimes this just doesn't happen.

Related: Glial Cells: Their Role In Behavior; Underappreciated Star-Shaped Cells May Help Us Breathe; Glial Cells Aid Memory Formation; New Role For Supporting Brain Cells; Support Cells, Not Neurons, Lull Brain To Sleep

Comments

comment by jpet · 2010-07-22T19:30:10.288Z

One of the most popular such ideas is to replicate the brain by copying the neurons and seeing what happens. For example, IBM's Blue Brain project hopes to create an entire human brain by modeling it neuron for neuron, without really understanding why brains work or why neurons do what they do.

No, the Blue Brain project (no longer affiliated with IBM, AFAIK) hopes to simulate neurons to test our understanding of how brains and neurons work, and to gain more such understanding.

If you can simulate brain tissue well enough that you're reproducing the actual biological spike trains and long-term responses to sensory input, you can be pretty sure that your model is capturing the relevant brain features. If you can't, it's a pretty good indication that you should go study actual brains some more to see if you're missing something. This is exactly what the Blue Brain project is: simulate a brain structure, compare it to an actual rat, and if you don't get the same results, go poke around in some rat brains until you figure out why. It's good science.
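A hedged sketch of that validation loop (the spike times below are invented, and real comparisons use much richer statistics than a single correlation): bin a simulated and a recorded spike train into firing-rate vectors and check how well they agree.

```python
import numpy as np

def binned_rates(spike_times, t_max, bin_size):
    """Convert a list of spike times into a vector of per-bin firing rates."""
    edges = np.arange(0.0, t_max + bin_size, bin_size)
    counts, _ = np.histogram(spike_times, bins=edges)
    return counts / bin_size

model_spikes = [12, 30, 31, 55, 80, 81, 83]  # hypothetical simulation output
rat_spikes = [11, 31, 33, 54, 79, 82, 85]    # hypothetical recording

r_model = binned_rates(model_spikes, t_max=100, bin_size=10)
r_rat = binned_rates(rat_spikes, t_max=100, bin_size=10)
print(np.corrcoef(r_model, r_rat)[0, 1])  # near 1.0 means similar activity
```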

comment by mathemajician · 2010-07-22T21:58:27.325Z

Glial cells are actually at about a 1:1 ratio with neurons. A few years ago a researcher wanted to cite something to back up the usual 9:1 figure, but after asking everybody for several months, nobody knew where the figure came from. So they did a count themselves and found it to be about 1:1. I don't have the reference on me; it was a talk I went to about a year ago (I work at a neuroscience research institute).

I have asked a number of neuroscientists about the importance of glia and have always received the same answer: the evidence that they are functionally important is still "very weak". They might be wrong, but given that some of these guys could give hour-long lectures on exactly why they think this, and know the few works that claim otherwise... I'm inclined to believe them.

Replies from: Alan
comment by Alan · 2010-07-23T02:17:55.664Z

This new finding may be correct, but the old dictum about "nullius in verba" still makes sense.

comment by xamdam · 2010-07-22T21:07:31.500Z

I have a new idea for AI: glial networks!

Replies from: Eliezer_Yudkowsky, Will_Newsome
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-07-22T21:14:20.925Z

Be sure to add some extra complexity so that you can get more emergence out!

Replies from: xamdam
comment by xamdam · 2010-07-23T00:55:58.404Z

We can optimize the emergence/complexity ratio with an evolutionary algorithm... Sorry Chalmers.

comment by Will_Newsome · 2010-07-22T21:13:34.550Z

Glial networks would totally work. If you have enough of them, eventually you'll get Boltzmann AI.

comment by JanetK · 2010-07-23T19:19:35.273Z

I think that science usually works a little differently. People do not choose what to investigate by what is exciting or a hot topic. Very often they look (metaphorically) for a chink where they can put a crowbar in and open a crack to see some new knowledge. It was a lot easier to study neurons than glia: they stained well, their activity could be measured (a bit) without opening the skull, their electrical potentials could be measured, in some animals they were extremely large, etc. Glial cells were not that forthcoming with their secrets, and so they had to wait. That glial cells are not glue or mere support has been known for at least a decade, but what they might be doing was not (and still isn't) easy to discover. They are not boring: they are involved in the regulation of calcium ions, and calcium ions are very definitely not boring to anyone interested in cellular communication.

The other big motivator is where the grant money is going.

Replies from: homunq
comment by homunq · 2010-07-24T17:17:22.558Z

A further note on staining: pioneer neurobiologist Ramon y Cajal got a lot of mileage out of a staining technique which, for reasons he didn't understand, only stained a small fraction of neurons. Bingo: instead of getting a dense thicket, you get some beautiful branching structures to draw. If his technique had picked out individual astrocytes instead, perhaps glial cells would have gotten more attention.

comment by Pfft · 2010-07-22T16:43:07.174Z

the choice facing neuroscientists in the early 20th century: study the cell that seemed on the verge of yielding all the secrets of the human mind, or tell your advisor you wanted to study glue instead.

Very plausible! But I would also like to know how this stable state eventually got upset. Who were the first people to study glial cells? Did they have any distinctive characteristics (personality/educational background/institutional affiliation/...) or did anything in particular happen that prompted them to take an interest in glue? It seems that if we could replicate that miracle, it could be very beneficial for science.

comment by ocr-fork · 2010-07-23T23:06:37.854Z

astrocytes can fill with calcium either because of external stimuli or when their own calcium stores randomly leak out into the cell, a process which resembles the random, unprovoked nature of anything that's random.

comment by kpreid · 2010-07-22T20:42:35.844Z

Copyedit: Extra space in "astrocyte: neuron" in third paragraph.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-07-22T22:48:16.433Z

You can use private messages to report typos.

Replies from: Mass_Driver
comment by Mass_Driver · 2010-07-24T03:36:27.862Z

And should, unless they're the sort of typos that might seriously confuse someone before they get fixed.

comment by [deleted] · 2010-07-22T13:14:16.265Z

Thanks for this. I suspect there's quite a bit of value in every so often discarding a basic working assumption, and seeing where it takes you. Of course, do this too often and you won't make any progress at all, so it's an interesting problem.

Also in the vein of "things we think we know about the brain that might be completely wrong", there's good evidence that our model of nerve propagation (electrical signals mediated by ion channels) isn't actually how the brain works - see this paper, and a good explanation of it here and here.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-07-22T14:17:53.111Z

Do you have any heuristics for identifying basic working assumptions?

Replies from: None, billswift
comment by [deleted] · 2010-07-22T15:03:20.932Z

A good one might be "the things you would have to mention first if you were explaining your field/problem/whatever to a person with no knowledge of it".

For example, if you're explaining A.I. to someone, something that will come up very quickly is that human-level intelligence is almost certainly not the maximum possible, due to the constraints of biology and evolution. What, then, would be the result if you ignore or reverse this, and act as if human-level intelligence IS the maximum possible?

Replies from: NancyLebovitz, SilasBarta
comment by NancyLebovitz · 2010-07-22T15:36:15.687Z

That will get you to the category of reversing your conscious premises, but possibly not to examining something like "glial cells are too boring to bother with".

Replies from: Mass_Driver, None
comment by Mass_Driver · 2010-07-24T03:39:31.982Z

Why not?

comment by [deleted] · 2010-07-22T15:54:43.668Z

Good point - that sort of assumption is much harder to isolate.

I'll have to mull this over for a while.

comment by SilasBarta · 2010-07-22T18:03:32.540Z

A good [heuristic for identifying assumptions] might be "the things you would have to mention first if you were explaining your field/problem/whatever to a person with no knowledge of it".

Unfortunately, what I've found to be commonplace (and am writing an article on) is that the very same people who don't have a deep understanding (and thus don't know what assumptions interplay with their work) are also the ones who are incapable of explaining the field to outsiders.

Replies from: Cyan
comment by Cyan · 2010-07-23T02:49:48.789Z

I hope your article mentions and/or addresses the notion that deep understanding is necessary but not sufficient for the capability of explaining the field to outsiders.

Replies from: SilasBarta
comment by SilasBarta · 2010-07-23T04:53:41.207Z

Yes, I plan to critique that notion. The reason why I don't buy it is that I take a "deep understanding" to be a Level 2 understanding, in which you recognize far-reaching inferential flows between that field and the rest of your knowledge. This would mean there are arbitrarily many inferential paths you can take from your understanding of the field to the nepocu (nearest point of common understanding) between you and the outsider.

If one path isn't clicking, then you should be able to "fall back a rank" to the grounding concepts, if the listener has a tenuous grasp on those, or simply take another path. To the extent that you can't do this, that causes me to call into question just how well you understand.

The caveat, of course, is that no matter how good your understanding, it can become time-consuming if the inferential distance is great, or you can't get immediate feedback about which concepts are confusing.

As for the article, its current status is that I've dropped the idea of listing my whole bag of tricks in one article (it got really long), and the first one will just focus on (what I consider to be) the critical connection between understanding and explanation capability, and on the importance of locating the nepocu (defined above, and yes I've already been told it's a flaky term).

Replies from: Cyan
comment by Cyan · 2010-07-23T14:57:16.203Z

You're assuming that the explainer knows enough about explaining to try to identify the nepocu and to solicit feedback about which concepts are confusing.

Replies from: SilasBarta
comment by SilasBarta · 2010-07-23T19:05:29.430Z

Because it's a good assumption. Explaining is nothing but tracing out your own internal model's inferential relationships between the concepts. The only bar to this would be not knowing it. So I don't see what kind of "explaining skill" there is that goes above and beyond that.

Soliciting feedback, for its part, is but a matter of asking, "do you understand [link in my ontology]?" and/or watching and listening for when they say they don't understand.

(And I take it you don't find the term "nepocu" to be particularly annoying?)

Replies from: Mass_Driver, Cyan, Richard_Kennaway
comment by Mass_Driver · 2010-07-24T03:39:13.195Z

No, I think Cyan is right. Have you read Eliezer's "A Technical Explanation of Technical Explanation?" You may wish to write "A Lay Explanation of Lay Explanation." I would certainly read and probably vote up such an article.

Replies from: SilasBarta
comment by SilasBarta · 2010-07-24T15:02:31.935Z

I don't see, though, how I'm describing a different kind of explanation, or a distinctly lay one. The explanation standards I'm giving are what you would need to give for a technical explanation as well, in the case where your listener starts from a point of less knowledge about your field (i.e. a far nepocu).

The technical explanation only differs in terms of its greater detail (afaict -- you may mean something else); it doesn't change in type.

comment by Cyan · 2010-07-23T20:04:20.674Z

Because it's a good assumption...The only bar to this would be not knowing it.

This is exactly the point under dispute. I'm open to evidence on this point, but you'll have to do better than flat assertion. As a skilled explainer, you may not be aware of some things you do automatically that do not come naturally to less skilled explainers -- even those who are competent within their domains of expertise.

I am indifferent to the term "nepocu". Obviously some kind of abbreviation is necessary.

Replies from: SilasBarta
comment by SilasBarta · 2010-07-23T20:33:01.716Z

Are you sure you're not overparsing me there? The part you truncated is the key point, that to explain, you need only trace out your internal ontology. To reject my position, it would have to be possible for someone both to actually know the connection to the nepocu, and be unable to articulate the inferential connection.

There are certainly people who have a "Chinese room" understanding, allowing them to deftly match outputs with the right input, and thus meet the standard "expert" threshold. But this is only a level 1 understanding.

I do appreciate your input, though, about what I should include.

Replies from: Cyan
comment by Cyan · 2010-07-24T00:35:03.262Z

Are you sure you're not overparsing me there?

It's entirely possible.

The part you truncated is the key point, that to explain, you need only trace out your internal ontology. [emphasis added]

I suppose I'd dispute that, then. It seems to me that to explain skillfully, you need to have not just a grasp of your internal ontology, but also a reasonably accurate map of your conversant's internal ontology.

One could, in their own head, recognize far-reaching inferential flows between their field of expertise and the rest of their knowledge, and yet fail to recognize that the task of explaining essentially lies in seeking the nepocu and going from there. Level 2 understanding is a property of one individual's internal ontology; seeking the nepocu is in the same class as understanding the typical mind fallacy and the problem of expecting short inferential distances, these being concerned with the relationship between two distinct internal ontologies.

But it seems premature to go on with this discussion until you've made the post. I'm happy to continue if you want to (there's no shortage of electrons, after all), but if the post is near completion, it probably makes more sense to wait until it's done.

Replies from: SilasBarta
comment by SilasBarta · 2010-07-24T15:12:54.003Z

Okay, point taken. In any case, it would be hard for me to simultaneously claim that understanding necessarily enables you to explain, and that I have advice that would enable you to explain if you only have an understanding.

On the other hand, the advice I'm giving is derided as "obvious", but, if it's so obvious, why aren't people following it?

It seems to me that to explain skillfully, you need to have not just a grasp of your internal ontology, but also a reasonably accurate map of your conversant's internal ontology. ... seeking the nepocu is in the same class as understanding the typical mind fallacy and the problem of expecting short inferential distances, these being concerned with the relationship between two distinct internal ontologies.

But someone doesn't really need to recognize the difference between their own internal ontology and someone else's. In the worst case, they can just abandon attempts to link to the listener's ontology, and "overwrite" with their own, and this would be the obvious next step. In my (admittedly biased) opinion, the reason people don't take this route is not because this would take too long, but because the domain knowledge isn't even well-connected to the rest of their own internal ontology.

(Also, this is distinct from the "expecting short inferential distances" problem in that people don't simply expect it to be short, but that they wouldn't know what to do even if they knew it were very long.)

But it seems premature to go on with this discussion until you've made the post. I'm happy to continue if you want to (there's no shortage of electrons, after all), but if the post is near completion, it probably makes more sense to wait until it's done.

I still think advice would be helpful at this stage. I'll send you what I have so far, up to the understanding / nepocu points.

comment by Richard_Kennaway · 2010-07-24T15:40:33.418Z

Explaining is nothing but tracing out your own internal model's inferential relationships between the concepts.

I disagree. Your internal model cannot be copied into anyone else's head just by expounding it. To explain something successfully -- that is, to get someone else to understand something -- you have to take account of the state of the person you are explaining it to. An explanation that one person finds a model of clarity, another may find tedious and confusing. (I have seen both reactions to Eliezer's article on Bayes' theorem.)

When I am assisting students in a computer laboratory, and a student indicates they have a problem, the question I ask myself when I listen to them is "what information does this student need, and not have?" That is what I seek to provide, not a dump of my own thought processes around the subject.

I generally get favourable feedback, so I think I'm onto something here.

As a general rule, explanations share this property with software: until you have tried it and seen it work, you do not know that it works.

Replies from: SilasBarta
comment by SilasBarta · 2010-07-24T16:02:58.257Z

I agree with and practice all of that, so I was oversimplifying with the part you quoted. I should probably have said something more like,

"Explaining starts from tracing out your internal model's inferential relationship between the concepts, and proceeds by finding how it can connect to -- and if necessary, correct -- the listener's ontology."

comment by billswift · 2010-07-22T16:45:46.307Z

Look for things that don't look important at first glance. Another example from the history of science: proteins were originally considered the likely material of heredity, while nucleic acids were overlooked because they were thought to be "too simple" in structure.

comment by Emile · 2010-07-23T17:07:26.794Z

I bet that if glial cells had been baptized "mesosynaptic cells" instead, they'd have been studied much more.

Somewhat related: today I found a bug in the software I'm working on that's due to a variable's name not corresponding to what it actually means, so a change that "looked right" actually screwed everything up.

comment by whpearson · 2010-07-22T10:05:27.572Z

Thanks for the interesting article.

and regulation of blood flow: all important, but mostly things only a biologist could love.

I'd argue that people who like designing computer architectures should be interested in this as well.

Ignoring glia seems to me to have been a (mis)application of assuming the simplest explanation consistent with the facts, made when people weren't in a position to fully explain the brain. That is, people knew that you needed neurons to explain brain function, but because they couldn't predict how the brain functioned, they didn't know that a neurons-only explanation was insufficient.

It is why I am hesitant to argue that there are no quantum effects of any sort in the brain (although the quantum effects people have suggested so far haven't been convincing).

Replies from: RobinZ, sharpneli
comment by RobinZ · 2010-07-22T18:11:24.997Z

It is why I am hesitant to argue that there are no quantum effects of any sort in the brain (although the quantum effects people have suggested so far haven't been convincing).

I'd agree - I think the reasonable position at this point is to say that we shouldn't privilege the hypothesis. Most of the argumentation along those lines that I have seen cited seems to be permissive, rather than compelling, towards the claim.

Replies from: bogus
comment by bogus · 2010-07-22T23:54:15.580Z

I'd agree - I think the reasonable position at this point is to say that we shouldn't privilege the hypothesis. Most of the argumentation along those lines that I have seen cited seems to be permissive, rather than compelling, towards the claim.

But the fact that we directly experience phenomenal qualia (or at least, you do) is compelling evidence that some fairly exotic physics is happening in the brain. Mesoscopic quantum superposition is actually the least weird hypothesis in this respect. I think that people who dismiss this problem are biased to think that biology should be simple; they don't understand that evolutions can come up with incredibly clever stuff. It's the same mistake that leads people to dismiss the possible role of glial cells in cognition.

Replies from: JoshuaZ
comment by JoshuaZ · 2010-07-23T00:01:55.997Z

Mesoscopic quantum superposition is actually the least weird hypothesis in this respect.

I don't know about it being the least weird hypothesis but it certainly isn't a useful one. I have yet to hear anything resembling a coherent explanation of what consciousness has to do with superposition. And I have even less of an easy time seeing how the presence of qualia have anything to do with this. (This may be connected to the fact that I don't see qualia as a big deal needing some deep explanation.)

Replies from: bogus
comment by bogus · 2010-07-23T00:13:23.868Z

And I have even less of an easy time seeing how the presence of qualia have anything to do with this.

The real issue is not the "presence of qualia", it's what qualia should map to in the underlying physics. Saying that e.g. the color blue is an incredibly complex pattern in the classical physical system corresponding to the human visual cortex--which actually differs physically from human to human--is just not a tenable position.

Replies from: JoshuaZ
comment by JoshuaZ · 2010-07-23T00:20:12.684Z

The real issue is not the "presence of qualia", it's what qualia should map to in the underlying physics. Saying that e.g. the color blue is an incredibly complex pattern in the classical physical system corresponding to the human visual cortex--which actually differs physically from human to human--is just not a tenable position.

So how is this at all distinct from the fact that words like "good" or "puppy" have complicated mappings onto our brain structure? These present just as much of a difficulty as qualia. And just because we can't precisely map those now doesn't make those positions untenable. Why posit new physical laws for a set of phenomena that we understand better and better with no sign of stopping? If we ran into some apparent wall in understanding how these function, then after a while it might make sense to look at new physics that might explain things. But as it is now, we've been making steady progress on these issues for about a hundred years. We can now use electromagnetic stimulation to make people experience specific classes of feelings, and we can use electrodes more directly to trigger direct responses. We can see emotions and sensations actively in the brain by fMRI and other methods. There's no need for spooky suppositions.

Replies from: bogus
comment by bogus · 2010-07-23T00:36:59.959Z

So how is this at all distinct from the fact that words like "good" or "puppy" have complicated mappings onto our brain structure?

It's different because we know for certain that the mapping of words such as "good" and "puppy" onto our basic phenomenology is culturally dependent, learned throughout our childhood, etc. We can say no such thing about the mapping between physics and subjective experience. And in the former case, some drastic simplifications can actually be made: see e.g. the work of George Lakoff and other cognitive linguists about the linkages between "abstract" semantics and basic phenomenology.

And just because we can't precisely map those now doesn't make those positions untenable.

It's not because we can't precisely map them; it's because the possibility of there even being a mapping is so weird and complicated that spooky, exotic physics looks good by comparison. (Basically you would be forced to argue that the mapping between classical physics and subjective perceptions was picked by an optimizing agent, which is far more spooky.)

We can see emotions and sensations actively in the brain by fMRI and other methods.

Bzzzzzt. We can see macroscopic correlates of emotions and sensations in an fMRI. This does not mean that the emotion and sensation is the same thing as the change in fMRI. (In fact, all fMRI does is measure changes in blood flow.)

Replies from: JoshuaZ, WrongBot
comment by JoshuaZ · 2010-07-23T01:57:53.223Z

It's different because we know for certain that the mapping of words such as "good" and "puppy" onto our basic phenomenology is culturally dependent, learned throughout our childhood, etc. We can say no such thing about the mapping between physics and subjective experience.

I'm missing something here. How does the fact that this correlation isn't as culturally dependent imply something spooky is going on?

It's not because we can't precisely map them; it's because the possibility of there even being a mapping is so weird and complicated that spooky, exotic physics looks good by comparison.

Again, I don't follow your logic. What would be weird and complicated about such a mapping?

Basically you would be forced to argue that the mapping between classical physics and subjective perceptions was picked by an optimizing agent, which is far more spooky.)

Why? What need is there for an optimizing agent? What do you think this optimizing agent would have done? I'm not sure what you are trying to say here, but it almost seems to be some sort of argument that if one wants to reject theism one needs spooky physics. I don't know how to respond to that.

We can see macroscopic correlates of emotions and sensations in an fMRI. This does not mean that the emotion and sensation is the same thing as the change in fMRI. (In fact, all fMRI does is measure changes in blood flow.)

You might notice that I said "fMRI and other methods." We can, for example, use deep brain stimulation to directly stimulate emotions (this is in fact a cutting-edge treatment for people with severe depression and is being investigated for use in treating other illnesses). We can see which parts of the brain are being used for which emotions and sensations, and we can stimulate those regions to duplicate those emotions and sensations.

More generally, it seems like you may be confusing the map with the territory. A blank or poorly drawn area of a map doesn't tell us about the territory. It is true that repeated failure to get a good map of an area of territory can tell us that our mapping method has a problem or that another section of our map has issues. That's essentially what happened with Copernicus and Kepler; the repeated failures to get accurate models of the heavens forced a redrawing of fundamental sections of the map. But in order to justify that, one needs to have had repeated problems, over a long period of time, with trying to get a good map of an area. If your map keeps getting more and more precise, no such justification is available. Finally, a question if you don't mind: what hypothetical evidence would convince you that qualia can be explained by our current laws of physics?

Replies from: bogus
comment by bogus · 2010-07-23T03:39:31.808Z

We can, for example, use deep brain stimulation to directly stimulate emotions

We can stimulate emotions, yet we are nowhere near a satisfactory explanation of why each emotion has the psychological effects it does. It's quite clear that we can only play with the brain at a very coarse level.

Finally, a question if you don't mind: what hypothetical evidence would convince you that qualia can be explained by our current laws of physics?

Reliable brain simulation would be solid evidence here. Others have pointed out that we probably won't be able to revive cryopreserved patients without a thorough understanding of brain physics.

Replies from: JoshuaZ
comment by JoshuaZ · 2010-07-23T03:47:39.207Z

We can stimulate emotions, yet we are nowhere near a satisfactory explanation of why each emotion has the psychological effects it does. It's quite clear that we can only play with the brain at a very coarse level.

Sure, but who cares? The point is that our ability to do this has been steadily improving and there's no indication that any part of our coarse play has turned up any evidence of any special physics at work.

comment by WrongBot · 2010-07-23T01:20:47.330Z · LW(p) · GW(p)

We can say no such thing about the mapping between physics and subjective experience.

The wavelength of light maps pretty straightforwardly onto our perception of color. We can trace the activation of cones in our eyes to patterns of neuron firing in the optic nerve to neurons firing in the visual cortex. "Redness" isn't magic. "Redness" is a particular configuration (or, more properly, a set of configurations) of neurons. The only reason it seems special to you is because you are experiencing the algorithm from the inside. Consciousness is what thinking feels like, not magic.
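As a toy illustration of that mapping (the peak sensitivities near 420, 534, and 564 nm are textbook values; the Gaussian shape and the shared width are simplifications assumed here for brevity), one can compute the relative cone activations a given wavelength produces:

```python
import math

CONES = {"S": 420.0, "M": 534.0, "L": 564.0}  # peak sensitivities in nm
WIDTH = 40.0  # assumed common standard deviation in nm

def cone_response(wavelength_nm):
    """Relative activation of the three cone types for one wavelength."""
    raw = {name: math.exp(-0.5 * ((wavelength_nm - peak) / WIDTH) ** 2)
           for name, peak in CONES.items()}
    total = sum(raw.values())
    return {name: r / total for name, r in raw.items()}

# Long-wavelength light drives L cones hardest: the pattern we call "red".
print(cone_response(650))
```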

Replies from: saturn, bogus
comment by saturn · 2010-07-23T09:12:48.708Z

Sure... I'm with you until you get to the part where some (all?) configurations of matter have experiences from the inside, which nobody can detect or describe, and the only evidence that these "experiences" exist is that people say they can feel them... isn't this exactly the kind of thinking we ought to dismiss as crazy? But on the other hand, I think I feel experiences too!

Replies from: WrongBot
comment by WrongBot · 2010-07-23T09:23:01.973Z

You're making this more mysterious than it needs to be. No matter what our experiences felt like, we'd still call them qualia. No matter how we used our senses to acquire information about the world, we'd still call that process experience.

Replies from: cousin_it
comment by cousin_it · 2010-07-23T09:26:16.504Z

Are you claiming that any sufficiently complex agent will report a mysterious feeling of consciousness? That can't be right.

Replies from: WrongBot
comment by WrongBot · 2010-07-23T09:35:13.863Z

I wouldn't feel comfortable making that claim until I'd tested it on a couple of non-human agents, and in any case I wouldn't call it mysterious.

Really all I have is the suspicion that consciousness is much more normal than people tend to think. The only thing I'm confident of is that explaining consciousness won't require magic or special exceptions to the laws of physics.

Replies from: prase
comment by prase · 2010-07-23T11:05:02.446Z

What sort of answer, do you think, will people accept as an explanation of consciousness? I ask because I suspect that however deep our understanding of thought becomes, it will not destroy all the feeling of mystery. Even after we become able to model human brains on computers, and after we discover which parts of the brain are responsible for each exact feeling, I can't imagine how this knowledge will stop people wondering about qualia, zombies, and Chinese rooms.

Replies from: RobinZ
comment by RobinZ · 2010-07-23T15:29:05.386Z

What sort of answer, do you think, will people accept as an explanation of consciousness? I ask because I suspect that however deep our understanding of thought becomes, it will not destroy all the feeling of mystery.

I imagine Lord Kelvin felt similarly when he thought of the elan vital. It didn't work for that, and it didn't work for a very good reason: your ignorance of the realm of possibilities is not good evidence. An inability to come up with alternatives may be better support for a claim than showing that you have not yet been compelled to admit defeat, but it's still nearly worthless.

Replies from: prase
comment by prase · 2010-07-24T15:04:12.693Z

I didn't mean my question as a Kelvinian declaration that we will never understand. I was only curious whether WrongBot has some more specific idea of what sort of answer could destroy the feeling of confusion when thinking about qualia. I am not even sure whether there is a question to be answered.

Replies from: RobinZ, WrongBot
comment by RobinZ · 2010-07-24T15:52:24.344Z

Right. I apologize, I didn't read your comment very clearly. The Kelvin case offers some hope, though - after all, the New Age life-is-energy meme is a lot weaker than elan vital was.

comment by WrongBot · 2010-07-24T20:49:16.169Z

I haven't yet encountered a sufficiently precise definition of qualia (or consciousness, for that matter) to be able to say what exactly the confusion is, much less where it's coming from or how it can be destroyed. The hard problem of consciousness is a wrong question, and I suspect that for any given untangling of it, the answer will be trivial.

comment by bogus · 2010-07-23T01:42:11.461Z

"Redness" is a particular configuration (or, more properly, a set of configurations) of neurons.

You're missing the basic problem: 'neurons' are part of the map, not the territory. The territory is made up of quarks, spacetime and probability amplitudes. What's the set of configurations of quarks which feels from the inside like thinking or like the color red? How can you be so confident that no magic is involved in this "how it feels from the inside" business, while casually talking about configurations of neurons?

Replies from: WrongBot, JoshuaZ, prase
comment by WrongBot · 2010-07-23T01:54:34.299Z

How can you be so confident that no magic is involved in this "how it feels from the inside" business, while casually talking about configurations of neurons?

I usually find Occam's Razor to be sufficient. You are misapplying reductionism: if consciousness maps to a set of configurations of neurons, and neurons map to quarks, spacetime, and probability amplitudes, then we have no need of mysteriously specific exceptions to physical laws. Indeed, such hypothetical and entirely unsupported exceptions have no explanatory power at all.

What's the set of configurations of quarks which feels from the inside like thinking or like the color red?

Why, the set of configurations of quarks which describes any member of the set of neuron configurations that feel from the inside like thinking or like the color red, of course.

comment by JoshuaZ · 2010-07-23T02:00:51.849Z

You're missing the basic problem: 'neurons' are part of the map, not the territory. The territory is made up of quarks, spacetime and probability amplitudes.

No, he's not. Neurons are part of the territory. They are composed of other parts of the territory, which are composed of quarks, spacetime, etc. But that doesn't make a neuron not part of the territory. Just because something is ontologically reducible doesn't mean it isn't part of the territory. It just means that you need to be very careful not to treat it as ontologically fundamental when it isn't.

Replies from: bogus
comment by bogus · 2010-07-23T03:05:13.646Z

No, he's not. Neurons are part of the territory.

Fine, substitute "not ontologically fundamental" for "not part of the territory" if you must.

It just means that you need to be very careful not to treat it as ontologically fundamental when it isn't.

The problem is that most philosophers who care about phenomenology at all would assign at least some ontologically foundational status to it, simply because it is foundational enough to you and anyone else with subjective experience. There is a reasonable argument to be made that "the way it feels from the inside" is just as fundamental as the basic physics of how the world works.

This does not imply that the two are necessarily related (for instance, P-zombies or robots can be unconscious yet physically talk about subjective experience). It does mean that Occam's razor should apply to "the way it feels from the inside", which tends to weigh against complex explanations like "configurations of neurons" and in favor of either exotic physics or a spooky superintelligence who can figure out how to run debugger sessions on our physical brains.

Replies from: JoshuaZ, WrongBot
comment by JoshuaZ · 2010-07-23T03:18:16.164Z

The problem is that most philosophers who care about phenomenology at all would assign at least some ontologically foundational status to subjective experience, simply because it is foundational enough to you and anyone else with subjective experience.

Unfortunately, this is close to nonsense. Just because something strikes me as foundational doesn't give me any decent reason for thinking it has any such actually foundational status. Humans suck at introspection. We really, really suck at intuiting the differences in how we process things unless things are going drastically wrong. For example, it isn't obvious to most humans that we use different sections of our brains to add and multiply. But there's a lot of evidence for this. For example, fMRI scans show different areas lighting up, with areas corresponding to memory lighting up for multiplication and areas corresponding to reasoning lighting up for addition. Similarly, there are stroke victims who lose the ability to do only one or the other operation. And this is but one example of how humans fail. Relying on human feelings to get an idea of how anything in the world, especially our own mind, works is not a good idea.

It does mean that Occam's razor should apply to "the way it feels from the inside", which tends to weigh against complex explanations like "configurations of neurons" and in favor of either exotic physics or a spooky superintelligence who can figure out how to run debugger sessions on our physical brains.

I don't follow this logic at all. I'm not completely sure what you are trying to do here, but it sounds suspiciously like the theistic argument that God is a simple hypothesis. Just because I can posit something as a single, irreducible entity does not make that thing simple. (Also, can you expand on what you mean by a spooky superintelligence running debugging sessions, since I can't parse this in any coherent way?)

Replies from: bogus
comment by bogus · 2010-07-23T05:08:11.631Z

Unfortunately, this is close to nonsense. Just because something strikes me as foundational doesn't give me any decent reason for thinking it has any such actually foundational status.

Small nitpick: I am not talking about what is foundational to the way our world works. I am only making the fairly trite observation that subjective experience/qualia is the only thing we can directly experience; it would be really, really strange if something so basic to us turned out to be dependent on complicated configurations of neurons and glial cells, as naive physicalists suggest.

Humans suck at introspection. We really, really suck at intuiting the differences in how we process things unless things are going drastically wrong. For example, it isn't obvious to most humans that we use different sections of our brains to add and multiply.

What this is actually saying is that phenomenology (the stuff we can access by introspection) cannot directly map onto physical areas of the brain of the kind which might get damaged in a stroke. In itself, this is not evidence that humans "suck" at introspection, especially if our consciousness really is a quantum state with $bignum degrees of freedom rather than a classical system with spatially separate subparts.

it sounds suspiciously like the theistic argument that God is a simple hypothesis.

God is not a simple hypothesis, but "this was affected by an optimization process which cares about X or something like it" is simpler than "this configuration which happens to be near-optimal for X arose by sheer luck". Which is pretty much what one would have to posit in order to explain our subjective experience of the extremely complicated physical systems we call "brains". There are other avenues such as the anthropic principle, but ISTM that at some point one would start to run into circularities.

Replies from: prase, RobinZ
comment by prase · 2010-07-23T08:47:40.602Z

it would be really, really strange if something so basic to us turned out to be dependent on complicated configurations of neurons and glial cells, as naive physicalists suggest.

What else can it depend on? Your original claim was that it has something to do with quantum superpositions, so can you tell how these superpositions are going to explain qualia any better? It seems like you demand that the explanation be a black box without internal structure; this is contrary to what actual explanations are.

this configuration which happens to be near-optimal for X arose by sheer luck

The "naive physicalists" don't maintain anything like that. Evolution isn't sheer luck.

Replies from: bogus
comment by bogus · 2010-07-23T10:44:52.716Z

so can you tell how these superpositions are going to explain qualia any better? It seems like you demand that the explanation be a black box without internal structure

I'm not trying to explain why qualia occur, just seeking a sensible physical description of them. Given the requirement that qualia should be actually experienced in some sense, a "black box" system which clearly matches these mysterious experiences is better than a complicated classical configuration plus a lengthy description of how this configuration is felt from the inside.

The "naive physicalists" don't maintain anything like that. Evolution isn't sheer luck.

Indeed it's not: it's an optimization process! But why would evolution care about qualia? In fact, many physicalist philosophers think qualia exist as epiphenomena, and an epiphenomenon cannot be naturally selected for.

Replies from: prase
comment by prase · 2010-07-23T10:55:29.592Z

I'm not trying to explain why qualia occur, just seeking a sensible physical description of them.

I use description and explanation as synonyms most of the time. A black-box description is not much of a description; it's rather the lack of one. What information is contained in "qualia work like a black box", or in a little more fancy language, "qualia work due to still unknown physical mechanism"? These are not descriptions of qualia; the only non-vacuous interpretation of such sentences is "contemporary physics is not going to explain qualia", which may be true, but is still a statement about our current knowledge, not about qualia.

But why would evolution care about qualia?

Well, you are probably right in that, even if we are getting dangerously close to the philosophical zombies' realm.

Replies from: bogus
comment by bogus · 2010-07-23T11:23:29.629Z

What information is contained in "qualia work like a black box", or in a little more fancy language, "qualia work due to still unknown physical mechanism"?

Very little, but this is not a real description of qualia, just a sketch proposal which demonstrates a promising avenue of research. A complete description would state what physical system in the brain is responsible for maintaining complex, "black box" quantum states, and perhaps how that physical system interacts with known neural correlates of subjective experiences. Unfortunately, we're nowhere near that level yet.

even if we are getting dangerously close to the philosophical zombies' realm.

Dangerously close? Do you fear that P-zombies will infect you with an epiphenomenal virus and cause you to lose your subjective experience?

Replies from: prase
comment by prase · 2010-07-23T11:44:21.726Z

[J]ust a sketch proposal which demonstrates a promising avenue of research. A complete description would state what physical system in the brain is responsible for maintaining complex, "black box" quantum states [...]

What makes this avenue different from investigation of neuron configurations? New physical laws were never discovered by rejecting the old ones and declaring that they couldn't possibly work; all discoveries of new physics happened after conducting research within the old paradigm and running into anomalies. I mean, if there is something strangely quantum going on in brains, we will not miss it even if we use the conventional approach.

Or said differently, I still have no idea what light quantumness can shed on the question.

Do you fear that P-zombies will infect you with an epiphenomenal virus and cause you to lose your subjective experience?

I fear talking about things that aren't connected to observable facts. I fear that I might say a lot of grammatically correct sentences with no actual meaning.

Replies from: bogus
comment by bogus · 2010-07-23T12:27:53.471Z

What makes this avenue different from investigation of neuron configurations?

Not much. It's still neuroscience, but it takes reports of subjective experience a bit more seriously, and tries to explain them by using existing physics, rather than treating them as meaningless or as magical and unexplainable.

I fear talking about things that aren't connected to observable facts. I fear that I might say a lot of grammatically correct sentences with no actual meaning.

Look, it's not that complicated. I'm not the only person who talks about the Cartesian theater and claims that we can somehow feel brain algorithms from the inside. If subjective experience is not an observable fact to you, then your psychology is radically different from that of many other people.

Replies from: prase
comment by prase · 2010-07-23T12:45:00.822Z

I should have written "objective observable facts" or something like that. I can observe that I am not a P-zombie; however, the beauty of the whole P-zombie business is that such an observation is, sort of, insufficient. I would need to observe whether you are a P-zombie, and that I can't.

It is perhaps more economical and Occam-razorish for me to expect that other people are not P-zombies either, but even if they were zombies, I would have no way to realise that, and this renders the zombie question quite uninteresting.

comment by RobinZ · 2010-07-23T06:33:46.513Z

Small nitpick: I am not talking about what is foundational to the way our world works. I am only making the fairly trite observation that subjective experience/qualia is the only thing we can directly experience; it would be really, really strange if something so basic to us turned out to be dependent on complicated configurations of neurons and glial cells, as naive physicalists suggest.

Do you question the consensus that you see using your eyes? Because the eye is a blatantly complicated mechanism directly in the middle of one of the direct experiences of the world you stake your theory on.

Replies from: bogus
comment by bogus · 2010-07-23T10:26:54.265Z

I'm not questioning the fact that complicated mechanisms are involved in creating your subjective experience; I question the physical description of that subjective experience as an incredibly complicated configuration in the brain. If your qualia are at all real in some sense, they should correspond to something far simpler than that on Occam's Razor grounds. Alternately, you might just be a P-zombie. But then you'd have serious problems experiencing how your brain feels from the inside, although your brain would definitely be talking about its internal experiences.

Replies from: RobinZ
comment by RobinZ · 2010-07-23T11:25:06.638Z

I'm not questioning the fact that complicated mechanisms are involved in creating your subjective experience;

Why aren't you? You just said that "[qualia] should correspond to something far simpler than that". If a (say) visual quale is simple, then why does the human system need a complicated mechanism to capture large numbers of photons such that they form a coherent image on a surface coated with photosensitive neurons, which are wired so as to cause large-scale effects on other parts of the neural (and glial) system of the brain, starting with the visual cortex and spreading from there ... to cause something simple? Light was simple to start with! If you expect things to be simple at the Cartesian theater, the visual system moves the wrong way.

Replies from: bogus
comment by bogus · 2010-07-23T11:52:31.826Z · LW(p) · GW(p)

Light is simple, but evolved organisms care very little about the fundamental qualities of light. They care a lot about running efficient computations on various inputs, including the excitation of photosensitive neurons. This is probably why the Cartesian theater feels very much like computation on high-level inputs and outputs, rather than on objectively fundamental things such as wavelengths of light. And the computations which transform low-level data like the excitation of sensory neurons into high-level inputs are probably unconscious because they are qualitatively different from conscious computation.

Replies from: RobinZ
comment by RobinZ · 2010-07-23T15:40:35.465Z · LW(p) · GW(p)

I would expect optimization for efficiency to be something evolution does - but I am compelled to note that I mentioned "the Cartesian theater" as a reference to Daniel Dennett's Consciousness Explained, where he strenuously refuted the idea of the Cartesian theater. By Dennett's argument - and even when Consciousness Explained came out, he had a lot of research data to work from - the collocation of all sensory data in a single channel to run past some homunculus recording our conscious experience is unlikely. After all, there already is a data-processing entity right there to collect all the sensory data - that's the entire brain. So within the brain, it should not be surprising that different conscious experiences are saved to memory from different parts. Particularly since the brain is patently a parallel computer anyway.

Replies from: bogus
comment by bogus · 2010-07-23T18:37:00.693Z · LW(p) · GW(p)

Daniel Dennett's "refutation" of the Cartesian theater has been widely criticized. Basically, he relies on perceptual illusions such as discrete motion being perceived as continuous, arguing that there should be a fact of the matter as to whether "the motion in the Cartesian theater" is continuous or not. But phenomenology is far simpler (or more complicated) than that: the fact that we perceive the quale of continuous_motion does not imply that a homunculus somewhere is seeing the object in an intermediate position at each given moment in time. It is a strawman argument.

Replies from: RobinZ
comment by RobinZ · 2010-07-23T19:00:41.091Z · LW(p) · GW(p)

Before I respond: are we actually getting anywhere in this discussion? I have this sinking feeling that I'm asking the wrong questions.

comment by WrongBot · 2010-07-23T03:20:19.283Z · LW(p) · GW(p)

There is a reasonable argument to be made that "the way it feels from the inside" is just as fundamental as the basic physics of how the world works.

Well, what is it, then?

The problem is that most philosophers who care about phenomenology at all would assign at least some ontologically foundational status to subjective experience, simply because it is foundational enough to you and anyone else with subjective experience.

Ahhhh, I see now. Subjective experience must be ontologically foundational because it feels foundational, subjectively. This seems oddly... circular.

It does mean that Occam's razor should apply to "the way it feels from the inside", which tends to weigh against complex explanations like "configurations of neurons" and in favor of either exotic physics or a spooky superintelligence who can figure out how to run debugger sessions on our physical brains.

Configurations of neurons are not complex. They are complicated, but they can still be explained by the same physics as everything else in the world. You are proposing a more complex universe. Or possibly a god. They are equally implausible without supporting evidence.

Replies from: bogus
comment by bogus · 2010-07-23T03:50:17.975Z · LW(p) · GW(p)

Ahhhh, I see now. Subjective experience must be ontologically foundational because it feels foundational, subjectively. This seems oddly... circular.

Feel free to run garbage collection on that circularity. You'll find out what it feels like to subjectively vanish in a puff of logic.

You are proposing a more complex universe.

Not really, since both subjective experience and quantum mechanics are part of our universe already. Perhaps one could say that I'm proposing more complicated brains, but that adds little or nothing to the overall complexity budget, given what we know about quantum biology, biophysics, evolution, etc.

Replies from: JoshuaZ
comment by JoshuaZ · 2010-07-23T12:38:01.993Z · LW(p) · GW(p)

You are proposing a more complex universe.

Not really, since both subjective experience and quantum mechanics are part of our universe already.

No, you are proposing a more complicated universe. Quantum mechanical systems can be simulated on a classical computer given a source of randomness. The only caveat is that if certain comp sci conjectures are true, then it actually takes more time or more memory for a classical system to simulate these runs than a quantum system would. If the complexity hierarchy exhibits partial collapse, with say BQP being equal to P, then even this would in some sense not be true, and quantum computers would be just classical machines with a source of random bits. Now, most comp sci people don't believe that, but the thrust of this argument only requires that classical machines with randomness can simulate quantum machines given extra time and space. Since that is the case, asserting that quantum mechanics has any chance of causing things like qualia and consciousness would require that there are fundamental gaps in our understanding of quantum mechanics. It would also likely violate many forms of the Church-Turing thesis. So there would have to be basic failings in both our understanding of QM and theoretical comp sci for this sort of approach to even have a chance of working.
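To make the simulation claim concrete, here is a minimal sketch (my own illustration, not from this thread, assuming Python with numpy; the function names are made up for the example) of a classical machine with a random-number source simulating a tiny quantum circuit. The catch JoshuaZ mentions is visible in the code: the state vector has 2^n entries, so memory grows exponentially with the number of qubits.

    import random
    import numpy as np

    def apply_gate(state, gate, target, n_qubits):
        # Build the full 2^n x 2^n operator via Kronecker products:
        # the 2x2 gate acts on `target`, identity on every other qubit.
        op = np.array([[1.0]])
        for q in range(n_qubits):
            op = np.kron(op, gate if q == target else np.eye(2))
        return op @ state

    def measure_all(state, n_qubits):
        # Classical randomness stands in for quantum measurement (Born rule).
        probs = np.abs(state) ** 2
        outcome = random.choices(range(len(state)), weights=probs)[0]
        return format(outcome, "0{}b".format(n_qubits))

    n = 3                                          # memory cost: 2**n amplitudes
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
    state = np.zeros(2 ** n)
    state[0] = 1.0                                 # start in |000>
    for q in range(n):
        state = apply_gate(state, H, q, n)         # uniform superposition
    print(measure_all(state, n))                   # a uniformly random 3-bit string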

Replies from: bogus
comment by bogus · 2010-07-23T12:53:26.334Z · LW(p) · GW(p)

Quantum mechanical systems can be simulated on a classical computer given a source of randomness.

This implies that unconscious classical systems can simulate a conscious being. But such a simulation of consciousness would not involve the systems in our physical world which can actually be "felt from the inside". In this theory, qualia and consciousness are not caused by quantum mechanics; they are what some extremely complex quantum states feel like.

The only caveat is that if certain comp sci conjectures are true, then it actually takes more time or more memory

If quantum algorithms are at all useful, this is enough for evolution to favor quantum computation over classical.

Replies from: JoshuaZ
comment by JoshuaZ · 2010-07-23T13:05:08.593Z · LW(p) · GW(p)

This implies that unconscious classical systems can simulate a conscious being. But such a simulation of consciousness would not involve the systems in our physical world which can actually be "felt from the inside". Qualia and consciousness are not caused by quantum mechanics; they are what some extremely complex quantum states feel like.

At this point how is this claim any different than claiming that these are classical systems and that qualia and consciousness are what those algorithms feel like?

If quantum algorithms are at all useful, this is enough for evolution to favor quantum computation over classical.

That's actually the best argument I've heard for supposing that there's a quantum mechanical aspect to our processing. Thank you for bringing it to my attention. It does make a QM aspect more plausible. However, it is still a very weak argument, since a) evolution would only do this if it had an easy way of keeping things in coherence that didn't take up too many resources, and b) it seems unlikely that there's a substantive evolutionary advantage to any form of computational speedup for the processes we needed to do in the wild. I don't think, for example, that humans needed to factor large integers in our hunter-gatherer societies. This does lead to the idea of deliberately evolving beings that actually use quantum mechanics in their thought processes, by selecting for ones that are good at tasks that have speedups on a QM system.
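For a sense of scale (my own back-of-the-envelope numbers, not JoshuaZ's), the canonical quantum win on an unstructured task is Grover search, which finds one marked item among N with roughly (pi/4)*sqrt(N) queries, versus about N/2 expected queries classically. Whether that gap was ever worth the decoherence overhead in the wild is exactly the question.

    import math

    # Expected query counts for finding one marked item among N:
    # classical brute force vs. Grover's quantum search.
    for N in (10 ** 3, 10 ** 6, 10 ** 9):
        classical = N / 2
        quantum = (math.pi / 4) * math.sqrt(N)
        print("N={:>13,}  classical ~ {:>13,.0f}  quantum ~ {:>9,.0f}".format(
            N, classical, quantum))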

Replies from: bogus
comment by bogus · 2010-07-23T13:41:35.985Z · LW(p) · GW(p)

At this point how is this claim any different than claiming that these are classical systems and that qualia and consciousness are what those algorithms feel like?

Quantum systems have much nicer properties from this point of view. An internally entangled quantum state can be an ontologically basic entity while still possessing a rich internal structure, in a way that has no direct equivalents in classical physics.
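As a concrete toy case (my own sketch, not bogus's, assuming Python with numpy): the Bell state (|00> + |11>)/sqrt(2) is a single pure state, yet it cannot be factored into two independent single-qubit states; a Schmidt rank above 1 for its reshaped amplitude matrix certifies that.

    import numpy as np

    bell = np.array([1, 0, 0, 1]) / np.sqrt(2)   # (|00> + |11>)/sqrt(2)
    product = np.kron([1, 0], [0, 1])            # |0> tensor |1>, unentangled
    for name, psi in (("bell", bell), ("product", product)):
        # Schmidt rank 1 means the state factors into parts; rank 2 means entangled.
        rank = np.linalg.matrix_rank(psi.reshape(2, 2))
        print(name, "Schmidt rank:", rank)       # bell: 2, product: 1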

evolution would only do this if it had an easy way of keeping things in coherence that didn't take up too many resources

Models of quantum computation are quite variable in how resistant they are to decoherence. Topological quantum computing is much more resistant to errors than models based on ordinary quantum particles.

it seems unlikely that there's a substantive evolutionary advantage to any form of computational speedup for the processes we needed to do in the wild.

Why wouldn't there be? Intelligent processing clearly confers some evolutionary advantage, and there have been many proposals for artificial general intelligence (AGI) using quantum computation.

Replies from: JoshuaZ
comment by JoshuaZ · 2010-07-23T13:54:22.435Z · LW(p) · GW(p)

Quantum systems have much nicer properties from this point of view. An internally entangled quantum state can be an ontologically basic entity while still possessing a rich internal structure, in a way that has no direct equivalents in classical physics.

That makes some sense, although I don't see why a classical simulation of the same wouldn't feel identical.

Models of quantum computation are quite variable in how resistant they are to decoherence. Topological quantum computing is much more resistant to errors than models based on ordinary quantum particles.

This may be true in the same sense that sending a probe to Betelgeuse is easier than sending a probe to the Andromeda galaxy. You are still talking about things that are fantastically difficult to keep in coherence. We're still talking about systems kept below at most 5 kelvin or so (being generous). It is noteworthy that so far we've had far more success implementing standard quantum computers than topological quantum computers.

Why wouldn't there be? Intelligent processing clearly confers some evolutionary advantage, and there have been many proposals for artificial general intelligence (AGI) using quantum computation.

There's no evidence that any process we associate with "intelligence" is sped up or made more efficient by quantum computation. I'd also be very interested in seeing citations for the claim that there are "many proposals for artificial general intelligence (AGI) using quantum computation."

comment by prase · 2010-07-23T10:26:14.411Z · LW(p) · GW(p)

What's the set of configurations of quarks which feels from the inside like thinking or like the color red?

Do you demand the exact wave function?

How can you be so confident that no magic is involved in this "how it feels from the inside" business, while casually talking about configurations of neurons?

I was never very comfortable with the "consciousness is how thinking feels from inside" explanation, since it hardly explains anything. However, the alternatives explain even less. Unless a hypothesis predicts something testable, it is useless. The position that no non-standard physics is involved is a kind of default, held whenever there are no clear reasons to think otherwise; that's all.

comment by sharpneli · 2010-07-27T12:30:31.005Z · LW(p) · GW(p)

It is why I am hesitant to argue that there are no quantum effects of any sort in the brain (although the quantum effects people have suggested so far haven't been convincing).

Considering that quantum physics is no more than Turing-complete (unless it's nonlinear, etc.), any quantum effects could be reproduced with classical computation. Therefore the assumption that cognition must involve quantum effects implicitly assumes that quantum physics is nonlinear, or meets one of the various other requirements.

In this light, the first questions that ought to be asked of persons claiming quantum effects in the brain are: what computation [performed in the brain] requires basically infinite loops completed in finite time, and on the basis of what physics experiment do they believe that quantum effects are more than Turing-complete?

Replies from: whpearson
comment by whpearson · 2010-07-27T12:55:24.382Z · LW(p) · GW(p)

I think the brain is probably ultimately computable by a classical computer, and yet quantum computing in the brain might be significant. Here are a couple of the potential problems we'll have if the brain relies on quantum effects.

1) Difficulty in replacing bits of the brain functionally. If consciousness is some strange transitory gestalt quantum field, then you would need to make a brain prosthesis that had the same electromagnetic properties as a neuron, which might be quite hard.

2) A harder time simulating brains/doing AI: you might have to push back the date you expect Whole Brain Emulations to become available (depending upon when we expect quantum computers to be useful).

Replies from: JoshuaZ, sharpneli
comment by JoshuaZ · 2010-07-27T13:14:14.810Z · LW(p) · GW(p)

I'm having trouble parsing your above comment. Are the points labeled 1 and 2 arguments for the presence of quantum computing in the brain or consequences of that belief?

Replies from: whpearson
comment by whpearson · 2010-07-27T13:47:18.976Z · LW(p) · GW(p)

Sorry, consequences. I'll edit for clarity.

comment by sharpneli · 2010-07-27T13:12:05.707Z · LW(p) · GW(p)

Quantum computing in the brain might be happening, but if we want to understand consciousness it is irrelevant (unless consciousness is noncomputable, in which case it becomes a claim about quantum physics yet again). It's as relevant as details about transistors or vacuum tubes are for understanding sorting algorithms.

Naturally, when considering brain prostheses or simulating a brain, the actual method by which the brain computes is relevant.

Replies from: whpearson
comment by whpearson · 2010-07-27T14:12:35.083Z · LW(p) · GW(p)

Who said that this conversation was about understanding consciousness?

Personally, I think that topic is a tarpit, which I prefer to ignore until we know how the brain works.

Replies from: sharpneli
comment by sharpneli · 2010-07-27T14:23:47.632Z · LW(p) · GW(p)

I merely wished to clarify the difference between consciousness and how it is implemented in the brain. I had no intention of implying that it was part of the discussion. In retrospect, the clarification was not required.

It's just way too common for the two issues to get mixed up, as can be seen in the various threads.