Might whole brain emulation require quantum-level emulation?

post by lukeprog · 2011-04-14T06:12:37.620Z · LW · GW · Legacy · 15 comments

Most experts seem to think that whole brain emulation will not require emulation down to the quantum level, but perhaps at the level of atoms or even molecules, either of which is far more computationally tractable than quantum-level brain emulation. Those who think quantum-level emulation will be required for whole brain emulation are often considered to be cranks.

However, it is worth noting that a few biological processes - including photosynthesis - have recently been found [excellent video lecture] to depend on the particularities of quantum phenomena. This lends no support to Penrose's views on quantum phenomena and the brain, but it may not be so crankish after all to suppose that whole brain emulation might require emulation down to the quantum level.

Thoughts?

 

Links:

On photosynthesis, see Fleming's papers on the topic.

Here is the quantum bird navigation paper.

Here is some coverage on Turin's controversial theory of quantum smell.

15 comments

Comments sorted by top scores.

comment by Manfred · 2011-04-14T06:29:14.384Z · LW(p) · GW(p)

Well, to one extent of course it will, since ion pumps and neurotransmitter sensors run entirely on quantum mechanics, just like chlorophyll. The question for a simulation would seem to be whether we can model each and every instance of these things separately, thus neatly cordoning off the quantum mechanics. Simulating a tree, for example, can be done quite well above the quantum level, because even though chlorophyll's properties depend on QM, once you know the properties you can model it without further reference to quantum mechanics.
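A minimal toy sketch of this "cordon off the QM" pattern, with all functions and numbers invented for illustration (nothing here models real chlorophyll or real neurons): a lower-level calculation supplies an effective parameter once, and everything above it runs as ordinary classical dynamics.

```python
# Toy illustration of multiscale "cordoning off": an effective parameter is
# obtained once from a lower-level calculation (here just a hard-coded
# stand-in value), and the higher-level model never references QM again.
# All numbers are made up for illustration.

import math


def effective_transfer_rate() -> float:
    """Stand-in for a one-off quantum-level calculation.

    In a real multiscale pipeline this value might come from a
    quantum-chemistry calculation; here it is simply a fixed number
    (per picosecond).
    """
    return 0.8


def classical_decay(rate_per_ps: float, t_ps: float, initial: float = 1.0) -> float:
    """Classical exponential relaxation using the precomputed rate."""
    return initial * math.exp(-rate_per_ps * t_ps)


if __name__ == "__main__":
    k = effective_transfer_rate()  # computed once, "below" the classical model
    for t in (0.0, 0.5, 1.0, 2.0):
        print(f"t = {t:>3} ps  population = {classical_decay(k, t):.3f}")
```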

Replies from: lukeprog, paulfchristiano, jsalvatier, Davorak
comment by lukeprog · 2011-10-08T07:49:11.562Z · LW(p) · GW(p)

Are you saying roughly the same thing as Tegmark (2000)? (Louie pointed me to it.) Have you read the reply by Hagan et al. (2002)? A brief 2011 review is here. Unfortunately, my own training offers me limited ability to judge these matters.

For others, useful links on quantum computation:

Quantum Computing: A General Introduction (2011)
A Quantum Information Science and Technology Roadmap (2004)
List of quantum algorithms that offer speed-up over classical algorithms (2011)
Recent progress in quantum algorithms (2010)
Practical Quantum Computers Creep Closer to Reality (2011)

comment by paulfchristiano · 2011-04-14T13:57:55.101Z · LW(p) · GW(p)

I have never seen a coherent description of a situation in which we wouldn't be able to model each instance separately; naively, it would require exceptionally careful engineering (to maintain quantum superpositions over large or spatially separated objects), which we have never witnessed in nature.

All of the examples I have seen support the obvious assertion "to one extent of course it will." Is this also your impression?

Replies from: Manfred
comment by Manfred · 2011-04-16T11:42:04.409Z · LW(p) · GW(p)

Yeah, pretty much. It might be possible to have very brief entanglement between different neurons, but because the brain is so messy and not-a-microwave-transmitter there's nothing to actually act on that entanglement, and forget having anything that looks like our quantum computing.

comment by jsalvatier · 2011-04-14T06:38:01.150Z · LW(p) · GW(p)

exactly.

comment by Davorak · 2011-04-14T17:07:53.323Z · LW(p) · GW(p)

I can imagine situations in which it would be impractical not to use a quantum computer for at least parts of the emulation. Shor's algorithm is an example of how quantum computers can fundamentally be more efficient than classical computers. If our brain uses quantum algorithms with fundamental advantages over the classical analog, then it could be impractical to use classical computers alone for emulation, due to the exponential increase in computation time required.
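For reference, a rough sketch of the asymptotic gap Shor's algorithm illustrates, using the standard textbook bounds for factoring an integer N (nothing here is specific to the brain):

```latex
% Best known classical factoring (general number field sieve), heuristic running time:
T_{\mathrm{GNFS}}(N) = \exp\!\Big(\big(\sqrt[3]{64/9} + o(1)\big)\,(\ln N)^{1/3}(\ln\ln N)^{2/3}\Big)

% Shor's quantum factoring algorithm, gate count (with fast multiplication):
T_{\mathrm{Shor}}(N) = O\!\big((\log N)^{2}(\log\log N)(\log\log\log N)\big)
```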

I am unable to assign a probability with confidence for this case, so I unconfidently assign a low probability to our brain using quantum algorithms that make it impractical to emulate without quantum computers. I assign a high probability to finding organisms that take advantage of quantum mechanics to a greater degree than birds do for navigation and plants do for photosynthesis, because the ability to leverage quantum mechanics has evolved at least twice for very different purposes.

Replies from: SimonF
comment by Simon Fischer (SimonF) · 2011-04-18T14:01:47.556Z · LW(p) · GW(p)

The question is not whether "quantum computers can fundamentally be more efficient than classical computers", but whether quantum mechanical entanglement can be used by the brain, which seems to be improbable. I asked a professor of biophysics about this issue; he knew about the result concerning photosynthesis and was pretty sure that QM does not matter for simulating the brain.

Replies from: Davorak
comment by Davorak · 2011-04-18T17:30:20.162Z · LW(p) · GW(p)

I was trying to express in my post that the extra efficiency gained from a switch to quantum computers only matters when it makes the simulation practical rather than impractical with current resources. That transition would only happen if the brain used quantum algorithms with a fundamental advantage over classical computing, which I assigned a low probability to, meaning that a QM computer would probably not be necessary.

It sounds like we agree in conclusion but are failing to communicate some details or disagree on some details.

comment by JoshuaZ · 2011-04-14T14:32:27.203Z · LW(p) · GW(p)

Our models of individual neurons behave a lot like actual neurons do. Similarly, fairly crude models of the laminar pattern seem to mimic it very well. It is possible that there are other subtle effects that we aren't noticing, but that seems difficult.

On the other hand, evolution is very good at finding clever tricks, and we know that there has been a fair bit of evolution concerning our brains. So it isn't at all implausible that evolution at some point found a clever way to use quantum effects. Moreover, given that we know there are genes active in the brain that are slightly different in the great apes from the versions in other mammals, and still others that are different in humans from the other great apes, one could hypothesize, not completely implausibly, that this is part of the difference between humans and other species.

However, there are two issues separate from QM effects that could make emulation more difficult: first, there's growing evidence that not just neurons but also glial cells matter to cognition. Second, there's some reason to think that neurons can interact with nearby neurons through their electromagnetic fields. Either one of these issues could drastically increase the amount of computation needed to run an emulation.

comment by Nisan · 2011-04-21T15:11:14.769Z · LW(p) · GW(p)

Upvoted for the references to bird navigation and quantum smell. Those will probably be the most interesting things I'll learn all day.

comment by Armok_GoB · 2011-04-14T12:05:42.061Z · LW(p) · GW(p)

Voted up because I think many people might vote it down incorrectly due to not getting the "joke": that it might indeed mean you need to simulate quantum events, but without this being an argument against the practicality of EMs. Sort of; I'm not very good at communicating this kind of subtle artsy concept with words.

comment by XiXiDu · 2011-04-15T16:17:20.539Z · LW(p) · GW(p)

The problem of emulating human "minds" might be much more difficult than just emulating the human brain. Here are three quotes that will highlight why this might be the case:

What we call "mind" is really embodied. There is no true separation of mind and body. These are not two independent entities that somehow come together and couple. The word "mental" picks out those bodily capacities and performances that constitute our awareness and determine our creative and constructive responses to the situation we encounter. Mind isn't some mysterious abstract entity that we bring to bear on our experience. Rather, mind is part of the very structure and fabric of our interactions with our world.

Philosophy in the Flesh, by George Lakoff and Mark Johnson

They say the division between mind and environment is less rigid than previously thought; the mind uses information within the environment as an extension of itself.

While a person can learn a route through a maze and then negotiate the maze by memory, a person would appear equally smart to an outsider if they simply followed signposts in the maze to reach the exit. "A smart person, like the droplets, is often smart due to canny combinations of internal and external structure," says Clark.

What a maze-solving oil drop tells us of intelligence (Original)

It’s widely thought that human language evolved in universally similar ways, following trajectories common across place and culture, and possibly reflecting common linguistic structures in our brains. But a massive, millennium-spanning analysis of humanity’s major language families suggests otherwise.

Instead, language seems to have evolved along varied, complicated paths, guided less by neurological settings than cultural circumstance. If our minds do shape the evolution of language, it’s likely at levels deeper and more nuanced than many researchers anticipated.

“It’s terribly important to understand human cognition, and how the human mind is put together,” said Michael Dunn, an evolutionary linguist at Germany’s Max Planck Institute and co-author of the new study, published April 14 in Nature. The findings “do not support simple ideas of the mind as a computer, with a language processor plugged in. They support much-more complex ideas of how language arises.”

Evolution of Language Takes Unexpected Turn (See also, Is Grammar More Cultural Than Universal? Study Challenges Chomsky’s Theory)

Even more: Embodied cognition


I don't know what to make of this, as I haven't done any research into it, but I thought it should be accounted for when one wants to talk about the emulation of "minds". A lot of what makes us human and intelligent, and what shapes our languages, values and goals, seems to be a complex interrelationship between our brain, body, culture and environment.

Replies from: orthonormal
comment by orthonormal · 2011-04-19T15:51:13.127Z · LW(p) · GW(p)

Most of the quotes above (at least, the ones that make sense) are talking about the way that intelligence grows in the first place, not about what would happen if you changed the context for a grown adult brain. Since a person paralyzed in an accident or stroke can nevertheless keep their mental faculties, it seems that changing the connection between the brain and its body/environment need not destroy the intellect that's already formed.

Also, it would be pretty reasonable to simulate some kind of body and environment (in less detail than one simulates the brain) while you're at it. Would that address your query?

Replies from: XiXiDu
comment by XiXiDu · 2011-04-19T17:40:47.964Z · LW(p) · GW(p)

Also, it would be pretty reasonable to simulate some kind of body and environment (in less detail than one simulates the brain) while you're at it. Would that address your query?

Whole brain emulation will probably work regardless of the environment or body, as long as you use a "grown up" mind. What I thought needed to be addressed is the potential problem with emulating empty mind templates without a rich environment or bodily sensations while still expecting them to exhibit "general" intelligence, i.e. solve problems in the physical and social universe.

The same might be true for seed AI. It will be able to use its given capabilities but needs some sort of fuel to solve "real life" problems like social engineering.

An example would be a boxed seed AI that is going FOOM. Either the ability to trick people into letting it out of the box is given or it needs to be acquired. How is it going to acquire it?

If a seed AI is closer to AIXI, i.e. intelligence in its most abstract form, it might need to be bodily embedded into the environment it is supposed to master. Consequently an AI that is capable of taking over the world by using an Internet connection will require a lot more hard-coded, concrete "intelligence" or a lot of time.

I just don't see how an abstract AGI could possibly solve something like social engineering without a lot of time or the hard coded ability to do so.

Just imagine you emulated a grown up human mind and it wanted to become a pick up artist: how would it do that with an Internet connection? It would need some sort of avatar at least, and then wait for the environment to provide a lot of feedback.

So even if we're talking about the emulation of a grown up mind it will be really hard to acquire some capabilities. Then how is the emulation of a human toddler going to acquire those skills? Even worse, how is some sort of abstract AGI going to do it that misses all of the hard coded capabilities of a human toddler?

There seem to be some arguments in favor of embodied cognition...

comment by lsparrish · 2011-04-14T19:34:35.955Z · LW(p) · GW(p)

I'm not sure I should care about whole-brain emulation itself, so much as whole-brain scanning and connectome mapping. Whole-brain emulation is one of two possible paths from that to immortality (which is the part I care about), the other being to print/assemble a new biological brain with the same set of neural connections.

To me, the brain-printing route seems to depend on fewer unresolved questions about the universe than the brain-emulation route, e.g. how much computing power can be packed in a reasonable amount of space and whether you need to simulate advanced biophysics in detail to accurately simulate the human mind.