Who thinks quantum computing will be necessary for AI?

post by ChrisHallquist · 2013-05-28T22:59:57.039Z · LW · GW · Legacy · 101 comments

While writing my article "Could Robots Take All Our Jobs?: A Philosophical Perspective" I came across a lot of people who claim (roughly) that human intelligence isn't Turing computable. At one point this led me to tweet something to the effect of, "where are the sophisticated AI critics who claim the problem of AI is NP-complete?" But that was just me being whimsical; I wasn't entirely serious.

A couple of times, though, I've heard people suggest that maybe we will need quantum computing to do human-level AI, though so far I've never heard this from an academic, only from interested amateurs (though ones with some real computing knowledge). Who else here has encountered this? Does anyone know of any academics who adopt this point of view? Answers to the latter question especially could be valuable for doing article version 2.0.

Edit: This very brief query may have given the impression that I'm more sympathetic to the "AI requires QC" idea than I actually am; see my response to gwern below.

101 comments

Comments sorted by top scores.

comment by Viliam_Bur · 2013-05-29T15:21:44.935Z · LW(p) · GW(p)

To me it seems straightforward: Intelligence is magical. Classical computers are not magical. Quantum computing is magical. Therefore we need quantum computing for AI.

However, if after a few years quantum computing becomes non-magical, it will become obvious that we need something else.

Replies from: ikrase
comment by ikrase · 2013-05-31T08:45:12.122Z · LW(p) · GW(p)

Do they play Mass Effect? It's possible that they picked it up from sci-fi in which A) it's required or B) brains are considered quantum.

Replies from: Osiris
comment by Osiris · 2013-05-31T15:58:11.834Z · LW(p) · GW(p)

I am reminded of Asimov's "positronic brain" and how he came up with it. Perhaps the new goal of research in artificial intelligence should be coming up with new magical terms and explaining as little as possible. It could earn enough money and public interest to create an artificial person...

The forms of intelligence I am familiar with (really only one kind, from a materials point of view) are not enough to discuss what is truly necessary for successful AI.

comment by gwern · 2013-05-28T23:19:03.310Z · LW(p) · GW(p)

Why would QC be relevant? What quantum effects does the brain exploit? Or what classical algorithms which are key to AI tasks would benefit so enormously from running on a genuine quantum computer (as opposed to a quantum or quantum-inspired algorithm running on a classical computer) that they would make the difference between AI being possible and impossible?

Replies from: ChrisHallquist
comment by ChrisHallquist · 2013-05-29T00:59:01.935Z · LW(p) · GW(p)
  1. No reason I know of.
  2. None, in my opinion (and, I think, in the opinion of most neuroscientists).
  3. None that I know of.

The thought is not that QC is actually likely to be necessary for AI, just that, with all the people saying AI is impossible (or saying things that make it sound like they think AI is impossible, without being quite so straightforward about it), it would be interesting to find people who think AI is *just hard enough* to require QC.

My own view, though, is that AI is neither impossible nor would require anything like QC.

(Edit: if I had to make a case that AI is likely to require QC, I might focus on brain emulation, citing the fact that quantum chemistry models increase exponentially in their computational demands as the number of atoms increases.

In reality, I think we'd likely be able to find acceptable approximations for doing brain emulation, but maybe someone could take this kind of argument and strengthen it. At least, it would be somewhat less surprising to me than if the brain turned out to be a quantum computer in a stronger sense.)
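
To make that scaling concrete: storing the exact quantum state of N interacting two-level systems takes 2^N complex amplitudes, which is the usual back-of-envelope reason quantum chemistry blows up. A tiny illustrative sketch (the site counts are arbitrary):

```python
# Memory needed to store the exact quantum state of N two-level systems:
# 2**N complex amplitudes at 16 bytes each. The site counts are arbitrary.
for n in (10, 30, 50, 80):
    amplitudes = 2 ** n
    gigabytes = amplitudes * 16 / 1e9
    print(f"{n:3d} sites -> {gigabytes:.3g} GB")
# 10 sites fit in kilobytes; 50 sites already need ~18 million GB.
```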

Replies from: jsteinhardt
comment by jsteinhardt · 2013-05-29T17:13:25.141Z · LW(p) · GW(p)

This post made me realize the following fun fact: if AI were in BQP but not in BPP, then that would provide non-negligible evidence for anthropics being valid.

Replies from: ESRogs
comment by ESRogs · 2013-05-29T19:40:17.373Z · LW(p) · GW(p)

Could you flesh that out a bit? Is the idea that it's just one more case where a feature of our universe turns out to be necessary for consciousness?

Replies from: jsteinhardt
comment by jsteinhardt · 2013-05-29T23:38:57.064Z · LW(p) · GW(p)

Yes, and a pretty weird feature at that (being in BQP but not P is pretty odd unless BQP was designed to contain the problem in the first place).

Replies from: ESRogs
comment by ESRogs · 2013-05-30T01:11:57.966Z · LW(p) · GW(p)

Gotcha, thanks.

comment by [deleted] · 2013-05-29T14:40:26.058Z · LW(p) · GW(p)

No serious neurologists actually consider quantum effects inside microtubules, or arrangements of phosphorylation on microtubules, or whatever, important for neuron function. The people who do are all either physicists who don't understand the biology or computer scientists who don't understand the biology. Nothing happens in neural activity or long-term potentiation or other processes that cannot be accounted for by chemical processes, even if we don't understand exactly the how of some of them. The open questions are mostly exactly how neurons are able to change their excitability and structure over time and how they manage to communicate in large-scale systems.

Replies from: shminux
comment by Shmi (shminux) · 2013-05-29T15:08:24.634Z · LW(p) · GW(p)

No serious neurologists actually consider quantum effects inside microtubules, or arrangements of phosphorylation on microtubules, or whatever, important for neuron function.

Actually, protein phosphorylation (like many other biochemical and biophysical processes, such as ion channel gating) is based on quantum tunneling. It may well be irrelevant, as the timing of the process can probably be simulated well enough with pseudo-random numbers, but on the off-chance that "true randomness" is required, a purely classical approach might be inadequate.
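
If pseudo-random numbers do suffice, the simulation side is cheap. A minimal sketch of that point, assuming (purely for illustration) a memoryless channel-opening process with a made-up rate constant; a seeded PRNG reproduces the right exponential waiting-time statistics:

```python
import random

RATE_HZ = 500.0   # made-up opening rate for a hypothetical ion channel

def next_opening(rng: random.Random) -> float:
    """Waiting time (in seconds) until the next channel opening. A
    memoryless trigger -- tunneling or otherwise -- gives exponential
    waiting times, which a seeded PRNG reproduces statistically."""
    return rng.expovariate(RATE_HZ)

rng = random.Random(42)   # deterministic seed: pseudo-random, not "true"
times = [next_opening(rng) for _ in range(100_000)]
print(sum(times) / len(times))   # ~1/500 s, matching the process statistics
```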

Replies from: Eliezer_Yudkowsky, jsteinhardt, None, Luke_A_Somers, DanielLC
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-05-29T18:13:24.525Z · LW(p) · GW(p)

Quantum tunneling != quantum computing.

Quantum 'randomness' != quantum computing. No one has ever introduced, even in principle, a cognitive algorithm that requires quantum 'randomness' as opposed to thermal noise.

comment by jsteinhardt · 2013-05-29T17:27:07.418Z · LW(p) · GW(p)

How could "true randomness" be required, given that it's computationally indistinguishable from pseudorandomness?

Replies from: shinoteki
comment by shinoteki · 2013-05-29T18:11:51.384Z · LW(p) · GW(p)

If there is a feasible pseudorandom generator that is computationally indistinguishable from true randomness, then randomness is indeed not necessary. However, the existence of such a generator is still an open problem.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-05-29T18:22:30.872Z · LW(p) · GW(p)

What? No it's not. There are no pseudo-random generators truly ultimately indistinguishable in principle from the 'branch both ways' operation in quantum mechanics; the computations all have much lower Kolmogorov complexity after running for a while. But there are plenty of cryptographically strong pseudo-random number generators which could serve any possible role a cognitive algorithm could demand for a source of bits guaranteed not to be expectedly correlated with other bits playing some functional role, especially if we add entropy from a classical thermal noise source, oracular knowledge of which would violate the second law of thermodynamics. This is not an open problem. There is nothing left to be confused about.

Replies from: ciphergoth, jsteinhardt
comment by Paul Crowley (ciphergoth) · 2013-05-29T19:27:36.791Z · LW(p) · GW(p)

A proof that any generator was indistinguishable from random, given the usual definitions, would basically be a proof that P != NP, so it is an open problem. However we're pretty confident in practice that we have strong generators.

Replies from: paulfchristiano, Eliezer_Yudkowsky, ThisSpaceAvailable
comment by paulfchristiano · 2013-05-29T22:29:39.843Z · LW(p) · GW(p)

As a pedantic note, if you want to derandomize algorithms it is necessary (and sufficient) to assume P/poly != E, i.e. polynomial size circuits cannot compute all functions computed by exponential time computations. This is much weaker than P != NP, and is consistent with e.g. P = PSPACE. You don't have to be able to fool an adversary, to fool yourself.

This is sometimes sloganized as "randomness never helps unless non-uniformity always helps," since it is obvious that P << E and generally believed that P/poly is about as strong as P for "uniform" problems. It would be a big shock if P/poly were so much bigger than P.

But of course, in the worlds where you can't derandomize algorithms in the complexity-theoretic sense, you can still look up at the sky and use the whole universe to get your randomness. What this means is that you can exploit much of the stuff going on in the universe to do useful computation without lifting a finger, and since the universe is astronomically larger than the problems we care about, this is normally good enough. General derandomization is extremely interesting and important as a conceptual framework in complexity theory, but useless for actually computing things.

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2013-05-30T05:59:45.776Z · LW(p) · GW(p)

Are you referring to this result? Doesn't seem to be identical to what you said, but very close.

Replies from: paulfchristiano
comment by paulfchristiano · 2013-05-30T10:21:35.984Z · LW(p) · GW(p)

Yeah, I was using "derandomize" slightly sloppily (to refer to a 2^(n^epsilon) slowdown rather than a poly slowdown). The result you cite is one of the main ones in this direction, but there are others (I think you can find most of them by googling "hardness vs. randomness").

If poly size circuits can't compute E, we can derandomize poly time algorithms with 2^(m^c) complexity for any c > 0, and if 2^(m^c) size circuits can't compute E for sufficiently small c, we can derandomize in poly time. Naturally there are other intermediate tradeoffs, but you can't quite get BPP = P from P/poly < E.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-05-29T19:32:24.138Z · LW(p) · GW(p)

Can you refer me to somewhere to read more about the "usual definitions" that would make this true? If I know the Turing machine, I can compare the output to that Turing machine and be pretty sure it's not random after running the generator for a while. Or if the definition is just lack of expected correlation with bits playing a functional role, then that's easy to get. What's intermediate such that 'indistinguishable' randomness means P!=NP?

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2013-05-29T19:54:24.686Z · LW(p) · GW(p)

You don't sound like you're now much less confident you're right about this, and I'm a bit surprised by that!

I got the ladder down so I could get down my copy of Goldreich's "Foundations of Cryptography", but I don't quite feel like typing chunks out from it. Briefly, a pseudorandom generator is an algorithm that turns a small secret into a larger number of pseudorandom bits. It's secure if every distinguisher's advantage shrinks faster than the reciprocal of any polynomial function. Pseudorandom generators exist iff one-way functions exist, and if one-way functions exist then P != NP.

If you're not familiar with PRGs, distinguishers, advantage, negligible functions etc I'd be happy to Skype you and give you a brief intro to these things.

Replies from: Wei_Dai, Eliezer_Yudkowsky
comment by Wei Dai (Wei_Dai) · 2013-05-29T21:00:55.347Z · LW(p) · GW(p)

If you're not familiar with PRGs, distinguishers, advantage, negligible functions etc I'd be happy to Skype you and give you a brief intro to these things.

There are also intros available for free on Oded Goldreich's FoC website.

Here's my simplified intuitive explanation for people not interested in learning about these technical concepts. (Although of course they should!) Suppose you're playing rock-paper-scissors with someone, using a pseudorandom number generator, and P=NP. Then your opponent could do the equivalent of trying all possible seeds to see which one would reproduce your pattern of play, and then use that to beat you every time.

In non-adversarial situations (which may be what Eliezer had in mind) you'd have to be pretty unlucky if your cognitive algorithm or environment happens to serve as a distinguisher for your pseudorandom generator, even if it's technically distinguishable.
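
Here's a toy sketch of the seed-search attack described above, with a deliberately tiny (16-bit) seed space so the exhaustive search is feasible; the move-generation scheme is made up for illustration:

```python
import random

SEED_SPACE = 2 ** 16   # hypothetical, deliberately tiny seed
MOVES = ["rock", "paper", "scissors"]
BEATS = {"rock": "paper", "paper": "scissors", "scissors": "rock"}

def plays(seed: int, n: int) -> list:
    """The victim's first n moves, fully determined by the seed."""
    rng = random.Random(seed)
    return [rng.choice(MOVES) for _ in range(n)]

secret_seed = random.randrange(SEED_SPACE)
observed = plays(secret_seed, 20)   # the opponent watches 20 rounds

# "Trying all possible seeds to see which one would reproduce your
# pattern of play" -- feasible here only because the seed is tiny.
candidates = [s for s in range(SEED_SPACE) if plays(s, 20) == observed]

# Any surviving candidate predicts move 21, which the opponent then beats.
prediction = plays(candidates[0], 21)[-1]
actual = plays(secret_seed, 21)[-1]
print(prediction == actual, "counter:", BEATS[actual])
```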

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-05-29T21:44:46.358Z · LW(p) · GW(p)

Okay, makes sense if you define "distinguishable from random" as "decodable with an amount of computation polynomial in the randseed size".

EDIT: Confidence is about standard cryptographically strong randomness plus thermal noise being sufficient to prevent expected correlation with bits playing a functional role, which is all that could possibly be relevant to cognition.

Replies from: ciphergoth, JoshuaZ
comment by Paul Crowley (ciphergoth) · 2013-05-30T06:02:56.335Z · LW(p) · GW(p)

Decoding isn't the challenge; the challenge is to guess whether you're seeing the output of the PRG or truly random output. Your "advantage" is

Adv_PRG[Distinguisher] = P(Distinguisher[PRG[seed]] = "PRG") - P(Distinguisher[True randomness] = "PRG")
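
A toy sketch of estimating that advantage empirically, using a deliberately weak PRG (a 16-bit seed expanded by Python's Mersenne Twister, which is not cryptographic) so the distinguisher can just precompute every possible output:

```python
import random, secrets

SEED_BITS, N = 16, 64

def toy_prg(seed: int) -> int:
    """Expand a 16-bit seed into N bits with Python's Mersenne Twister --
    deliberately NOT a cryptographic generator."""
    return random.Random(seed).getrandbits(N)

# The distinguisher does the brute-force work up front: tabulate every
# possible PRG output (only 2**16 of the 2**64 possible strings).
PRG_OUTPUTS = {toy_prg(s) for s in range(2 ** SEED_BITS)}

def distinguisher(sample: int) -> str:
    return "PRG" if sample in PRG_OUTPUTS else "random"

def advantage(trials: int = 10_000) -> float:
    p_prg = sum(distinguisher(toy_prg(secrets.randbelow(2 ** SEED_BITS))) == "PRG"
                for _ in range(trials)) / trials
    p_rnd = sum(distinguisher(secrets.randbits(N)) == "PRG"
                for _ in range(trials)) / trials
    return p_prg - p_rnd

print(advantage())   # ~1.0: this generator is completely broken
```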

comment by JoshuaZ · 2013-05-30T01:29:57.622Z · LW(p) · GW(p)

Note that this is standard notation when one discusses pseudorandom generators. Hence Ciphergoth's comment about "the usual definitions."

comment by ThisSpaceAvailable · 2013-05-30T03:08:08.592Z · LW(p) · GW(p)

For it to be an open problem, there would have to be no proof either way. Since Eliezer is claiming (or at least implying) that there is a proof that no PRNG is indistinguishable, arguing that there is no proof that some PRNG is indistinguishable doesn't show that it is an open problem.

Replies from: Luke_A_Somers
comment by Luke_A_Somers · 2013-05-31T12:00:16.053Z · LW(p) · GW(p)

Quite. They seem to be agreeing that any PRNG can in principle be distinguished, and then Eliezer goes on to say that a mind is a place that will not be able to make that distinction - which ciphergoth didn't begin to address.

comment by jsteinhardt · 2013-05-29T23:43:31.614Z · LW(p) · GW(p)

You missed the key word "computationally". Of course a pseudorandom generator is a mathematically distinct object, but not in a way that the universe is capable of knowing about (at least assuming that there are cryptographic pseudorandom generators that are secure against quantum adversaries, which I think most people believe).

comment by [deleted] · 2013-05-30T07:14:14.446Z · LW(p) · GW(p)

Holy crap that comment (posted very quickly from a tablet hence the typos) produced a long comment thread.

Yes, quantum tunneling goes on in a lot of biological processes, because it happens in chemistry. There is nothing special about neurology there. I was mostly referring to writings I've seen where someone proposed that humans must be doing hypercomputation because we don't blow up at the Gödel incompleteness theorem (which made a cognitive scientist in my circle laugh, because we just don't actually deal with the logic), and another, actually posted here, proposing that digital information was somehow being stored in the pattern of phosphorylation of subunits of microtubules (which made multiple cell biologists laugh, because those structures are so often erased and replaced, and phosphorylation is ridiculously dynamic, moderated by the randomness of enzymes hitting substrates via diffusion, and not retained on any one molecule for long). In the end it mostly serves to just modify the electrical properties of the membranes and their ability to chemically affect and be affected by each other.

As for 'true randomness', we don't run on algorithms, we run on messy noisy networks. If we must frame the way cells work in terms of simulation of gross behavior, it's a whole lot more like noisy differential equations than discrete logic. I fail to see any circumstance in which you need quantum effects to make those behave as they usually do.

On top of that, every single cell is a soup of trillions of molecules bouncing off each other at dozens of meters per second like lottery balls. If that's not close enough to 'true randomness', such that you somehow need quantum effects like the decay of atoms, what is?

comment by Luke_A_Somers · 2013-05-29T15:57:49.106Z · LW(p) · GW(p)

Even if the Mersenne twister isn't good enough, you could still get a quantum noise generator hooked up. And that's basically a classical device, certainly doesn't need any coherence.
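
For illustration, here's the software side of that distinction in Python: the Mersenne Twister is a deterministic function of its seed, while SystemRandom draws on the OS entropy pool (os.urandom), which mixes in physical noise sources; no quantum coherence is involved either way.

```python
import random

mt = random.Random(12345)     # Mersenne Twister: same seed, same stream
hw = random.SystemRandom()    # draws from os.urandom / the OS entropy pool

print([mt.getrandbits(8) for _ in range(4)])   # reproducible every run
print([hw.getrandbits(8) for _ in range(4)])   # different every run
```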

Replies from: Dreaded_Anomaly, shminux
comment by Dreaded_Anomaly · 2013-05-30T01:58:07.142Z · LW(p) · GW(p)

you could still get a quantum noise generator hooked up

In case anybody needs one: ANU Quantum Random Numbers Server

comment by Shmi (shminux) · 2013-05-29T16:24:20.062Z · LW(p) · GW(p)

And that's basically a classical device, certainly doesn't need any coherence.

I suppose we ought to define what "classical" and "quantum" mean.

Replies from: DanielLC
comment by DanielLC · 2013-05-29T18:54:17.138Z · LW(p) · GW(p)

It's a quantum effect, but it's one that's easily taken advantage of, as opposed to the crazy difficult stuff a quantum computer can do. As such, a computer that can do that can be considered classical.

For that matter, transistors work by exploiting quantum effects. We still don't call them quantum computers.

Replies from: Luke_A_Somers
comment by Luke_A_Somers · 2013-05-31T11:51:25.889Z · LW(p) · GW(p)

Thanks for the first paragraph. I came here to clarify this, but you beat me to it.

More clearly: a quantum noise generator can have a design such that someone who only understands classical mechanics will understand, based on that design, that it is a noise generator. They just won't catch the detail that this noise has an additional property.

The above statement may depend on the implementation, but I meant in principle, so there it is.

Replies from: DanielLC
comment by DanielLC · 2013-05-31T20:21:10.974Z · LW(p) · GW(p)

Someone who only understands classical mechanics will not understand a noise generator. Classical physics is deterministic, so noise generators are impossible.

Replies from: Luke_A_Somers
comment by Luke_A_Somers · 2013-05-31T22:09:35.394Z · LW(p) · GW(p)

Only if you're omniscient. A noise generator is a way of controllably injecting your ignorance of some system into a particular channel.

comment by DanielLC · 2013-05-29T18:50:22.876Z · LW(p) · GW(p)

You don't need a quantum computer to exploit quantum effects for random number generation. I've heard it's common to do that by reverse-biasing a diode and amplifying the resulting noise.

comment by Wei Dai (Wei_Dai) · 2013-05-29T08:14:05.795Z · LW(p) · GW(p)

There's an overview of the "quantum mind" debate among academics (whether quantum effects play an important role in the function of the brain) in FHI's Whole Brain Emulation Roadmap (page 37). This isn't quite the same question you're asking (since even if the brain uses quantum computing, an AI may be able to avoid it through some kind of algorithmic workaround), but I'd guess that most supporters of the "quantum mind" hypotheses would also answer "yes" to your question.

Replies from: OrphanWilde, jsteinhardt
comment by OrphanWilde · 2013-05-29T14:01:05.079Z · LW(p) · GW(p)

I think there's an important distinction to be drawn between human-level AI and human-like AI, as far as the "quantum mind" hypothesis and its relationship to quantum computing goes. It could be a necessary ingredient to consciousness while being unimportant for intelligence more generally.

comment by jsteinhardt · 2013-05-29T17:21:27.305Z · LW(p) · GW(p)

Really? I think it's plausible that quantum effects play an important role in the brain, but I'd be very surprised if that was actually an obstacle to AI.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-05-29T18:12:26.350Z · LW(p) · GW(p)

Quantum effects or quantum computation? Technically our whole universe is a quantum effect, but most of it can't be regarded as doing information processing, and of the parts that do information processing, we don't yet know of any that are faster on account of quantum superpositions maintained against decoherence.

Replies from: jsteinhardt
comment by jsteinhardt · 2013-05-29T23:47:59.448Z · LW(p) · GW(p)

I'm not sure where the line would be drawn; I think it's possible that neurons are getting speedups by exploiting quantum effects. I don't think they're using them to solve problems that aren't in P.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-05-29T23:53:31.046Z · LW(p) · GW(p)

My understanding is that any speedup would be fairly implausible. Isn't the whole lesson of l'affaire D-Wave that you need maintained quantum coherence, and that requires quantum error-correction, which is why Scott Aaronson didn't believe the D-Wave claims? Or is that just an unusually crisp human-programming way of doing things?

comment by TrE · 2013-05-29T05:34:36.739Z · LW(p) · GW(p)

I don't think that most (perhaps not all) people who say such things (that QC is necessary for AI) understand both what building blocks might be needed for AI and what quantum computers actually can and can't do better than classical computers. It sounds like people throwing together two awesome (but so far impractical) concepts they've heard about, hoping for an even more awesome statement. Like "for colonizing Mars it's necessary that we build room-temperature superconductors first".

Please excuse the ridicule, but I don't see how large quantum computers are necessary for AI. They certainly are helpful, but then, room-temperature superconductors also are...

Replies from: David_Gerard
comment by David_Gerard · 2013-05-29T15:11:06.291Z · LW(p) · GW(p)

It's the quantum syllogism:

  1. I don't understand quantum.
  2. I don't understand consciousness.
  3. Therefore, consciousness involves quantum.

(1. need not apply, e.g., if you are Roger Penrose, but the syllogism is still logically fallacious.)

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-05-29T18:15:34.423Z · LW(p) · GW(p)

Penrose would claim not to understand how 'collapse' occurs.

Replies from: nigerweiss
comment by nigerweiss · 2013-05-30T01:21:05.853Z · LW(p) · GW(p)

When I was younger, I picked up 'The Emperor's New Mind' in a used bookstore for about a dollar, because I was interested in AI, and it looked like an exciting, iconoclastic take on the idea. I was gravely disappointed when it took a sharp right turn into nonsense right out of the starting gate.

comment by Shmi (shminux) · 2013-05-29T02:09:23.170Z · LW(p) · GW(p)

Anything a quantum computer can do, a classical computer can do, if slower.

Replies from: ChrisHallquist
comment by ChrisHallquist · 2013-05-29T03:14:25.376Z · LW(p) · GW(p)

Yes, I know. The point is that it seems to be generally accepted that some things (particularly, certain kinds of code breaking) are likely to become doable in a realistic amount of time only with quantum computing, so some people (I'm not one of them) might think AI is in a similar boat.

Replies from: Luke_A_Somers, shminux
comment by Luke_A_Somers · 2013-05-29T15:50:20.243Z · LW(p) · GW(p)

We have natural intelligence made of meat, processing by ion currents in liquid. Ion currents in liquid have an extremely short decoherence time, way too short to compute with.

Are you arguing with students of Deepak Chopra?

Replies from: jsteinhardt
comment by jsteinhardt · 2013-05-29T17:11:15.466Z · LW(p) · GW(p)

While I doubt AI needs QC, I don't think this argument works. Your same argument seems to rule out birds exploiting quantum phenomena to navigate, yet they are thought to do so.

Replies from: JoshuaZ, Luke_A_Somers
comment by JoshuaZ · 2013-05-30T01:26:59.638Z · LW(p) · GW(p)

There's a difference between exploiting quantum phenomena and using entanglement. There's a large set of quantum mechanical behavior which doesn't really add much computationally. (To some extent this is part of why we don't call our normal laptops quantum computers even though transistors and hard drives use quantum mechanics to work.)

Replies from: Luke_A_Somers
comment by Luke_A_Somers · 2013-05-30T14:11:38.999Z · LW(p) · GW(p)

Precisely. That's why we shouldn't be calling our brains 'quantum' either...

Or if we do, then that is in no way an argument against our using our current off-the-shelf 'quantum' computers!

Entanglement is what QM does that classical can't do directly (can in sim, of course). Everything else is just funny force laws.

comment by Luke_A_Somers · 2013-05-30T14:08:23.656Z · LW(p) · GW(p)

No, it doesn't. I addressed the ion current nature of nerve action potentials.

Birds' directional sensing couples to such a system but is not made of it.

comment by Shmi (shminux) · 2013-05-29T03:28:35.467Z · LW(p) · GW(p)

Then the discussion should be about the amount of computations required, not about classical vs quantum.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2013-05-29T08:25:35.615Z · LW(p) · GW(p)

It's not possible to discuss "the amount of computations required" without specifying a model of computation. Chris is asking whether an AI might be much slower on a classical computer than a quantum computer, to the extent that it's practically infeasible unless large scale quantum computing is feasible. This is a perfectly reasonable question to ask and I think your objection must be due to an over-literal interpretation of his post title or some other misunderstanding.

Replies from: shminux
comment by Shmi (shminux) · 2013-05-29T14:32:06.490Z · LW(p) · GW(p)

It's not possible to discuss "the amount of computations required" without specifying a model of computation.

I agree, there are more steps in between "AI is hard" and "we need QC".

However, from what I understand, those who say "QC is required for AI" just use this "argument" (e.g. "AI is at least as hard as code breaking") as an excuse to avoid thinking about AI, not as a thoughtful conclusion from analyzing available data.

comment by timtyler · 2013-05-30T23:27:08.539Z · LW(p) · GW(p)

Who thinks quantum computing will be necessary for AI?

David Pearce for one:

The theory presented predicts that digital computers - and all inorganic robots with a classical computational architecture - will 1) never be able efficiently to perform complex real-world tasks that require that the binding problem be solved; and 2) never be interestingly conscious since they are endowed with no unity of consciousness beyond their constituent microqualia - here hypothesized to be the stuff of the world as described by the field-theoretic formalism of physics.

Replies from: davidpearce
comment by davidpearce · 2013-06-03T22:13:04.287Z · LW(p) · GW(p)

Alas so. IMO a solution to the phenomenal binding problem (cf. http://cdn.preterhuman.net/texts/body_and_health/Neurology/Binding.pdf) is critical to understanding the evolutionary success of organic robots over the past 540 million years - and why classical digital computers are (and will remain) insentient zombies, not unitary minds. This conjecture may be false; but it has the virtue of being testable. If/when our experimental apparatus allows probing the CNS at the sub-picosecond timescales above which Max Tegmark ("Why the brain is probably not a quantum computer") posits thermally-induced decoherence, then I think we'll get a huge surprise! I predict we'll find, not random psychotic "noise", but instead the formal, quantum-coherent physical shadows of the macroscopic bound phenomenal objects of everyday experience - computationally optimised by hundreds of millions of years of evolution, i.e. a perfect structural match. (cf. http://consc.net/papers/combination.pdf) By contrast, critics of the quantum mind conjecture must presumably predict we'll find just "noise".

Replies from: huh
comment by huh · 2013-06-03T22:35:47.788Z · LW(p) · GW(p)

Your first link appears to be broken.

It seems possible that the OpenWorm project to emulate the brain of the nematode C. elegans on a classical computer may yield results prior to the advent of experimental techniques capable of "probing the CNS at ... sub-picosecond timescales." Would you consider a successful emulation of worm behavior evidence against the need for quantum effects in neuronal function, or would you declare it the worm equivalent of a P-Zombie?

Replies from: davidpearce, elharo
comment by davidpearce · 2013-06-04T17:12:01.940Z · LW(p) · GW(p)

Huh, yes, in my view C. elegans is a P-zombie. If we grant reductive physicalism, the primitive nervous system of C. elegans can't support a unitary subject of experience. At most, its individual ganglia (cf. http://www.sfu.ca/biology/faculty/hutter/hutterlab/research/Ce_nervous_system.html) may be endowed with the rudiments of unitary consciousness. But otherwise, C. elegans can effectively be modelled classically. Most of us probably wouldn't agree with philosopher Eric Schwitzgebel. ("If Materialism Is True, the United States Is Probably Conscious" http://www.faculty.ucr.edu/~eschwitz/SchwitzPapers/USAconscious-130208.pdf) But exactly the same dilemma confronts those who treat neurons as essentially discrete, membrane-bound classical objects. Even if (rightly IMO) we take Strawsonian physicalism seriously (cf. http://en.wikipedia.org/wiki/Physicalism#Strawsonian_physicalism) then we still need to explain how classical neuronal "mind-dust" could generate bound experiential objects or a unitary subject of experience without invoking some sort of strong emergence.

Replies from: wedrifid, timtyler
comment by wedrifid · 2013-06-06T10:29:55.710Z · LW(p) · GW(p)

Most of us probably wouldn't agree with philosopher Eric Schwitzgebel. (If Materialism Is True, the United States Is Probably Conscious.)

I think you're right. Mind you, I suspect saying that I disagreed per se would be being generous.

Replies from: davidpearce
comment by davidpearce · 2013-06-06T12:14:10.279Z · LW(p) · GW(p)

Wedrifid, yes, if Schwitzgebel's conjecture were true, then farewell to reductive physicalism and the ontological unity of science. The USA is a "zombie". Its functionally interconnected but skull-bound minds are individually conscious; and sometimes the behaviour of the USA as a whole is amenable to functional description; but the USA is not a unitary subject of experience. However, the problem with relying on this intuitive response is that the phenomenology of our own minds seems to entail exactly the sort of strong ontological emergence we're excluding for the USA. Let's assume, as microelectrode studies tentatively confirm, that individual neurons can support rudimentary experience. How can we rigorously derive bound experiential objects, let alone the fleeting synchronic unity of the self, from discrete, distributed, membrane-bound classical feature processors? Dreamless sleep aside, why aren't we mere patterns of "mind dust"?

None of this might seem relevant to ChrisHallquist's question. Computationally speaking, who cares whether Deep Blue, Watson, or Alpha Dog (etc.) are unitary subjects of experience? But anyone who wants to save reductive physicalism should at least consider why quantum mind theorists are prepared to contemplate a role for macroscopic quantum coherence in the CNS. Max Tegmark hasn't refuted quantum mind; he's made a plausible but unargued assumption, namely that sub-picosecond decoherence timescales are too short to do any computational and/or phenomenological work. Maybe so; but this assumption remains to be empirically tested. If all we find is "noise", then I don't see how reductive physicalism can be saved.

comment by timtyler · 2013-06-06T10:15:23.077Z · LW(p) · GW(p)

Most of us probably wouldn't agree with philosopher Eric Schwitzgebel. ("If Materialism Is True, the United States Is Probably Conscious" http://www.faculty.ucr.edu/~eschwitz/SchwitzPapers/USAconscious-130208.pdf)

Really? A poll seems as though it would be in order.

Maybe if it explained exactly what was meant by "conscious", there might even be a consensus on the topic.

Replies from: davidpearce
comment by davidpearce · 2013-06-06T12:15:30.088Z · LW(p) · GW(p)

Tim, perhaps I'm mistaken; you know lesswrongers better than me. But in any such poll I'd also want to ask respondents who believe the USA is a unitary subject of experience whether they believe such a conjecture is consistent with reductive physicalism.

comment by elharo · 2013-06-03T23:45:40.326Z · LW(p) · GW(p)

Interesting project. I would consider such a result to be at least weak evidence against the need for quantum effects in neuronal function, maybe stronger. It would be still stronger evidence if the project managed to produce such an emulation on the same scale and energy budget as the worm's actual nervous system. And strongest of all if they managed to hook the emulated brain up to an actual worm body and drive it without external input.

comment by JoshuaZ · 2013-05-29T16:02:10.946Z · LW(p) · GW(p)

Quantum computers can be simulated on classical computers with exponential slowdown. So even if you think the human mind uses quantum computation, this doesn't mean that the same thing can't be done on a classical machine. Note also that BQP (the set of problems efficiently computable by a quantum computer) is believed (although not proven) not to contain any NP-complete problems.

Note also that, at a purely practical level, since quantum computers can do a lot of things better than classical computers and our certainty about their strength is much lower, trying to run an AI on a quantum computer is a really bad idea if you take the threat of AI going FOOM seriously.
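
A minimal sketch of what that classical simulation looks like: a state vector over n qubits has 2^n complex amplitudes, and gates are just matrix multiplications on it. Here a Hadamard plus a CNOT produce a Bell state; the exponential cost is the 2^n in the first line.

```python
import numpy as np

n = 2                                   # qubits; the state needs 2**n amplitudes
state = np.zeros(2 ** n, dtype=complex)
state[0] = 1.0                          # start in |00>

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)    # Hadamard gate
I = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

state = np.kron(H, I) @ state           # Hadamard on the first qubit
state = CNOT @ state                    # entangle: (|00> + |11>)/sqrt(2)

print(np.round(state, 3))               # [0.707 0 0 0.707]
print(np.abs(state) ** 2)               # outcome probabilities [0.5 0 0 0.5]
```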

Replies from: jsteinhardt
comment by jsteinhardt · 2013-05-29T17:30:48.962Z · LW(p) · GW(p)

So even if you think the human mind uses quantum computation, this doesn't mean that the same thing can't be done on a classical machine.

An exponential slowdown basically means that it can't be done. If you have an oracle for EXPTIME then you're basically already set for most problems you could want to solve.

Replies from: JoshuaZ
comment by JoshuaZ · 2013-05-30T01:20:44.814Z · LW(p) · GW(p)

So that is practically true. EXPTIME is, in fact, one of the few classes we can show is large enough that we can even prove it properly contains P. But in context this isn't as bad as it looks. The vast majority of interesting things we can do on quantum computers can, in practice, be done classically in much less than exponential time (look at factoring, for example). In fact, BQP actually lives inside PSPACE, so this shouldn't be that surprising.

But practical issues aside, most of the arguments about using quantum computers to do AI or consciousness involve claims that they are fundamentally necessary. The fact that we can simulate them with sufficient slowdown demonstrates that at least that version of the thesis is false.

comment by GeraldMonroe · 2013-05-29T01:23:34.461Z · LW(p) · GW(p)

These people's objections are not entirely unfounded. It's true that there is little evidence the brain exploits QM effects (which is not to say that it is completely certain it does not). However, if you try to pencil in real numbers for the hardware requirements of a whole brain emulation, they are quite absurd. Assumptions differ, but it is possible that building a computational system with sufficient nodes to emulate all 100 trillion synapses would cost hundreds of billions to over a trillion dollars if you had to use today's hardware to do it.

The point is: you can simplify people's arguments to "I'm not worried about the imminent existence of AI because we cannot build the hardware to run one". The fact that a detail about their argument is wrong doesn't change the conclusion.
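
For what it's worth, here's one way to pencil in those numbers; every figure below is an assumption, and the conclusion swings by orders of magnitude as you vary them:

```python
# Back-of-envelope whole-brain-emulation hardware cost. Every number
# below is an assumption; plug in your own.
synapses       = 1e14    # ~100 trillion synapses
rate_hz        = 10      # average events per synapse per second
flop_per_event = 1e4     # biophysically detailed models cost far more than simple ones
flops_needed   = synapses * rate_hz * flop_per_event    # 1e19 FLOPS

accel_flops    = 1e12    # a ~2013-era high-end accelerator
accel_cost     = 3000    # dollars per device
n_accels       = flops_needed / accel_flops             # 10 million devices
print(f"{n_accels:,.0f} accelerators, ~${n_accels * accel_cost:,.0f}")
# ~$30 billion for raw compute alone; memory and interconnect for 1e14
# synapses could plausibly add another order of magnitude.
```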

Replies from: nigerweiss
comment by nigerweiss · 2013-05-29T08:31:43.857Z · LW(p) · GW(p)

Building a whole brain emulation right now is completely impractical. In ten or twenty years, though... well, let's just say there are a lot of billionaires who want to live forever, and a lot of scientists who want to be able to play with large-scale models of the brain.

I'd also expect de novo AI to be capable of running quite a bit more efficiently than a brain emulation for a given amount of optimization power. There's no way simulating cell chemistry is a particularly efficient way to spend computational resources to solve problems.

Replies from: GeraldMonroe
comment by GeraldMonroe · 2013-05-29T13:40:45.047Z · LW(p) · GW(p)

An optimal de novo AI, sure. Keep in mind that human beings have to design this thing, and so the first version will be very far from optimal. I think it's a plausible guess that it will need hardware requirements on the same order as an efficient whole brain emulator.

And this assumption shows why all the promises made by past AI researchers have so far failed: we are still a factor of 10,000 or so away from having the hardware requirements, even using supercomputers.

comment by Alsadius · 2013-05-29T05:38:45.514Z · LW(p) · GW(p)

In principle, it should be quite possible to map a human brain, replace each neuron with a chip, and have a human-level AI. Such a design would not have the long-term adaptability of the human brain, but it'd pass a Turing test trivially. Obviously, the cost involved is prohibitive, but it should be a sufficient boundary case to show that QC is not strictly necessary. It may still be helpful, but I'm sufficiently skeptical of the viability of commercialized QC to believe that the first "real" AI will be built from silicon.

Replies from: Luke_A_Somers
comment by Luke_A_Somers · 2013-05-29T15:56:05.001Z · LW(p) · GW(p)

Note for the downvoters of the above: I suspect you're downvoting because you think a complete hardware replacement of neurons would result in long-term adaptability. This is so, but it is not what was mentioned here - replacing each neuron with a momentarily equivalent chip that does not have the ability to grow new synaptic connections would provide consciousness but would run into long-term problems as described.

Replies from: Alsadius
comment by Alsadius · 2013-05-29T18:17:57.665Z · LW(p) · GW(p)

Yeah, I was using the non-adaptive brain as a baseline reductio ad absurdum. Obviously, it's possible to do better - the computing power wasted in the above design would be monumental, and the human brain is not such a model of efficiency that you couldn't do better by throwing a few extra orders of magnitude at it. But it's something that even an AI skeptic should recognize as a possibility.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-05-29T20:57:41.373Z · LW(p) · GW(p)

If we're going to be picky, also the idea that only neurons are relevant isn't right; if you replaced each neuron with a neuron-analog (a chip or a neuron-emulation-in-software or something else) but didn't also replace the non-neuron parts of the cognitive system that mediate neuronal function, you wouldn't have a working cognitive system.
But this is a minor quibble; you could replace "neuron" with "cell" or some similar word to steelman your point.

Replies from: nigerweiss
comment by nigerweiss · 2013-05-30T01:34:19.934Z · LW(p) · GW(p)

Yeah, the glia seem to serve some pretty crucial functions as information-carriers and network support infrastructure - and if you don't track hormonal regulation properly, you're going to be in for a world of hurt. Still, I think the point stands.

comment by Yosarian2 · 2013-06-01T12:04:40.671Z · LW(p) · GW(p)

I don't think we'll need quantum computing specifically for AI.

I do think that it's possible, though, that we might need to make significant improvements in hardware before we can run anything like a human-level AI.

comment by elharo · 2013-05-30T09:55:07.722Z · LW(p) · GW(p)

I begin to think that we should taboo the words "AI" and "intelligence" when talking about these subjects. It's not obvious to me that, for example, whole brain emulation and automated game playing have much in common at all. There are other forms of "AI" as well. Consequently we seem to be talking past each other as often as not.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-05-30T17:18:39.903Z · LW(p) · GW(p)

For a while I got into the habit of talking about systems that optimize their environment for certain values, rather than talking about intelligences (whether NI, AI, or AGI). I haven't found that it significantly alters the conversations I'm in, but I find it gives me more of a sense that I know what I'm talking about. (Which might be a bad thing, if I don't.)

comment by elharo · 2013-05-29T11:10:22.763Z · LW(p) · GW(p)

There are a lot of things we simply don't know about the brain, and we know even less about consciousness and intelligence in the human sense. In many ways, I don't think we even have the right words to talk about this. Last I checked, scientists were not sure that neurons were the right level at which to understand how our brains think. That is, neurons have microtubule substructures several orders of magnitude smaller than the neurons themselves that may (or may not) have something significant to do with the encoding and processing of information in the brain. Thus it's conceivable that a whole-brain emulation at the level of individual neurons might be insufficient to produce human-type intelligence and consciousness. If so, we'd need quite a few more generations of Moore's law than we're currently estimating before we could expect to finish a whole brain emulation.

Furthermore, the smaller structures would be more susceptible to quantum effects. Then again, maybe not. Roger Penrose and Stuart Hameroff have developed this idea as the theory of orchestrated objective reduction. This theory has been hotly disputed, but so far I don't think it's been conclusively proven or disproven. However, it is experimentally testable and falsifiable. I suspect it's too early to claim definitively either that quantum effects are or are not required for human-type intelligence and consciousness; but more research will likely help us answer this question one way or the other.

I will say this: there is a lot of bad physics and philosophy out there that has been misled by bad popular descriptions of quantum mechanics and how the conscious observer collapses the wave function, and thus came to the conclusion that consciousness is intimately tied up with quantum mechanics. I feel safe ruling that much out. However it still seems possible that our consciousness and intelligence is routinely or occasionally susceptible to quantum randomness, depending on the scale at which it operates.

Even if Penrose's ideas about how human intelligence arises from quantum effects are all true, that still does not prove that all intelligence requires quantum randomness. If you want to answer that question, the first thing you need to do is define what you mean by "intelligence". That's trickier than it sounds at first, but I think it can be usefully done. In fact, there are multiple possible definitions of intelligence, useful for different purposes. For instance, one is the ability to formulate plans that enable one to achieve a goal. Consciousness is a much thornier nut to crack. I don't know that anyone has a good handle on that yet.

Replies from: Nisan, nigerweiss, DSherron
comment by Nisan · 2013-05-29T15:50:25.626Z · LW(p) · GW(p)

Skimming the article you linked, it looks like Penrose believes human mathematical intuition comes from quantum-gravitational effects. So on Penrose's view it might be possible that AGI requires a quantum-gravitational hypercomputer, not just a quantum computer.

Replies from: JoshuaZ
comment by JoshuaZ · 2013-05-30T01:23:21.939Z · LW(p) · GW(p)

Note that according to Scott Aaronson (in his recent book), Penrose thinks that human minds can solve the Halting problem and conjectures that humans can even solve the Halting problem for machines with access to a Halting oracle.

comment by nigerweiss · 2013-05-30T01:31:45.244Z · LW(p) · GW(p)

Last I checked scientists were not sure that neurons were the right level at which to understand how our brains think. That is, neurons have microtubule substructures several orders of magnitude smaller than the neurons themselves that may (or may not) have something significant to do with the encoding and processing of information in the brain.

Sure? No. Pretty confident? Yeah. The people who think microtubules and exotic quantum-gravitational effects are critical for intelligence/consciousness are a small minority of (usually) non-neuroscientists who are, in my opinion, allowing some very suspect intuitions to dominate their thinking. I don't have any money right now to propose a bet, but if it turns out that the brain can't be simulated on a sufficient supply of classical hardware, I will boil, shred, and eat my entire (rather expensive) hat.

Consciousness is a much thornier nut to crack. I don't know that anyone has a good handle on that yet.

Daniel Dennett's papers on the subject seem to be making a lot of sense to me. The details are still fuzzy, but I find that having read them, I am less confused on the subject, and I can begin to see how a deterministic system might be designed that would naturally begin to have behavior that would cause it to say the sorts of things about consciousness that I do.

Replies from: Baughn
comment by Baughn · 2013-05-30T18:00:33.089Z · LW(p) · GW(p)

If you find someone to bet against you, I'm willing to eat half the hat.

Replies from: None
comment by [deleted] · 2013-06-04T07:24:41.368Z · LW(p) · GW(p)

We could split it three ways - provided that agreeing in principle, while doubting that an actual complete human brain will ever be simulated, counts.

comment by DSherron · 2013-05-29T16:18:09.679Z · LW(p) · GW(p)

"More susceptible" is not the same as "susceptible". If it's bigger than an atom, we don't need to take quantum effects into account to get a good approximation, and moreover any effects that do happen are going to be very small and won't affect consciousness in a relevant way (since we don't experience random changes to consciousness from small effects). There's no need to accurately model the brain to perfect detail, just to roughly model it, which almost certainly does not involve quantum effects at all.

Incidentally, there's nothing special about quantum randomness. Why should consciousness be related to splitting worlds in a special way? Once you drop the observer-focused interpretations, there's nothing related between them. If the brain needs randomness there are easier sources.

comment by iDante · 2013-05-29T01:06:09.049Z · LW(p) · GW(p)

There will be AI long before there are quantum computers.

Replies from: DanielLC, JoshuaZ
comment by DanielLC · 2013-05-29T03:17:17.843Z · LW(p) · GW(p)

There are already quantum computers. Just really small quantum computers.

Replies from: Flipnash
comment by Flipnash · 2013-05-29T05:01:31.957Z · LW(p) · GW(p)

Therefore, AI has already arrived. /joke

Replies from: Alsadius
comment by Alsadius · 2013-05-29T05:40:23.627Z · LW(p) · GW(p)

My video games have had AI for decades. Awful AI, but not really any more awful than a quantum computer that successfully factors the number 15.

comment by JoshuaZ · 2013-05-30T01:25:08.628Z · LW(p) · GW(p)

So, DanielLC has pointed out that there are already quantum computers. A charitable interpretation of your statement might be that there will be AI long before there are general quantum computers powerful enough to do practical computations. Is this what you meant? If so, can you explain what leads to this conclusion?

comment by Tuxedage · 2013-05-29T04:13:38.178Z · LW(p) · GW(p)

At the very least, I'm relatively certain that quantum computing will be necessary for emulations. It's difficult to say with AI, because we have no idea what its cognitive load is like, considering we have very little information on how to create intelligence from scratch yet.

Replies from: JoshuaZ
comment by JoshuaZ · 2013-05-30T01:21:30.032Z · LW(p) · GW(p)

At the very least, I'm relatively certain that quantum computing will be necessary for emulations.

Why?

comment by Kawoomba · 2013-05-29T07:57:28.444Z · LW(p) · GW(p)

There is no evidence that the speedup from quantum computing is necessary for the emergence of intelligence. As far as we can tell, quantum phenomena are not especially involved in our cognitive architecture - despite Penrose's protestations to the contrary - any more than they are involved in our normal PCs.

Now what does E-v-i-d-e-n-c-e spell?

Replies from: nigerweiss, Mestroyer
comment by nigerweiss · 2013-05-29T08:28:24.586Z · LW(p) · GW(p)

Evidence?

EDIT: Sigh. Post has changed contents to something reasonable. Ignore and move on.

Reply edit: I don't have a copy of your original comment handy, so I can't accurately comment on what I was thinking when I read it. However, I don't recall it striking me as a joke, or even an exceptionally dumb thing for someone on the internet to profess belief in.

Replies from: Kawoomba, Kawoomba
comment by Kawoomba · 2013-06-01T16:08:55.928Z · LW(p) · GW(p)

even an exceptionally dumb thing for someone on the internet to profess belief in.

Wrong reference class, "someone on the internet", much too broad. Just as your comment shouldn't usefully be called an exceptionally smart thing for a mammal to say, we should refer to the most applicable reference class -- "someone on LW" -- which screens for most simple "haha, that guy is clearly dumb, damn I'm so smart figuring that out" gotcha moments. Shift gears.

The original comment was close to "we'll need quantum and/xor quarks to explain qualia (qualai?)." Not exactly subtle with the "xor" ...

comment by Kawoomba · 2013-05-30T08:43:23.644Z · LW(p) · GW(p)

I'd really want to know this (no need to pay the karma penalty, just PM or edit your comment): Did you really take the comment at face value? This was the intent of the comment pre-edit.

It may be interesting if that's a cultural-boundaries thing for humor, or if LW'ers just keep an unusually open mind and are ready to accept that others hold outlandish positions.

comment by Mestroyer · 2013-05-29T21:16:26.736Z · LW(p) · GW(p)

I'm taking a troll toll to point out that this post has been completely changed in content after being downvoted because of its original meaning. For shame.

Replies from: Kawoomba
comment by Kawoomba · 2013-05-29T21:25:45.502Z · LW(p) · GW(p)

Actually, its content has not changed; what has changed is how the content is presented. Note how "quantum computing", or quantum this or that, is usually brought up like a mystifying secret ingredient to e.g. consciousness/qualia (Penrose) or any other property that some would rather see remain mystic. Quantum computing actually is a thing, but "AI is unfeasible until we do quantum x" just pattern matches too well to arbitrary mystic roadblocks.

Since the point was lost, with the humor apparently either disliked or taken at face value, I've decided to edit the comment so that the same point is made in a clearer presentation, accommodating those who just skim comments (and their vote counts) and are quick to judge before parsing.