Shawn Mikula on Brain Preservation Protocols and Extensions

post by oge · 2015-04-29T02:47:59.009Z · LW · GW · Legacy · 33 comments

A recently published article in Nature Methods describes a new protocol for preserving mouse brains that allows neurons to be traced across the entire brain, something that wasn't possible before. This is exciting because in as little as 3 years the method could be extended to larger mammals (like humans), and pave the way for better neuroscience or even brain uploads. From the abstract:

Here we describe a preparation, BROPA (brain-wide reduced-osmium staining with pyrogallol-mediated amplification), that results in the preservation and staining of ultrastructural details throughout the brain at a resolution necessary for tracing neuronal processes and identifying synaptic contacts between them. Using serial block-face electron microscopy (SBEM), we tested human annotator ability to follow neural ‘wires’ reliably and over long distances as well as the ability to detect synaptic contacts. Our results suggest that the BROPA method can produce a preparation suitable for the reconstruction of neural circuits spanning an entire mouse brain.

http://blog.brainpreservation.org/2015/04/27/shawn-mikula-on-brain-preservation-protocols/

33 comments

comment by Kyre · 2015-04-29T05:32:47.265Z · LW(p) · GW(p)

That is very interesting; there does seem to be quite rapid progress in this area.

From the blog entry:

... the reason for this is because simulating the neural activity on a Von Neumann (or related computer) architecture does not reproduce the causal structure of neural interactions in wetware. Using a different computer architecture may avert this problem ...

Can anyone explain what that means? I can't see how it can be correct.

Replies from: jacob_cannell, brainmaps
comment by jacob_cannell · 2015-04-29T07:16:22.830Z · LW(p) · GW(p)

Well, the simplest explanation may be: it's not correct.

He doesn't believe in functionalism (or at least he probably doesn't):

The question of uploading consciousness can be broken down into two parts: 1) can you accurately simulate the mind based on complete structural or circuit maps of the brain?, and 2) assuming you can run accurate simulations of the mind based on these structural maps, are they conscious? I think the answer is probably ‘no’ to both.

Perhaps he doesn't really understand the implications of universal computability. I've found that, as a rule of thumb, almost everyone with a background in computer science believes in functionalism, as do most physicists, but it's somewhat less common among those with a bioscience-related background.

Someone can be an expert in the details of neurochemistry without having the slightest clue how artificial consciousness would actually work in practice.

Replies from: IlyaShpitser
comment by IlyaShpitser · 2015-04-29T13:27:02.051Z · LW(p) · GW(p)

Perhaps he doesn't really understand the implications of universal computability.

Or perhaps he's skeptical of the fidelity of that kind of model. Evolution famously abhors abstraction barriers.

Would you care to quantify your 'almost everyone' claim? Are there surveys, etc.?

Replies from: jacob_cannell, Luke_A_Somers, Silver_Swift
comment by jacob_cannell · 2015-04-29T18:17:59.472Z · LW(p) · GW(p)

No - it's just an observation from my experience (CS degree in the '90s).

Just to be clear, he is making a conceptual mistake that indicates he does not understand universal computability:

... the reason for this is simulating the neural activity on a Von Neumann (or related computer) architecture does not reproduce the causal structure of neural interactions in wetware. Using a different computer architecture may avert this problem ...

If there is some other weird computer architecture that can reproduce the causal structure of neural interactions in wetware, then a universal computer (such as a Von Neumann machine) can also reproduce the causal structure of neural interactions simply by simulating the weird computer. This really is theory of computation 101.
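As a minimal sketch of the argument (the "weird" machine and its update rule below are invented purely for illustration): any architecture whose behavior can be written down as a state-transition rule can be stepped through by a conventional computer, reproducing its state trace exactly.

```python
# Minimal sketch (hypothetical machines, not any real architecture):
# a "weird" machine is just a state-transition function. A conventional
# computer simulates it by repeatedly applying that function, so any
# causal structure the weird machine realizes, the simulator realizes too.

def weird_machine_step(state):
    """One step of some arbitrary 'weird' architecture (toy rule)."""
    a, b = state
    return (b, (a + b) % 97)  # any deterministic rule works here

def simulate(step_fn, state, n_steps):
    """A von Neumann machine simulating the weird machine step by step."""
    trace = [state]
    for _ in range(n_steps):
        state = step_fn(state)
        trace.append(state)
    return trace

# The simulated trace is, state for state, the trace the weird
# machine itself would produce.
print(simulate(weird_machine_step, (1, 1), 5))
```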

Replies from: V_V, Lumifer
comment by V_V · 2015-04-30T15:58:36.790Z · LW(p) · GW(p)

"He does not understand universal computability" seems an overstatement, universal computability doesn't logically imply functionalism, although I agree that it tends to imply that definitions of consciousness which are not invariant under simulation have little epistemic usefulness.

comment by Lumifer · 2015-04-29T18:45:22.284Z · LW(p) · GW(p)

In theory there is no difference between theory and practice. In practice there is.

A physical Turing machine can simulate an iPhone, in theory. Would you like to try to build one? :-D

Replies from: Viliam, jacob_cannell
comment by Viliam · 2015-04-30T06:57:17.736Z · LW(p) · GW(p)

The only problems would be speed and memory.

There is a tiny chance that when he said "does not reproduce the causal structure of neural interactions", what he actually meant was "would simulate the neural interactions extremely slowly", but if that was the case, he really could have said it better.

My priors are that when people without formal computer science education talk about brains and computers, they usually believe that parallelism is the magical power that gives you much more than merely an increase in speed.

comment by jacob_cannell · 2015-04-29T20:49:05.456Z · LW(p) · GW(p)

In practice it's just a matter of computational power. His statement makes it fairly clear that he doesn't understand this distinction.

Circuit-level simulations of advanced microchips certainly exist - this is not just theory. Yes, they are super expensive when run on standard CPUs (real-time simulation of an iPhone CPU naively would require on the order of an exaflop). However, low-level circuit binary logic ops are much simpler than the 32/64-bit ops that CPUs implement, and there are more advanced simulation algorithms. Companies such as Cadence provide general-purpose binary logic emulators that actually work in practice, at reasonable cost - not just in theory.
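As a toy illustration of what gate-level simulation involves (the circuit below, a 1-bit full adder built only from NAND gates, is hypothetical example data, not output from any real tool):

```python
# Rough sketch of gate-level circuit simulation on a conventional CPU.

def NAND(a, b):
    return 1 - (a & b)

def full_adder(a, b, cin):
    """1-bit full adder built only from NAND gates."""
    t1 = NAND(a, b)
    t2 = NAND(a, t1)
    t3 = NAND(b, t1)
    s1 = NAND(t2, t3)      # s1 = a XOR b
    t4 = NAND(s1, cin)
    t5 = NAND(s1, t4)
    t6 = NAND(cin, t4)
    total = NAND(t5, t6)   # sum bit = (a XOR b) XOR cin
    carry = NAND(t1, t4)   # carry-out = ab OR cin*(a XOR b)
    return total, carry

# Each simulated gate costs several CPU instructions per evaluation,
# which is why naive full-chip simulation is so much slower than the
# chip itself.
print(full_adder(1, 1, 0))  # -> (0, 1)
```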

comment by Luke_A_Somers · 2015-04-29T17:52:54.880Z · LW(p) · GW(p)

The problem is, he just - JUST - got done saying that he's talking about the exact case where it turns out that the simulation's subject completely encompasses the source of consciousness.

If that were his objection, it wouldn't matter if it was Von Neumann or not.

comment by Silver_Swift · 2015-04-30T12:28:29.702Z · LW(p) · GW(p)

To add my own highly anecdotal evidence: my experience is that most people with a background in computer science or physics have no active model of how consciousness maps to brains, but when prodded they indeed usually come up with some form of functionalism*.

My own position is that I'm highly confused by consciousness in general, but I'm leaning slightly towards substance dualism; I have a background in computer science.

*: Though note that quite a few of these people simultaneously believe that it is fundamentally impossible to do accurate natural language parsing with a Turing machine, so their position might not be completely thought through.

Replies from: dxu
comment by dxu · 2015-04-30T15:49:05.007Z · LW(p) · GW(p)

I'm leaning slightly towards substance dualism

This seems a bit like trying to fix a problem by applying a patch that causes a lot more problems. The stunning success of naturalistic explanations so far in predicting the universe (plus Occam's Razor) alone would be enough to convince me that consciousness is a naturalistic process (and, in fact, they were what convinced me, plus a few other caveats). I'd assign maybe 95% probability to this conclusion. Still, I'd be interested in hearing what led you to your conclusion. Could you expand in more detail?

comment by brainmaps · 2015-04-30T16:42:21.407Z · LW(p) · GW(p)

Shawn Mikula here. Allow me to clear up the confusion that appears to have been caused by being quoted out of context. I clearly state in the part of my answer preceding the quoted text the following:

"2) assuming you can run accurate simulations of the mind based on these structural maps, are they conscious?".

So this is not a question of misunderstanding universal computation and whether a computer simulation can mimic, for practical purposes, the computations of the brain. I am already assuming the computer simulation is mimicking the brain's activity and computations. My point is that a computer works very differently from a brain, which is evident in differences in its underlying causal structure. In other words, the coordinated activity of the binary logic gates underlying the computer running the simulation has a vastly different causal structure than the coordinated activity and massive parallelism of neurons in a brain.

The confusion appears to result from the fact that I'm not talking about the pseudo-causal structure of the modeling units comprising the simulation, but rather the causal structure of the underlying physical basis of the computer running the simulation.

Anyway, I hope this helps.

Replies from: Kyre, V_V, jacob_cannell
comment by Kyre · 2015-05-01T09:21:36.642Z · LW(p) · GW(p)

Thanks for replying! Sorry if the bit I quoted was too short and over-simplified.

That does clarify things, although I'm having difficulty understanding what you mean by the phrase "causal structure". I take it you do not mean the physical shape or substance, because you say that a different computer architecture could potentially have the right causal structure.

And I take it you don't mean the cause and effect relationship between parts of the computer that are representing parts of the brain, because I think that can be put into one-to-one correspondence with the cause and effect relationship of the things being represented.

For example, if neuron N1 causes changes to neurons N2, N3 and N4, and I have a simulated S1 causing changes to simulated S2, S3 and S4, then that simulated cause and effect happens by honest-to-god physical cause and effect: voltage levels in the memory gates representing S1 propagate through the architecture to the gates representing S2, S3 and S4, causing them to change.
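A minimal sketch of that correspondence, with toy names and a made-up update rule:

```python
# The same cause-effect graph, once over "neurons" and once over
# simulated state. All names and the toy update rule are hypothetical.

influences = {"N1": ["N2", "N3", "N4"]}  # N1's firing affects N2..N4

def step(firing):
    """If N1 fired, its targets fire next; physically, this update is
    carried out by real voltage changes in the simulating hardware."""
    nxt = {n: False for n in firing}
    if firing["N1"]:
        for target in influences["N1"]:
            nxt[target] = True
    return nxt

state = {"N1": True, "N2": False, "N3": False, "N4": False}
print(step(state))  # S1 causing S2, S3, S4 - same dependency structure
```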

Using a different computer architecture may avert this problem ...

So consciousness would then have to be something that flesh brains and "correctly causally structured" computer hardware have in common, but which is not shared by a simulation of either of those things running on a conventional computer?

Replies from: brainmaps
comment by brainmaps · 2015-05-01T14:39:43.282Z · LW(p) · GW(p)

Thanks for the replies. I will try to answer and expand on the points raised. There are a number of reductio ad absurdums that dissuade me from machine functionalism, including Ned Block's China brain and also the idea that a Turing machine running a human brain simulation would possess human consciousness. Let me try to take the absurdity to the next level with the following example:

Does an animated GIF possess human consciousness?

Imagine we record the activity of every neuron in a human brain at every millisecond; at each millisecond, we record whether each of the 100 billion neurons in the human brain is firing an action potential or not. We record all of this for a 1-second duration. Now, for each of the 1000 milliseconds, we represent the neural firing state of all neurons as a binary GIF image of about 333,000 pixels in height and width (this probably exceeds GIF format specifications, but who cares), where each pixel represents the firing state of a specific neuron. We make 1000 of these GIFs, one for each millisecond over the 1-second duration, concatenate them to form an animated GIF, and then play the animated GIF on an endless loop. Since we are now "simulating" the neural activities of all the neurons in the human brain, we might expect that the animated GIF possesses human consciousness... But this view is absurd, and this exercise suggests there is more to consciousness than reproducing neural activities in different substrates.

To V_V, I don't think it has human consciousness. If I answer otherwise, I'm pressed to acknowledge that well-coded chatbots have human consciousness, which is absurd. With regard to what "conscious" means in epistemic terms, I don't know, but I do know that the Turing test is insufficient because it only deals with appearances and it's easy to be duped. About updating posterior beliefs, I would have to know the basis for consciousness, which I acknowledge uncertainty over.

To Kyre, you hit the crux in your second example. The absurdity of the China brain and of the Turing machine with human consciousness stems from the fact that the causal structures (i.e., space-time diagrams) in these physical systems are completely different from the causal structure of the human brain. As you describe, in a typical computer there is honest-to-god physical cause and effect in the voltage levels in the memory gates, but the causal structure is completely different from wetware, and this is where the absurdity of attributing consciousness to computations (or simulations) comes from, at least for me. Consciousness is not just computational. Otherwise you have absurdities like the China brain and animated GIFs with human consciousness. It seems more likely to be physico-computational, as reflected in the causal structure of interactions of the physical system which underlies the computations and simulations.

There may be a computer architecture that reproduces the correct causal structure, but Von Neumann and related architectures do not. And to your last question, yes! A simulation is just an image. If you think it is the real thing, then you must accept that an animated GIF can possess human consciousness. Personally, this conclusion is too absurd for me to accept.

To jacob_cannell, thanks for the congrats. Sure, consciousness has baggage but using self-awareness instead already commits one to consciousness as a special type of computation, which the reductio ad absurdums above try to disprove. I agree it's likely that "Self-awareness is just a computational capability", depending on what you mean by 'Self' and 'awareness'. You state that "The 'causal structure' is just the key algorithmic computations" but this is not quite right. The algorithmic computations can be instantiated in many different causal structures but only some will resemble those of the human brain and presumably possess human consciousness.

TLDR: The basis of consciousness is very speculative and there is good reason to believe it goes beyond computation to the physico-computational and causal (space-time) structure.

Replies from: ahbwramc, V_V, hairyfigment, jacob_cannell
comment by ahbwramc · 2015-05-01T15:37:31.726Z · LW(p) · GW(p)

I think we might be working with different definitions of the term "causal structure"? The way I see it, what matters for whether or not two things have the same causal structure is counterfactual dependency - if neuron A hadn't fired, then neuron B would have fired. And we all agree that in a perfect simulation this kind of dependency is preserved. So yes, neurons and transistors have different lower-level causal behaviour, but I wouldn't call that a different causal structure as long as they both implement a system that behaves the same under different counterfactuals. That's what I think is wrong with your GIF example, btw - there's no counterfactual dependency whatsoever. If I delete a particular pixel from one frame of the animation, the next frame wouldn't change at all. Of course there was the proper dependency when the GIF was originally computed, and I would certainly say that that computation, however it was implemented, was conscious. But not the GIF itself, no.
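To make the contrast concrete, here's a minimal sketch (the update rule and frame data are toy stand-ins, not a real neural model): perturbing a frame of a precomputed recording changes nothing downstream, while perturbing the live state of a simulation propagates.

```python
# Toy contrast between a recording and a simulation.

def update(state):
    # fixed toy dynamics standing in for "neural rules"
    return [state[i - 1] ^ state[i] for i in range(len(state))]

# Build a "GIF": precomputed frames played back verbatim.
frames = [[1, 0, 1, 1]]
for _ in range(3):
    frames.append(update(frames[-1]))

frames[1][0] ^= 1     # flip a "pixel" in frame 1...
print(frames[2])      # ...frame 2 is unchanged: no counterfactual dependence

# The simulation, by contrast, carries the perturbation forward.
state = [1, 0, 1, 1]
state = update(state)
state[0] ^= 1         # same perturbation, applied to the live state
print(update(state))  # the next state differs: real dependence
```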

Anyway, beyond that, we're obviously working from very different intuitions, because I don't see the China Brain or Turing machine examples as reductios at all - I'm perfectly willing to accept that those entities would be conscious.

Replies from: brainmaps, Kyre
comment by brainmaps · 2015-05-02T16:40:28.003Z · LW(p) · GW(p)

It's unclear why counterfactual dependencies would be necessary for machine functionalism, but OK, let's include them in the GIF example. Take the first GIF as the initial condition and let the (binary) state of pixel Xi at time step t take the form f(i, X1(t-1), X2(t-1), ..., Xn(t-1)). Does this make it any more plausible that the animated GIF has human consciousness? If you think the GIF has human consciousness, then what is the significance of the fact that the system of equations is generally underdetermined? Personally, I don't find it plausible that the GIF has human consciousness, but I agree that since it's an extreme example, my intuition could be wrong. Unfortunately, this appears to mean that we must agree to disagree on the question of the validity of machine functionalism - or is there another way forward?

Replies from: DanielLC
comment by DanielLC · 2015-05-15T01:56:07.559Z · LW(p) · GW(p)

I'm not sure I understand you. What do you mean by the system of equations being underdetermined? Are you saying to take the same animated GIF and not alter the actual physics in any way, and just refer to it differently? That obviously doesn't change anything. You need to alter the causal structure.

My problem with non-machine functionalism is that any reason we have to say we're conscious would equally apply to a simulation. If you one day found out that you were really a simulation, would you decide your consciousness is an illusion, or figure you had it backwards about which one is conscious: that it's the simulations that are conscious and the real people that are p-zombies?

comment by Kyre · 2015-05-02T14:53:28.584Z · LW(p) · GW(p)

Thank you, you saved me a lot of typing. No amount of straight copying of that GIF will generate a conscious experience; but if you print out the first frame and give it to a person with a set of rules for simulating neural behaviour and tell them to calculate the subsequent frames into a gigantic paper notebook, that might generate consciousness.

comment by V_V · 2015-05-02T16:53:03.182Z · LW(p) · GW(p)

Thanks for your answers.

Does an animated GIF possess human consciousness?

No, but as others pointed out, an animated GIF is not a simulation of the thing it represents.

Just to be clear, when we talk of simulations of a computational system, we mean something that computes the same input-to-output mapping as the system being simulated - the same mathematical function (or, more precisely, the same posterior, if the system is stochastic).

An animated GIF doesn't respond to inputs, therefore it doesn't compute the same function that the brain computes.

Think of playing an old console video game on an emulator vs. watching a video recorded from the console screen of somebody playing that game. Clearly the emulator and the video are very different objects: you can legitimately say that the emulator is simulating the game; furthermore, you can say that the emulator is actually running the game. "Being a video game" is a property of certain patterns of input-output mappings, and this property is invariant (up to a performance overhead) under simulation; it is independent of the physical substrate. The video recording of somebody playing a game, on the other hand, can't be said to be a game, or even the simulation of a game.
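As a minimal sketch of this input/output notion of simulation (both implementations are hypothetical stand-ins), here are two structurally very different realizations of the same mathematical function:

```python
# Two different "substrates" computing the same input-to-output mapping.

def adder_arithmetic(x, y):
    return x + y

def adder_increments(x, y):
    # a clumsy, structurally different realization: repeated increments
    result = x
    for _ in range(y):
        result += 1
    return result

# Same mapping on all tested inputs -> one simulates the other,
# in the input/output sense used here.
assert all(adder_arithmetic(x, y) == adder_increments(x, y)
           for x in range(20) for y in range(20))
```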

To V_V, I don't think it has human consciousness. If I answer otherwise, I'm pressed to acknowledge that well-coded chatbots have human consciousness, which is absurd. With regard to what "conscious" means in epistemic terms, I don't know, but I do know that the Turing test is insufficient because it only deals with appearances and it's easy to be duped.

Well-coded chatbots don't come anywhere close to simulating the linguistic behavior of humans. There are claims now and then that some chatbot passed the Turing test, but if you look past the hype, all these claims are fundamentally false. Here is Scott Aaronson's take on the latest of these claims.

Seriously, if we really had computer programs passing the Turing test, we would probably also have computer programs working as engineers or lawyers.

About updating posterior beliefs, I would have to know the basis for consciousness, which I acknowledge uncertainty over.

I'm asking how you understand the term at an operational level right now.

Let me introduce you to Foo. Foo may be a human, an animal, a plant, a non-living object, etc. It may be an artifact, or a naturally occurring object, or a combination of both. It may be in a normal state for its kind of object or an abnormal state (e.g., in a coma, out of fuel, out of battery charge); I won't tell you.
If I ask you questions about the behavior of Foo, e.g. "Does Foo move if prodded with a stick?", "Can Foo find the exit of a maze?", "How does Foo behave in front of a mirror?", "Can you train Foo to push a button when a certain light goes on?", "Can you trade with Foo?", "Can you discuss philosophy with Foo?", you can't answer them. In Bayesian terms, your subjective probability distribution over possible empirical observations about Foo has a large entropy.

Now I tell you that Foo is conscious. I won't tell you what I mean by "conscious"; I'm leaving that to your interpretation.
I bet that now you can answer many of the questions above, if not with certainty at least with some significant confidence. In Bayesian terms, after conditioning on the piece of evidence "Foo is conscious", the entropy of your subjective probability distribution over possible empirical observations about Foo became smaller.
Do you agree with that? If so, how do you reconcile that with non-functionalism?
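Here is a toy numerical version of that entropy claim (all probabilities are invented for illustration):

```python
import math

# Toy numbers only: a prior over four kinds of object Foo could be,
# and the probability each kind "moves when prodded".
kinds = {"rock": 0.25, "plant": 0.25, "dog": 0.25, "human": 0.25}
moves_if_prodded = {"rock": 0.0, "plant": 0.05, "dog": 0.95, "human": 0.95}
conscious = {"rock": 0.0, "plant": 0.0, "dog": 1.0, "human": 1.0}

def entropy(p):
    return -sum(q * math.log2(q) for q in p if q > 0)

def p_moves(prior):
    return sum(prior[k] * moves_if_prodded[k] for k in prior)

p = p_moves(kinds)
print(entropy([p, 1 - p]))   # high uncertainty about Foo's behavior

# Condition on "Foo is conscious" (Bayes: zero out non-conscious kinds).
post = {k: kinds[k] * conscious[k] for k in kinds}
z = sum(post.values())
post = {k: v / z for k, v in post.items()}
p = p_moves(post)
print(entropy([p, 1 - p]))   # much lower: the label carried information
```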

Replies from: brainmaps
comment by brainmaps · 2015-05-03T11:50:06.738Z · LW(p) · GW(p)

No, but as others pointed out, an animated GIF is not a simulation of the thing it represents.

The animated GIF, as I originally described it, is an "imitation of the operation of a real-world process or system over time", which is the verbatim definition (from Wikipedia) of a simulation. Counterfactual dependencies are not needed for imitation.

Just to be clear, when we are talking of simulations of a computational system, we mean something that computes the same input to output mapping of the system that is simulated, the same mathematical function

Ok, let's go with this definition. As I understand it then, machine functionalism is not about simulation (as imitation) per se but rather about recreating the mathematical function that the human brain is computing. Is this correct?

An animated GIF doesn't respond to inputs, therefore it doesn't compute the same function that the brain computes.

A brain doesn't necessarily respond to inputs, but sure, we can require that the simulation responds to inputs, though I find this requirement a bit strange.

"Being a video game" is a property of certain patterns of input-output mappings, and this property is invariant (up to a performance overhead) under simulation, it is independent on the physical substrate.

It sounds like a beautiful idea, being invariant under a simulation that is independent of substrate.

There are claims now and then that some chatbot passed the Turing test, but if you look past the hype, all these claims are fundamentally false.

I agree.

About updating posterior beliefs, I would have to know the basis for consciousness, which I acknowledge uncertainty over.

I'm asking how you understand the term at operational level right now.

In short, it's a combination of a Turing test and the possession of a functioning human brain-like structure. If an entity exhibits awake human-like behavior (i.e., by passing the Turing test or a suitable approximation) and possesses a living human brain (inferred from visual inspection of their biological form) or a human brain-like equivalent (which I've yet to see, except possibly in some non-human primates), then I generally conclude it has human or human-like consciousness.

When I consider your comment here with your previous comment above that "definitions of consciousness which are not invariant under simulation have little epistemic usefulness", I think I understand your argument better. However, the epistemic argument you're advancing is a fallacy because you're assuming what you set out to demonstrate: if I run an accurate simulation of a human brain on a computer and ask it whether it has human consciousness, of course it will say 'yes', and it will even pass the Turing test, because we're assuming it's an accurate simulation of a human brain. The reasoning is circular and does not actually inform us whether the simulation is conscious. So your "epistemic usefulness" appears irrelevant to the question of whether machine functionalism is correct. Or am I missing something?

My general question to the machine functionalists here is, why are you assuming it is sufficient to merely simulate the human brain to recreate its conscious experience? The human brain is a chemico-physical system and such systems are generally explained in terms of causal structures involving physical or chemical entities, though such explanations (including simulations) are never mistaken for the thing itself. So why should human consciousness, which is a part of the natural world and whose basis we know first-hand involves the human brain, be any different?

If the question here is, is consciousness a substrate-independent function that the brain computes or is it associated with a unique type of physico-chemical causal (space-time) structure, then I would say the latter is more likely due to the past successes in physics and chemistry in explaining natural phenomena. In any event, our knowledge of the basis of consciousness is still highly speculative. I can attempt further reductio ad absurdums with machine functionalism involving ever more ridiculous scenarios but will probably not convince anyone who has taken the requisite leap of faith.

Replies from: ahbwramc, hairyfigment
comment by ahbwramc · 2015-05-03T20:03:08.800Z · LW(p) · GW(p)

You seem to be discussing in good faith here, and I think it's worth continuing so we can both get a better idea of what the other is saying. I think differing non-verbal intuitions drive a lot of these debates, and so to avoid talking past one another it's best to try to zoom in on intuitions and verbalize them as much as possible. To that end (keeping in mind that I'm still very confused about consciousness in general): I think a large part of what makes me a machine functionalist is an intuition that neurons...aren't that special. Like, you view the China Brain argument as a reductio because it seems so absurd. And I guess I actually kind of agree with that, it does seem absurd that a bunch of people talking to one another via walkie-talkie could generate consciousness. But it seems no more absurd to me than consciousness being generated by a bunch of cells sending action potentials to one another. Why should we have expected either of those processes to generate consciousness? In both cases you just have non-mental, syntactical operations taking place. If you hadn't heard of neurons, wouldn't they also seem like a reductio to you?

What it comes down to is that consciousness seems mysterious to me. And (on an intuitive level) it kind of feels like I need to throw something "special" at consciousness to explain it. What kind of special something? Well, you could say that the brain has the special something, by virtue of the fact that it's made of neurons. But that doesn't seem like the right kind of specialness to me, somehow. Yes, neurons are special in that they have a "unique" physico-chemical causal structure, but why single that out? To me that seems as arbitrary as singling out only specific types of atoms as being able to instantiate consciousness (which some people seem to do, and which I don't think you're doing, correct?). It just seems too contingent, too earth-specific an explanation. What if you came across aliens that acted conscious but didn't have any neurons or a close equivalent? I think you'd have to concede that they were conscious, wouldn't you? Of course, such aliens may not exist, so I can't really make an argument based on that. But still - really, the answer to the mystery of consciousness is going to come down to the fact that particular kinds of cells evolved in earth animals? Not special enough! (or so say my intuitions, anyway)

So I'm led in a different direction. When I look at the brain and try to see what could be generating consciousness, what pops out to me is that the brain does computations. It has a particular pattern, a particular high-level causal structure that seems to lie at the heart of its ability to perform the amazing mental feats it does. The computations it performs are implemented on neurons, of course, but that doesn't seem central to me - if they were implemented on some other substrate, the amazing feats would still get done (Shakespeare would still get written, Fermat's Last Theorem would still get proved). What does seem central, then? Well, the way the neurons are wired up. My understanding (correct me if I'm wrong) is that in a neural network such as the brain, any given neuron fires iff the inhibitory and excitatory inputs feeding into the neuron exceed some threshold. So roughly speaking, any given brain can be characterized by which neurons are connected to which other neurons, and what the weights of those connections are, yes? In that case (forgetting consciousness for a moment), what really matters in terms of creating a brain that can perform impressive mental feats is setting up those connections in the right way. But that just amounts to defining a specific high-level causal structure - and yes, that will require you to define a set of counterfactual dependencies (if neurons A and B had fired, then neuron C wouldn't have fired, etc.). I was kind of surprised that you were surprised that we brought up counterfactual dependence earlier in the discussion. For one, I think it's a standard-ish way of defining causality in philosophy (it's at least the first section in the Wikipedia article, anyway, and it's the definition that makes the most sense to me). But even beyond that, it seems intuitively obvious to me that your brain's counterfactual dependencies are what make your brain, your brain. If you had a different set of dependencies, you would have to have different neuronal wirings and therefore a different brain.
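For concreteness, a minimal sketch of that threshold-unit picture (the weights and wiring are made-up toy values):

```python
import numpy as np

weights = np.array([[0.0, 0.6, -0.4],    # row i: weights onto neuron i
                    [0.7, 0.0,  0.5],
                    [-0.3, 0.8, 0.0]])
threshold = 0.5

def step(firing):
    """Each neuron fires iff its summed weighted input exceeds threshold."""
    return (weights @ firing > threshold).astype(float)

state = np.array([1.0, 0.0, 1.0])
for _ in range(3):
    state = step(state)
    print(state)

# The counterfactuals live in the wiring: change a weight and the
# trajectory under the same inputs changes with it.
```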

Anyway, this whole business of computation and higher-level causal structure and counterfactual dependencies: that does seem to have the right kind of specialness to me to generate consciousness. It's hard for me to break the intuition down further than that, beyond saying that it's the if-then pattern that seems like the really important thing here. I just can't see what else it could be. And this view does have some nice features - if you wind up meeting apparently-conscious aliens, you don't have to look to see if they have neurons. You can just look to see if they have the right if-then pattern in their mind.

To answer your question about simulations not being the thing that they're simulating: I think the view of consciousness as a particular causal pattern kind of dissolves that question. If you think the only thing that matters in terms of creating consciousness is that there be a particular if-then causal structure (as I do), then in what sense are you "simulating" the causal structure when you implement it on a computer? It's still the same structure, still has the same dependencies. That seems just as real to me as what the brain does - you could just as easily say that neurons are "simulating" consciousness. Essentially machine functionalists think that causal structure is all there is in terms of consciousness, and under that view the line between something being a "simulation" versus being "real" kind of disappears.

Does that help you understand where I'm coming from? I'd be interested to hear where in that line of arguments/intuitions I lost you.

Replies from: brainmaps
comment by brainmaps · 2015-05-05T15:33:00.240Z · LW(p) · GW(p)

Thank you for the thoughtful reply.

I think a large part of what makes me a machine functionalist is an intuition that neurons...aren't that special. Like, you view the China Brain argument as a reductio because it seems so absurd. And I guess I actually kind of agree with that, it does seem absurd that a bunch of people talking to one another via walkie-talkie could generate consciousness. But it seems no more absurd to me than consciousness being generated by a bunch of cells sending action potentials to one another.

Aren't neurons special? At the very least, they're mysterious. We're far from understanding them as physico-chemical systems. I've had the same reaction and incredulity as you to the idea that interacting neurons can 'generate consciousness'. The thing is, we don't understand individual neurons. Yes, neurons compute. The brain computes. But so does every physical system we encounter. So why should computation be the defining feature of consciousness? It's not obvious to me. In the end, consciousness is still a mystery and machine functionalism requires a leap of faith that I'm not prepared to take without convincing evidence.

But even beyond that, it seems intuitively obvious to me that your brain's counterfactual dependencies are what make your brain, your brain.

Yes, counterfactual dependencies appear necessary for simulating a brain (and other systems) but the causal structure of the simulated objects is not necessarily the same as the causal structure of the underlying physical system running the simulation, which is my objection to Turing machines and Von Neumann architectures.

you could just as easily say that neurons are "simulating" consciousness. Essentially machine functionalists think that causal structure is all there is in terms of consciousness, and under that view the line between something being a "simulation" versus being "real" kind of disappears.

It's an interesting thought, and I generally agree with this. The question seems to come down to defining causal structure. The problem is that the causal structure of the computer system running a simulation of an object does not appear anything like that of the object. A Turing machine running a human brain simulation appears to have a very different causal structure compared with the human brain.

comment by hairyfigment · 2015-05-04T06:58:28.137Z · LW(p) · GW(p)

So, one reason I pointed you at orthonormal's sequence is that if you read all those posts they seem likely to trigger different intuitions for you.

I would also ask if you think that Aristotle - had he only been smarter - could have figured out his "unique type of physico-chemical causal (space-time) structure" from pure introspection. A negative answer would not automatically prove functionalism. We know of other limits on knowledge. But it does show that the thought experiment in which you are currently a simulation is at least as 'conceivable' as the thought experiment of a zombie without consciousness and perhaps even your scenarios. Furthermore, the mathematical examples of limits on self-knowledge actually point towards structure being independent of 'substrates'. That's how computer science started in the first place.

comment by hairyfigment · 2015-05-02T06:54:43.871Z · LW(p) · GW(p)

You may want to look at the short sequence that starts here.

Replies from: brainmaps
comment by brainmaps · 2015-05-02T16:51:11.345Z · LW(p) · GW(p)

Thanks. I'm not sure if you were pointing me in that direction for a specific reason, but I found commenter pjeby's explanation for the ineffability of qualia insightful.

comment by jacob_cannell · 2015-05-04T18:28:51.005Z · LW(p) · GW(p)

Since we are now "simulating" the neural activities of all the neurons in the human brain, we might expect that the animated GIF possesses human consciousness...

A GIF is just an image, it is not a simulation. The appeal of the GIF thought experiment relies on a misunderstanding of computation and simulation.

Take a photo of a dolphin swimming - can the photo swim? Of course not. But imagine scanning a perfect nanometer resolution 3D image of a dolphin and using that data to construct an artificial robotic dolphin. Can the robot dolphin swim? Obviously - yes, if constructed correctly. Can the 3D image swim by itself ? No. Now replace dolphin with brain, and swim with think.

Thinking is a computational process, and computation is physical, like swimming - it involves energy, mass, and state transitions. Physics is computational.

You state that "The 'causal structure' is just the key algorithmic computations" but this is not quite right.

Yes it is - causal structure is just computational structure, there is no difference.

The algorithmic computations can be instantiated in many different causal structures but only some will

Any sentence of this form is provably false, due to the universality of computation and multiple realizability. Any algorithmic computation can be instantiated in any universal computer and is always the same.

Replies from: brainmaps
comment by brainmaps · 2015-05-05T14:52:43.235Z · LW(p) · GW(p)

The algorithmic computations can be instantiated in many different causal structures but only some will

Any sentence of this form is provably false, due to the universality of computation and multiple realizability.

This is incorrect because the causal structure of a Turing machine simulating a human brain is very different from an actual human brain. Of course, you can redefine causality in terms of "simulation causality" but the underlying causal structure of the respective systems will be very different.

Yes it is - causal structure is just computational structure, there is no difference.

If you accept Wheeler's "it from bit" argument, then anything can be instantiated with information. But at this point, you're veering far from science.

Replies from: jacob_cannell
comment by jacob_cannell · 2015-05-05T16:09:11.692Z · LW(p) · GW(p)

This is incorrect because the causal structure of a Turing machine simulating a human brain is very different from an actual human brain.

There are at least two causal-structure levels in a computational system: the physical substrate level and the program level (and potentially more, with multiple levels of simulation). A computational system is one that can organize its energy flow (state transitions in the substrate) in a very particular way so as to realize/implement any computable causal structure at the program/simulation level.

The causal structure at the substrate level is literally factored out - it does not matter (beyond performance constraints). Universal computability is not a theory at this point - it is a proven hard true fact.

causal structure of a Turing machine simulating a human brain is very different from an actual human brain.

This statement contravenes universal computability, and is therefore false. A universal computer can instantiate any other causal structure. Remember: the causal structure at the substrate level is irrelevant due to the universality of computation. Causal structures can be embedded within other causal structures (multiple realizability).

A brain is just matter, and more specifically it is just an electromechanical biological computer. It is also just a conventional irreversible computer which dissipates energy along its wires and junctions according to the same exact physical constraints that face modern electronic computers. It can be simulated because anything can be simulated!

Let's cut to the chase: are there any empirical predictions where your viewpoint disagrees with functionalism?

For example, I predict that within a decade or two, computers with about 10^14 ops will run human mind simulations, and these sims will pass any and all objective tests for human intelligence, self-awareness, consciousness, etc.

Furthermore, you won't be able to tell the difference between a human controlling a humanoid avatar in virtual reality and an AI controlling a humanoid avatar (imitating human control).

People will just accept that sims are conscious/self-aware for the exact same reasons that we reject solipsism.

Replies from: brainmaps
comment by brainmaps · 2015-05-05T20:54:33.881Z · LW(p) · GW(p)

causal structure of a Turing machine simulating a human brain is very different from an actual human brain.

This statement contravenes universal computability, and is therefore false. A universal computer can instantiate any other causal structure. Remember: the causal structure at the substrate level is irrelevant due to the universality of computation. Causal structures can be embedded within other causal structures (multiple realizability).

My statement does not contravene universal computability since I'm assuming a Turing machine can simulate a human brain. Let me try another approach: Look at the space-time diagram of a Turing machine adding two numbers and compare with the space-time diagram of a neuron performing a similar summation. The causal structures in the space-time diagrams are very different. Yes, you can simulate a causal structure, but this is not the same thing as the causal structure of the underlying physical substrate performing the simulation.

It can be simulated because anything can be simulated!

Anything can be simulated imperfectly. Take the weather or the C. elegans nervous system.

are there any empirical predictions where your viewpoint disagrees with functionalism?

I'm just exhibiting skepticism over claims from machine functionalism relating to Turing (and related) machine consciousness. I'm not promoting a specific viewpoint.

I predict that within a decade or two, computers with about 10^14 ops will run human mind simulations, and these sims will pass any and all objective tests for human intelligence, self-awareness, consciousness, etc.

There are no objective tests for consciousness. Of course you can re-define it in terms of self-awareness but this is not the same.

People will just accept that sims are conscious/self-aware for the exact same reasons that we reject solipsism.

Have we rejected solipsism? Certainly panpsychism is consistent with it and this appears untouched in consciousness research.

Replies from: jacob_cannell
comment by jacob_cannell · 2015-05-06T06:04:40.620Z · LW(p) · GW(p)

My statement does not contravene universal computability since I'm assuming a Turing machine can simulate a human brain.

Well, if you assume that, then you are already most of the way to functionalism, but I suspect we may be talking about different types of simulations.

Let me try another approach: Look at the space-time diagram of a Turing machine adding two numbers and compare with the space-time diagram of a neuron performing a similar summation.

Neurons perform analog summation, so the space-time diagram or causal structure is stochastic/statistical rather than deterministic (addition over real-number distributions rather than digital addition). My use of the term 'simulation' encompasses probabilistic simulation, which entails matching the statistical distribution over state transitions rather than deterministic simulation.

Anything can be simulated imperfectly. Take the weather or C. elegans nervous system.

Neural analog computational systems can be simulated perfectly in a probabilistic sense when you can recreate the exact conditional probability distributions that govern spike events. You can't necessarily predict the exact actions the brain will output (due to noise effects), but you can - in theory - predict actions from the exact correct distribution. At the limits of simulation we can predict exact samples from our multiverse distribution, rather than predict the exact future of our particular (unknowable) branch.
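A minimal sketch of what "perfect in the probabilistic sense" means here (the sigmoid rate function is a made-up stand-in for real spiking biophysics):

```python
import math
import random

def spike_probability(input_current):
    """Hypothetical rate function standing in for real biophysics."""
    return 1.0 / (1.0 + math.exp(-input_current))

def noisy_neuron(input_current):
    # stands in for the wetware neuron, intrinsic noise included
    return random.random() < spike_probability(input_current)

def simulated_neuron(input_current):
    # samples from the same conditional distribution: "perfect" in the
    # probabilistic sense, even though individual runs differ
    return random.random() < spike_probability(input_current)

trials = 100_000
rate_real = sum(noisy_neuron(0.5) for _ in range(trials)) / trials
rate_sim = sum(simulated_neuron(0.5) for _ in range(trials)) / trials
print(rate_real, rate_sim)  # agree up to sampling error, as do all statistics
```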

Simulation of intelligent minds is fundamentally different than weather simulation - for the weather we are interested in the exact outcome in our specific universe. That would be comparable to simulating the exact thoughts of a particular human mind in some situation - which in general is computationally intractable (and unimportant for AI).

There are no objective tests for consciousness. Of course you can re-define it in terms of self-awareness but this is not the same.

Science is concerned with objective reality. A definition of consciousness which precludes objective testing is outside the realm of scientific inquiry at best, and pseudo-science at worst.

In common usage the term consciousness refers to objective reality. Sentences of the form " I was conscious of X", or "Y rendered Bob unconscious", or "Perhaps at a subconscious level" all suggest a common meaning involving objectively verifiable computations.

We know that consciousness is the particular mental state arising from various computations coordinated across some hundreds of major brain regions. We know that certain drugs can cause loss of consciousness even while neural activity persists. Consciousness depends on precise synchronized coordination between major brain circuits - a straightforward result of the brain being a hybrid digital/analog computer.

We aren't so far away from being able to objectively detect consciousness via brain scanning and some form of statistical inference - see this interesting work for example (using a clever compressibility or k-complexity perturbation measure).

Replies from: brainmaps
comment by brainmaps · 2015-05-06T18:00:06.082Z · LW(p) · GW(p)

Neurons perform analog summation, so the space-time diagram or causal structure is stochastic/statistical rather than deterministic

Surely you realize that quibbling over the use of analog vs digital neural summation in my toy example does not address my main argument.

Neural analog computational systems can be simulated perfectly in a probabilistic sense

Anything can be simulated perfectly (and trivially) in a probabilistic sense.

There are no objective tests for consciousness. Of course you can re-define it in terms of self-awareness but this is not the same.

A definition of consciousness which precludes objective testing is outside the realm of scientific inquiry at best, and pseudo-science at worst.

If we knew the basis for consciousness, we would have objective tests. It's possible that studying the brain's structural and connectional organization in detail will provide the clues we need to develop better informed opinions about the basis of consciousness.

This is my final post and I would like to thank everyone for the discussion. If anyone is interested in developing autotracing and autosegmentation programs for connectomics and neural circuit reconstruction in whole-brain volume electron microscopy datasets, please email me at brainmaps at gmail dot com or visit http://connectomes.org for more information. Thanks again.

comment by V_V · 2015-04-30T21:18:11.692Z · LW(p) · GW(p)

The confusion appears to result from the fact that I'm not talking about the pseudo-causal structure of the modeling units comprising the simulation, but rather the causal structure of the underlying physical basis of the computer running the simulation.

The natural objection is, why would the physical substrate matter?

Let's assume you replace somebody's brain with a Von Neumann computer running a simulation of that person's brain. You get something that behaves like a conscious person, and even claims to be a conscious person if asked. Would you say that this thing is not conscious?

If you think it is not conscious, then what does "conscious" actually mean in epistemic terms? If I tell you that X is conscious, how do you update your posterior beliefs on the outcomes of future observations about X?

comment by jacob_cannell · 2015-04-30T21:09:27.783Z · LW(p) · GW(p)

Shawn - firstly, congratulations on your BROPA research and publication; it is likely to have high future impact.

I am already assuming the computer simulation is mimicking the brain's activity and computations. My point is that a computer works very differently from a brain which is evident in differences in its underlying causal structure.

Universal computation necessarily implies/requires multiple realizability of causal systems and thus functionalism.

Part of the confusion stems from the use of the term 'consciousness' and all of its associated baggage. So let us taboo the word and use self-awareness instead. Self-awareness conveys most of the same meaning, but without the connotations (just as we may prefer the term 'mind' over 'soul').

Self-awareness is a specific key information processing capability that some intelligent systems/agents possess. Some animals (dolphins, monkeys, humans, etc.) demonstrate general self-awareness through their ability to recognize themselves in mirrors. Self-recognition in a mirror test requires a specific ability to construct a predictive model of one's self as an object embedded in the world.

The other day while on a walk I came upon a songbird that was repeatedly attacking a car (with a short hop ramming maneuver). I was puzzled until I realized that the bird was specifically attacking the side view mirror. I watched it for about 10 minutes and it just did the same attack over and over again. The next day I saw it attacking a different car in about the same location.

Humans possess a more advanced form of self-awareness related to our ability to use language to communicate. Natural linguistic communication is very complex - it requires a sophisticated capability to model not only one's self but other self-aware agents as well, along with those other agents' models of oneself and other agents, and so on recursively.

Self-awareness isn't a binary concept - obviously it comes in many varieties and flavours, such that individual humans are not self-aware in exactly the same way. Nonetheless, these differences are tiny in comparison to those that separate typical human self-awareness from feline SA or the rudimentary SA of current artificial agents.

Self-awareness is just a computational capability, and once we scale up ANNs to the size and complexity of the human brain, we will prove beyond any doubt that machines can possess human level self-awareness.

I am already assuming the computer simulation is mimicking the brain's activity and computations. My point is that a computer works very differently from a brain which is evident in differences in its underlying causal structure.

If a computer simulation could actually mimic all of the key algorithmic computations in the brain necessary for human-level cognition, intelligence, self-awareness, attention, etc. - then the computer simulation would be essentially indistinguishable from a human mind (as embodied, say, in virtual reality).

The 'causal structure' is just the key algorithmic computations. It is the 'what'. The 'how' of those computations is the implementation issue - and there are literally - provably - an infinite number of implementations/realizations for any specific causal computational structure.

An Nvidia GPU works differently than an AMD GPU, and both are quite different from an Intel CPU or a hybrid analog/digital neural ASIC (similar to the brain) or a memristor neural ASIC. Nonetheless, a brain simulation could run on any of those architectures and it would work just the same.