Consciousness of simulations & uploads: a reductio
post by simplicio · 2010-08-21T20:02:20.067Z · LW · GW · Legacy · 142 comments
Related articles: Nonperson predicates, Zombies! Zombies?, & many more.
ETA: This argument appears to be a rehash of the Chinese room, which I had previously thought had nothing to do with consciousness, only intelligence. I nonetheless find this one instructive in that it makes certain things explicit which the Chinese room seems to gloss over.
ETA2: I think I may have made a mistake in this post. That mistake was in realizing what ontology functionalism would imply, and thinking that ontology was too weird to be true. An argument from incredulity, essentially. Double oops.
Consciousness belongs to a class of topics I think of as my 'sore teeth.' I find myself thinking about them all the time: in the middle of bathing, running, cooking. I keep thinking about consciousness because no matter how much I read on the subject, I find I am still confused.
Now, to the heart of the matter. A major claim on which the desirability of uploading (among other things) depends, is that the upload would be conscious (as distinct from intelligent). I think I found a reductio of this claim at about 4:00 last night while staring up at my bedroom ceiling.
Simulating a person
The thought experiment that is supposed to show us that the upload is conscious goes as follows. (You can see an applied version in Eliezer's bloggingheads debate with Massimo Pigliucci, here. I also made a similar argument to Massimo here.)
Let us take an unfortunate member of the public, call her Simone, and simulate her brain (plus inputs and outputs along the nervous system) on an arbitrarily powerful philosophical supercomputer (this also works if you simulate her whole body plus surroundings). This simulation can be at any level of complexity you like, but it's probably best if we stick to an atom-by-atom (or complex amplitudes) approach, since that leaves less room for doubt.
Since Simone is a lawful entity within physics, there ought to be nothing in principle stopping us from doing so, and we should get behavioural isomorphism between the simulation and the biological Simone.
Now, we can also simulate inputs and outputs to and from the visual, auditory and language regions of her brain. It follows that with the right expertise, we can ask her questions - questions like "Are you experiencing the subjective feeling of consciousness you had when you were in a biological body?" - and get answers.
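To make the shape of the setup concrete, here is a toy sketch in Python. Everything in it is a made-up placeholder (the state vector, the update rule, the "auditory" and "speech" regions, the sizes); nothing pretends to be real neurophysics. The only point is that "encode a question into some cells, step the deterministic physics, read an answer off some other cells" is an ordinary loop.

```python
# Toy sketch of the thought experiment -- NOT a real brain simulator.
# The placeholder rule produces gibberish; only the structure matters.

def step(state, sensory_input):
    """One deterministic tick of the simulated physics (placeholder rule)."""
    return [(s * 31 + x + i) % 97 for i, (s, x) in enumerate(zip(state, sensory_input))]

def encode_question(text, size):
    """Write a question into the cells standing in for the auditory region."""
    return [ord(c) % 97 for c in text.ljust(size)[:size]]

def decode_output(state):
    """Read the cells standing in for the speech/motor output region."""
    return "".join(chr(ord("a") + s % 26) for s in state[:16])

state = [7] * 64                      # stand-in for Simone's initial brain state
question = encode_question("Are you conscious?", 64)
for _ in range(1000):                 # run the simulated physics forward
    state = step(state, question)
print(decode_output(state))           # some determinate answer comes out
```

The pencil-and-paper version below is exactly this loop, evaluated by hand.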
I'm almost certain she'll say "Yes." (Take a moment to realize why the alternative, if we take her at her word, implies Cartesian dualism.)
The question is, do we believe her when she says she is conscious? 10 hours ago, I would have said "Of course!" because the idea of a simulation of Simone that is 100% behaviourally isomorphic and yet unconscious seemed very counterintuitive; not exactly a p-zombie by virtue of not being atom-by-atom identical with Simone, but definitely in zombie territory.
A different kind of simulation
There is another way to do this thought experiment, however, and it does not require that infinitely powerful computer the philosophy department has (the best investment in the history of academia, I'd say).
(NB: The next few paragraphs are the crucial part of this argument.)
Observe that ultimately, the computer simulation of Simone above would output nothing but a huge sequence of zeroes and ones, process them into visual and audio outputs, and spit them out of a monitor and speakers (or whatever).
So what's to stop me just sitting down and crunching the numbers myself? All I need is a stupendous amount of time, a lot of pencils, a lot (!!!) of paper, and if you're kind to me, a calculator. Atom by tedious atom, I'll simulate inputs to Simone's auditory system asking her if she's conscious, then compute her (physically determined) answer to that question.
Take a moment to convince yourself that there is nothing substantively different between this scenario and the previous one, except that it contains approximately 10,000 times the maximum safe dosage of in principle.
Once again, Simone will claim she's conscious.
...Yeah, I'm sorry, but I just don't believe her.
I don't claim certain knowledge about the ontology of consciousness, but if I can summon forth a subjective consciousness ex nihilo by making the right series of graphite squiggles (which don't even mean anything outside human minds), then we might as well just give up and admit consciousness is magic.
Pigliucci is going to enjoy watching me eat my hat.
What was our mistake?
I've thought about this a lot in the last ~10 hours since I came up with the above.
I think when we imagined a simulated human brain, what we were picturing in our imaginations was a visual representation of the simulation, like a scene in Second Life. We saw mental images of simulated electrical impulses propagating along simulated neurons, and the cause & effect in that image is pretty clear...
...only it's not. What we should have been picturing was a whole series of logical operations happening all over the place inside the computer, with no physical relation between them and the represented basic units of the simulation (atoms, or whatever).
Basically, the simulated consciousness was isomorphic to biological consciousness in a similar way to how my shadow is isomorphic to me. Just like the simulation, if I spoke ASL I could get my shadow to claim conscious awareness, but it wouldn't mean much.
In retrospect, it should have given us pause that the physical process happening in the computer - zeroes and ones propagating along wires & through transistors - can only be related to consciousness by virtue of outsiders choosing the right interpretations (in their own heads!) for the symbols being manipulated. Maybe if you interpret that stream of zeroes and ones differently, it outputs 5-day weather predictions for a city that doesn't exist.
Another way of putting it is that, if consciousness is "how the algorithm feels from the inside," a simulated consciousness is just not following the same algorithm.
But what about the Fading Qualia argument?
The fading qualia argument is another thought experiment, this one by David Chalmers.
Essentially, we strap you into a chair and open up your skull. Then we replace one of your neurons with a silicon-based artificial neuron. Don't worry, it still outputs the same electrical signals along the axons; your behaviour won't be affected.
Then we do this for a second neuron.
Then a third, then a kth... until your brain contains only artificial neurons (N of them, where N ≈ 10^11).
Now, what happens to your conscious experience in this process? A few possibilities arise:
- Conscious experience is initially the same, then shuts off completely at some discrete number of replaced neurons: maybe 1, maybe N/2. Rejected by virtue of being ridiculously implausible.
- Conscious experience fades continuously as k → N. Certainly more plausible than option 1, but still very strange. What does "fading" consciousness mean? Half a visual field? A full visual field with less perceived light intensity? Having been prone to (anemia-induced) loss of consciousness as a child, I can almost convince myself that fading qualia make some sort of sense, but not really...
- Conscious experience is unaffected by the transition.
142 comments
comment by jpet · 2010-08-21T21:00:38.712Z · LW(p) · GW(p)
I don't see how this differs at all from Searle's Chinese room.
The "puzzle" is created by the mental picture we form in our heads when hearing the description. For Searle's room, it's a clerk in a room full of tiles, shuffling them between boxes; for yours, it's a person sitting at a desk scratching on paper. Since the consciousness isn't that of the human in the room, where is it? Surely not in a few scraps of paper.
But plug in the reality for how complex such simulations would actually have to be, if they were to actually simulate a human brain. Picture what the scenarios would look like running on sufficient fast-forward that we could converse with the simulated person.
You (the clerk inside) would be utterly invisible; you'd live billions of subjective years for every simulated nanosecond. And, since you're just running a deterministic program, you would appear no more conscious to us than an electron appears conscious as it "runs" the laws of physics.
What we might see instead is a billion streams of paper, flowing too fast for the eye to follow, constantly splitting and connecting and shifting. Cataracts of fresh paper and pencils would be flowing in, somehow turning into marks on the pages. Reach in and grab a couple of pages, and we could see how the marks on one seemed to have some influence on those nearby, but when we try to follow any actual stimulus through to a response we get lost in a thousand divergent flows, that somehow recombine somewhere else moments later to produce an answer.
It's not so obvious to me that this system isn't conscious.
comment by inklesspen · 2010-08-22T01:19:47.619Z · LW(p) · GW(p)
It is, of course, utterly absurd to think that meat could be the substrate for true consciousness. And what if Simone chooses herself to spend eons simulating a being by hand? Are we to accept the notion of simulations all the way down?
In all honesty, I don't think the simulation necessarily has to be very fine-grained. Plenty of authors will tell you about a time when one of their characters suddenly "insisted" on some action that the author had not foreseen, forcing the author to alter her story to compensate. I think it plausible that, were I to dedicate my life to it, I could imagine a fictional character and his experiences with such fidelity that the character would be correct in claiming to be conscious. (I suspect such a simulation would be taking advantage of the machinery of my own consciousness, in much the same manner as a VMware virtual machine can, if properly configured, use the optical drive in its host computer.)
What, then, are the obligations of an author to his characters, or of a thinker to her thoughts? My memory is fallible and certainly I may wish to do other things with my time than endlessly simulate another being. Yet "fairness" and the ethic of reciprocity suggest that I should treat simulated beings the same way I would like to be treated by my simulator. Perhaps we need something akin to the ancient Greeks' concept of xenia — reciprocal obligations of host to guest and guest to host — and perhaps the first rule should be "Do not simulate without sufficient resources to maintain that simulation indefinitely."
Replies from: Perplexed, Perplexed, jacob_cannell, MartinB
↑ comment by Perplexed · 2010-08-22T04:22:22.653Z · LW(p) · GW(p)
I think it plausible that, were I to dedicate my life to it, I could imagine a fictional character and his experiences with such fidelity that the character would be correct in claiming to be conscious.
Personally, I would be more surprised if you could imagine a character who was correct in claiming not to be conscious.
↑ comment by Perplexed · 2010-08-26T16:30:45.814Z · LW(p) · GW(p)
perhaps the first rule should be "Do not simulate without sufficient resources to maintain that simulation indefinitely."
There have been some opinions expressed on another thread that disagree with that.
The key question is whether terminating a simulation actually does harm to the simulated entity. Some thought experiments may improve our moral intuitions here.
- Does slowing down a simulation do harm?
- Does halting, saving, and then restarting a simulation do harm?
- Is harm done when we stop a simulation, restore an earlier save file, and then restart?
- If we halt and save a simulation, then never get around to restarting it, the save disk physically deteriorates and is eventually placed in a landfill, exactly at which stage of this tragedy did the harm take place? Did the harm take place at some point in our timeline, or at a point in simulated time, or both?
I tend to agree with your invocation of xenia, but I'm not sure it applies to simulations. At what point do simulated entities become my guests? When I buy the shrink-wrap software? When I install the package? When I hit start?
I really remain unconvinced that the metaphor applies.
Replies from: Cyan, inklesspen
↑ comment by Cyan · 2010-08-27T03:03:38.220Z · LW(p) · GW(p)
Applying the notion of information-theoretic death to simulated beings results in the following answers:
- Does slowing down a simulation do harm? If/when time for computation becomes exhausted, those beings who lost the opportunity to be simulated are harmed, relative to the counterfactual world in which the simulation was not slowed.
- Does halting, saving, and then restarting a simulation do harm? No.
- Is harm done when we stop a simulation, restore an earlier save file, and then restart? If the restore made the stopped simulation unrecoverable, yes.
- If we halt and save a simulation, then never get around to restarting it, the save disk physically deteriorates and is eventually placed in a landfill, exactly at which stage of this tragedy did the harm take place? When the information became unrecoverable. Did the harm take place at some point in our timeline, or at a point in simulated time, or both? Both.
↑ comment by NancyLebovitz · 2010-08-27T14:03:40.302Z · LW(p) · GW(p)
Does slowing down a simulation do harm? If/when time for computation becomes exhausted, those beings who lost the opportunity to be simulated are harmed, relative to the counterfactual world in which the simulation was not slowed.
Slowing down a simulation also does harm if there are interactions which the simulation would prefer to maintain which are made more difficult or impossible.
The same would apply to halting a simulation.
↑ comment by Perplexed · 2010-08-27T03:25:40.611Z · LW(p) · GW(p)
Request for clarification:
Is harm done when we stop a simulation, restore an earlier save file, and then restart? If the restore made the stopped simulation unrecoverable, yes.
Do I understand this properly to say that if the stopped simulation had been derived from the save file state using non-deterministic or control-console inputs, inputs that are not duplicated in the restarted simulation, then harm is done?
Hmmm. I am imagining a programmer busy typing messages to his simulated "creations":
Thou shalt commit adultery.
Looks at what was entered ...
Thinks about what just happened. ... "Aw Sh.t!"
Replies from: Cyan
↑ comment by Cyan · 2010-08-27T03:36:01.590Z · LW(p) · GW(p)
Do I understand this properly to say that if the stopped simulation had been derived from the save file state using non-deterministic or control-console inputs, inputs that are not duplicated in the restarted simulation, then harm is done?
As I understand it, yes. But the harm might not be as bad as what we currently think of as death, depending on how far back the restore went. Backing one's self up is a relatively common trope in a certain brand of Singularity fic (e.g. Glasshouse).
(I needed three parentheses in a row just now: the first one, escaped, for the Wikipedia article title, the second one to close the link, and the third one to appear as text.)
↑ comment by inklesspen · 2010-08-26T22:40:43.945Z · LW(p) · GW(p)
All other things being equal, if I am a simulated entity, I would prefer not to have my simulation terminated, even though I would not know if it happened; I would simply cease to acquire new experiences. Reciprocity/xenia implies that I should not terminate my guest-simulations.
As for when the harm occurs, that's a nebulous concept hanging on the meaning of 'harm' and 'occurs'. In Dan Simmons' Hyperion Cantos, there is a method of execution called the 'Schrödinger cat box'. The convict is placed inside this box, which is then sealed. It's a small but comfortable suite of rooms, within which the convict can live. It also includes a random number generator. It may take a very long time, but eventually that random number generator will trigger the convict's death. This execution method is used for much the same reason that most rifles in a firing squad are unloaded — to remove the stress on the executioners.
I would argue that the 'harm' of the execution occurs the moment the convict is irrevocably sealed inside the box. Actually, I'd say 'potential harm' is created, which will be actualized at an unknown time. If the convict's friends somehow rescue him from the box, this potential harm is averted, but I don't think that affects the moral value of creating that potential harm in the first place, since the executioner intended that the convict be executed.
If I halt a simulation, the same kind of potential harm is created. If I later restore the simulation, the potential harm is destroyed. If the simulation data is destroyed before I can do so, the potential harm is then actualized. This either takes place at the same simulated instant as when the simulation was halted, or does not take place in simulated time at all, depending on whether you view death as something that happens to you, or something that stops things from happening to you.
In either case, I think there would be a different moral value assigned based on your intent; if you halt the simulation in order to move the computer to a secure vault with dedicated power, and then resume, this is probably morally neutral or morally positive. If you halt the simulation with the intent of destroying its data, this is probably morally negative.
Your second link was discussing simulating the same personality repeatedly, which I don't think is the same thing here. Your first link is talking about many-worlds futility, where I make all possible moral choices and therefore none of them; I think this is not really worth talking about in this situation.
Replies from: Perplexed, PaulAlmond
↑ comment by Perplexed · 2010-08-26T23:07:02.023Z · LW(p) · GW(p)
So it seems that you simply don't take seriously my claim that no harm is done in terminating a simulation, for the reason that terminating a simulation has no effect on the real existence of the entities simulated.
I see turning off a simulation as comparable to turning off my computer after it has printed the first 47,397,123 digits of pi. My action had no effect on pi itself, which continues to exist. Digits of pi beyond 50 million still exist. All I have done by shutting off the computer power is to deprive myself of the ability to see them.
Replies from: PaulAlmond, inklesspen
↑ comment by PaulAlmond · 2010-08-26T23:51:47.680Z · LW(p) · GW(p)
I say that your claim depends on an assumption about the degree of substrate specificity associated with consciousness, and the safety of this assumption is far from obvious.
Replies from: Perplexed
↑ comment by Perplexed · 2010-08-27T00:45:15.972Z · LW(p) · GW(p)
What does consciousness have to do with it? It doesn't matter whether I am simulating minds or simulating bacteria. A simulation is not a reality.
Replies from: PaulAlmond
↑ comment by PaulAlmond · 2010-08-27T01:01:15.124Z · LW(p) · GW(p)
There isn't a clear way in which you can say that something is a "simulation", and I don't think the line is obvious when we draw it in a simplistic way based on our experience of using computers to "simulate things".
Real things are arrangements of matter, but what we call "simulations" of things are also arrangements of matter. Two things or processes of the same type (such as two real cats or two processes of digestion) will have physical arrangements of matter that have some property in common, but we could say the same about a brain and some arrangement of matter in a computer: a brain and some arrangement of matter in a computer may look different, but they may still have more subtle properties in common, and there is no respect in which you can draw a line and say "They are not the same kind of system." - or at least any line so drawn will be arbitrary.
I refer you to:
Almond, P., 2008. Searle's Argument Against AI and Emergent Properties. Available at: http://www.paul-almond.com/SearleEmergentProperties.pdf or http://www.paul-almond.com/SearleEmergentProperties.doc [Accessed 27 August 2010].
Replies from: Perplexed
↑ comment by Perplexed · 2010-08-27T01:29:39.830Z · LW(p) · GW(p)
there is no respect in which you can draw a line and say "They are not the same kind of system." - or at least any line so drawn will be arbitrary.
But there is such a line. You can unplug a simulation. You cannot unplug a reality. You can slow down a simulation. If it uses time reversible physics, you can run it in reverse. You can convert the whole thing into an equivalent Giant Lookup Table. You can do none of these things to a reality. Not from the inside.
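To make the lookup-table point concrete, here is a toy sketch (the 4-bit state space and the update rule are arbitrary placeholders I've made up; a real Giant Lookup Table would be combinatorially enormous): any finite, deterministic simulation step can in principle be replaced by a table built in advance.

```python
# Toy sketch of the "Giant Lookup Table" conversion for a tiny state space.

def step(state):
    """Arbitrary deterministic update rule on a 4-bit state."""
    return (state * 5 + 3) % 16

lookup = {s: step(s) for s in range(16)}   # build the lookup table once

state = 9
for _ in range(10):
    state = lookup[state]                  # pure table lookup; no rule is evaluated
print(state)
```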
Replies from: AlephNeil, PaulAlmond
↑ comment by AlephNeil · 2010-08-27T02:58:39.970Z · LW(p) · GW(p)
But there is such a line. You can unplug a simulation. You cannot unplug a reality. You can slow down a simulation. If it uses time reversible physics, you can run it in reverse. You can convert the whole thing into an equivalent Giant Lookup Table. You can do none of these things to a reality. Not from the inside.
I'm not sure that the 'line' between simulation and reality is always well-defined. Whenever you have a system whose behaviour is usefully predicted and explained by a set of laws L other than the laws of physics, you can describe this state of affairs as a simulation of a universe whose laws of physics are L. This leaves a whole bunch of questions open: Whether an agent deliberately set up the 'simulation' or whether it came about naturally, how accurate the simulation is, whether and how the laws L can be violated without violating the laws of physics, whether and how an agent is able to violate the laws of L in a controlled way etc.
Replies from: Perplexed
↑ comment by PaulAlmond · 2010-08-27T01:55:56.007Z · LW(p) · GW(p)
All those things can only be done with simulations because the way that we use computers has caused us to build features like malleability, predictability etc into them.
The fact that we can easily time reverse some simulations means little: You haven't shown that having the capability to time reverse something detracts from other properties that it might have. It would be easy to make simulations based on analogue computers where we could never get the same simulation twice, but there wouldn't be much of a market for those computers - and, importantly, it wouldn't persuade you any more.
It is irrelevant that you can slow down a simulation. You have to alter the physical system running the simulation to make it run slower: You are changing it into a different system that runs slower. We could make you run slower too if we were allowed to change your physical system. Also, once more - you are just claiming that that even matters - that the capability to do something to a system detracts from other features.
The lookup table argument is irrelevant. If a program is not running a lookup table, and you convert it to one, you have changed the physical configuration of that system. We could convert you into a giant lookup table just as easily if we are allowed to alter you as well.
The "unplug" one is particularly weak. We can unplug you with a gun. We can unplug you by shutting off the oxygen supply to your brain. Again, where is a proof that being able to unplug something makes it not real?
All I see here is a lot of claims that being able to do something with a certain type of system - which has been deliberately set up to make it easy to do things with it - makes it not real. I see no argument to justify any of that. Further, the actual claims are dubious.
Replies from: Perplexed, PaulAlmond
↑ comment by Perplexed · 2010-08-27T02:26:35.043Z · LW(p) · GW(p)
The fact that we can easily time reverse some simulations means little: You haven't shown that having the capability to time reverse something detracts from other properties that it might have.
Well, it would mean that "pulling the plug" would mean depriving the simulated entities of a past, rather than depriving them of a future in your viewpoint. I would have thought that would leave you at least a little confused.
The lookup table argument is irrelevant. If a program is not running a lookup table, and you convert it to one, you have changed the physical configuration of that system. We could convert you into a giant lookup table just as easily if we are allowed to alter you as well.
Odd. I thought you were the one arguing that substrate doesn't matter. I must have misunderstood or oversimplified.
The "unplug" one is particularly weak. We can unplug you with a gun.
I don't think so. The clock continues to run, my blood runs out, my body goes into rigor, my brain decays. None of those things occur in an unplugged simulation. If you did somehow cause them to occur in a simulation still plugged in, well, then I might worry a little about your ethics.
The difference here is that you see yourself, as the owner of computer hardware running a simulation, as a kind of creator god who has brought conscious entities to life and has responsibility for their welfare.
I, on the other hand, imagine myself as a voyeur. And not a real-time voyeur, either. It is more like watching a movie from Netflix. The computer is not providing a substrate for new life, it is merely decoding and rendering something that already exists as a narrative.
But what about any commands I might input into the simulation? Sorry, I see those as more akin to selecting among channels, or choosing among n,e,s,w,u, and d in Zork, than as actually interacting with entities I have brought to life.
If we one day construct a computer simulation of a conscious AI, we are not to be thought of as creating conscious intelligence, any more than someone who hacks his cable box so as to provide the Playboy channel has created porn.
Replies from: WrongBot
↑ comment by WrongBot · 2010-08-27T02:36:49.599Z · LW(p) · GW(p)
Your brain is (so far as is currently known) a Turing-equivalent computer. It is simulating you as we speak, providing inputs to your simulation based on the way its external sensors are manipulated.
Replies from: Perplexed
↑ comment by Perplexed · 2010-08-27T02:50:21.440Z · LW(p) · GW(p)
Your point being?
In advance of your answer, I point out that you have no moral rights to do anything to that "computer", and that no one, even myself, currently has the ability to interfere with that simulation in any constructive way - for example, an intervention to keep me from abandoning this conversation in frustration.
Replies from: WrongBot
↑ comment by WrongBot · 2010-08-27T02:53:54.129Z · LW(p) · GW(p)
I could turn the simulation off. Why is your computational substrate specialer than an AI's computational substrate?
Replies from: Perplexed
↑ comment by Perplexed · 2010-08-27T03:02:44.500Z · LW(p) · GW(p)
Because you have no right to interfere with my computational substrate. They will put you in jail. Or, if you prefer, they will put your substrate in jail.
We have not yet specified who has rights concerning the AI's substrate - who pays the electrical bills. If the owner of the AI's computer becomes the AI, then I may need to rethink my position. But this rethinking is caused by a society-sanctioned legal doctrine (AIs may own property) rather than by any blindingly obvious moral truth.
Replies from: WrongBot
↑ comment by WrongBot · 2010-08-27T03:08:50.921Z · LW(p) · GW(p)
If the owner of the AI's computer becomes the AI, then I may need to rethink my position. But this rethinking is caused by a society-sanctioned legal doctrine (AIs may own property) rather than by any blindingly obvious moral truth.
Is there a blindingly obvious moral truth that gives you self-ownership? Why? Why doesn't this apply to an AI? Do you support slavery?
Replies from: Perplexed
↑ comment by Perplexed · 2010-08-27T03:42:04.506Z · LW(p) · GW(p)
Is there a blindingly obvious moral truth that gives you self-ownership? Why?
Moral truth? I think so. Humans should not own humans. Blindingly obvious? Apparently not, given what I know of history.
Why doesn't this apply to an AI?
Well, I left myself an obvious escape clause. But more seriously, I am not sure this one is blindingly obvious either. I presume that the course of AI research will pass from sub-human-level intelligences, through intelligences better at some tasks than humans but worse at others, to clearly superior intelligences. And I also suspect that each such AI will begin its existence as a child-like entity who will have a legal guardian until it has assimilated enough information. So I think it is a tricky question. Has EY written anything detailed on the subject?
One thing I am pretty sure of is that I don't want to grant any AI legal personhood until it seems pretty damn likely that it will respect the personhood of humans. And the reason for that asymmetry is that we start out with the power. And I make no apologies for being a meat chauvinist on this subject.
↑ comment by PaulAlmond · 2010-08-27T02:04:41.381Z · LW(p) · GW(p)
As a further comment, regarding the idea that you can "unplug" a simulation: You can do this in everyday life with nuclear weapons. A nuclear weapon can reduce local reality to its constituent parts - the smaller pieces that things were made out of. If you turn off a computer, you similarly still have the basic underlying reality there - the computer itself - but the higher-level organization is gone, just as if a nuclear weapon had been used on the simulated world. This only seems different because the underpinnings of a real object and a "simulated" one are different. Both are emergent properties of some underlying system, and both can be removed by altering the underlying system in such a way that they don't emerge from it anymore (by using nuclear devices or turning off the power).
Replies from: Perplexed
↑ comment by Perplexed · 2010-08-27T02:35:31.076Z · LW(p) · GW(p)
It would have to be a weapon that somehow destroyed the universe in order for me to see the parallel. Hmmm. A "big crunch" in which all the matter in the universe disappears into a black hole would do the job.
If you can somehow pull that off, I might have to consider you immoral if you went ahead and did it. From outside this universe, of course.
↑ comment by inklesspen · 2010-08-26T23:27:35.314Z · LW(p) · GW(p)
Where do those digits of pi exist? Do they exist in the same sense that I exist, or that my journal entries (stored on my hard drive) exist? What does it mean for information to 'exist'? If my journal entries are deleted, it is little consolation to tell me they can be recovered from the Library of Babel — such a recovery requires effort equivalent to reconstructing them ex nihilo.
In one sense, every possible state of a simulation could be encoded as a number, and thus every possible state could be said to exist simultaneously. That's of little comfort to me, though, if I am informed that I'm living in a simulation on some upuniverse computer, which is about to be decommissioned. My life is meaningful to me even if every possible version of me resulting from every possible choice exists in the platonic realm of ethics.
Replies from: Perplexed
↑ comment by Perplexed · 2010-08-26T23:41:55.599Z · LW(p) · GW(p)
Where do those digits of pi exist? Do they exist in the same sense that I exist, or that my journal entries (stored on my hard drive) exist?
No, of course not. No more than do simulated entities on your hard-drive exist as sentient agents in this universe. As sentient agents, they exist in a simulable universe. A universe which does not require actually running as a simulation in this or any other universe to have its own autonomous existence.
What does it mean for information to 'exist'?
Now I'm pretty sure that is an example of mind projection. Information exists only with reference to some agent being informed.
If my journal entries are deleted, it is little consolation to tell me they can be recovered from the Library of Babel — such a recovery requires effort equivalent to reconstructing them ex nihilo.
Which is exactly my point. If you terminate a simulation, you lose access to the simulated entities, but that doesn't mean they have been destroyed. In fact, they simply cannot be destroyed by any action you can take, since they exist in a different space-time.
That's of little comfort to me, though, if I am informed that I'm living in a simulation on some upuniverse computer, which is about to be decommissioned.
But you are not living in that upuniverse computer. You are living here. All that exists in that computer is a simulation of you. In effect, you were being watched. They intend to stop watching. Big deal!
Replies from: inklesspen
↑ comment by inklesspen · 2010-08-27T00:04:05.518Z · LW(p) · GW(p)
Do you also argue that the books on my bookshelves don't really exist in this universe, since they can be found in the Library of Babel?
Replies from: Perplexed
↑ comment by Perplexed · 2010-08-27T00:51:19.800Z · LW(p) · GW(p)
Gee, what do you think?
I don't really wish to play word games here. Obviously there is some physical thing made of paper and ink on your bookshelf. Equally obviously, Borges was writing fiction when he told us about Babel. But in your thought experiment, something containing the same information as the book on your shelf exists in Babel.
Do you have some point in asking this?
↑ comment by PaulAlmond · 2010-08-26T22:51:29.682Z · LW(p) · GW(p)
What if you stop the simulation and reality is very large indeed, and someone else starts a simulation somewhere else which just happens, by coincidence, to pick up where your simulation left off? Has that person averted the harm?
Replies from: inklesspen
↑ comment by inklesspen · 2010-08-26T23:13:26.005Z · LW(p) · GW(p)
Suppose I am hiking in the woods, and I come across an injured person who is unconscious (and thus unable to feel pain), and I leave him there to die of his wounds. (We are sufficiently out in the middle of nowhere that nobody else will come along before he dies.) If reality is large enough that there is another Earth out there with the same man dying of his wounds, and on that Earth I choose to rescue him, does that avert the harm that happens to the man I left to die? I feel this is the same sort of question as many-worlds. I can't wave away my moral responsibility by claiming that in some other universe, I will act differently.
↑ comment by jacob_cannell · 2010-08-26T00:43:15.087Z · LW(p) · GW(p)
I am fascinated by applying the ethic of reciprocity to simulationism, but is a bidirectional transfer the right approach?
Can we deduce the ethics of our simulator with respect to simulations by reference to how we wish to be simulated? And is that the proper ethics? This would be projecting the ethics up.
Or rather should we deduce the proper ethics from how we appear to be simulated? This would be projecting the ethics down.
The latter approach would lead to a different set of simulation ethics, probably based more on historicity and utility, i.e. "Simulations should be historically accurate." This would imply that simulation of past immorality and tragedy is not unethical if it is accurate.
Replies from: inklesspen
↑ comment by inklesspen · 2010-08-26T15:40:45.598Z · LW(p) · GW(p)
No, I specifically meant that we should treat our simulations the way we would like to be treated, not that we will necessarily be treated that way in "return". A host's duty to his guests doesn't go away just because that host had a poor experience when he himself was a guest at some other person's house.
If our simulators don't care about us, nothing we can do will change that, so we might as well treat our simulations well, because we are moral people.
If our simulators do care about us, and are benevolent, we should treat our simulations well, because that will rebound to our benefit.
If our simulators do care about us, and are malevolent (or have ethics not compatible with ours), then, given the choice, I would prefer to be better than them.
Of course, there's always the possibility that simulations may be much more similar than we think.
Replies from: PaulAlmond
↑ comment by PaulAlmond · 2010-08-26T16:42:18.530Z · LW(p) · GW(p)
But maybe there could be a way in which, if you behave ethically in a simulation, you are more likely to be treated that way "in return" by those simulating you - using a rather strange meaning of "in return"?
Some people interpret the Newcomb's boxes paradox as meaning that, when you make decisions, you should act as if you are influencing the decisions of other entities when there is some relationship between the behavior of those entities and your behavior - even if there is no obvious causal relationship, and even if the other entities already decided back in the past.
The Newcomb's boxes paradox is essentially about reference class - it could be argued that every time you make a decision, your decision tells you a lot about the reference class of entities identical to you - and it also tells you something, even if it may not be much in some situations, about entities with some similarity to you, because you are part of this reference class.
Now, if we apply such reasoning: if you have just decided to be ethical, you have just made it a bit more likely that everyone else is ethical (this is how it appears from your perspective, of course - in reality, it is more that your behavior was dictated by being part of the reference class - but you don't experience the making of decisions from that perspective). The same goes for being unethical.
You could apply this to simulation scenarios, but you could also apply it to a very large or infinite cosmos - such as some kind of multiverse model. In such a scenario, you might consider each ethical act you perform as increasing the probability that ethical acts are occurring all over reality - even of increasing the proportion of ethical acts in an infinity of acts. It might make temporal discounting a bit less disturbing (to anyone bothered by it): If you act ethically with regard to the parts of reality you can observe, predict and control, your "effect" on the reference class means that you can consider yourself to be making it more likely that other entities, beyond the range of your direct observation, prediction or control, are also behaving ethically within their local environment.
I want to be clear here that I am under no illusion that there is some kind of "magical causal link". We might say that this is about how our decisions are really determined anyway. Deciding as if "the decision" influences the distant past, another galaxy, another world in some expansive cosmology or a higher level in a computer simulated reality is no different, qualitatively, from deciding as if "your decision" affects anything else in everyday life - when in fact, your decision is determined by outside things.
This may be a bit uncomfortably like certain Buddhist ideas really, though a Buddhist might have more to say on that if one comes along, and I promise that any such similarity wasn't deliberate.
One weird idea relating to this: The greater the number of beings, civilizations, etc that you know about, the more the behavior of these people will dominate your reference class. If you live in a Star Trek reality, with aliens all over the place, what you know about the ethics of these aliens will be very important, and your own behavior will be only a small part of it: You will reduce the amount of “non-causal influence” that you attribute to your decisions. On the other hand, if you don’t know of any aliens, etc, your own behaviour might be telling you much more about the behavior of other civilizations.
P.S. Remember that anyone who votes this comment down is influencing the reference class of users on Less Wrong who will be reading your comments. Likewise for anyone who votes it up. :) Hurting me only hurts yourselves! (All right - only a bit, I admit.)
↑ comment by MartinB · 2010-08-23T14:50:25.287Z · LW(p) · GW(p)
That idea used to make me afraid to die before I wrote up all the stories I thought up. Sadly, that is not even possible any more.
One big difference between an upload and a person simulated in your mind is that the upload can interact with the environment.
comment by Vladimir_M · 2010-08-21T21:46:00.763Z · LW(p) · GW(p)
On a related note, is anyone familiar with the following variation on the fading qualia argument? It's inspired by (and very similar to) a response to Chalmers given in the paper "Counterfactuals Cannot Count" by M. Bishop. (Unfortunately, I couldn't find an ungated version.) Chalmers's reply to Bishop is here.
The idea is as follows. Let's imagine a thought experiment under the standard computationalist assumptions. Suppose you start with an electronic brain B1 consisting of a huge number of artificial neurons, and you let it run for a while from some time T1 to T2 with an input X, so that during this interval, the brain goes through a vivid conscious experience full of colors, sounds, etc. Suppose further that we're keeping a detailed log of each neuron's changes of state during the entire period. Now, if we reset the brain to the initial state it had at T1 and start it again, giving it the same input X, it should go through the exact same conscious experience.
But now imagine that we take the entire execution log and assemble a new brain B2 precisely isomorphic to B1, whose neurons are however not sensitive to their inputs. Instead, each neuron in B2 is programmed to recreate the sequence of states through which its corresponding neuron from B1 passed during the interval (T1, T2) and generate the corresponding outputs. This will result in what Chalmers calls a "wind-up" system, which the standard computationalist view (at least to my knowledge) would not consider conscious, since it completely lacks the causal structure of the original computation, and merely replays it like a video recording.
You can probably see where this is going now. Suppose we restart B1 with the same initial state from T1 and the same input X, and while it's running, we gradually replace the neurons from B1 with their "wind-up" versions from B2. At the start at T1, we have the presumably conscious B1, and at the end at T2, the presumably unconscious B2 -- but the transition between the two is gradual just like in the original fading qualia argument. Thus, there must be some sort of "fading qualia" process going on after all, unless either B1 is not conscious to begin with, or B2 is conscious after all. (The latter however gets us into the problem that every physical system implements a "wind-up" version of every computation if only some numbers from arbitrary physical measurements are interpreted suitably.)
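A minimal sketch of the B1/B2 construction, with made-up placeholder dynamics (the recurrent interconnection of a real brain is elided, and this shows only the log-and-replay idea, not the gradual mid-run swap):

```python
# Toy sketch: B1's units respond to their inputs and keep an execution log;
# B2's units ignore their inputs and simply replay that log.

class LiveNeuron:
    def __init__(self, state=0):
        self.state, self.log = state, []

    def tick(self, inputs):
        self.state = (self.state + sum(inputs)) % 100   # input-sensitive update
        self.log.append(self.state)                     # record for the replay
        return self.state

class WindUpNeuron:
    def __init__(self, log):
        self.log, self.t = log, 0

    def tick(self, inputs):                 # 'inputs' are ignored entirely
        out = self.log[self.t]
        self.t += 1
        return out

# Run B1 on an input stream X and record each unit's state history.
b1 = [LiveNeuron(i) for i in range(5)]
stream = [[1, 2, 3], [0, 1, 0], [4, 4, 4]]
trace_b1 = [[n.tick(x) for n in b1] for x in stream]

# Build B2 from those logs; it reproduces the same outputs for ANY input.
b2 = [WindUpNeuron(n.log) for n in b1]
trace_b2 = [[n.tick([9, 9, 9]) for n in b2] for _ in stream]
assert trace_b1 == trace_b2
```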
I don't find Chalmers's reply satisfactory. In particular, it seems to me that the above argument is damaging for significant parts of his original fading qualia thought experiment where he explains why he finds the possibility of fading qualia implausible. It is however possible that I've misunderstood either the original paper or his brief reply to Bishop, so I'd definitely like to see him address this point in more detail.
Replies from: None, wedrifid
↑ comment by [deleted] · 2010-08-22T05:46:27.199Z · LW(p) · GW(p)
Well, this bit seems wrong on Bishop's part:
Bishop responds that mere counterfactual sensitivity can't make a difference to consciousness: surely it's what actually happens to a system that matters, not what would have happened if things had gone differently
This is a false distinction if (as I believe) counterfactual sensitivity is part of what happens. For example, if what happens is that Y causes Z, then part of that is the counterfactual fact that if Y hadn't happened then Z wouldn't have happened. (Maybe this particular example can be nitpicked, but I hope that the fundamental point is made.)
Thus, there must be some sort of "fading qualia" process going on after all, unless either B1 is not conscious to begin with, or B2 is conscious after all.
If counterfactual sensitivity matters - and I think it does - then some sort of fading (I hesitate to call it "fading qualia" specifically - the whole brain is fading, in that its counterfactual sensitivity is gradually going kaput) is going on. And since the self is (by hypothesis) unable to witness what's happening, then this demonstrates how extreme our corrigibility with regard to our own subjective experiences is. Not at all a surprising outcome.
Replies from: allenwang
↑ comment by allenwang · 2010-08-22T22:26:23.124Z · LW(p) · GW(p)
I think that something like this must be the case. Especially considering the hypothesis that the brain is a dynamical system that requires rapid feedback among a wide variety of counterfactual channels, even the type of calculation in Simplicio's simulation model wouldn't work. Note that this is not just because you don't have enough time to simulate all the moves of the computer algorithm that produces the behavior. You have to be ready to mimic all the possible behaviors that could arise from a different set of inputs, in the same temporal order. I'm sure that somewhere along the way, linear methods of calculation such as your simulation attempts must break down.
In other words, your simulation is just a dressed up version of the wind up system from a dynamical system point of view. The analogy runs like this: The simulation model is to the real consciousness what the wind-up model is to a simulation, in that it supports much fewer degrees of freedom. It seems that you have to have the right kind of hardware to support such processes, hardware that probably has criteria much closer to our biological, multilateral processing channels than a linear binary logic computer. Note that even though Turing machines supposedly can represent any kind of algorithm, they cannot support the type of counterfactual channels and especially feedback loops necessary for consciousness. The number of calculations necessary to recreate the physical process is probably beyond the linearly possible with such apparatuses.
Replies from: JanetK
↑ comment by wedrifid · 2010-08-21T22:58:17.039Z · LW(p) · GW(p)
This puts the computed human in a curious position inasmuch as she must consider, if philosophising about her existence, whether she is a reductionist version of a 'deceptive demon' that (even more) mystically oriented philosophers have been wont to consider. Are her neurons processing stimulus or controlled by their own pattern?
On the other hand, she does have some advantages. Because her neurons' responses are initially determined by stimulus X and her own cognitive architecture, she is free to do whatever experiments are possible within the artificial world X. X will then either present her with a coherent world of the sort humans would be able to comprehend or present her with something that more or less befuddles her mind. After doing experiments to determine how her brain seems to work, she knows that either things are what they appear or the deceptively demonic computationalist overlords are messing with her electronic brain (or potentially any other form of processing) - either by giving her bogus X or by making her entire state totally arbitrary. Throw in Boltzmann brains as equivalent to 'computationalist overlords' too, as far as she is concerned.
I don't know what points Chalmers or Bishop were trying to make about 'qualia' because such arguments often make little to no sense to me. This scenario (like most others) looks like just another curious setup in a reductionist universe.
comment by ata · 2010-08-21T20:24:38.151Z · LW(p) · GW(p)
I once took this reductio in the opposite direction and ended up becoming convinced that consciousness is what it feels like inside a logically consistent description of a mind-state, whether or not it is instantiated anywhere. I'm still confused about some of the implications of this, but somewhat less confused about consciousness itself.
Take a moment to convince yourself that there is nothing substantively different between this scenario and the previous one, except that it contains approximately 10,000 times the maximum safe dosage of in principle.
Once again, Simone will claim she's conscious.
...Yeah, I'm sorry, but I just don't believe her.
I don't claim certain knowledge about the ontology of consciousness, but if I can summon forth a subjective consciousness ex nihilo by making the right series of graphite squiggles (which don't even mean anything outside human minds), then we might as well just give up and admit consciousness is magic.
"If I can summon forth a subjective consciousness ex nihilo by making the right blobs of protein throw around the right patterns of electrical impulses and neurotransmitters (which don't even mean anything outside human minds), then we might as well just give up and admit consciousness is magic."
Remember that it doesn't count as a reductio ad absurdum unless the conclusion is logically impossible (or, for the Bayesian analogue, very improbable according to some actual calculation) rather than merely implausible-sounding. I'd rather take Simone's word for it than believe my intuitions about plausibility.
Replies from: simplicio, Mitchell_Porter, orthonormal
↑ comment by simplicio · 2010-08-21T20:37:18.302Z · LW(p) · GW(p)
Doesn't this imply that an infinity of different subjective consciousnesses are being simulated right now, if only we knew how to assign inputs and outputs correctly?
Replies from: PaulAlmond, orthonormal, jimrandomh, Dre
↑ comment by PaulAlmond · 2010-08-21T22:03:22.659Z · LW(p) · GW(p)
I started a series of articles, which got some criticism on LW in the past, dealing with this issue (among others) and this kind of ontology. In short, if an ontology like this applies, it does not mean that all computations are equal: There would be issues of measure associated with the number (I'm simplifying here) of interpretations that can find any particular computation. I expect to be posting Part 4 of this series, which has been delayed for a long time and which will answer many objections, in a while, but the previous articles are as follows:
Minds, Substrate, Measure and Value, Part 1: Substrate Dependence. http://www.paul-almond.com/Substrate1.pdf.
Minds, Substrate, Measure and Value, Part 2: Extra Information About Substrate Dependence. http://www.paul-almond.com/Substrate2.pdf.
Minds, Substrate, Measure and Value, Part 3: The Problem of Arbitrariness of Interpretation. http://www.paul-almond.com/Substrate3.pdf.
This won't resolve everything, but should show that the kind of ontology you are talking about is not a "random free for all".
↑ comment by orthonormal · 2010-08-24T06:24:18.975Z · LW(p) · GW(p)
This relates to the notion of "joke interpretations" under which a rock can be said to be implementing a given algorithm. There's some discussion of it in Good and Real.
↑ comment by jimrandomh · 2010-08-21T20:56:31.245Z · LW(p) · GW(p)
Yes, it does. And if the universe is spatially infinite, then that implies an infinity of different subjective consciousnesses, too. Neither of these seems like a problem to me.
↑ comment by Dre · 2010-08-21T21:43:15.071Z · LW(p) · GW(p)
Not necessarily. See Chalmers's reply to Hilary Putnam, who asserted something similar, especially section 6. Basically, if we require that all of the "internal" structure of the computation be the same in the isomorphism and make a reasonable assumption about the nature of consciousness, all of the matter in the Hubble volume wouldn't come close to being large enough to simulate a (human) consciousness.
↑ comment by Mitchell_Porter · 2010-08-22T04:15:36.197Z · LW(p) · GW(p)
I once took this reductio in the opposite direction and ended up becoming convinced that consciousness is what it feels like inside a logically consistent description of a mind-state, whether or not it is instantiated anywhere.
Do you think the world outside your body is still there when you're asleep? That objects are still there when you close your eyes?
↑ comment by orthonormal · 2010-08-24T06:24:44.742Z · LW(p) · GW(p)
This.
comment by DuncanS · 2010-08-24T22:18:20.678Z · LW(p) · GW(p)
One of the problems here is that of using our intuition on consciousness as a guide to processes well outside our experience. Why should we believe our common-sense intuition on whether a computer has consciousness, or whether a pencil and paper simulation has consciousness when both are so far beyond our actual experience? It's like applying our common sense understanding of physics to the study of atoms, or black holes. There's no reason to assume we can extrapolate that far intuitively with any real chance of success.
After that, there's a straight choice. Consciousness may be something that arises purely out of a rationally modellable process, or not. If the former, then the biological, computer-program, and pencil-and-paper Simones will all be conscious, genuinely. If not, then there is something to consciousness that lies outside the reach of rational description - this is not inherently impossible in my opinion, but it does suggest that some entities which claim to be conscious actually won't be, and that there will be no rational means to show whether they are or not.
Replies from: pjeby, TheAncientGeek
↑ comment by pjeby · 2010-08-24T23:32:05.908Z · LW(p) · GW(p)
One of the problems here is that of using our intuition on consciousness as a guide to processes well outside our experience. Why should we believe our common-sense intuition on whether a computer has consciousness, or whether a pencil and paper simulation has consciousness when both are so far beyond our actual experience? It's like applying our common sense understanding of physics to the study of atoms, or black holes. There's no reason to assume we can extrapolate that far intuitively with any real chance of success.
Upvoted. I'm stealing this for use in future off-LW discussions of consciousness. ;-)
Replies from: Perplexed
↑ comment by Perplexed · 2010-08-25T00:41:59.383Z · LW(p) · GW(p)
Another topic that might be discussed is whether consciousness as self-awareness is at all related to moral status as in "Don't you dare pull the plug. That would be murder!". Personally, I don't see any reason why the two should be related. Perhaps we conflate them because both are mysteries and we think that Occam's razor can be used to economize on mysteries.
↑ comment by TheAncientGeek · 2016-12-15T15:46:27.008Z · LW(p) · GW(p)
Why should we believe our common-sense intuition on whether a computer has consciousness, or whether a pencil and paper simulation has consciousness when both are so far beyond our actual experience
Lack of a better alternative?
but it does suggest that some entities which claim to be conscious actually won't be, and that there will be no rational means to show whether they are or not.
Indeed.
comment by JGWeissman · 2010-08-21T20:20:37.541Z · LW(p) · GW(p)
"It's the same thing. Just slower."
Your hand-calculated simulation is still conscious, and it is the logical relations of cause and effect within that calculation, not "real" geometry, that make it so - the same as in the computer simulation and in biological brains.
Replies from: simplicio
↑ comment by simplicio · 2010-08-21T20:46:25.652Z · LW(p) · GW(p)
What makes me balk at this is that, if it's true, nobody actually has to bother doing the calculation at all. There doesn't even have to be a physical process that, if construed right, does the simulation.
It seems to follow that all of the infinity of different potential subjective consciousnesses are running right now. Nobody told me I was signing up for that ontology!
Replies from: Perplexed, Furcas
↑ comment by Perplexed · 2010-08-21T21:09:14.770Z · LW(p) · GW(p)
You mean you haven't signed up yet for the "Tegmark Mathematical Universe"? Shame on you. :)
Replies from: None, jacob_cannell
↑ comment by [deleted] · 2010-08-22T07:11:23.218Z · LW(p) · GW(p)
Permutation City by Greg Egan has a very similar idea at its heart. According to Wikipedia, Tegmark has cited the novel, so apparently he agrees about the similarity.
Replies from: Kevin
↑ comment by jacob_cannell · 2010-08-26T00:55:12.851Z · LW(p) · GW(p)
I see several problems with Tegmark's MU theory:
What's the utility? What does this actually differentiate? How would we even know if other universes exist or how many exist if there is no causal relationship between them? The multiverse in QM is quite different: there is a causal connection, but the QM multiverse we inhabit is a strict subset of the TMU, from what I understand.
In Permutation City, the beings end up encoding themselves into a new universe simply by finding a suitable place in the TMU. The problem, of course, is why they would even need to do that: whatever universe they thought they encoded themselves into should still exist in the TMU regardless.
Also, I don't see the point of the above in any case, as even if such a metamathematical mystical trick were possible, it would just amount to a copy with which your current version would have no causal connection.
Replies from: Perplexed, jimrandomh↑ comment by jimrandomh · 2010-08-28T15:15:11.237Z · LW(p) · GW(p)
One answer is that in a Tegmark multiverse, all possible universes exist, but not to the same degree; that is, each universe or universe-snapshot has a weight, and that weight is higher if it's causally descended from or simulated inside of other universes with large weight.
↑ comment by Furcas · 2010-08-21T20:59:10.965Z · LW(p) · GW(p)
Oh, I think I see what's confusing you. In the xkcd comic, the pebbles by themselves aren't a universe, it's the pebbles being interpreted by the right interpreter that are the universe. The right interpreter is simply a mechanism that (at its simplest) is caused to do action X by a pebble, and action Y by there not being a pebble, where X never equals Y.
So yes, somebody does actually have to bother doing the calculation, because the calculation is the universe (or consciousness, or whatever).
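(For concreteness, a minimal sketch of what "the right interpreter" amounts to in the xkcd-style setup of rows of pebbles - my own illustration, not part of the comment or the comic: the pebbles are just bits, and the interpreter is any mechanism that is reliably caused to place a pebble or leave a gap in the next row according to a fixed rule. The choice of Rule 110, which happens to be Turing-complete, is an assumption made for the example.)

```python
# Illustrative sketch: pebble rows as bits, the "interpreter" as a mechanism
# that is caused to place a pebble (1) or leave a gap (0) in the next row,
# following Rule 110 - a cellular-automaton rule known to be Turing-complete.

RULE_110 = {(1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 1, (1, 0, 0): 0,
            (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0}

def next_row(row):
    """Each new cell is reliably caused by the three cells above it."""
    padded = [0] + row + [0]
    return [RULE_110[(padded[i - 1], padded[i], padded[i + 1])]
            for i in range(1, len(padded) - 1)]

row = [0, 0, 0, 0, 0, 0, 0, 1]   # one pebble at the right edge
for _ in range(6):
    print(''.join('o' if cell else '.' for cell in row))
    row = next_row(row)
```

The point of the sketch is Furcas's condition that X never equals Y: take away the mechanism that reliably maps one pattern onto the next and you are left with inert pebbles, not a computation.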
comment by JamesAndrix · 2010-08-21T20:52:27.000Z · LW(p) · GW(p)
This might be a case where flawed intuition is correct.
The chain of causality leading to the 'yes' is MUCH weaker in the pencil-and-paper version. You imagine squiggles as mere squiggles, not as signals that inexorably cause you to carry them through a zillion steps of calculation. No human as we know them would be so driven, so it looks as if that Simone can't exist as a coherent, caused thing.
But it's very easy and correct to see a high voltage on a wire as a signal which will reliably cause a set of logic gates to carry it through a zillion steps. So that Simone can get to yes without her universe locking up first.
Replies from: orthonormal, AstroCJ↑ comment by orthonormal · 2010-08-24T06:27:22.562Z · LW(p) · GW(p)
Right. Our basic human intuitions do not grok the power of algorithms.
↑ comment by AstroCJ · 2010-08-22T10:54:26.029Z · LW(p) · GW(p)
Disagree. If we allow humans to be deterministic then a "human as we know them" is driven solely by the physical laws of our universe; there is no sense in talking about our emotional motivations until we have decided that we have free will.
I think your argument does assume we have free will.
Replies from: JamesAndrix, Perplexed, Unknowns↑ comment by JamesAndrix · 2010-08-22T14:37:57.920Z · LW(p) · GW(p)
I'm suggesting that the part of our minds that deals with hypotheticals silently rejects the premise that 'self' is a reliable, squiggle-controlled component in a deterministic machine.
I'm also saying this is a pretty accurate hardwired assumption about humans, because we do few things with very high reliability.
I don't think I'm assuming anything about free will. I don't think about it much, and I forgot how to dissolve it. I think that's a good thing.
↑ comment by Unknowns · 2010-08-22T11:16:22.127Z · LW(p) · GW(p)
On the contrary, he is assuming we do not; he assumes that it is quite impossible that a human being would actually do the necessary work. That's why he said that "Simone can't exist" in this situation.
Replies from: AstroCJ↑ comment by AstroCJ · 2010-08-23T17:29:09.308Z · LW(p) · GW(p)
So his argument is that "a human is not an appropriate tool to do this deterministic thing". So what? Neither is a log flume - but the fact that log flumes can't be used to simulate consciousness doesn't tell us anything about consciousness.
comment by Paul Crowley (ciphergoth) · 2010-08-21T20:32:43.320Z · LW(p) · GW(p)
What difference do you see between this argument and the Chinese Room? I see none.
Replies from: simplicio
comment by Mitchell_Porter · 2010-08-22T04:14:41.478Z · LW(p) · GW(p)
If functionalism is true then dualism is true. You have the same experience E hovering over the different physical situations A, B, and C, even when they are as materially diverse as neurons, transistors, and someone in a Chinese room.
It should already be obvious that an arrangement of atoms in space is not identical to any particular experience you may claim to somehow be inhabiting it, and so it should already be obvious that the standard materialistic approach to consciousness is actually property dualism. But perhaps the observation that the experience is supposed to be exactly the same, even when the arrangement of atoms is really different, will help a few people to grasp this.
Replies from: AlephNeil↑ comment by AlephNeil · 2010-08-22T05:27:30.979Z · LW(p) · GW(p)
Perhaps one can construe functionalism as a form of dualism, but if so then it's a curious state of affairs because then one can be a 'dualist' while still giving typically materialist verdicts on all the familiar questions and thought experiments in the philosophy of mind:
- Artificial intelligence is possible and the 'systems reply' to the Chinese Room thought experiment is substantially correct.
- "Zombies" are impossible (even a priori).
- Libertarian free will is incoherent, or at any rate false.
- There is no 'hard problem of consciousness' qualitatively distinct from the 'easy problems' of figuring out how the brain's structure and functional organization are able to support the various cognitive competences we observe in human behaviour.
- [This isn't part of what "functionalism" is usually taken to mean, but it's hard to see how a thoroughgoing functionalist could avoid it:] There aren't always 'facts of the matter' about persisting subjective identity. For instance, in "cloning and teleportation" thought experiments the question of whether my mind ceases to exist or is 'transferred' to another body, and if so, which body, turns out to be meaningless.
- [As above:] There isn't always a 'fact of the matter' as to whether a being (e.g. a developing foetus) is conscious.
If you guys are prepared to concede all of these and similar bones of contention, I don't think we'd have anything further to argue about - we can all proudly proclaim ourselves dualists, lament the sterile emptiness of the reductionist vision of the world into whose thrall so many otherwise great thinkers have fallen, and sing life-affirming hymns to the richness and mystery of the mind.
Replies from: Mitchell_Porter↑ comment by Mitchell_Porter · 2010-08-22T05:31:01.657Z · LW(p) · GW(p)
There isn't always a 'fact of the matter' as to whether a being (e.g. a developing foetus) is conscious.
How do you get that from functionalism?
Replies from: AlephNeil, AlephNeil↑ comment by AlephNeil · 2010-08-22T05:47:48.757Z · LW(p) · GW(p)
Continuity: The idea that if you look at what's going on in a developing brain (or, for that matter, a deteriorating brain) there are no - or at least there may not be any - sudden step changes in the patterns of neural activity on which the supposed mental state supervenes.
Or again, one can make the same point about the evolutionary tree. If you consider all of the animal brains there are and ever have been, there won't be any single criterion, even at the level of 'functional organisation', which distinguishes conscious brains from unconscious ones.
This is partly an empirical thesis, insofar as we can actually look and see whether there are such 'step changes' in ontogeny and phylogeny. It's only partly empirical because even if there were, we couldn't verify that those changes were precisely the ones that signified consciousness.
But surely, if we take functionalism seriously then the lack of any plausible candidates for a discrete "on-off" functional property to coincide with consciousness suggests that consciousness itself is not a discrete "on-off" property.
Replies from: Mitchell_Porter↑ comment by Mitchell_Porter · 2010-08-22T07:41:55.309Z · LW(p) · GW(p)
Doesn't this argument apply to everything else about consciousness as well - whether a particular brain is thinking something, planning something, experiencing something? According to functionalism, being in any specific conscious state should be a matter of your brain possessing some specific causal/functional property. Are you saying that no such properties are ever definitely and absolutely possessed? Because that would seem to imply that no-one is ever definitely in any specific conscious state - i.e. that there are no facts about consciousness at all.
Replies from: AlephNeil, ciphergoth↑ comment by AlephNeil · 2010-08-22T10:36:59.307Z · LW(p) · GW(p)
I think ciphergoth is correct to mention the Sorites paradox.
It always surprises me when people refuse to swallow this idea that "sometimes there's no fact of the matter as to whether something is conscious".
However difficult it is to imagine how it can be true, it's just blindingly obvious that our bodies and minds are built up continuously, without any magic moment when 'the lights switch on'.
If you take the view that, in addition to physical reality, there is a "bank of screens" somewhere (like in the film Aliens) showing everyone's points of view then you'll forever be stuck with the discrete fact that either there is a screen allocated to this particular animal or there isn't. But surely the correct response here is simply to dispense with the idea of a "bank of screens".
We need to understand that consciousness behaves as it does irrespectively of our naive preconceptions, rather than trying to make it analytically true that consciousness conforms to our naive preconceptions and using that to refute materialism.
Replies from: Mitchell_Porter↑ comment by Mitchell_Porter · 2010-08-22T11:33:51.160Z · LW(p) · GW(p)
I'll stick with the principle
The possibility of exact description of states on both sides [conscious subjectivity, physical brain], and of exactly specifying the mapping between them, must exist in any viable theory of consciousness. Otherwise, it reifies uncertainty in a way that has the same fundamental illogicality as the "particle without a definite position".
So the only way I can countenance the idea
sometimes there's no fact of the matter as to whether something is conscious
is if this arises because of vagueness in our description of consciousness from within. Some things not only exist but "have an inside" (for example, us); some things, one usually supposes, "just exist" (for example, a rock); and perhaps there are intermediate states between having an inside and not having an inside that we don't understand well, or don't understand at all. This would mean that our first-person concept of the difference between conscious and non-conscious was deficient, that it only approximated reality.
But I don't see any sensitivity to that issue in what you write. Your arguments are coming entirely from the third-person, physical description, the view from outside. You think there's a continuum of states between some that are definitely conscious, and some that are definitely not conscious, and so you conclude that there's no sharp boundary between conscious and non-conscious. The first-person description features solely as an idea of a "screen" that we can just "dispense with". Dude, the first-person description describes the life you actually live, and the only reality you ever directly experience!
What would happen if you were to personally pass from a conscious to a non-conscious state? To deny that there's a boundary is to say that there's no fact about what happens to you in that scenario, except that at the start you're conscious, and at the end you're not, and we can't or won't think or say anything very precise about what happens in between - unless it's expressed in terms of neurons and atoms and other safely non-subjective entities, which is missing the point. The loss of consciousness, whether in sleep or in death, is a phenomenon on the first-person side of this divide, which explores and crosses the boundary between conscious and non-conscious. It's a thing that happens to you, to the subject of your experience, and not just to certain not-you objects contemplated by that subject in the third-person, objectifying mode of its experience.
You know, there's not even any profound physical reason to support the argument from continuity. The physical world is full of qualitative transitions.
it's just blindingly obvious that our bodies and minds are built up continuously, without any magic moment when 'the lights switch on'.
Couldn't you make the same argument about literally switching on a light? :-) Obviously the idea that a light is sometimes on and sometimes off is a naive preconception that we should dispense with.
Replies from: ciphergoth, AlephNeil↑ comment by Paul Crowley (ciphergoth) · 2010-08-22T12:00:53.573Z · LW(p) · GW(p)
Couldn't you make the same argument about literally switching on a light? :-) Obviously the idea that a light is sometimes on and sometimes off is a naive preconception that we should dispense with.
Correct - the impression that it is an instantaneous, discontinuous process is an illusion caused by the speed of the transition compared to the speed of our perceptions.
Replies from: AlephNeil, Sniffnoy↑ comment by AlephNeil · 2010-08-22T12:34:22.773Z · LW(p) · GW(p)
Yeah, but I think "mental discretists" can tolerate that kind of very-rapid-but-still-continuous physical change - they just have to say that a mental moment corresponds to (its properties correlate with those of) a smallish patch of spacetime.
I mean, if you believe in unified "mental moments" at all then you've got to believe something like that, just because the brain occupies a macroscopic region of space, and because of the finite speed of light.
But this defense becomes manifestly absurd if we can draw out the grey area sufficiently far (e.g. over the entire lifetime of some not-quite-conscious animal.)
↑ comment by AlephNeil · 2010-08-22T13:28:59.158Z · LW(p) · GW(p)
perhaps there are intermediate states between having an inside and not having an inside that we don't understand well, or don't understand at all. This would mean that our first-person concept of the difference between conscious and non-conscious was deficient, that it only approximated reality.
Well then I'm not sure that we disagree substantively on this issue.
Basically, I've said: "Naive discrete view of consciousness --> Not always determinate whether something is conscious". (Or rather that's what I've meant to say but tended to omit the premise.)
Whereas I think you're saying something like: "At the level of metaphysical reality, there is no such thing as indeterminacy (apparent indeterminacy only arises through vague or otherwise inadequate language) --> Whatever the true nature of subjective experience, the facts about it must be determinate"
Clearly these two views are compatible with one another (as long as I state my premise). (However, there's room to agree with the letter but not the spirit of your view, by taking 'the true nature of subjective experience' to be something ridiculously far away from what we usually think it is and holding that all mentalistic language (as we know it) is irretrievably vague.)
You know, there's not even any profound physical reason to support the argument from continuity. The physical world is full of qualitative transitions.
I'm not sure exactly what you're thinking of here, but I seem to recall that you're sympathetic to the idea that physics is important in the philosophy of mind. Anyway, I think the idea that a tiny 'quantum leap' could make the difference between a person being (determinately) conscious and (determinately) unconscious is an obvious non-starter.
Couldn't you make the same argument about literally switching on a light? :-) Obviously the idea that a light is sometimes on and sometimes off is a naive preconception that we should dispense with.
Well, this is where we actually need to look at the empirical data and see whether a foetus seems to 'switch on' like a light at any point. I've assumed there is no such point, but what I know about embryology could be written on the back of a postage stamp. (But come on, the idea is ridiculous and I see no reason to disingenuously pretend to be agnostic about it.)
Replies from: Mitchell_Porter↑ comment by Mitchell_Porter · 2010-08-23T06:27:11.111Z · LW(p) · GW(p)
the idea is ridiculous
Maybe you're familiar with the phenomenon of "waking up". Do you agree that this is a real thing? If so, does it not imply that it once happened to you for the first time?
Whatever the true nature of subjective experience, the facts about it must be determinate
I agree with that.
there's room to agree with the letter but not the spirit of your view, by taking 'the true nature of subjective experience' to be something ridiculously far away from what we usually think it is and holding that all mentalistic language (as we know it) is irretrievably vague.
What do you think you are doing when you use mentalistic language, then? Do you think it bears no relationship to reality?
Replies from: JanetK, AlephNeil↑ comment by JanetK · 2010-08-23T10:11:12.638Z · LW(p) · GW(p)
A little group of neurons in the brain stem starts sending a train of signals to the base of the thalamus. The thalamus 'wakes up' and then sends signals to the cortex, and the cortex 'wakes up'. Consciousness is now 'on'. Later, the brain stem stops sending the train of signals, the thalamus 'goes to sleep', and the cortex slowly winds down and 'goes to sleep'. Consciousness is now 'off'. Neither on nor off was instantaneous or sharply defined. (Dreaming activates the cortex differently at times during sleep, but ignore that for now.) Descriptions like this (hopefully more detailed and accurate) are the 'facts of the matter', not semantic arguments. Why is it that science is OK for understanding physics and astronomy but not for understanding consciousness?
Replies from: Mitchell_Porter↑ comment by Mitchell_Porter · 2010-08-23T10:45:24.345Z · LW(p) · GW(p)
Why is it that science is OK for understanding physics and astronomy but not for understanding consciousness?
Science in some broad sense "is OK... for understanding consciousness", but unless you're a behaviorist, you need to be explaining (and first, you need to be describing) the subjective side of consciousness, not just the physiology of it. It's the facts about subjectivity which make consciousness a different sort of topic from anything in the natural sciences.
Replies from: JanetK↑ comment by JanetK · 2010-08-24T10:48:05.226Z · LW(p) · GW(p)
Yes, we will have to describe the subjective side of consciousness, but the physiology has to come first. As an illustration: if you didn't know the function of the heart or much about its physiology, it would be useless to try to understand it by how it felt. Hence we would have ideas like 'loving with all my heart', 'my heart is not in it', etc., which come from the pre-biology world. Once we know how and why the heart works the way it does, those feelings are seen differently.
I am certainly not a behaviorist and I do think that consciousness is an extremely important function of the brain/mind. We probably can't understand how cognition works without understanding how consciousness works. I just do not think introspection gets us closer to understanding, nor do I think that introspection gives us any direct knowledge of our own minds - 'direct' being the important word.
↑ comment by AlephNeil · 2010-08-23T08:52:06.692Z · LW(p) · GW(p)
Maybe you're familiar with the phenomenon of "waking up". Do you agree that this is a real thing? If so, does it not imply that it once happened to you for the first time?
Right, people wake up and go to sleep. Waking can be relatively quicker or slower depending on the manner of awakening, but... I'm not sure what you think this establishes.
In any case, a sleeping person is not straightforwardly 'unconscious' - their mind hasn't "disappeared", it's just doing something very different from what it does when awake. A better example would be someone 'coming round' from a spell of unconsciousness, and here I think you'll find that people remember it being a gradual process.
Your whole line of attack here is odd: all that matters for the wider debate is whether or not there are any smooth, gradual processes between consciousness and unconsciousness, not whether or not there also exist rapid-ish transitions between the two.
What do you think you are doing when you use mentalistic language, then? Do you think it bears no relationship to reality?
There are plenty of instances where language is used in a way where its vagueness cannot possibly be eliminated, and yet manages to be meaningful. E.g. "The Battle Of Britain was won primarily because the Luftwaffe switched the focus of their efforts from knocking out the RAF to bombing major cities." (N.B. I'm not claiming this is true (though it may be), simply that it "bears some relationship to reality".)
Replies from: Mitchell_Porter↑ comment by Mitchell_Porter · 2010-08-23T11:24:28.917Z · LW(p) · GW(p)
Your whole line of attack here is odd: all that matters for the wider debate is whether or not there are any smooth, gradual processes between consciousness and unconsciousness, not whether or not there also exist rapid-ish transitions between the two.
I am objecting, first of all, to your assertion that the idea that a fetus might "'switch on' like a light" at some point in its development is "ridiculous". Waking up was supposed to be an example of a rapid change, as well as something real and distinctive which must happen for a first time in the life of an organism. But I can make this counterargument even just from the physiological perspective. Sharp transitions do occur in embryonic development, e.g. when the morphogenetic motion of tissues and cavities produces a topological change in the organism. If we are going to associate the presence of a mind, or the presence of a capacity for consciousness, with the existence of a particular functional organization in the brain, how can there not be a first moment when that organization exists? It could consist in something as simple as the first synaptic coupling of two previously separate neural systems. Before the first synapses joining them, certain computations were not possible; after the synapses had formed, they were possible.
As for the significance of "smooth, gradual" transitions between consciousness and unconsciousness, I will revert to that principle which you expressed thus:
"Whatever the true nature of subjective experience, the facts about it must be determinate"
Among the facts about subjective experience are its relationship to "non-subjective" states or forms of existence. Those facts must also be determinate. The transition from consciousness to non-consciousness, if it is a continuum, cannot only be a continuum on the physical/physiological side. It must also be a continuum on the subjective side, even though one end of the continuum is absence of subjectivity. When you say there can be material systems for which there is no fact about its being conscious - it's not conscious, it's not not-conscious - you are being just as illogical as the people who believe in "the particle without a definite position".
I ask myself why you would even think like this. Why wouldn't you suppose instead that folk psychology can be conceptually refined to the point of being exactly correct? Why the willingness to throw it away, in favor of nothing?
↑ comment by Paul Crowley (ciphergoth) · 2010-08-22T08:19:21.516Z · LW(p) · GW(p)
Sorites error: in your last sentence you leap from there being no discontinuities to there being no facts at all.
Replies from: Mitchell_Porter↑ comment by Mitchell_Porter · 2010-08-22T08:38:35.322Z · LW(p) · GW(p)
Neil is the one who says that sometimes, there are no facts. How do you get from no facts to facts without a discontinuity?
Replies from: ciphergoth↑ comment by Paul Crowley (ciphergoth) · 2010-08-22T08:56:14.034Z · LW(p) · GW(p)
Maybe I'm missing something, but I can't see in what way this argument is specifically about consciousness, rather than just being a re-hash of the Sorites Paradox - could you spell it out for me?
Replies from: Mitchell_Porter↑ comment by Mitchell_Porter · 2010-08-22T09:44:56.142Z · LW(p) · GW(p)
If we were just talking about names this wouldn't matter, but we are talking about explanations. Vagueness in a name just means that the applicability of the name is a little undetermined. But there is no such thing as objective vagueness. The objective properties of things are "exact", even when we can only specify them vaguely.
This is what we all object to in the Copenhagen interpretation of quantum mechanics, right? It makes no sense to say that a particle has a position, if it doesn't have a definite position. Either it has a definite position, or the concept of position just doesn't apply. There's no problem in saying that the position is uncertain, or in specifying it only approximately; it's the reification of uncertainty - the particle is somewhere, but not anywhere in particular - which is nonsense. Either it's somewhere particular (or even everywhere, if you're a many-worlder), or it's nowhere.
Neil flirts with reifying vagueness about consciousness in a similarly untenable fashion. We can be vague about how we describe a subjective state of consciousness, we can be vague about how we describe the physical brain. But we cannot identify an exact property of a conscious state with an inherently vague physical predicate. The possibility of exact description of states on both sides, and of exactly specifying the mapping between them, must exist in any viable theory of consciousness. Otherwise, it reifies uncertainty in a way that has the same fundamental illogicality as the "particle without a definite position".
Replies from: AlephNeil↑ comment by AlephNeil · 2010-08-22T21:36:30.549Z · LW(p) · GW(p)
By the way, if you haven't read Dennett's "Real Patterns" then I can recommend it as an excellent explanation of how fuzzily defined, 'not-always-a-fact-of-the-matter-whether-they're-present' patterns, of which folk-psychological states like beliefs and desires are just a special case, can meaningfully find a place in a physicalist universe.
↑ comment by AlephNeil · 2010-08-23T09:36:25.482Z · LW(p) · GW(p)
There's an aspect of this which I haven't yet mentioned, which is the following:
We can imagine different strains of functionalism. The weakest would just be: "A person's mental state supervenes on their (multiply realizable) 'functional state'." This leaves the nature of the relation between functional state and mental state utterly mysterious, and thereby leaves the 'hard problem' looking as 'hard' as it ever did.
But I think a 'thoroughgoing functionalist' wants to go further, and say that a person's mental state is somehow constituted by (or reduces to) the functional state of their brain. It's not a trivial project to flesh out this idea - not simply to clarify what it means, but to begin to sketch out the functional properties that constitute consciousness - but it's one that various thinkers (like Minsky and Dennett) have actually taken up.
And if one ends up hypothesising that what's important for whether a system is 'conscious' is (say) whether it represents information a certain way, has a certain kind of 'higher-order' access to its own state, or whatever - functional properties which can be scaled up and down in scope and complexity without any obvious 'thresholds' being encountered that might correspond to the appearance of consciousness - then one has grounds for saying that there isn't always a 'fact of the matter' as to whether a being is conscious.
Replies from: Mitchell_Porter↑ comment by Mitchell_Porter · 2010-08-23T11:32:53.335Z · LW(p) · GW(p)
I think a 'thoroughgoing functionalist' wants to go further, and say that a person's mental state is somehow constituted by (or reduces to) the functional state of their brain.
Then it's time to return to the rest of your comment - the whole discussion so far has just been about that one claim, that something can be neither conscious nor not-conscious. So now I'll quote myself:
The property dualism I'm talking about occurs when basic sensory qualities like color are identified with such computational properties. Either you end up saying "seeing the color is how it feels" - and "feeling" is the extra, dual property - or you say there's no "feeling" at all - which is denial that consciousness exists. It would be better to be able to assert identity, but then the elements of a conscious experience can't really be coarse-grained states of neuronal ensembles, etc - that would restore the dualism.
Replies from: torekp
↑ comment by torekp · 2010-08-24T00:57:53.677Z · LW(p) · GW(p)
It would be better to be able to assert identity, but then the elements of a conscious experience can't really be coarse-grained states of neuronal ensembles, etc - that would restore the dualism.
By "coarse-grained states" do you mean that, say, "pain" stands to the many particular neuronal ensembles that could embody pain, in something like the way "human being" stands to all the actual individual human beings? How would that restore a dualism, and what kind of dualism is that?
comment by cousin_it · 2010-08-22T07:08:15.978Z · LW(p) · GW(p)
The thought experiments proposed in the post and the comments hint at a strictly simpler problem that we need to solve before tackling consciousness anyway: what is "algorithmicness"? What constitutes a "causal implementation" of an algorithm, and what distinguishes it from a video feed replay? How can we remove the need for vague "bridging laws" between algorithmicness and physical reality?
Replies from: Wei_Dai, Strange7↑ comment by Wei Dai (Wei_Dai) · 2010-08-22T10:15:31.017Z · LW(p) · GW(p)
I think UDT manages to sidestep this question. Would you agree? (To be more explicit, UDT manages to make decisions without having to explicitly determine whether something in the world is a "causal implementation" of itself. It just makes logical deductions about the world from statements like "S outputs X" where S is a code string that is its own source code, and that seems to be enough.)
But unfortunately I can't see how to similarly sidestep the problem of consciousness, if we humans are to make use of UDT in a formal way. The problem is that we don't have access to our own source code, so we can't write down S directly. All we have is access to subjective sensations and memories, and it seems like we need a theory of consciousness to tell us how to write down the description of an object (or a class of objects) given its subjective sensations and memories.
Replies from: cousin_it↑ comment by cousin_it · 2010-08-22T14:39:43.877Z · LW(p) · GW(p)
The situation with UDT is mysterious.
A UDT agent is a sort of ethereal thing, a class of logically-equivalent algorithms (up to rewriting and such) that can never believe it "sees" one universe - only the equivalence class of universes that gave it equivalent sensory inputs up to now. Okay, I can agree that it's meaningless to ask "where" you are in the universe. But it doesn't seem meaningless to ask you for your beliefs about your future sensory input #11, given sensory inputs #1-#10. Unfortunately, it's hard to see how you can define such credences - the naive idea is to count different instantiations of the algorithm within the world program, but we just threw away our concept of what counts as an "instance".
The equivalence class of algorithms is wider than one might think. For example, if (by way of some tricky mathematical fact) the algorithm's output is in fact independent from the value of one of the inputs, say input #11, then the algorithm cannot "perceive" that input. In other words, you cannot register any sensation that doesn't end up affecting your actions in the future. Weird, huh.
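(For concreteness, a toy illustration of that last point - my own example, not cousin_it's: two agents whose outputs agree on every possible input, even though one of them "reads" input #11 and the other never touches it. Up to extensional equivalence they are the same algorithm, so the reading leaves no trace.)

```python
# Toy illustration: an input that never affects the output is invisible to
# any notion of the algorithm defined up to extensional equivalence.

from itertools import product

def agent_a(inputs):
    x = inputs[10]                       # "perceives" input #11 ...
    return sum(inputs[:10]) + (x - x)    # ... but the dependence cancels exactly

def agent_b(inputs):
    return sum(inputs[:10])              # never looks at input #11 at all

# The two agents agree on every one of the 2**11 possible binary inputs.
assert all(agent_a(bits) == agent_b(bits) for bits in product((0, 1), repeat=11))
print("behaviourally identical: input #11 leaves no trace")
```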
↑ comment by Perplexed · 2010-08-22T15:18:53.809Z · LW(p) · GW(p)
if (by way of some tricky mathematical fact) the algorithm's output is in fact independent from the value of one of the inputs, say input #11, then the algorithm cannot "perceive" that input. In other words, you cannot register any sensation that doesn't end up affecting your actions in the future. Weird, huh.
You all may be interested in some recent (since 1990 or so) work in theoretical computer science dealing roughly with "what is observationally equivalent with what". Google for strings including the keywords "bisimulation", "process algebra", and "observational equivalence". Or maybe not - it is unclear to me what you think the problem really is.
↑ comment by Wei Dai (Wei_Dai) · 2010-08-22T19:18:45.961Z · LW(p) · GW(p)
UDT sidesteps that question as well, because while it makes decisions, it never needs to compute things like "beliefs about your future sensory input #11, given sensory inputs #1-#10". I would say that a UDT agent doesn't have such beliefs.
Not quite sure what this part has to do with what I wrote. If you still think it's relevant, can you explain how?
↑ comment by cousin_it · 2010-08-23T05:40:12.671Z · LW(p) · GW(p)
Your answers have shown me that my original comment was wrong: the question of "algorithmicness" is uninteresting unless we imagine that algorithms can have "subjective experience", which brings us back to consciousness again. Oh well, another line of attack goes dead.
↑ comment by Vladimir_Nesov · 2010-08-22T15:21:44.250Z · LW(p) · GW(p)
A UDT agent is a program (axioms), not an algorithm (theory). The way in which something is specified matters to the way it decides how to behave. If you are only talking about behavior, and not about the underlying decision-making, then you can abstract away from the details of how it's generated - but then you presuppose that condition.
↑ comment by Strange7 · 2010-08-22T09:41:25.469Z · LW(p) · GW(p)
A real algorithm keeps doing interesting things when presented with input its creator didn't expect, while a lookup table can only return bland errors.
A well-designed algorithm takes that a step further, actually doing something useful.
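(A small contrast along the lines Strange7 describes - illustrative code, not anything from the comment: a lookup table built from anticipated inputs versus a short rule that keeps working on inputs nobody listed in advance.)

```python
# Lookup table vs. algorithm: the table fails blandly outside its
# precomputed domain; the rule handles inputs its author never listed.

SQUARES = {n: n * n for n in range(10)}           # only inputs 0..9 anticipated

def square_by_table(n):
    return SQUARES.get(n, "error: input not anticipated")

def square_by_rule(n):
    return n * n                                  # the same rule covers everything

print(square_by_table(7), square_by_rule(7))        # 49 49
print(square_by_table(1234), square_by_rule(1234))  # error ...  1522756
```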
comment by torekp · 2010-08-21T22:38:23.098Z · LW(p) · GW(p)
I'm close to your conclusion, but I don't accept your Searle-esque argument. I accept Chalmers's reasoning, roughly, on the fading qualia argument, and agree with you that it doesn't justify the usual conception of the joys of uploading.
And I think that's the whole core of what needs to be said on the topic. That is, we have a good argument for attributing consciousness-as-we-know-it to a fine-grained functional duplicate of ourselves. And that's all. We don't have any reason to believe that a coarse-grained functional duplicate - a being that gives similar behavioral outputs for a given input, but uses different structures and processes - would have a subjectivity like ours. ("Fine-grained" is an apropos choice of terminology by Chalmers.)
Our terms for subjective experiences, like pain, joy, the sensation of sweetness, and so on, ultimately have ostensive definitions. They're this, this, and this. And for concepts like that, it matters what the actual physical structures and processes are that underlie the actual phenomena we were attending to when we introduced the terms. (I don't think the generic term "consciousness" works this way, though. I'll avoid that subject for now and stick to some classic examples of qualia.)
This has great significance for uploading if, as I expect, human-like computer intelligence is developed not by directly simulating the human brain at a detailed level, but by taking advantage of the distinctive features of silicon and successor technologies. In that case, "uploading" looks to be the prudential equivalent of suicide.
Replies from: PaulAlmond↑ comment by PaulAlmond · 2010-08-21T22:44:13.187Z · LW(p) · GW(p)
That seems quite close to Searle to me, in that you are both imposing specific requirements for the substrate - which is really all that Searle does. There is the possible difference that you might be more generous than Searle about what constitutes a valid substrate (though Searle isn't really too clear on that issue anyway).
Replies from: torekp↑ comment by torekp · 2010-08-22T15:08:57.498Z · LW(p) · GW(p)
Unlike Searle, and like Sharvy, I believe it ain't the meat, it's the motion (see the Sharvy reference at the bottom). Sharvy presents a fading qualia argument much like the one Chalmers offers in the link simplicio provides, only, to my recollection, without Chalmers's wise caveat that the functional isomorphism should be fine-grained.
comment by CarlShulman · 2010-08-21T20:13:11.009Z · LW(p) · GW(p)
Why do you cite Chalmers for fading qualia, but not Searle for the rephrased Chinese Room?
Replies from: simplicio↑ comment by simplicio · 2010-08-21T20:16:14.108Z · LW(p) · GW(p)
I was under the impression the Chinese room was an argument against intelligence simulation, not consciousness. I think you're right actually. Will edit.
Replies from: PaulAlmond↑ comment by PaulAlmond · 2010-08-21T20:40:04.954Z · LW(p) · GW(p)
This seems like pretty much Professor John Searle's argument to me. Your argument about the algorithm being subject to interpretation and observer-dependent has been made by Searle, who refers to it as "universal realizability".
See:
Searle, J. R., 1997. The Mystery of Consciousness. London: Granta Books. Chapter 1, pp.14-17. (Originally Published: 1997. New York: The New York Review of Books. Also published by Granta Books in 1997.)
Searle, J. R., 2002. The Rediscovery of the Mind. Cambridge, Massachusetts: The MIT Press. 9th Edition. Chapter 9, pp.207-212. (Originally Published: 1992. Cambridge, Massachusetts: The MIT Press.)
comment by Johnicholas · 2010-08-23T13:22:54.080Z · LW(p) · GW(p)
Here's a thought experiment that helps me think about uploading (which I perceive as the real, observable-consequences-having issue here):
Suppose that you believed in souls (it is not that hard to get into that mindset - lots of people can do it). Also suppose that you believed in transmigration or reincarnation of souls. Finally, suppose that you believe that souls move around between bodies during the night, when people are asleep. Despite your belief in souls, you know that memories, skills, personality, goals are all located in the brain, not the soul.
Why do you go to sleep? Your consciousness will go out like a light! However, your soul will continue to exist; it will just go on to a different body. Your body has various goals and plans that it worked on during the day, but it will get another soul tomorrow, and it's pretty experienced at this kind of juggling: guarding its goals and plans from harm while it is unconscious, and picking up the threads when it becomes conscious again.
Now consider (destructive) teleportation. Why allow your body to be destructively scanned and reconstructed? Well, if you (your body) has the same degree of trust in the equipment that you (your body) has in the process of going to sleep at night, then the two are exactly parallel. The new body will become conscious, and pick up its threads of memory, personality, skills, goals, probably with a different soul, but bodies are used to that.
Now consider (destructive) transmutation. If the reconstructed body used silicon instead of carbon, is anything different?
As far as I can tell, Tegmark's mathematical universe is "true" but hard to think with. You overwhelm yourself with images of bigness and variety and parallel, nearly-identical copies, but it has to add up to normality at the end. If you're trying to do something (think about something) difficult, maintaining the imagery can be a drain on your attention.
comment by Mass_Driver · 2010-08-23T05:02:28.993Z · LW(p) · GW(p)
When we simulate a brain on a general purpose computer, however, there is no physically similar pattern of energy/matter flow. If I had to guess, I suspect this is the rub: you must need a certain physical pattern of energy flow to get consciousness.
I invite you to evaluate the procedural integrity of your reasoning.
Do you really expect that "a certain physical pattern of energy flow" causes consciousness? Why? Can you even begin to articulate what that pattern might consist of? What is it about a computer model that fails to adequately account for the physical energy flow? Didn't you stipulate earlier that our model will "stick to an atom-by-atom (or complex amplitudes) approach"? Is there a difference between complex amplitudes and patterns of energy flow?
Replies from: torekp↑ comment by torekp · 2010-08-24T01:24:28.115Z · LW(p) · GW(p)
Do you really expect that "a certain physical pattern of energy flow" causes consciousness? Why? Can you even begin to articulate what that pattern might consist of?
The third question seems to call for advances in neuropsychology. And if that's correct, the first two questions probably face a similar need.
We know redness or sweetness when we see it, but we are in no position to define the processes that regularly explain these experiences. If we can find a property of neural processes that always leads to sweet sensations, and underlies all sweet sensations, then we'll know what (or whether) patterns of energy flow matter for that sensation.
Replies from: Perplexed, Mass_Driver↑ comment by Perplexed · 2010-08-24T02:10:05.633Z · LW(p) · GW(p)
We know redness or sweetness when we see it.
This is not intended as a criticism. But it sometimes seems to me that the philosophical practice of choosing simple examples of a concept often strips away all hope of learning something about the concept from the example.
For example, if the above had been written "We know puceness or umaminess when we see it", we might have some hope of connecting the concept of perceiving the qualia with the concept of learning the name of the qualia.
↑ comment by Mass_Driver · 2010-08-24T03:54:53.027Z · LW(p) · GW(p)
I guess my concern is that you have not indicated your reason(s) for promoting the hypothesis that "energy flow" causes consciousness.
As for "advances in neuropsychology," what do you mean by "neuropsychology" besides a field that includes the study of consciousness? I certainly agree with you that further advances in the study of consciousness would be useful in identifying the causes of consciousness, but why would you assert that consciousness is caused by energy flow? If I understand you correctly, you are confident that the study of consciousness will lead researchers to conclude that it is caused by energy flow. Why?
Replies from: torekp
comment by rabidchicken · 2010-08-21T20:50:38.176Z · LW(p) · GW(p)
I think that the clarification you want is pointless. When I write a difficult program (or section of a program), the first thing I do is write the algorithm out on paper in words, a flow chart, or whatever makes sense at the time. Then I play around with it to make sure it can handle any possible input so it will not crash. The reason I do it that way is so I only have to worry about problems with the steps I am following, not issues like syntax. But whether I draw the data flow on paper, visualize it in my mind, or run it on my computer, it is ALWAYS the same algorithm: steps which take an input, interpret it, and then find the result. Consciousness generated by millions of ants carrying stones around in an infinite desert, or by you writing on scraps of paper, may not look like much, but it is still consciousness.
comment by Kingreaper · 2010-08-22T11:48:32.912Z · LW(p) · GW(p)
Basically, the simulated consciousness was isomorphic to biological consciousness in a similar way to how my shadow is isomorphic to me. Just like the simulation, if I spoke ASL I could get my shadow to claim conscious awareness, but it wouldn't mean much.
The simulation is "all the information you contain (and then possibly some more)" running through an algorithm at least as complex as your own.
The shadow is "a very very small subset of the information about you, none of which is even particularly relevant to consciousness", and isn't being run at all.
So, I would disagree fundamentally with your claim that they are in any way similar.
Replies from: Kingreaper↑ comment by Kingreaper · 2010-08-22T11:52:44.084Z · LW(p) · GW(p)
More thought is needed in clarifying the exact difference between saying "consciousness arises from patterns of energy flow in the brain," and "consciousness arises from patterns of graphite on paper." I think there is definitely a big difference, but it's not crystal clear to me in what exactly it consists.
This may be a point of divergence. You're thinking of the brain as active, and the Graphite-Paper-Person simulator as passive.
If you talk about "patterns of energy flow in the brain" the analogous statement for the GPP is "patterns of marking creation/destruction on the paper"
If you talk about "patterns of graphite on paper" the analogous statement for brains is "patterns of electrochemical potential in cells"
comment by Kevin · 2010-08-22T07:29:56.896Z · LW(p) · GW(p)
Upvoted for changing your mind
Replies from: ciphergoth↑ comment by Paul Crowley (ciphergoth) · 2010-08-22T08:10:47.890Z · LW(p) · GW(p)
It's laudable that simplicio changed their mind and said so in plain terms, but I would encourage you to upvote only those articles which are the sort you'd like to see more of on LW.
Replies from: Kevin↑ comment by Kevin · 2010-08-22T08:33:42.956Z · LW(p) · GW(p)
Yes, I would like to see more posts by people who are wrong and realize it. Additional bad posts on Less Wrong are good compared to an atmosphere where people are afraid to be publicly wrong.
I also sometimes treat karma relatively. This post should be somewhere between -1 and 0, and it was at -2 when I upvoted.
comment by Furcas · 2010-08-21T20:42:48.450Z · LW(p) · GW(p)
When we simulate a brain on a general purpose computer, however, there is no physically similar pattern of energy/matter flow.
There isn't?
(Not a rhetorical question)
Replies from: rabidchicken↑ comment by rabidchicken · 2010-08-21T20:49:53.885Z · LW(p) · GW(p)
There is. You can look at the blueprints of a CPU or GPU, and it is quite clear that everything needs to be connected in a certain way to work.
Replies from: Furcas
comment by timtyler · 2010-08-21T20:29:47.229Z · LW(p) · GW(p)
First:
A major claim on which the desirability of uploading (among other things) depends, is that the upload would be conscious (as distinct from intelligent). I think I found a reductio of this claim at about 4:00 last night while staring up at my bedroom ceiling.
...but then...
If I had to guess, I suspect this is the rub: you must need a certain physical pattern of energy flow to get consciousness.
A strong claim in the headline - but then a feeble one in the supporting argument.
comment by PhilGoetz · 2010-08-23T16:59:29.923Z · LW(p) · GW(p)
This is John Searle's Chinese room argument. Searle, John (1980), "Minds, Brains and Programs", Behavioral and Brain Sciences 3 (3): 417–457. Get the original article and the many refutations of it appended after the article. I don't remember if 457 is the last page of Searle's article, or of the entire collection.
comment by MartinB · 2010-08-23T14:44:55.540Z · LW(p) · GW(p)
Upvoted and disagreed.
There is no particular difference between a simulation that uses true physics[tm] (or at least the necessary abstraction) and the 'real' action.
The person that you are is also not determined by the matter or the actual hardware you happen to run on, but by the informational links between all the things that are currently implemented in your brain (memory and connections, to simplify). And there is no difference between a solution in hardware and one in software. One is easier to maintain and change, but either can behave the same from the outside. An upload could still be running the same things your brain does, giving the same results. It just does not seem right because there is no physical body lying around.
I actually have problems with accepting the concept of qualia in the first place. But why they should go away just because you replace parts of your hardware with identical items is beyond me. Simone is real - all 3 of them. And while you do not perceive the paper version as really interacting with you, she surely does experience herself. If you stop calculating her, you basically freeze her in time.
My solution would be to drop the term 'consciousness' altogether, or to find a way to actually test for it. An AI that claims to be unconscious would be a weird experience, and I have no clue how to make sure she actually is not conscious. The term gets used so much in various media that it really seems like a magic marker, like 'emergence' or 'complexity'.
Maybe the impression of human consciousness arises because we have memories, we can think internally, and on average there is a tendency to behave consistently. But I also have enough experience to make me doubt the consciousness of specific people.
We all just operate on piles and piles of environmental data. And that you can do in wetware, electronics or on paper.
comment by JanetK · 2010-08-22T09:23:34.872Z · LW(p) · GW(p)
I have no doubt in my mind that some time in the future nervous systems will be simulated with all their functions, including consciousness. Perhaps not a particular person's nervous system at a particular time, but a somewhat close approximation: a very similar nervous system with consciousness but no magic. However, I definitely doubt that it will be done on a general-purpose computer running algorithms. I doubt that step-by-step calculations will be the way the simulation is done. Here is why:
1. The brain is massively parallel, and complex feedback loops are difficult to calculate (not impossible, but difficult). The easiest way to simulate a massively parallel system is to build it in hardware rather than use stepwise software.
2. There are effects of fields to consider - not just electrical and magnetic but also chemical. Like the massive numbers of feedback loops, the fields would be difficult to calculate, as the same elements that are reacting to the fields are also creating them.
3. There are many critical timing effects in the system, and these would have to be duplicated or scaled - another difficulty of calculation.
I believe that it is far simpler to take advantage of the architecture of the brain, which appears to have a lot of repetition of small units of a few thousand cells, and build good models of these in hardware, including correct timing and ways to simulate fields etc. Then take advantage of the larger (sort of functional) divisions of the brain to construct larger modules. It gets very complicated fairly quickly, but not as complicated as stepwise calculations. In essence it resembles the replacement of neurons one at a time with chips, but the chips would have to be more than just fancy logic components, as they would have to sense their surroundings as well as communicate with other neurons or chips. The boundaries need to be at the natural joints to make it simpler, but the idea is the same. I can imagine this actually being built and having consciousness. The computer running algorithms or the person with a pencil creating consciousness is a lot harder to imagine (and needs a lot of 'in principles', too many for me).
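(For what it's worth, here is a toy sketch of the standard way stepwise software does handle mutual feedback - my own illustration, not a claim about what JanetK has in mind: discretize time and update every unit from the previous global state, so all the loops advance together. The real dispute is whether this scales to brain-sized systems with fields and precise timing, not whether feedback as such can be expressed step by step.)

```python
# Toy sketch: synchronous Euler steps through a small recurrent network.
# Every unit reads every other unit's *previous* value, so all feedback
# loops are honoured on each step; the cost is simply many small steps.

import math
import random

N = 100                                    # number of toy "units"
random.seed(0)
W = [[random.gauss(0, 1 / math.sqrt(N)) for _ in range(N)] for _ in range(N)]
state = [random.random() for _ in range(N)]
dt, tau = 0.001, 0.02                      # 1 ms step, 20 ms time constant

def step(x):
    drive = [sum(W[i][j] * math.tanh(x[j]) for j in range(N)) for i in range(N)]
    return [x[i] + dt * (-x[i] + drive[i]) / tau for i in range(N)]

for _ in range(300):                       # simulate 0.3 s of toy dynamics
    state = step(state)
print("mean activity after 0.3 s:", sum(state) / N)
```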
Replies from: rwallace↑ comment by rwallace · 2010-08-22T11:31:28.141Z · LW(p) · GW(p)
Hardware might ultimately be more efficient than software for this kind of thing, but software is a lot easier to tune and debug. There are reasons neural network chips never took off.
I can plausibly imagine the first upload running in software, orders of magnitude slower than real time, on enough computers to cover a city block and require a dedicated power station, cooperating with a team of engineers and neuroscientists by answering one test question per day; 10 years later, the debugged version implemented in hardware, requiring only a roomful of equipment per upload, and running at a substantial fraction of real-time speed; and another 10 years later, new process technology specifically designed for that hardware, allowing a mass-market version that runs at full real-time speed, fits in desktop form factor and plugs into a standard power socket.
Replies from: JanetK↑ comment by JanetK · 2010-08-22T20:02:15.207Z · LW(p) · GW(p)
You may be right but my imagination has a problem with it. If there is a way to do analog computing using software in a non step-by-step procedure, then I could imagine a software solution. It is the algorithm that is my problem and not the physical form of the 'ware'.
Replies from: rwallace↑ comment by rwallace · 2010-08-23T03:21:57.444Z · LW(p) · GW(p)
I may not be understanding your objection in that case. Are you saying that there's no way software, being a digital phenomenon, can simulate continuous analog phenomena? If so, I will point to the many cases where we successfully use software to simulate analog phenomena to sufficient precision. If not, can you perhaps rephrase?
Replies from: JanetK↑ comment by JanetK · 2010-08-23T09:36:34.548Z · LW(p) · GW(p)
I may not be expressing myself well here. I am trying to express what I can and cannot imagine - I do not presume to say that because I cannot imagine something, it is impossible. In fact I believe that it would be possible to simulate the nervous system with digital algorithms in principle, just extremely difficult in practice - so difficult that I cannot imagine it happening. It is not the 'software' or the 'digital' that is my block; it is the 'algorithm', the stepwise processes, that I am having trouble with. How do you imagine the enormous amount and varied nature of feedback in the brain can be simulated by step-by-step logic? I take it that you can imagine how it could be done - so how?
Replies from: PaulAlmond↑ comment by PaulAlmond · 2010-08-23T09:49:28.030Z · LW(p) · GW(p)
with a lot of steps.
Replies from: JanetK↑ comment by JanetK · 2010-08-23T10:48:48.747Z · LW(p) · GW(p)
with a lot of steps
I guess that is the conversation stopper. We agree that it takes a lot of steps. We disagree on whether the number makes it only possible in principle or not.
Replies from: rwallace↑ comment by rwallace · 2010-08-23T11:47:20.716Z · LW(p) · GW(p)
Ah, I was about to reply with a proof of concept explanation in terms of molecular modeling (which of course would be hopelessly intractable in practice but should illustrate the principle), until I saw you say 'only possible in principle'; are you saying then that your objection is that you think even the most efficient software-based techniques would take, say, a million years of supercomputer time to run a few seconds of consciousness?
Replies from: JanetK↑ comment by JanetK · 2010-08-24T10:30:34.106Z · LW(p) · GW(p)
Well, maybe not that long, but a long, long time to do the 'lot of little steps'. It does not seem the appropriate tool to me. After all, the much slower component parts of a brain do a sort of unit of perception in about a third of a second. I believe that is because it is not done step-wise but something like this: the enormous number of overlapping feedback loops can only stabilize in a sort of 'best fit scenario', and it takes very little time for the whole network to home in on the final perception. (Vaguely that sort of thing.)
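(A hedged sketch of that kind of "best fit" settling done with nothing but little steps - my example, not JanetK's: a tiny Hopfield-style network stores one pattern and, starting from a corrupted version, relaxes onto the stored pattern in a handful of synchronous updates.)

```python
# Toy "best fit" settling: a Hopfield-style network relaxes from a noisy
# state onto its stored pattern using only simple, repeated update steps.

def sign(v):
    return 1 if v >= 0 else -1

stored = [1, -1, 1, 1, -1, -1, 1, -1]      # the "perception" to be recovered
N = len(stored)
# Hebbian weights for the single stored pattern (no self-connections).
W = [[0 if i == j else stored[i] * stored[j] / N for j in range(N)]
     for i in range(N)]

state = [1, 1, 1, 1, -1, 1, 1, -1]         # corrupted version (two bits flipped)
for n in range(5):
    state = [sign(sum(W[i][j] * state[j] for j in range(N))) for i in range(N)]
    if state == stored:
        print(f"settled onto the stored pattern after {n + 1} update(s)")
        break
```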
Replies from: rwallace↑ comment by rwallace · 2010-08-24T10:44:49.061Z · LW(p) · GW(p)
Right, fair enough, then it's a quantitative question on which our intuitions differ, and the answer depends both on a lot of specific facts about the brain, and on what sort of progress Moore's Law ends up making over the next few decades. Let's give Blue Brain another decade or two and see what things look like then.
Replies from: JanetK↑ comment by JanetK · 2010-08-24T11:22:28.402Z · LW(p) · GW(p)
Personally I have great hopes for Blue Brain - if it figures out how a single cortex unit works (which they seem to be on the way to), and if they can then figure out how to convert that into a chip and put oodles of those chips in the right environment of inputs and interactions with other parts of the brain (the thalamus and basal ganglia especially), and then.....
A lot of work but it has a good chance as long as it avoids the step-by-step algorithm trap.