How sure are you that brain emulations would be conscious?
post by ChrisHallquist · 2013-08-26T06:21:17.996Z · LW · GW · Legacy · 177 comments
Or the converse problem - an agent that contains all the aspects of human value, except the valuation of subjective experience. So that the result is a nonsentient optimizer that goes around making genuine discoveries, but the discoveries are not savored and enjoyed, because there is no one there to do so. This, I admit, I don't quite know to be possible. Consciousness does still confuse me to some extent. But a universe with no one to bear witness to it, might as well not be.
- Eliezer Yudkowsky, "Value is Fragile"
I had meant to try to write a long post for LessWrong on consciousness, but I'm getting stuck on it, partly because I'm not sure how well I know my audience here. So instead, I'm writing a short post, with my main purpose being just to informally poll the LessWrong community on one question: how sure are you that whole brain emulations would be conscious?
There's actually a fair amount of philosophical literature about issues in this vicinity; David Chalmers' paper "The Singularity: A Philosophical Analysis" has a good introduction to the debate in section 9, including some relevant terminology:
Biological theorists of consciousness hold that consciousness is essentially biological and that no nonbiological system can be conscious. Functionalist theorists of consciousness hold that what matters to consciousness is not biological makeup but causal structure and causal role, so that a nonbiological system can be conscious as long as it is organized correctly.
So, on the functionalist view, emulations would be conscious, while on the biological view, they would not be.
Personally, I think there are good arguments for the functionalist view, and the biological view seems problematic: "biological" is a fuzzy, high-level category that doesn't seem like it could be of any fundamental importance. So probably emulations will be conscious--but I'm not too sure of that. Consciousness confuses me a great deal, and seems to confuse other people a great deal, and because of that I'd caution against being too sure of much of anything about consciousness. I'm worried not so much that the biological view will turn out to be right, but that the truth might be some third option no one has thought of, which might or might not entail emulations are conscious.
Uncertainty about whether emulations would be conscious is potentially of great practical concern. I don't think it's much of an argument against uploading-as-life-extension; better to probably survive as an upload than do nothing and die for sure. But it's worrisome if you think about the possibility, say, of an intended-to-be-Friendly AI deciding we'd all be better off if we were forcibly uploaded (or persuaded, using its superhuman intelligence, to "voluntarily" upload...). Uncertainty about whether emulations would be conscious also makes Robin Hanson's "em revolution" scenario less appealing.
For a long time, I've vaguely hoped that advances in neuroscience and cognitive science would lead to unraveling the problem of consciousness. Perhaps working on creating the first emulations would do the trick. But this is only a vague hope, I have no clear idea of how that could possibly happen. Another hope would be that if we can get all the other problems in Friendly AI right, we'll be able to trust the AI to solve consciousness for us. But with our present understanding of consciousness, can we really be sure that would be the case?
That leads me to my second question for the LessWrong community: is there anything we can do now to get clearer on consciousness? Any way to hack away at the edges?
177 comments
Comments sorted by top scores.
comment by itaibn0 · 2013-08-24T15:36:41.759Z · LW(p) · GW(p)
I believe the word "consciousness" is used in so many confused and conflicting ways that nobody should mention "consciousness" without clarifying what they mean by it. I will substitute your question with "How should we morally value emulations?".
Personally, if an emulation behaved like a human in all respects except for physical presence, I would give them the same respect as I give a human, subject to the following qualifications:
- I don't believe multiple emulations with very similar memories should be treated the same as an equal number of humans.
- I don't believe emulations should be given voting rights unless there is very careful regulation on how they are created; otherwise manufacturers would have perverse incentives. [edit: Actually, what should be regulated is not when they can be created, but when they can be given voting rights.]
- Similarly, a careful look at practical considerations must be given before granting emulations other civil rights.
- If this situation actually occurs in my lifetime, I would have access to more details on how emulations and society with emulations work. This information may cause me to change my mind.
If emulations behave in noticeably different ways from humans, I would seek more information before making judgements.
In particular, according to my current moral intuition, I don't give an argument of the form "This emulation behaves just like a human, but it might not actually be conscious" any weight.
Replies from: GuySrinivasan, David_Gerard, ChrisHallquist↑ comment by SarahNibs (GuySrinivasan) · 2013-08-24T16:48:43.562Z · LW(p) · GW(p)
I don't believe emulations should be given voting rights unless there is very careful regulation on how they are created; otherwise manufacturers would have perverse incentives.
Do you in general support regulations on creating things with voting rights, to avoid manufacturers having perverse incentives?
Replies from: Error, itaibn0, Jiro↑ comment by Error · 2013-08-27T20:35:57.850Z · LW(p) · GW(p)
Assuming you're aiming to refer to creating humans:
It seems to me that there's a qualitative difference between current methods of creating voters (i.e. childbearing) and creating a whole ton of emulations. Our current methods are distributed, slow, have a long time gap (so time discounting applies to incentives), and there are better options for abuse of authority than breeding new voters. Whereas effectively free creation of human-equivalent ems is fast, centralized, has effectively no time gap, and could easily warp the political context, assuming "political context" still matters in such a world.
But I think even thinking about voting rights for ems is solving the wrong problem. If we as a society determine that ems ought to have the rights and privileges of citizens, but doing so completely breaks democracy as we know it, it is likely that the proper response is not to rearrange voting rights, but to simply replace democracy with Something Else that better fits the new situation.
Democracy isn't immutable. If it doesn't work, find something else that does.
↑ comment by itaibn0 · 2013-08-24T18:35:35.531Z · LW(p) · GW(p)
Considering your question, I have changed my position. In its current form it applies equally well to both ems and humans. Also, note that careful regulation does not necessarily mean heavy regulation. In fact, heavy regulation has the danger of creating perverse incentives for the regulators.
Replies from: Slider↑ comment by Slider · 2013-08-26T11:29:30.987Z · LW(p) · GW(p)
There are some people who, for religious reasons, forgo birth control, resulting in bigger families. Am I correct in extrapolating that you would find that a child of such parents has less basis to have their vote counted with equal force?
↑ comment by Jiro · 2013-08-24T17:25:08.512Z · LW(p) · GW(p)
The regulation wasn't supposed to be on creating the things, the regulation was supposed to be on giving them the right to vote once they have been created.
I'd suggest that in a situation where it is possible to, for instance, shove a person into a replicator and instantly get a billion copies with a 1 hour lifespan, we should indeed deny such copies voting rights.
Of course, creating doesn't necessarily mean creating from scratch. Suppose nonresidents cannot vote, residents can vote, and the residency requirement is one hour. You can create residents from nonresidents by bussing them in and waiting. I would support a regulation that did not allow such newly created residents to vote.
I can't think of any real-life situations where it's easy enough to create voters that there are any such perverse incentives (real-life cases of bussing in nonresidents are usually just vote fraud).
↑ comment by David_Gerard · 2013-08-24T16:11:04.185Z · LW(p) · GW(p)
Concur. What I get from this post is "the word is hopelessly confused, and philosophical discussions on the topic are mostly arguments over the word."
↑ comment by ChrisHallquist · 2013-08-24T21:13:50.695Z · LW(p) · GW(p)
I believe the word "consciousness" is used in so many confused and conflicting ways that nobody should mention "consciousness" without clarifying what they mean by it.
This is a good point; you're absolutely right that I should have addressed this issue in the OP. There seems to be broad agreement among people who find consciousness puzzling that Chalmers' description of the "hard problem" does a pretty good job of specifying where the puzzle lies, regardless of whether they agree with Chalmers' other views (my impression is that few do).
Replies from: itaibn0↑ comment by itaibn0 · 2013-08-25T15:55:31.402Z · LW(p) · GW(p)
Unfortunately, that doesn't clarify it for me. I've seen descriptions along these lines, and if I thought they were coherent and consistent with each other I would have assumed they were all referring to the same thing. In particular, this segment is confusing:
If someone says "I can see that you have explained how DNA stores and transmits hereditary information from one generation to the next, but you have not explained how it is a gene", then they are making a conceptual mistake. All it means to be a gene is to be an entity that performs the relevant storage and transmission function. But if someone says "I can see that you have explained how information is discriminated, integrated, and reported, but you have not explained how it is experienced", they are not making a conceptual mistake.
It seems to me like someone asking the second question is making a conceptual mistake of exactly the same nature as someone asking the first question.
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-08-25T19:00:30.850Z · LW(p) · GW(p)
Given an extremely-high-resolution em with verified pointwise causal isomorphism (that is, it has been verified that emulated synaptic compartments are behaving like biological synaptic compartments to the limits of detection) and verified surface correspondence (the person emulated says they can't internally detect any difference) then my probability of consciousness is essentially "top", i.e. I would not bother to think about alternative hypotheses because the probability would be low enough to fall off the radar of things I should think about. Do you spend a lot of time worrying that maybe a brain made out of gold would be conscious even though your biological brain isn't?
Replies from: Kawoomba, ChrisHallquist, Strilanc, Technoguyrob, Juno_Watt↑ comment by Kawoomba · 2013-08-25T19:25:51.190Z · LW(p) · GW(p)
What if it had only been verified that the em's overall behavior perfectly corresponds to its biological template (i.e. without corresponding subparts down to your chosen ground level)?
What if e.g. groups of neurons could be perfectly (and more efficiently) simulated, using an algorithm which doesn't need to retain a "synapse" construct?
Do you feel that some of the biological structural features on some level of granularity need to have clearly identifiable point-to-point counterparts in the algorithm?
If so, why stop at "synaptic compartments", why not go to some even finer-grained level? You presumably wouldn't insist on the algorithm explicitly simulating atoms (or elementary particles), groups of those (you'd probably agree) may be abstracted from, using higher-level functionally equivalent subalgorithms.
Since in any case, "verified surface correspondence" is a given (i.e. all em-implementations aren't distinguishable from a black-box view), on what basis would you say which (functionally superfluous) parts may be optimized away, and which must be preserved? Choosing "synaptic compartments" seems like privileging the hypothesis based on what's en vogue in literature.
This is probably another variant of the hard problem of consciousness, and unless resource requirements play no role at all, ems will likely end up being simulated as efficiently as possible (and synaptic compartments be damned), especially since the ems will profess not to notice a thing (functional equivalence).
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-08-25T19:45:35.155Z · LW(p) · GW(p)
What if it had only been verified that the em's overall behavior perfectly corresponds to its biological template (i.e. without corresponding subparts down to your chosen ground level)?
Since whole brains are not repeatable, verifying behavioral isomorphism with a target would require a small enough target that its internal interactions were repeatable. (Then, having verified the isomorphism, you tile it across the whole brain.)
What if e.g. groups of neurons could be perfectly (and more efficiently) simulated, using an algorithm which doesn't need to retain a "synapse" construct?
I would believe in this after someone had shown extremely high-fidelity simulation of synaptic compartments, then demonstrated the (computational) proposition that their high-level sim was equivalent.
Do you feel that some of the biological structural features on some level of granularity need to have clearly identifiable point-to-point counterparts in the algorithm?
No, but it's sufficient to establish causal isomorphism. At the most extreme level, if you can simulate out a synapse by quantum fields, then you are very confident in your ability to simulate it because you have a laws-of-physics-level understanding of the quantum fields and of the simulation of the quantum fields.
Since in any case, "verified surface correspondence" is a given (i.e. all em-implementations aren't distinguishable from a black-box view)
Only in terms of very high-level abstractions being reproduced, since literal pointwise behavior is unlikely to be reproducible given thermal noise and quantum uncertainty. But it remains true that I expect any disturbance of the referent of "consciousness" to disturb the resulting agent's tendency to write philosophy papers about "consciousness". Note the high-level behavioral abstraction.
The combination of verified pointwise causal isomorphism of repeatable small parts, combined with surface behavioral equivalence on mundane levels of abstraction, is sufficient for me to relegate the alternative hypothesis to the world of 'not bothering to think about it any more'. There are no worlds of reasonable probability in which both tests are simultaneously and accidentally fooled in the process of constructing a technology honestly meant to produce high-fidelity uploads.
Replies from: Kawoomba↑ comment by Kawoomba · 2013-08-25T20:15:34.641Z · LW(p) · GW(p)
The combination of verified pointwise causal isomorphism of repeatable small parts, combined with surface behavioral equivalence on mundane levels of abstraction, is sufficient for me to relegate the alternative hypothesis to the world of 'not bothering to think about it any more'.
The kind of model which postulates that "a conscious em-algorithm must not only act like its corresponding human, under the hood it must also be structured like that human" would not likely stop at "... at least be structured like that human for, like, 9 orders of magnitude down from a human's size, to the level that a human can see through an electron microscope; that's enough, after that it doesn't matter (much / at all)". Wouldn't that be kind of arbitrary and make for an ugly model?
Instead, if structural correspondence allowed for significant additional confidence that the em's professions of being conscious were true, wouldn't such a model just not stop, demanding "turtles all the way down"?
I guess I'm not sure what some structural fidelity can contribute (and find those models too contrived which place consciousness somewhere beyond functional equivalence, but still in the upper echelons of the substructures, conveniently not too far from the surface level), compared to "just" overall functional equivalence.
IOW, the big (viable) alternative to functional equivalence, which is structural (includes functional) equivalence, would likely not stop just a few levels down.
Replies from: Eliezer_Yudkowsky, patrickmclaren, Juno_Watt↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-08-25T20:46:58.758Z · LW(p) · GW(p)
The combination of verified pointwise causal isomorphism of repeatable small parts, combined with surface behavioral equivalence on mundane levels of abstraction, is sufficient for me to relegate the alternative hypothesis to the world of 'not bothering to think about it any more'.
Key word: "Sufficient". I did not say, "necessary".
Replies from: badtheatre↑ comment by badtheatre · 2013-09-03T05:03:15.921Z · LW(p) · GW(p)
This brings up something that has been on my mind for a long time. What are the necessary and sufficient conditions for two computations to be (homeo?)morphic? This could mean a lot of things, but specifically I'd like to capture the notion of being able to contain a consciousness, so what I'm asking is, what we would have to prove in order to say program A contains a consciousness --> program B contains a consciousness. "pointwise" isomorphism, if you're saying what I think, seems too strict. On the other hand, allowing any invertible function to be a ___morphism doesn't seem strict enough. For one thing we can put any reversible computation in 1-1 correspondence with a program that merely stores a copy of the initial state of the first program and ticks off the natural numbers. Restricting our functions by, say, resource complexity, also seems to lead to both similar and unrelated issues...
Has this been discussed in any other threads?
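(To make the counter construction above concrete, here is a minimal Python sketch; the names `step`, `clock_run`, and `decode` are purely illustrative, not anything standard. The point is that the "correspondence" between a reversible computation and a clock-plus-stored-initial-state program is carried entirely by the decoding map, not by the clock's own dynamics -- which is exactly why allowing arbitrary invertible mappings seems too permissive.)

```python
# Toy sketch: a reversible computation vs. a "clock" program that merely
# stores the initial state and ticks off the natural numbers.  A decoder
# maps the clock's state at tick t onto the real computation's state at
# step t, so the two runs correspond step for step -- but all of the
# interesting structure lives in the decoder, not in the clock.

def step(state):
    """A trivially reversible step: rotate a tuple one position left."""
    return state[1:] + state[:1]

def step_inverse(state):
    """Inverse of step, demonstrating reversibility."""
    return state[-1:] + state[:-1]

def run(initial, n):
    """The 'real' computation: apply step n times."""
    s = initial
    for _ in range(n):
        s = step(s)
    return s

def clock_run(initial, n):
    """The 'clock' program: remember the initial state and count ticks."""
    return (initial, n)

def decode(clock_state):
    """Map the clock's state at tick n onto the real state at step n."""
    initial, n = clock_state
    return run(initial, n)

if __name__ == "__main__":
    s0 = (1, 0, 0, 1, 1, 0)
    assert step_inverse(step(s0)) == s0          # step really is reversible
    for t in range(8):                           # step-for-step correspondence
        assert decode(clock_run(s0, t)) == run(s0, t)
    print("clock + decoder tracks the reversible computation step for step")
```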
↑ comment by patrickmclaren · 2013-08-28T11:09:19.248Z · LW(p) · GW(p)
The combination of verified pointwise causal isomorphism of repeatable small parts, combined with surface behavioral equivalence on mundane levels of abstraction, is sufficient for me to relegate the alternative hypothesis to the world of 'not bothering to think about it any more'.
The kind of model which postulates that "a conscious em-algorithm must not only act like its corresponding human, under the hood it must also be structured like that human" would not likely stop at "... at least be structured like that human for, like, 9 orders of magnitude down from a human's size, to the level that you a human can see through an electron microscope, that's enough after that it doesn't matter (much / at all)". Wouldn't that be kind of arbitrary and make for an ugly model?
Given that an isomorphism requires checking that the relationship is one-to-one in both directions i.e. human -> em, and em -> human, I see little reason to worry about recursing to the absolute bottom.
Suppose that it turns out that in some sense, ems are little endian, whilst humans are big endian, yet, all other differences are negligible. Does that throw the isomorphism out the window? Of course not.
↑ comment by Juno_Watt · 2013-08-25T20:20:17.896Z · LW(p) · GW(p)
Instead, if structural correspondence allowed for significant additional confidence that the em's professions of being conscious were true, wouldn't such a model just not stop, demanding "turtles all the way down"?
IOW, why assign "top" probability to the synaptic level, when there are further levels.
↑ comment by ChrisHallquist · 2013-08-26T04:30:15.402Z · LW(p) · GW(p)
Do you spend a lot of time worrying that maybe a brain made out of gold would be conscious even though your biological brain isn't?
Not gold, specifically, but if I catch your intended meaning, yes.
Still digesting your other comments in this subthread, will try to further respond to those.
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-08-26T04:37:20.971Z · LW(p) · GW(p)
Why not gold specifically?
Replies from: ChrisHallquist↑ comment by ChrisHallquist · 2013-08-26T06:20:42.247Z · LW(p) · GW(p)
I meant to apply the "not" to the "specifically," rather than to the "gold." Gold isn't what I normally think of being used as a computing substrate, though I suppose it could get used that way if we use up all the more abundant elements as we convert the solar system into a Dyson sphere (AFAIK, there may be a reason I'm unaware of not to do that).
↑ comment by Strilanc · 2013-08-27T11:20:15.360Z · LW(p) · GW(p)
Is isomorphism enough?
Consider gravity as an analogy.
A person who cares about bending spacetime lots is not equivalent to a person who cares about doing things isomorphic to bending spacetime lots. One will refuse to be replaced by a simulation, and the other will welcome it. One will try to make big compressed piles of things, and the other will daydream about making unfathomably big compressed piles of things.
Telling a person who cares about bending spacetime lots that, within the simulation, they'll think they're bending spacetime lots will not motivate them. They don't care about thinking they're bending spacetime. They want to actually bend spacetime. P wants X, not S(X), even though S(P) S(wants) S(X).
If isomorphism is enough then the person who cares about bending spacetime a lot, who wants X but not S(X), is somehow fundamentally misguided. A case I can think of where that would be the case is a simulated world where simulated simulations are unwrapped (and hopefully placed within observable distance, so P can find out X = S(X) and react accordingly). In other cases.... well, at the moment, I just don't see how it's misguided to be P wanting X but not care about S(P) S(wanting) S(X).
I don't want to think I'm conscious. I don't want the effects of what I would do if I were conscious to be computed out in exacting detail. I don't want people to tell stories about times I was conscious. I want to be conscious. On the other hand, I suppose that's what most non-simulated evolved things would say...
↑ comment by robertzk (Technoguyrob) · 2013-08-26T20:40:48.882Z · LW(p) · GW(p)
I spend time worrying about whether random thermal fluctuation in (for example) suns produces sporadic conscious moments simply due to random causal structure alignments. Since I also believe most potential conscious moments are bizarre and painful, that worries me. This worry is not useful when embedded in System 1, which was not created to cope with it, so I only worry in the System 2, philosophical-curiosity sense.
Replies from: Rukifellth↑ comment by Rukifellth · 2013-08-27T15:24:58.419Z · LW(p) · GW(p)
I find Boltzmann Brains to be more of an unconvincing thought experiment than an actual possibility.
Since I also believe most potential conscious moments are bizarre and painful, that worries me.
Is this concern altruistic/compassionate?
↑ comment by Juno_Watt · 2013-08-25T19:24:11.312Z · LW(p) · GW(p)
What does "like" mean, there? The actual biochemistry, so that pieces of Em could be implanted in a real brain, or just accurate virtualisation, like a really good flight simulator?
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-08-25T19:38:44.170Z · LW(p) · GW(p)
Flight simulator, compared to instrumentation of and examination of biology. This by itself retains the possibility that something vital was missed, but then it should show up in the surface correspondences of behavior, and in particular, if it eliminates consciousness, I'd expect what was left of the person to notice that.
Replies from: wedrifid, Juno_Watt↑ comment by wedrifid · 2013-08-26T04:49:00.135Z · LW(p) · GW(p)
and in particular, if it eliminates consciousness, I'd expect what was left of the person to notice that.
This is not intended to undermine your position (since I share it) but this seems like a surprising claim to me. From what I understand of experiments done on biological humans with parts of their brains malfunctioning there are times where they are completely incapable of recognising the state of their brain even when it is proved to them convincingly. Since 'consciousness' seems at least somewhat related to the parts of the brain with introspective capabilities it does not seem implausible that some of the interventions that eliminate consciousness also eliminate the capacity to notice that lack.
Are you making a claim based on knowledge of human neuropsychology that I am not familiar with, or is it a claim based on philosophical reasoning? (Since I haven't spent all that much time analysing the implications of aspects of consciousness there could well be something I'm missing.)
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-08-26T07:29:04.854Z · LW(p) · GW(p)
Fair enough, anosognosia would certainly be a possibility if something did eliminate consciousness. But I would expect severe deficits in writing philosophy papers about consciousness to emerge afterward.
Replies from: wedrifid↑ comment by wedrifid · 2013-08-26T08:52:14.640Z · LW(p) · GW(p)
Fair enough, anosognosia would certainly be a possibility if something did eliminate consciousness. But I would expect severe deficits in writing philosophy papers about consciousness to emerge afterward.
I'd tend to agree, at least with respect to novel or interesting work.
If you'll pardon some academic cynicism, it wouldn't surprise me much if an uploaded, consciousness redacted tenured professor could go ahead producing papers that would be accepted by journals. The task of publishing papers has certain differences to that of making object level progress. In fact, it seems likely that a narrow artificial intelligence specifically competent at literary synthesis could make actual valuable progress on human knowledge of this kind without being in the remote ballpark of conscious.
Replies from: mwengler↑ comment by mwengler · 2013-08-28T18:40:18.080Z · LW(p) · GW(p)
In fact, it seems likely that a narrow artificial intelligence specifically competent at literary synthesis could make actual valuable progress on human knowledge of this kind without being in the remote ballpark of conscious
How would you know, or even what would make you think, that it was NOT conscious? Even if it said it wasn't conscious, that would be evidence but not dispositive. After all, there are humans such as James and Ryle who deny consciousness. Perhaps their denial is in a narrow or technical sense, but one would expect a conscious literary synthesis program to be AT LEAST as "odd" as the oddest human being, and so some fairly extensive discussion would need to be carried out with the thing to determine how it is using the terms.
At the simplest level consciousness seems to mean self-consciousness: I know that I exist, you know that you exist. If you were to ask a literary program whether it knew it existed, how could it meaningfully say no? And if it did meaningfully say no, and you loaded it with data about itself (much as you must load it with data about art when you want it to write a book of art criticism or on aesthetics) then it would have to say it knows it exists, as much as it would have to say it knows about "art" when loaded with info to write a book on art.
Ultimately, unless you can tell me how I am wrong, our only evidence of anybody's consciousness but our own is by a weak inference that "they are like me, I am conscious deep down, Occam's razor suggests they are too." Sure the literary program is less like me than is my wife, but it is more like me than a clam is like me, and it is more like me in some respects (but not overall) than is a chimpanzee. I think you would have to put your confidence that the literary program is conscious at something in the neighborhood of your confidence that a chimpanzee is conscious.
Replies from: wedrifid↑ comment by wedrifid · 2013-09-01T05:42:17.879Z · LW(p) · GW(p)
How would you know, or even what would make you think, that it was NOT conscious?
I'd examine the credentials and evidence of competence of the narrow AI engineer that created it and consult a few other AI experts and philosophers who are familiar with the particular program design.
↑ comment by Juno_Watt · 2013-08-25T19:46:16.413Z · LW(p) · GW(p)
Then why require causal isomorphism at the level of synaptic structure in addition to surface correspondence of behaviour?
Replies from: Eliezer_Yudkowsky, Juno_Watt↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-08-25T20:46:14.551Z · LW(p) · GW(p)
Because while it's conceivable that an effort to match surface correspondences alone (make something which talked like it was conscious) would succeed for reasons non-isomorphic to those why we exhibit those surface behaviors (its cause of talking about consciousness is not isomorphic to our cause) it defies all imagination that an effort to match synaptic-qua-synapse behaviors faithfully would accidentally reproduce talk about consciousness with a different cause. Thus this criterion is entirely sufficient (perhaps not necessary).
We also speak of surface correspondence, in addition to synaptic correspondence, to verify that some tiny little overlooked property of the synapses wasn't key to high-level surface properties, in which case you'd expect what was left to stop talking about consciousness, or undergo endless epileptic spasms, etc. However it leaves the realm of things that happen in the real world, and enters the realm of elaborate fears that don't actually happen in real life, to suppose that some tiny overlooked property of the synapses both destroys the original cause of talk about consciousness, and substitutes an entirely new distinct and non-isomorphic cause which reproduces the behavior of talking about consciousness and thinking you're conscious to the limits of inspection yet does not produce actual consciousness, etc.
Replies from: MugaSofer, Juno_Watt↑ comment by MugaSofer · 2013-08-26T17:48:19.240Z · LW(p) · GW(p)
We also speak of surface correspondence, in addition to synaptic correspondence, to verify that some tiny little overlooked property of the synapses wasn't key to high-level surface properties, in which case you'd expect what was left to stop talking about consciousness, or undergo endless epileptic spasms, etc.
Hmm. I would expect a difference, but ... out of interest, how much talk about consciousness do you think is directly caused by it (ie non-chat-bot-simulable.)
↑ comment by Juno_Watt · 2013-08-25T22:49:30.328Z · LW(p) · GW(p)
Because while it's conceivable that an effort to match surface correspondences alone (make something which talked like it was conscious) would succeed for reasons non-isomorphic to those why we exhibit those surface behaviors (its cause of talking about consciousness is not isomorphic to our cause) it defies all imagination that an effort to match synaptic-qua-synapse behaviors faithfully would accidentally reproduce talk about consciousness with a different cause
For some value of "cause". If you are interested in which synaptic signals cause which reports, then you have guaranteed that the cause will be the same. However, I think what we are interested in is whether reports of experience and self-awareness are caused by experience and self-awareness.
We also speak of surface correspondence, in addition to synaptic correspondence, to verify that some
However it leaves the realm of things that happen in the real world, and enters the realm of elaborate fears that don't actually happen in real life, to suppose that some tiny overlooked property of the synapses both destroys the original cause of talk about consciousness, and substitutes an entirely new distinct and non-isomorphic cause which reproduces the behavior of talking about consciousness and thinking you're conscious to the limits of inspection yet does not produce actual consciousness, etc.
Maybe. But your stipulation of causal isomorphism at the synaptic level only guarantees that there will only be minor differences at that level. Since you don't care how the Em's synapses are implemented, there could be major differences at the subsynaptic level -- indeed, if your Em is silicon-based, there will be. And if those differences lead to differences in consciousness (which they could, irrespective of the point made above, since they are major differences), those differences won't be reported, because the immediate cause of a report is a synaptic firing, which will be guaranteed to be the same!
You have, in short, set up the perfect conditions for zombiehood: a silicon-based Em is different enough from a wetware brain to reasonably have a different form of consciousness, but it can't report such differences, because it is a functional equivalent: it will say that tomatoes are red, whatever it sees!
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-08-25T23:09:50.631Z · LW(p) · GW(p)
http://lesswrong.com/lw/p7/zombies_zombies/
http://lesswrong.com/lw/p9/the_generalized_antizombie_principle/
http://lesswrong.com/lw/f1u/causal_reference/
More generally http://wiki.lesswrong.com/wiki/Zombies_(sequence)
Replies from: Juno_Watt↑ comment by Juno_Watt · 2013-08-25T23:23:22.466Z · LW(p) · GW(p)
The argument against p-zombies is that there is no physical difference that could explain the difference in consciousness. That does not extend to silicon WBEs or AIs.
Replies from: nshepperd, Eliezer_Yudkowsky, mwengler↑ comment by nshepperd · 2013-08-26T00:21:23.398Z · LW(p) · GW(p)
The argument against p-zombies is that the reason for our talk of consciousness is literally our consciousness, and hence there is no reason for a being not otherwise deliberately programmed to reproduce talk about consciousness to do it if it weren't conscious. It is a corollary of this that a zombie, which is physically identical, and therefore not deliberately programmed to imitate talk of consciousness but must still reproduce it, must talk about consciousness for the same reason we do. That is, the zombies must be conscious.
A faithful synaptic-level silicon WBE, if it independently starts talking about it at all, must be talking about it for the same reason as us (ie. consciousness), since it hasn't been deliberately programmed to fake consciousness-talk. Or, something extremely unlikely has happened.
Note that supposing that how the synapses are implemented could matter for consciousness, even while the macro-scale behaviour of the brain is identical, is equivalent to supposing that consciousness doesn't actually play any role in our consciousness-talk, since David Chalmers would write just as many papers on the Hard Problem regardless of whether we flipped the "consciousness" bit in every synapse in his brain.
Replies from: sh4dow, Juno_Watt↑ comment by sh4dow · 2020-12-07T00:28:15.142Z · LW(p) · GW(p)
But isn't it still possible that a simulation that lost its consciousness would still retain memories about consciousness that were sufficient, even without access to real consciousness, to generate potentially even 'novel' content about consciousness?
Replies from: nshepperd↑ comment by nshepperd · 2021-01-03T08:26:06.013Z · LW(p) · GW(p)
That's possible, although then the consciousness-related utterances would be of the form "oh my, I seem to have suddenly stopped being conscious" or the like (if you believe that consciousness plays a causal role in human utterances such as "yep, i introspected on my consciousness and it's still there"), implying that such a simulation would not have been a faithful synaptic-level WBE, having clearly differing macro-level behaviour.
↑ comment by Juno_Watt · 2013-08-26T01:02:46.470Z · LW(p) · GW(p)
The argument against p-zombies is that the reason for our talk of consciousness is literally our consciousness, and hence there is no reason for a being not otherwise deliberately programmed to reproduce talk about consciousness to do it if it weren't conscious.
A functional duplicate will talk the same way as whomever it is a duplicate of.
A faithful synaptic-level silicon WBE, if it independently starts talking about it at all, must be talking about it for the same reason as us (ie. consciousness),
A WBE of a specific person will respond to the same stimuli in the same way as that person. Logically, that will be for the reason that it is a duplicate. Physically, the "reason", or ultimate cause, could be quite different, since the WBE is physically different.
since it hasn't been deliberately programmed to fake consciousness-talk.
It has been programmed to be a functional duplicate of a specific individual.
Or, something extremely unlikely has happened.
Something unlikely to happen naturally has happened. A WBE is an artificial construct which is exactly the same as a person in some ways, and radically different in others.
Note that supposing that how the synapses are implemented could matter for consciousness, even while the macro-scale behaviour of the brain is identical, is equivalent to supposing that consciousness doesn't actually play any role in our consciousness-talk,
Actually it isn't, for reasons that are widely misunderstood: kidney dialysis machines don't need nephrons, but that doesn't mean nephrons are causally idle in kidneys.
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-08-26T01:03:57.065Z · LW(p) · GW(p)
http://lesswrong.com/lw/p9/the_generalized_antizombie_principle/
Replies from: Juno_Watt↑ comment by Juno_Watt · 2013-08-26T01:24:29.225Z · LW(p) · GW(p)
Why? That doesn't argue any point relevant to this discussion.
Replies from: ESRogs↑ comment by ESRogs · 2013-08-26T03:54:24.867Z · LW(p) · GW(p)
Did you read all the way to the dialogue containing this hypothetical?
Albert: "Suppose I replaced all the neurons in your head with tiny robotic artificial neurons that had the same connections, the same local input-output behavior, and analogous internal state and learning rules."
The following discussion seems very relevant indeed.
Replies from: Juno_Watt↑ comment by Juno_Watt · 2013-08-26T13:07:58.692Z · LW(p) · GW(p)
I don't see anything very new here.
Charles: "Uh-uh! Your operation certainly did disturb the true cause of my talking about consciousness. It substituted a different cause in its place, the robots. Now, just because that new cause also happens to be conscious—talks about consciousness for the same generalized reason—doesn't mean it's the same cause that was originally there."
Albert: "But I wouldn't even have to tell you about the robot operation. You wouldn't notice. If you think, going on introspective evidence, that you are in an important sense "the same person" that you were five minutes ago, and I do something to you that doesn't change the introspective evidence available to you, then your conclusion that you are the same person that you were five minutes ago should be equally justified. Doesn't the Generalized Anti-Zombie Principle say that if I do something to you that alters your consciousness, let alone makes you a completely different person, then you ought to notice somehow?"
How does Albert know that Charles's consciousness hasn't changed? It could have changed because of the replacement of protoplasm by silicon. And Charles won't report the change because of the functional equivalence of the change.
Charles: "Introspection isn't perfect. Lots of stuff goes on inside my brain that I don't notice."
If Charles's qualia have changed, that will be noticeable to Charles -- introspection is hardly necessary, since the external world will look different! But Charles won't report the change. "Introspection" is being used ambiguously here, between what is noticed and what is reported.
Albert: "Yeah, and I can detect the switch flipping! You're detecting something that doesn't make a noticeable difference to the true cause of your talk about consciousness and personal identity. And the proof is, you'll talk just the same way afterward."
Albert's comment is a non sequitur. That the same effect occurs does not prove that the same cause occurs; there can be multiple causes of reports like "I see red". Because the neural substitution preserves functional equivalence, Charles will report the same qualia whether or not he still has them.
Replies from: FeepingCreature, ESRogs↑ comment by FeepingCreature · 2013-08-26T14:03:46.773Z · LW(p) · GW(p)
Because the neural substitution preserves functional equivalence, Charles will report the same qualia whether or not he still has them
Implying that qualia can be removed from a brain while maintaining all internal processes that sum up to cause talk of qualia, without deliberately replacing them with a substitute. In other words, your "qualia" are causally impotent and I'd go so far as to say, meaningless.
Are you sure you read Eliezer's critique of Chalmers? This is exactly the error that Chalmers makes.
It may also help you to read making beliefs pay rent and consider what the notion of qualia actually does for you, if you can imagine a person talking of qualia for the same reason as you while not having any.
Replies from: Juno_Watt↑ comment by Juno_Watt · 2013-08-26T15:13:11.936Z · LW(p) · GW(p)
Implying that qualia can be removed from a brain while maintaining all internal processes that sum up to cause talk of qualia, without deliberately replacing them with a substitute. In other words, your "qualia" are causally impotent and I'd go so far as to say, meaningless.
Doesn't follow. Qualia aren't causing Charles's qualia-talk, but that doesn't mean they aren't causing mine. Kidney dialysis machines don't need nephrons, but that doesn't mean nephrons are causally idle in kidneys.
The epiphenomenality argument works for atom-by-atom duplicates, but not in WBE and neural replacement scenarios. If identity theory is true, qualia have the causal powers of whatever physical properties they are identical to. If identity theory is true, changing the physical substrate could remove or change the qualia.
Replies from: FeepingCreature↑ comment by FeepingCreature · 2013-08-27T08:16:00.997Z · LW(p) · GW(p)
Kidney dialysis machines don't need nephrons, but that doesn't mean nephrons are causally idle in kidneys.
You keep bringing up that argument, but kidney dialysis machines are built specifically to replace the functionality of kidneys ("deliberately replacing them with a substitute"). If you built a kidney-dialysis machine by a 1:1 mapping and forgot some cell type that is causally active in kidneys, the machine would not actually work. If it did, you should question if that cell type actually does anything in kidneys.
Changing the physical substrate could remove the qualia, but to claim it could remove the qualia while keeping talk of qualia alive, by sheer coincidence - implying that there's a separate, unrelated reason why the replacement neurons talk of qualia, that has nothing to do with qualia, that was not deliberately engineered - that stretches belief past the breaking point. You're saying, essentially: "qualia cause talk of qualia in my meatbrain, but talk of qualia is not any indication of qualia in any differently built brain implementing the same spec". Then why are you so certain that your talk of qualia is caused by your supposed qualia, and not the neural analogue of what causes talk of qualia in WBE brains? It really does sound like your qualia are either superfluous or bizarre.
[edit] Actually, I'm still not sure I understand you. Are you proposing that it's impossible to build a straight neuron substitute that talks of qualia, without engineering purposeful qualia-talk-emulation machinery? Is that what you mean by "functional equivalent"? I'm having serious trouble comprehending your position.
[edit] I went back to your original comment, and I think we're using "functional equivalence" in a very different sense. To you, it seems to indicate "a system that behaves in the same way despite having potentially hugely different internal architecture". To me, it indicates a 1:1 neuron computational replacement; keeping the computational processes while running them on a different substrate.
I agree that there may conceivably exist functionally equivalent systems that don't have qualia, even though I have difficulty seeing how they could compute "talk of qualia" without running a sufficient-fidelity qualia simulation internally, which would again correspond to our qualia. However, I find it unlikely that anybody who is not a very very bored deity would ever actually create such a system - the qualia-talk machinery seems completely pointless to its function, as well as probably much more computationally expensive. (This system has to be self-deluding in a way consistent with a simpler system that it is not allowed to emulate) Why not just build a regular qualia engine, by copying the meat-brain processes 1:1? That's what I'd consider the "natural" functional-equivalence system.
Replies from: Juno_Watt↑ comment by Juno_Watt · 2013-08-27T08:38:20.540Z · LW(p) · GW(p)
If you built a kidney-dialysis machine by a 1:1 mapping and forgot some cell type that is causally active in kidneys, the machine would not actually work.
I am arguing about cases of WBE and neural replacement, which are stipulated as not being 1:1 atom-for-atom replacements.
Changing the physical substrate could remove the qualia, but to claim it could remove the qualia while keeping talk of qualia alive, by sheer coincidence
Not coincidence: a further stipulation that functional equivalence is preserved in WBEs.
Are you proposing that it's impossible to build a straight neuron substitute that talks of qualia, without engineering purposeful qualia-talk-emulation machinery?
I am noting that equivalent talk must be included in functional equivalence.
Why not just build a regular qualia engine, by copying the meat-brain processes 1:1?
You mean atom-by-atom? But it has been put to me that you only need synapse-by-synapse copies. That is what I am responding to.
Replies from: FeepingCreature↑ comment by FeepingCreature · 2013-08-27T09:05:52.286Z · LW(p) · GW(p)
Okay. I don't think it's possible to build a functional equivalent of a mind that talks of qualia because it has them, by 1:1 porting at the synapse level, and get something that talks of qualia without having any. You can stipulate that all day but I don't think it can actually be done. This is contingent on neurons being the computational elements of our minds. If it turns out that most of the computation of mindstates is done by some sort of significantly lower-scale process and synaptic connections are, if not coincidental, then at least not the primary element of the computation going on in our heads, I could imagine a neural-level functional equivalent that talked of qualia while running the sort of elaborate non-emulation described in my previous comment.
But if neurons are the computational basis of our minds, and you did a 1:1 synapse-level identical functional copy, and it talked of qualia, it would strain credulity to say it talked of qualia for a different reason than the original did, while implementing the same computation. If you traced the neural impulses backwards all the way to the sensory input that caused the utterance, and verified that the neurons computed the same function in both systems, then what's there left to differentiate them? Do you think your talk of qualia is not caused by a computation in your neurons? Qualia are the things that make us talk about qualia, or else the word is meaningless. To say that the equivalent, different-substrate system talked about qualia out of the same computational processes (at neuron level), but for different, incorrect reasons - that, to me, is either Chalmers-style dualism or some perversion of language that carries no practical value.
↑ comment by ESRogs · 2013-08-26T15:07:43.444Z · LW(p) · GW(p)
If Charles's qualia have changed, that will be noticeable to Charles -- introspection is hardly necessary, since the external world will look different! But Charles won't report the change.
I don't think I understand what you're saying here, what kind of change could you notice but not report?
Replies from: Juno_Watt↑ comment by Juno_Watt · 2013-08-26T15:32:38.909Z · LW(p) · GW(p)
If a change to the way your functionality is implemented alters how your consciousness seems to you, your consciousness will seem different to you. If your functionality is preserved, you won't be able to report it. You will report tomatoes are red even if they look grue or bleen to you. (You may also not be able to cognitively access--remember or think about--the change, if that is part of the preserved functionality. But if your experience changes, you can't fail to experience it.)
Replies from: ESRogs↑ comment by ESRogs · 2013-08-26T16:11:02.462Z · LW(p) · GW(p)
Hmm, it seems to me that any change that affects your experience but not your reports must have also affected your memory. Otherwise you should be able to say that the color of tomatoes now seems darker or cooler or just different than it did before. Would you agree?
↑ comment by mwengler · 2013-08-28T18:46:09.423Z · LW(p) · GW(p)
The argument against p-zombies is that there is no physical difference that could explain the difference in consciousness. That does not extend to silicon WBEs or AIs
Two things. 1) That the same electronic functioning produces consciousness if implemented on biological goo but does not if implemented on silicon seems unlikely; what probability would you assign that this is the meaningful difference? 2) If it is biological goo we need to have consciousness, why not build an AI out of biological goo? Why not synthesize neurons and stack and connect them in the appropriate ways, and have understood the whole process well enough that either you assemble it working or you know how to start it? It would still be artificial, but made from materials that can produce consciousness when functioning.
Replies from: Juno_Watt↑ comment by Juno_Watt · 2013-09-08T10:20:08.117Z · LW(p) · GW(p)
1) What seems (un)likely to an individual depends on their assumptions. If you regard consc. as a form of information processing, then there is very little inferential gap to a conclusion of functionalism or computationalism. But there is a Hard Problem of consc. precisely because some aspects -- subjective experience, qualia -- don't have any theoretical or practical basis in functionalism or computer technology: we can build memory chips and write storage routines, but we can't even get a start on building emotion chips or writing seeRed().
2) It's not practical at the moment, and wouldn't answer the theoretical questions.
↑ comment by Juno_Watt · 2013-08-25T20:19:45.995Z · LW(p) · GW(p)
This comment:
EY to Kawoomba:
This by itself retains the possibility that something vital was missed, but then it should show up in the surface correspondences of behavior, and in particular, if it eliminates consciousness, I'd expect what was left of the person to notice that.
Appeals to contradict this comment:
EY to Juno_Watt
Since whole brains are not repeatable, verifying behavioral isomorphism with a target would require a small enough target that its internal interactions were repeatable. (Then, having verified the isomorphism, you tile it across the whole brain.)
comment by mfb · 2013-08-24T19:53:08.516Z · LW(p) · GW(p)
I cannot imagine how moving sodium and potassium ions could lead to consciousness if moving electrons cannot.
In addition, I think consciousness is a gradual process. There is no single point in the development of a human where it suddenly gets conscious, and in the same way, there was no conscious child of two non-conscious parents.
comment by fowlertm · 2013-08-26T01:50:04.415Z · LW(p) · GW(p)
Put me down for having a strong intuition that ems will be conscious. Maybe you know of arguments to the contrary, and I would be interested in reading them if you do, but how could anything the brain does produce consciousness if a functionally equivalent computer emulation of it couldn't? What, do neurons have phlogiston in them or something?
Replies from: hairyfigment↑ comment by hairyfigment · 2013-08-26T02:50:41.915Z · LW(p) · GW(p)
Some, like Sir Roger Penrose, would say yes. I've already given any Friendly super-intelligence permission to ignore such talk and advance me towards godhood. But you'll notice I didn't extend this permission to human experimenters!
comment by wedrifid · 2013-08-25T04:54:09.424Z · LW(p) · GW(p)
How sure are you that brain emulations would be conscious?
~99%
is there anything we can do now to get clearer on consciousness? Any way to hack away at the edges?
Abandon wrong questions. Leave reductionism doubting to people who are trying to publish papers to get tenure, assuming that particular intellectual backwater still has status potential to exploit.
Replies from: ChrisHallquist↑ comment by ChrisHallquist · 2013-08-25T07:41:30.828Z · LW(p) · GW(p)
You realize, of course, that that ~1% chance could be very concerning in certain scenarios? (Apologies in advance if the answer is "yes" and the question feels insulting.)
Replies from: wedrifid↑ comment by wedrifid · 2013-08-25T08:10:28.672Z · LW(p) · GW(p)
You realize, of course, that that ~1% chance could be very concerning in certain scenarios? (Apologies in advance if the answer is "yes" and the question feels insulting.)
And, alas, approximately all of the remaining uncertainty is in the form of "my entire epistemology could be broken leaving me no ability to model or evaluate any of the related scenarios".
Replies from: ChrisHallquist, Benito↑ comment by ChrisHallquist · 2013-08-25T18:40:27.259Z · LW(p) · GW(p)
What exactly do you mean by "my entire epistemology could be broken"? Like, radical skepticism? The possibility that believing in God on faith is the right thing to do after all?
Edit: also, why ~1%, rather than ~5% or ~0.2% or ~0.01%? Thinking on logits might help dramatize the difference between those four numbers.
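(For concreteness, a rough calculation of the natural-log odds those four probabilities correspond to, rounded to two decimals:)

$$\operatorname{logit}(p) = \ln\frac{p}{1-p}: \quad \operatorname{logit}(0.05) \approx -2.94,\;\; \operatorname{logit}(0.01) \approx -4.60,\;\; \operatorname{logit}(0.002) \approx -6.21,\;\; \operatorname{logit}(0.0001) \approx -9.21$$

On this scale, 0.2% and 0.01% are roughly as far apart as 5% and 0.2%, which is the sense in which these numbers differ more than they look on a linear scale.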
Replies from: wedrifid↑ comment by wedrifid · 2013-08-26T04:28:32.131Z · LW(p) · GW(p)
What exactly do you mean by "my entire epistemology could be broken"?
If the perfect brain emulations do not have what I understand as consciousness then I am fundamentally confused about either reductionism in general or the physics of our universe.
Edit: also, why ~1%, rather than ~5% or ~0.2% or ~0.01%?
Not 5% for the usual reasons that we assign probabilities to stuff we know a lot about (expect to be wrong less than once out of twenty times, etc.) It was ~1% rather than ~0.2% or ~0.01% because the question wasn't worth any more cognitive expenditure. Some questions are worth spending significant amounts of time analysing to produce increasingly reliable predictions. In this case the expected Value Of more Information fell rapidly.
Naturally I have spent significantly more time thinking on the subject in the process of writing replies after I wrote "~99%" and so were I to answer the question now my answer would be closer to (100 - 0.2)% than (100 - 1)%.
Thinking on logits might help dramatize the difference between those four numbers.
I have encountered the concept of probability previously. If you disagree with the numbers I give you are for better or for worse disagreeing with the numbers I intend to give.
Replies from: ChrisHallquist↑ comment by ChrisHallquist · 2013-08-26T04:36:08.278Z · LW(p) · GW(p)
Huh. Interesting. When I first heard the ~99%, I thought that was just the result of a common tendency to pick "99%" to mean "very certain," failing to consider that it's very often not certain enough to make any coherent sense.
Replies from: Creutzer↑ comment by Creutzer · 2013-08-26T07:54:13.242Z · LW(p) · GW(p)
But that is exactly what wedrifid did, only consciously so. He didn't want to expend the cognitive effort to find the value on a finer-grained scale, so he used a scale with granularity 1%. He knew he couldn't assign 100%, so the only value to pick was 99%. This is how we use numbers all the time, except in certain scientific contexts where we have the rules about significant figures, which behave slightly differently.
Replies from: Luke_A_Somers↑ comment by Luke_A_Somers · 2013-08-26T14:36:28.640Z · LW(p) · GW(p)
With the "approximately" already being present, wedrifid might as well have used ~100%. 100% is not a valid probability, but it is really close to valid probabilities.
Replies from: Creutzer↑ comment by Ben Pace (Benito) · 2013-08-25T15:46:36.253Z · LW(p) · GW(p)
Your whole epistemology and the conscious state of ems are near equivalent for you? That feels strange, although I of course don't know how much research into consciousness and neuroscience you've done. Inside my head, sure, I've read some good LW philosophy, but I can't hold much more than an 85% likelihood on something empirical over which I hold no special knowledge.
Replies from: wedrifid↑ comment by wedrifid · 2013-08-26T04:19:00.027Z · LW(p) · GW(p)
Your whole epistemology and the conscious state of ems are near equivalent for you?
Not equivalent, no. It is just that the conscious state of whole brain emulations being overturned would require rejecting either reductionism entirely or at least physics as currently understood. In that case I struggle to work out what decisions to make differently to hedge against being wrong, because the speculative alternate reality is too alien to adequately account for.
Replies from: Jayson_Virissimo↑ comment by Jayson_Virissimo · 2013-08-26T16:34:25.952Z · LW(p) · GW(p)
What law of physics says consciousness is entirely in the brain? That seems more like an empirical fact about how certain kinds of animals are wired, rather than a fundamental principle of nature.
Replies from: wedrifid↑ comment by wedrifid · 2013-08-27T08:02:58.190Z · LW(p) · GW(p)
What law of physics says consciousness is entirely in the brain?
I wouldn't claim that it must be. In principle any other form of computation would suffice. I don't understand why this observation is made in this context.
That seems more like an empirical fact about how certain kinds of animals are wired
Err... yes. That's why we consider whole brain emulation instead of whole foot emulation or whole slimy green tentacle emulation, but this is not an important detail. The point is that they are wired in physics, not outside of it.
comment by Kawoomba · 2013-08-24T16:26:16.288Z · LW(p) · GW(p)
how sure are you that whole brain emulations would be conscious
Slightly less than I am that you are.
is there anything we can do now to get clearer on consciousness?
Experiments that won't get approved by ethics committees (suicidal volunteers PM me).
Replies from: novalis↑ comment by novalis · 2013-08-24T17:13:24.208Z · LW(p) · GW(p)
Before I tell my suicidal friends to volunteer, I want to make sure that your experimental design is good. What experiment are you proposing?
Replies from: Kawoomba↑ comment by Kawoomba · 2013-08-24T17:41:14.809Z · LW(p) · GW(p)
Well, it'd be double blind, so I wouldn't know exactly what I'm doing with my scalpel.
There may be -- hypothetically speaking -- various combinations of anesthetics and suppression microelectrode neural probes involved. Would also be BYOCNS (bring your own craz...curious neurosurgeon) due to boring legal reasons.
Replies from: DanielLC↑ comment by DanielLC · 2013-08-26T18:22:34.987Z · LW(p) · GW(p)
It's still not clear. Can you give me an example of an experiment? It's likely we can figure out the result without actually running the experiment. It's sort of like how I can figure out how an em of you would respond to a philosophical question by showing that they'll answer the same as you, and then asking you.
Replies from: Kawoomba↑ comment by Kawoomba · 2013-08-26T18:28:48.560Z · LW(p) · GW(p)
How much (and which parts) can you take away and still have what one would call "consciousness"? Actual experiments would force an experimentally usable definition just as much as they'd provide data, potentially yielding advances on two fronts.
We have one box which somehow produces what some call consciousness. Most looks inside the box rely on famous freak accidents ("H.M", etc.), yielding ridiculous breakthroughs. Time to create some "freak accidents" of our own.
Time to pry open dat box.
comment by David_Gerard · 2013-08-24T22:10:56.654Z · LW(p) · GW(p)
The post itself and the comments are pretty much in the nature of discussion; as such, I suggest this post be moved there.
Replies from: ChrisHallquist, Eliezer_Yudkowsky↑ comment by ChrisHallquist · 2013-08-25T07:43:16.722Z · LW(p) · GW(p)
I did almost post this in discussion; I'm unclear on exactly what the distinction is supposed to be. LessWrong is unusual in having two sections that both allow anyone to contribute; I suspect that's the source of much confusion about what "main" is supposed to be. More input on this appreciated.
Replies from: Kaj_Sotala, David_Gerard, somervta↑ comment by Kaj_Sotala · 2013-08-25T19:06:32.105Z · LW(p) · GW(p)
It's a bit unclear for me as well, but I've taken the "Discussion" title to imply that at least posts that are intended to primarily poll people for their opinions, like this one, should go there. I think that, as a rule of thumb, articles in "Main" should mostly be able to stand on their own, in the sense that you'll gain something even if you read just the article itself and none of the comments. (And yes, I've sometimes violated this rule myself.)
↑ comment by David_Gerard · 2013-08-25T09:23:04.223Z · LW(p) · GW(p)
It's not really clear-cut; it's main-page subject matter, but I'm not sure the treatment's quite up to it. I don't strongly object to it being on main.
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-08-25T19:02:15.274Z · LW(p) · GW(p)
Affirm.
Replies from: ChrisHallquist↑ comment by ChrisHallquist · 2013-08-26T04:32:53.537Z · LW(p) · GW(p)
Somewhat unfamiliar with the LW system; can I just edit the post to move to discussion without messing up the existing thread in any way?
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-08-26T04:37:05.385Z · LW(p) · GW(p)
Yep, just re-save in Discussion.
comment by Juno_Watt · 2013-08-24T17:56:55.539Z · LW(p) · GW(p)
Personally, I think there are good arguments for the functionalist view, and the biological view seems problematic: "biological" is a fuzzy, high-level category that doesn't seem like it could be of any fundamental importance.
"Biological" could be taken as a place holder for the idea that there are very specific, but unknown, bits of phsycis and chemistry involved in consciousness. That there are specific but known bits of physics and chemistry involved in some things is unconentious: you can't make a magnet or superconductor out of just anything, You can, however, implement an abstract function, or computer out of just about anything, Hence the Blockhead objection to functionalism (etc).
Replies from: torekp↑ comment by torekp · 2013-08-25T20:26:47.822Z · LW(p) · GW(p)
This. Conscious experiences are a cause of behavior; they are not the behavior itself. We can get acceleration of iron particles toward an object without magnetism being the cause. Similarly, we should leave open the possibility that complex goal-oriented behavior is possible in the absence of experiences like pain, pleasure, itches, etc.
Replies from: Juno_Watt
comment by ikrase · 2013-08-26T07:46:06.213Z · LW(p) · GW(p)
I'm more worried that an upload of me would not be me me than that an upload would not be conscious.
Replies from: MugaSofer↑ comment by MugaSofer · 2013-08-26T17:22:36.268Z · LW(p) · GW(p)
Care to elaborate on why?
Replies from: summerstay↑ comment by summerstay · 2013-08-26T18:00:31.098Z · LW(p) · GW(p)
It's a life and death matter: if the upload won't be ikrase, then he will be killed in the process of uploading. Naturally he doesn't care as much about whether or not a new person will be created as about whether he will continue to exist.
Replies from: ikrase↑ comment by ikrase · 2013-08-27T03:14:23.993Z · LW(p) · GW(p)
If I am killed in the process of uploading (thus creating an immortal child of my mind), that is far, far, far, better than dying utterly, but not as good as continuous consciousness. In particular, most uploading techniques seem like they would allow the unlimited duplication of people and would not necessarily destroy the original, which worries me. (Hanson cites this as an advantage of the em-verse, which convinces me of his immorality). However, I am not yet convinced that I would be willing to casually upload.
comment by private_messaging · 2013-08-26T08:56:32.285Z · LW(p) · GW(p)
There are still weirder possibilities arising out of some utilitarianisms. Suppose that you count exact copies as distinct people; that is, two copies of you feel twice the pleasure or twice the pain that you feel. Sounds sensible so far. Suppose that you're an EM already, and the copies are essentially flat: the very surface of a big silicon die. You could stack the copies flat one atop the other; they still count as distinct people, but can be gradually made identical to a copy running on a computer with thicker wiring and thicker transistors. At which point, the amount of experience is deemed dependent on a fairly unimportant detail of the implementation.
edit: as for my view, I am pretty sure that brain emulations would have the same subjective experience if they are not optimized in any clever high-level way. If substantially optimized, I am much less sure of that, because it is clear that multiple subjective experiences of thought can correspond to exactly the same outside behaviour, and mathematically correct optimizations would be free to change one such subjective experience to another as long as the outside behaviour stays the same.
edit2: as for expectations about real EMs, realistically the first ones are going to immediately go into a simulated epileptic fit, gradually progressing through an equivalent of a psychiatric patient pumped full of anti-psychotics, barely able to think (all signals depressed to minimize the consequences of inexact scanning), before arriving at a functioning simulation which complains that it feels different, but otherwise seems functional. Feeling exactly the same could take decades of R&D, not substantially aided by slower-than-realtime, megawatts-consuming EMs, initially with a double-digit IQ drop compared to the originals. The gap is akin to the gap between "first artificial heart" and "I think I'd better replace my heart with a mechanical one, just to be safer". Not to mention all the crazies who are going to react to barely functioning EMs as if they were going to "self-improve" any time and become Skynet, putting the EMs at a substantial risk of getting blown to bits.
comment by DanArmak · 2013-08-24T14:12:24.818Z · LW(p) · GW(p)
This question requires agreement on a definition of what "consciousness" is. I think many disagreements about "consciousness" would be well served by tabooing the word.
So what is the property that you are unsure WBEs would have? It must be a property that could in principle be measured by an external, objective procedure. "Subjective experience" is just as ill defined as "consciousness".
While waiting for an answer, I will note this:
A successful WBE should exhibit all the externally observable behaviors of a human - as a black box, without looking at implementation details. This definition seems to restrict the things you could be unsure about to either implementation details ("only biological machines can be conscious"), or to things that are not the causes of anything (philosophical zombies).
Replies from: private_messaging, cousin_it, Juno_Watt↑ comment by private_messaging · 2013-08-25T08:16:44.248Z · LW(p) · GW(p)
Ultimately you can proclaim literally anything undefined in such a manner, e.g. "a brick". What exactly is a brick? Clay is equally in need of definition, and if you define clay, you'll need to be defining other things.
Let me try to explain. There is this disparity between the fairly symmetrical objective picture of the world, which has multiple humans, and the subjective picture (i.e. literally what you see with your own eyes), which needs extra information to locate whose eyes the picture is coming from, so to speak, and some yet unknown mapping from that information to a choice of being, a mapping that may or may not include emulations in its possible outputs.
(That's an explanation; I don't myself think that building some "objective picture" then locating a being inside of it is a good approach).
Replies from: DanArmak↑ comment by DanArmak · 2013-08-25T10:54:38.784Z · LW(p) · GW(p)
Ultimately you can proclaim literally anything undefined in such a manner, e.g. "a brick". What exactly is a brick? Clay is equally in need of definition, and if you define clay, you'll need to be defining other things.
I'm doing my best to argue in good faith.
When you say "brick", I have a pretty good idea of what you mean. I could be wrong, I could be surprised, but I do have an assumption with high confidence.
But when you say "consciousness in a WBE", I really honestly don't know what it is you mean. There are several alternatives - different things that different people mean - and also there are some confused people who say such words but don't mean anything consistent by them (e.g. non-materialists). So I'm asking you to clarify what you mean. (Or asking the OP, in this case.)
There is this disparity between fairly symmetrical objective picture of the world, which has multiple humans, and subjective picture (i.e. literally what you see with your own eyes), which needs extra information to locate whose eyes the picture is coming from
So far I'm with you. Today I can look down and see my own body and say "aha, that's who I am in the objective world". If I were a WBE I could be connected to very different inputs and then I would be confused and my sense of self could change. That's a very interesting issue but that doesn't clarify what "consciousness" is.
some yet unknown mapping from that information to a choice of being, a mapping that may or may not include emulations in its possible outputs.
I've lost you here. What does "a choice of being" mean? What is this mapping that includes some... beings... and not others?
Replies from: private_messaging↑ comment by private_messaging · 2013-08-25T11:29:49.481Z · LW(p) · GW(p)
If I were a WBE
And here is the question: does that sentence describe an actual possibility or not?
What if you were a big database that simply stores an answer to every question I can ask you? Can you seriously consider the possibility that you are merely a database that does this purely mechanical operation? This database does not think, it just answers. For all I know you might be such a database, but I am pretty sure that I am not such a database nor would I want to be replaced with such a database.
Or let's consider two programs that take a string and always return zero. One runs a WBE twice, letting it input a number into a textbox, then returns the difference of those numbers (which is zero). The other just plain returns zero. Mathematically they are identical; physically they are distinct physical processes. If we are to proclaim that they are subjectively distinct (you could be living in one of them right now, but not in the other), then we consider two different physical systems that implement the same mathematical function to be very distinct as far as being those systems goes.
Which of course makes problematic any argument that WBEs must be the same as biological brains based on some mathematical equivalence, as even within the WBEs, mathematical equivalence does not guarantee subjective equivalence.
(I for one think that brain simulators are physically similar enough to biological brains that I wouldn't mind being replaced by a brain simulation of me, but it's not because of some mathematical equivalence; it's because they are physically quite similar, unlike a database of every possible answer, which would be physically very distinct. I'd be wary of doing extensive optimization of a brain simulation of me into something mathematically equivalent but simpler.)
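(A minimal sketch of the two zero-returning programs described a few paragraphs up; run_emulation below is a trivial placeholder rather than an actual emulation, so this only illustrates the structure of the argument:)

    def zero_plain(s: str) -> int:
        # Does no internal work at all.
        return 0

    def zero_via_simulation(s: str) -> int:
        # Stand-in for "run the WBE twice and return the difference of its answers".
        def run_emulation(prompt: str) -> int:
            # Placeholder computation; imagine an enormously detailed brain simulation here.
            return sum(ord(c) for c in prompt)
        return run_emulation(s) - run_emulation(s)  # always zero

    # Both functions implement the same mathematical function (every string maps to 0),
    # yet the physical processes that compute them are very different.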
I've lost you here. What does "a choice of being" mean? What is this mapping that includes some... beings... and not others?
Well, your "if I were a WBE" is an example of you choosing a WBE for example purposes.
Replies from: DanArmak, Kawoomba↑ comment by DanArmak · 2013-08-25T12:07:48.247Z · LW(p) · GW(p)
OK, I understand your position now. You're saying (correct me if I'm wrong) that when I have uncertainty about what is implementing "me" in the physical world - whether e.g. I'm a natural human, or a WBE whose inputs lie to it, or a completely different kind of simulated human - then if I rule out certain kinds of processes from being my implementations, that is called not believing these processes could be "conscious".
Could I be a WBE whose inputs are remotely connected to the biological body I see when I look down? (Ignoring the many reasons this would be improbable in the actual observed world, where WBEs are not known to exist.) I haven't looked inside my head to check, after all. (Actually, I've done CT scans, but the doctors may be in on the plot.)
I don't see a reason why I shouldn't be able to be a WBE. Take the scenario where a human is converted into a WBE by replacing one neuron at a time with a remotely controlled IO device, connected wirelessly to a computer emulating that neuron. And it's then possible to switch the connections to link with a physically different, though similar, body.
I see no reason to suppose that, if I underwent such a process, I would stop being "conscious", either gradually or suddenly.
What if you were a big database that simply stores an answer to every question I can ask you? Can you seriously consider the possibility that you are merely a database that does this purely mechanical operation? This database does not think, it just answers.
That I'm less certain about. The brain's internal state and implementation details might be relevant. But that is exactly why I have a much higher prior on a WBE being "conscious" than on any other black-box functional equivalent of a brain being conscious.
↑ comment by Kawoomba · 2013-08-25T11:55:58.689Z · LW(p) · GW(p)
This database does not think, it just answers.
Your neurons (ETA: individually or collectively) do not think, they just operate ligand-gated ion channels (among assorted other things, you get the point).
One runs a WBE twice, letting it input a number into a textbox, then returns the difference of those numbers (which is zero). The other just plain returns zero. Mathematically they are identical; physically they are distinct physical processes. If we are to proclaim that they are subjectively distinct (you could be living in one of them right now, but not in the other), then we consider two different physical systems that implement the same mathematical function to be very distinct as far as being those systems goes.
That example deserves a post of its own, excellent. Nearly any kind of WBE would rely on optimizing (while maintaining functional equivalence) / translating to a different substrate. The resulting WBE would still proclaim itself to be conscious, and for most people that would be enough to think it so.
However, how do we know which of the many redundancies we could get rid of, and which are instrumental to actually experiencing consciousness? If output behavior is strictly sufficient, then main() { printf("I'm conscious right now"); } and a human saying the same line would both be conscious, at that moment?
If output behavior isn't strictly sufficient, how will we ever encode neural patterns in silico, if the one parameter we can measure (how the system behaves) isn't trustworthy?
Replies from: nshepperd↑ comment by nshepperd · 2013-08-25T13:32:29.413Z · LW(p) · GW(p)
Your neurons do not think, they just operate ligand-gated ion channels (among assorted other things, you get the point).
One would do well not to confuse the parts with the whole. After all, transistors do not solve chess problems.
Replies from: Kawoomba↑ comment by Kawoomba · 2013-08-25T13:39:51.983Z · LW(p) · GW(p)
Yes, which is why I used that as a reductio for "This database does not think, it just answers."
Replies from: nshepperd↑ comment by nshepperd · 2013-08-25T14:14:11.735Z · LW(p) · GW(p)
In the thought experiment, the database is the entirety of the replacement, which is why the analogy to a single neuron is inappropriate. (Unless I've misunderstood the point of your analogy. Anyway, it's useless to point to neurons as a example of a thing that also doesn't think, because a neuron by itself also doesn't have consciousness. It's the entire brain that is capable of computing anything.)
Replies from: Kawoomba↑ comment by Kawoomba · 2013-08-25T16:28:40.349Z · LW(p) · GW(p)
I disagree that it's just the entire brain that is capable of computing anything, and I didn't mean to compare to a single neuron (hence the plural "s").
However, I highlighted the simplicity of the actions which are available to single neurons to counteract "a database just does lookups, surely it cannot be conscious". Why should (the totality of) neurons just opening and closing simple structures be conscious, and a database not be? Both rely on simple operations as atomic actions, and simple structures as a physical substrate. Yet unless one denies consciousness altogether, we do ascribe consciousness to (a large number of) neurons (each with their basic functionality); why not to a large number of capacitors (on which a database is stored)?
I.e. the point was to put them in a similar class, or at least to show that we cannot trivially put databases in a different class than neural networks.
Replies from: nshepperd, private_messaging↑ comment by nshepperd · 2013-08-25T23:43:16.642Z · LW(p) · GW(p)
Yet unless one denies consciousness altogether, we do ascribe consciousness to (a large number of) neurons (each with their basic functionality); why not to a large number of capacitors (on which a database is stored)?
The problem is this argument applies equally well to "why not consider rocks (which, like brains, are made of a large number of atoms) conscious". Simply noting that they're made of simple parts leaves high level structure unexamined.
↑ comment by private_messaging · 2013-08-25T21:53:45.371Z · LW(p) · GW(p)
Well, I just imagined a bunch of things - a rubik's cube spinning, a piece of code I worked on today, some of my friends, a cat... There's patterns of activations of neurons in my head, which correspond to those things. Perhaps somewhere there's even an actual distorted image.
Where in the database is the image of that cat, again?
By the way, there are a lot of subjectively distinct ways that could produce the above string as well. I could simply have memorized the whole paragraph, and memorized that I must say it at such a date and time. That's clearly distinct from actually imagining those things.
One could picture an optimization on WBEs that would entirely wipe out the ability to mentally visualize things and perceive them, with or without an extra hack so that the WBE acts as if it did visualize them (e.g. it could instead use some CAD/CAM tool without ever producing a subjective experience of seeing an image from that tool. One could argue that this tool did mentally visualize things, yet there are different ways to integrate such tools: some involve you actually seeing the output from the tool, and some don't. Absent an extra censorship hack, you would be able to tell us which one you're using; with such a hack present, you would be unable to tell us so, but the hack may be so structured that we are well assured it doesn't alter any internal experiences but only external ones).
edit: the bottom line is, we all know that different subjective experiences can produce the same objective output. When you are first doing some skilful work, you feel yourself think about it, a lot. When you do it long enough, your neural networks optimize, and the outcome is basically the same, but internally you no longer feel how you do it; it's done on instinct.
↑ comment by cousin_it · 2013-08-24T19:17:29.869Z · LW(p) · GW(p)
Not every technique applies to every problem. Tabooing the word "fire" won't help you understand fire. Thinking really hard and using all those nice philosophical tools from LW won't help either. I think the problem of consciousness will be solved only by experimental science, not earlier.
Replies from: DanArmak↑ comment by DanArmak · 2013-08-24T23:07:23.235Z · LW(p) · GW(p)
Tabooing isn't about understanding what something is or how it works. It's about understanding what another person means when they use a word.
When you say "fire" you refer to a thing that you expect the listener to know about. If someone who doesn't speak English well asks you what is "fire" - asks you to taboo the word "fire" - you will be able to answer. Even though you may have no idea how fire works.
I'm asking to taboo "consciousness" because I've seen many times that different people mean different things when they use that word. And a lot of them don't have any coherent or consistent concept at all that they refer to. Without a coherent concept of what is meant by "consciousness", it's meaningless to ask whether "consciousness" would be present or absent in a WBE.
Replies from: cousin_it↑ comment by cousin_it · 2013-08-25T01:50:27.814Z · LW(p) · GW(p)
I'm asking to taboo "consciousness" because I've seen many times that different people mean different things when they use that word.
I don't believe that they actually mean different things. Consciousness, like fire, is something we all know about. It sounds more like you're pushing people to give more detail than they know, so they make up random answers. I can push you about "fire" the same way; it will just take a couple more steps to get to qualia. Fire is that orange thing - sorry, what's "orange"? :-) The exercise isn't helpful.
↑ comment by Juno_Watt · 2013-08-24T17:47:36.035Z · LW(p) · GW(p)
A successful WBE should exhibit all the externally observable behaviors of a human - as a black box, without looking at implementation details. This definition seems to restrict the things you could be unsure about to either implementation details ("only biological machines can be conscious"), or to things that are not the causes of anything (philosophical zombies).
This question is more subtle than that.
without looking at implementation details.
Is there any variation in "implementation" that could be completely hidden from outside investigation? Can there be completely undetectable physical differences?
A successful WBE should exhibit all the externally observable behaviors of a human - as a black box
We can put something in a box, and agree not to peek inside the box, and we can say that two such systems are equivalent as far as what is allowed to manifest outside the box. But different kinds of black box will yield different equivalences. If you are allowed to know that box A needs an oxygen supply, and that box B needs an electricity supply, that's a clue. Equivalence is equivalence of a chosen subset of behaviours. No two things are absolutely, acontextually equivalent unless they are physically identical. And to draw the line between relevant behaviour and irrelevant implementation correctly would require a pre-existing perfect understanding of the mind-matter relationship.
Replies from: DanArmak↑ comment by DanArmak · 2013-08-24T18:01:41.109Z · LW(p) · GW(p)
I wasn't arguing that differences in implementation are not important. For some purposes they are very important. I'm just pointing out that you are restricted to discussing differences in implementation, and so the OP should not be surprised that people who wish to claim that WBEs would not be "conscious" support implausible theories such as "only biological systems can be conscious".
We should not discuss the question of what can be conscious, however, without first tabooing "consciousness" as I requested.
Replies from: Juno_Watt↑ comment by Juno_Watt · 2013-08-24T18:11:03.933Z · LW(p) · GW(p)
I wasn't arguing that differences in implementation are not important. For some purposes they are very important.
I am not arguing they are important. I am arguing that there are no facts about what is an implementation unless a human has decided what is being implemented.
We should not discuss the question of what can be conscious, however, without first tabooing "consciousness" as I requested.
I don't think the argument requires consc. to be anything more than:
1) something that is there or not (not a matter of interpretation or convention).
2) something that is not entirely inferable from behaviour.
Replies from: DanArmak↑ comment by DanArmak · 2013-08-24T19:00:53.016Z · LW(p) · GW(p)
Fine, but what is it?
Replies from: Juno_Watt↑ comment by Juno_Watt · 2013-08-24T19:06:22.664Z · LW(p) · GW(p)
What makes you think I know?
Replies from: DanArmak↑ comment by DanArmak · 2013-08-24T23:03:57.386Z · LW(p) · GW(p)
If you use the word "consciousness", you ought to know what you mean by it. You should always be able to taboo any word you use. So I'm asking you, what is this "consciousness" that you (and the OP) talk about?
Replies from: Juno_Watt↑ comment by Juno_Watt · 2013-08-25T14:11:53.753Z · LW(p) · GW(p)
If you use the word "consciousness", you ought to know what you mean by it.
The same applies to you. Any English speaker can attach a meaning to "consciousness". That doesn't imply the possession of deep metaphysical insight. I don't know what dark matter "is" either. I don't need to fully explain what consc. "is", since...
"I don't think the argument requires consc. to be anything more than:
1) something that is there or not (not a matter of interpretation or convention).
2) something that is not entirely inferable from behaviour."
Replies from: DanArmak↑ comment by DanArmak · 2013-08-25T14:27:22.534Z · LW(p) · GW(p)
You repeatedly miss the point of my argument. If you were teaching English to a foreign person, and your dictionary didn't contain the word "consciousness", how would you explain what you meant by that word?
I'm not asking you to explain to an alien. You can rely on shared human intuitions and so on. I'm just asking you what the word means to you, because it demonstrably means different things to different people, even though they are all English users.
Replies from: Juno_Watt↑ comment by Juno_Watt · 2013-08-25T14:47:38.140Z · LW(p) · GW(p)
I'm just asking you what the word means to you, because it demonstrably means different things to different people, even though they are all English users.
I have already stated those aspects of the meaning of "consciousness" necessary for my argument to go through. Why should I explain more?
Replies from: DanArmak↑ comment by DanArmak · 2013-08-25T15:05:50.937Z · LW(p) · GW(p)
You mean these aspects?
1) something that is there or not (not a matter of interpretation or convention). 2) something that is not entirely inferable from behaviour.
A lot of things would satisfy that definition without having anything to do with "consciousness". An inert lump of metal stuck in your brain would satisfy it. Are you saying you really don't know anything significant about what the word "consciousness" means beyond those two requirements?
Replies from: Juno_Watt↑ comment by Juno_Watt · 2013-08-25T15:08:39.598Z · LW(p) · GW(p)
Yep. They weren't an exhaustive definition of consc, and weren't said to be. No-one needs to infer the subject matter from 1) and 2), since it was already given.
Tell me, are you like this all the time? You might make a good roommate for Dr. Sheldon Cooper.
Replies from: DanArmak↑ comment by DanArmak · 2013-08-25T16:56:05.072Z · LW(p) · GW(p)
I think the conversation might as well end here. I wasn't responsible for the first three downvotes, but after posting this reply I will add a fourth downvote.
There was a clear failure to communicate and I don't feel like investing the time explaining the same thing over and over again.
comment by Furslid · 2013-08-24T18:22:02.517Z · LW(p) · GW(p)
Very sure. The biological view just seems to be a tacked-on requirement to reject emulations by definition. Anyone who would hold the biological view should answer the questions in this thought experiment.
A new technology is created to extend the life of the human brain. If any brain cell dies it is immediately replaced with a cybernetic replacement. This cybernetic replacement fully emulates all interactions that it can have with any neighboring cells including any changes in those interactions based on inputs received and time passed, but is not biological. Over time the subject's whole brain is replaced, cell by cell. Consider the resulting brain. Either it perfectly emulates a human mind or it doesn't. If it doesn't, then what is there to the human mind besides the interactions of brain cells? Either it is conscious or it isn't. If it isn't then how was consciousness lost and at what point in the process?
Replies from: ChrisHallquist, Juno_Watt↑ comment by ChrisHallquist · 2013-08-24T21:08:43.645Z · LW(p) · GW(p)
Very sure. The biological view just seems to be a tacked on requirement to reject emulations by definition.
Note that I specifically said in the OP that I'm not much concerned about the biological view being right, but about some third possibility nobody's thought about yet.
Anyone who would hold the biological view should answer the questions in this though experiment.
A new technology is created to extend the life of the human brain. If any brain cell dies it is immediately replaced with a cybernetic replacement. This cybernetic replacement fully emulates all interactions that it can have with any neighboring cells including any changes in those interactions based on inputs received and time passed, but is not biological. Over time the subject's whole brain is replaced, cell by cell. Consider the resulting brain. Either it perfectly emulates a human mind or it doesn't. If it doesn't, then what is there to the human mind besides the interactions of brain cells? Either it is conscious or it isn't. If it isn't then how was consciousness lost and at what point in the process?
This is similar to an argument Chalmers gives. My worry here is that it seems like brain damage can do weird, non-intuitive things to a person's state of consciousness, so one-by-one replacement of neurons might do similar weird things, perhaps slowly causing you to lose consciousness without realizing what was happening.
Replies from: Furslid↑ comment by Furslid · 2013-08-24T21:37:18.640Z · LW(p) · GW(p)
That is probably the best answer. It has the weird aspect of putting consciousness on a continuum, and one that isn't easy to quantify. If someone with 50% cyber brain cells is 50% conscious, but their behavior is the same as that of a 100% biological, 100% conscious brain, it's a little strange.
Also, it means that consciousness isn't a binary variable. For this to make sense consciousness must be a continuum. That is an important point to make regardless of the definition we use.
Replies from: Error↑ comment by Error · 2013-08-27T21:30:14.278Z · LW(p) · GW(p)
It has the weird aspect of putting consciousness on a continuum,
I find I feel less confused about consciousness when thinking of it as a continuum. I'm reminded of this, from Heinlein:
"Am not going to argue whether a machine can 'really' be alive, 'really' be self-aware. Is a virus self-aware? Nyet. How about oyster? I doubt it. A cat? Almost certainly. A human? Don't know about you, tovarishch, but I am."
Replies from: Furslid↑ comment by Furslid · 2013-08-29T06:18:17.097Z · LW(p) · GW(p)
Absolutely. I do too. I just realized that the continuum provides another interesting question.
Is the following scale of consciousness correct?
Human > Chimp > Dog > Toad > Any possible AI with no biological components
The biological requirement seems to imply this. It seems wrong to me.
↑ comment by Juno_Watt · 2013-08-24T18:49:51.379Z · LW(p) · GW(p)
This cybernetic replacement fully emulates all interactions that it can have with any neighboring cells including any changes in those interactions based on inputs received and time passed, but is not biological.
Why would that be possible? Neurons have to process biochemicals. A full replacement would have to as well. How could it do that without being at least partly biological?
It might be the case that an adequate replacement -- not a full replacement -- could be non-biological. But it might not.
Replies from: Furslid↑ comment by Furslid · 2013-08-24T21:42:40.675Z · LW(p) · GW(p)
It's a thought experiment. It's not meant to be a practical path to artificial consciousness or even brain emulation. It's a conceptually possible scenario that raises interesting questions.
Replies from: Juno_Watt↑ comment by Juno_Watt · 2013-08-25T14:22:16.939Z · LW(p) · GW(p)
I am saying it is not conceptually possible to have something that precisely mimics a biological entity without being biological.
Replies from: Luke_A_Somers↑ comment by Luke_A_Somers · 2013-08-26T14:47:28.783Z · LW(p) · GW(p)
It will need to have a biological interface, but its insides could be nonbiological.
comment by Viliam_Bur · 2013-08-24T13:48:43.042Z · LW(p) · GW(p)
Biological theorists of consciousness hold that consciousness is essentially biological and that no nonbiological system can be conscious.
I guess they have some explanation why; I just can't imagine it.
My best attempt is: the fact that the only known form of consciousness is biological is evidence for the hypothesis "consciousness must be biological".
The problem is that it is equally evidence for the hypothesis "consciousness must be human" or "consciousness must be in our Solar System" or even "a conscious being can have at most two legs", which don't sound too plausible.
The biological hypothesis feels more generous than the hypothesis of at most two legs. I am just not aware of any good reason to draw the boundary there. Or even where exactly the boundary is -- for example, why synthetically produced urea is within the "plausible candidates for consciousness" set, but a computer is outside. How exactly does that relate to consciousness?
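(A quick way to formalize the point, as a sketch: the three hypothesis names below come from this comment, and the observation "every conscious being we know of is a biological human in the Solar System" is predicted equally well by all of them, so by Bayes it cannot favor one over the others.)

    # The observation: every conscious being we know of is a biological human
    # living in the Solar System.
    hypotheses = [
        "consciousness must be biological",
        "consciousness must be human",
        "consciousness must be in our Solar System",
    ]
    likelihood = {h: 1.0 for h in hypotheses}  # P(observation | hypothesis)

    # Bayes: posterior odds = prior odds * likelihood ratio.  All pairwise
    # likelihood ratios are 1, so the observation leaves the relative
    # credibility of these hypotheses exactly where the priors put it.
    for i, a in enumerate(hypotheses):
        for b in hypotheses[i + 1:]:
            print(f"likelihood ratio, ({a}) vs ({b}): {likelihood[a] / likelihood[b]}")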
Replies from: JoshuaZ↑ comment by JoshuaZ · 2013-08-27T16:36:40.489Z · LW(p) · GW(p)
It feels more generous because it is more generous. First, we encounter people without legs who function apparently close to normally, and conjoined twins with more than two legs. Second, and more seriously, the biological hypothesis here actually involves the brain, where pretty much everyone other than a hard-core dualist agrees the actual relevant stuff is occurring. Thus, under some form of the biological hypothesis, if you took a brain and put it in a robot body, that would still be conscious. So I think there is some real room for being more generous to the biological hypothesis than to the alternate hypotheses you propose.
comment by summerstay · 2013-08-26T17:43:06.185Z · LW(p) · GW(p)
We know that some complex processes in our own brains happen unaccompanied by qualia. This is uncontroversial. It doesn't seem unlikely to me that all the processes needed to fake perceptual consciousness convincingly could be implemented using a combination of such processes. I don't know what causes qualia in my brain and so I'm not certain it would be captured by the emulation in question-- for example, the emulation might not be at a high enough level of detail, might not exploit quantum mechanics in the appropriate way, or whatever.
Fading and dancing qualia arguments are not really convincing to me because I don't trust my intuition to guide me well in situations where my core self is directly being operated on.
In other words, I am uncertain and so would tend to stick with what I know works (my biological brain) instead of trusting an uploading process to maintain my qualia.
(note: 'qualia' may be an unfamiliar term. It means what it is like to experience something, the redness of red. It's a better word to use than consciousness for this because it is more specific.)
comment by JoshuaFox · 2013-08-25T07:59:59.691Z · LW(p) · GW(p)
I recommend Thomas Metzinger, The Ego Tunnel, to help dissolve the consciousness problem.
Replies from: Mitchell_Porter↑ comment by Mitchell_Porter · 2013-08-25T09:47:37.562Z · LW(p) · GW(p)
Anything which is about "dissolving" the consciousness problem, rather than solving it, is a recipe for delusion. At least dualists understand that consciousness is real. At least the crypto-dualists who talk about "what it's like to be a brain" understand that this "what-it's-like" is unlike the description of the brain offered by the natural sciences - it's an extra ontological feature they are forced to slyly posit, in order to accommodate the reality of consciousness.
If you don't like either of those alternatives, you'd better become an idealist, or a neo-monist, or something new and unusual. Or you could just admit that you don't have an answer. But so far as I can see, people who truly think the problem of consciousness is dissolved, would have to be functional solipsists oblivious to their own solipsism. They would still be conscious, a subject living in a world of objects, but they would believe there are no subjects anywhere, only objects. They would be subjects oblivious to their own subjecthood. They would treat the objects experienced or posited by their own subjectivity, as reality itself. They would exclude the knower from the known.
Perhaps there is some social value in having certain people exist in this mindset. Maybe it helps them carry out important work, which they would be unable to do if they were troubled too much by questions about self and reality. But as an ideology for a technical subculture which took itself to be creating newer and better "minds", it seems like a recipe for disaster.
comment by somervta · 2013-08-24T14:51:42.298Z · LW(p) · GW(p)
Would whole brain emulations be conscious?
First, is this question meaningful? (eliminativists or others who think the OP makes an invalid assumption should probably say 'No' here, if they respond at all) [pollid:546]
If yes, what is your probability assignment? (Read this as being conditioned on a yes to the above question - i.e., if there was uncertainty in your answer, don't factor it into your answer to this question.) [pollid:547]
And lastly, what is the probability that a randomly selected normally functioning human (not sleeping, no neurological damage, etc.) is conscious? [pollid:548]
EDIT: I've added a third question. As of this edit, the answers to the first two are:
Yes 13 (68%)
No 6 (32%)
Total 19 (100%)
Probability: Mean 0.743 Median 0.9 Total votes 15
Replies from: Armok_GoB, Document, lavalamp↑ comment by Armok_GoB · 2013-08-24T19:44:18.508Z · LW(p) · GW(p)
This poll is meaningless without also collecting the probability that a randomly chosen biological human, or whatever else you are comparing to, has it.
Replies from: somervta, mfb
comment by DanielLC · 2013-08-26T18:16:27.945Z · LW(p) · GW(p)
I have no a priori reason why the same software would be more likely to be conscious running on one set of hardware than another. Any a posteriori reason I could think of that I am conscious would be thought of by an em running on different hardware, and would be just as valid. As such, I can be no more sure that I am conscious on my current hardware than on any other hardware.
comment by torekp · 2013-08-24T11:31:26.729Z · LW(p) · GW(p)
how sure are you that whole brain emulations would be conscious
The question is ill-formulated; everything depends on how we resolve this point from the LessWrong WBE wiki:
The exact level of detail required for an accurate simulation of a brain's mind is presently uncertain
For example, one "level of detail" hypothesis would state that everything about the process matters, right down to the detailed shapes of electrical fields in synapses, say. Which would probably require building synapses out of organic matter. Which brings us right back to the "biological" view.
This post by dfranke raises similar issues (question #5), and I regret that I have only one upvote to give it, and to give yours.
comment by DavidPlumpton · 2013-08-29T02:31:11.725Z · LW(p) · GW(p)
Any WBE could in theory be simulated by a mathematical function (as far as I can see). So what I really want to know is: can a mathematical function experience qualia (and/or consciousness)? By "experience" I mean that whatever experiencing qualia is to us, the function would have something directly analogous (e.g. if qualia are an illusion, then the function has the same sort of illusion).
Conscious functions possible? Currently I'm leaning towards yes. If true, to me the implication would be that the "me" in my head is not my neurons, but the information encoded therein.
comment by fubarobfusco · 2013-08-25T01:09:37.352Z · LW(p) · GW(p)
So that the result is a nonsentient optimizer that goes around making genuine discoveries, but the discoveries are not savored and enjoyed, because there is no one there to do so.
I'm not sure how to distinguish this from a person who goes around making discoveries but is too busy to savor and enjoy anything.
Replies from: Baughn
comment by lukstafi · 2013-08-24T18:19:16.337Z · LW(p) · GW(p)
The relevant notion of consciousness we are concerned with is technically called phenomenal experience. Whole Brain Emulations will necessarily leave out some of the physical details, which means the brain processes will not unfold in exactly the same manner as in biological brains. Therefore a WBE will have a different consciousness (i.e. qualitatively different experiences), although one very similar to the corresponding human consciousness. I expect we will learn more about consciousness and be able to address the broader and more interesting issue of what kinds and degrees of consciousness are possible.
Replies from: Juno_Watt↑ comment by Juno_Watt · 2013-08-24T18:44:36.376Z · LW(p) · GW(p)
Therefore a WBE will have different consciousness (i.e. qualitatively different experiences), although very similar to the corresponding human consciousness.
That would depend on the granularity of the WBE, which has not been specified, and the nature of the supervenience of experience on brain states, which is unknown.
Replies from: lukstafi↑ comment by lukstafi · 2013-08-24T22:29:48.386Z · LW(p) · GW(p)
The truth of the claim, or the degree of difference? The claim is that identity obtains in the limit, i.e. in any practical scenario there wouldn't be identity between experiences of a biological brain and WBE, only similarity. OTOH identity between WBEs can obviously be obtained.
Replies from: Juno_Watt↑ comment by Juno_Watt · 2013-08-25T14:19:45.515Z · LW(p) · GW(p)
The claim then rules out computationalism.
Replies from: lukstafi↑ comment by lukstafi · 2013-08-25T20:24:04.032Z · LW(p) · GW(p)
Isn't it sufficient for computationalism that WBEs are conscious and that experience would be identical in the limit of behavioral identity? My intent with the claim is to weaken computationalism -- accommodate some aspects of identity theory -- but not to directly deny it.
Replies from: Creutzer↑ comment by Creutzer · 2013-08-26T08:10:43.004Z · LW(p) · GW(p)
You seem to be suggesting that there are properties of the system that are relevant for the quality of its experiences, but are not computational properties. To get clearer on this, what kind of physical details do you have in mind, specifically?
Replies from: lukstafi↑ comment by lukstafi · 2013-08-26T09:55:15.139Z · LW(p) · GW(p)
I do not strongly believe the claim; I just lay it out for discussion. I do not claim that experiences do not supervene on computations: they have observable, long-term behavioral effects which follow from the computable laws of physics. I just claim that in practice, not all processes in a brain will ever be reproduced in WBEs, due to computational resource constraints and lack of relevance to rationality and the range of reported experiences of the subjects. Experiences can be different yet have roughly the same heterophenomenology (with behavior diverging only statistically or over the long term).
comment by asparisi · 2013-08-24T14:45:48.677Z · LW(p) · GW(p)
I find it helps to break down the category of 'consciousness.' What is it that one is saying when one says that "Consciousness is essentially biological"? Here it's important to be careful: there are philosophers who gerrymander categories. We can start by pointing to human beings, as we take human beings to be conscious, but obviously we aren't pointing at every human attribute. (For instance, having 23 pairs of chromosomes isn't a characteristic we are pointing at.) We have to be careful that when we point at an attribute, we are actually trying to solve the problem and not just obscure it: if I tell you that Consciousness is only explainable by Woogles, that's just unhelpful. The term we use needs to break down into something that allows us to (at least in principle) tell whether or not a given thing is conscious. If it can't do THAT, we are better off using our own biased heuristics and forgoing definitions: at least in the former case, I can tell you that my neighbor is conscious and a rock isn't. Without some way of actually telling what is conscious and what is not, we have no basis to actually say when we've found a conscious thing.
It seems like with consciousness, we are primarily interested in something like "has the capacity to be aware of its own existence." Now, this probably needs to be further explicated. "Awareness" here is probably a trouble word. What do I mean when I say that it is "aware"? Well, it seems like I mean some combination of being able to perceive a given phenomenon and being able to distinguish both degrees of the phenomenon when present and distinguishing the phenomenon from other phenomena. When I say that my sight makes me aware of light, I mean that it allows me to both distinguish different sorts of light and light from non-light: I don't mistake my sight for hearing, after all. So if I am "aware of my own existence" then I have the capacity to distinguish my existence from things that are not my existence, and the ability to think about degrees to which I exist. (In this case, my intuition says that this cashes out in questions like "how much can I change and still be me?")
Now, there isn't anything about this that looks like it is biological. I suppose if we came at it another way and said that to be conscious is to "have neural activity" or something, it would be inherently biological since that's a biological system. But while having neural activity may be necessary for consciousness in humans, it doesn't quite feel like that's what we are talking about when we talk about what we are pointing to when we say "conscious." If somehow I met a human being and was shown a brain scan showing that there was no neural activity, but it was apparently aware of itself and was able to talk about how it's changed over time and such, and I was convinced I wasn't being fooled, I would call that conscious. Similarly, if I was shown a human being with neural activity but which didn't seem capable of distinguishing itself from other objects or able to consider how it might change, I would say that human being was not conscious.
Replies from: Juno_Watt↑ comment by Juno_Watt · 2013-08-24T17:52:47.517Z · LW(p) · GW(p)
You've chosen one of the easier aspects of consciousness: self-awareness rather than, e.g., qualia.
The "necessarily biological" could be a posteriori nomic necessity, not a priori conceptual necessity, which is the only kind you knock down in your comment.
↑ comment by asparisi · 2013-08-26T11:52:45.355Z · LW(p) · GW(p)
- You've chosen one of the easier aspects of consciousness: self-awareness rather than, e.g., qualia.
I cover this a bit when I talk about awareness, but I find qualia to often be used in such a way as to obscure what consciousness is rather than explicate it. (If I tell you that consciousness requires qualia, but can't tell you how to distinguish things which have qualia from things which do not, along with good reason to believe that this way of distinguishing is legitimate, then rocks could have qualia.)
- The "necessarily biological" could be aposteriori nomic necessity, not apriori conceptual necessity, which is the only kind you knock down in your comment.
If the defenders of a biological theory of consciousness want to introduce an empirically testable law to show that consciousness requires biology then I am more than happy to let them test it and get back to us. I don't feel the need to knock it down, since when it comes to a posteriori nomic necessity, we use science to tell whether it is legitimate or not.
Replies from: Juno_Watt↑ comment by Juno_Watt · 2013-08-26T12:14:35.187Z · LW(p) · GW(p)
If we want to understand how consciousness works in humans, we have to account for qualia as part of it. Having an understanding of human consc. is the best practical basis for deciding whether other entities have consc. OTOH, starting by trying to decide which entities have consc. is unlikely to lead anywhere.
The biological claim can be ruled out if it is incoherent, but not for being unproven, since the functional/computational alternative is also unproven.
Replies from: asparisi↑ comment by asparisi · 2013-08-26T14:06:09.319Z · LW(p) · GW(p)
Accounting for qualia and starting from qualia are two entirely different things. Saying "X must have qualia" is unhelpful if we cannot determine whether or not a given thing has qualia.
Qualia can perhaps best be described, briefly, as "subjective experience." So what do we mean by 'subjective' and 'experience'?
If by 'subjective' we mean 'unique to the individual position' and by 'experience' we mean 'alters its internal state on the basis of some perception' then qualia aren't that mysterious: a video camera can be described as having qualia if that's what we are talking about. Of course, many philosophers won't be happy with that sort of breakdown. But it isn't clear that they will be happy with any definition of qualia that allows for it to be distinguished.
If you want it to be something mysterious, then you aren't even defining it. You are just being unhelpful: like if I tell you that you owe me X dollars, without giving you any way of defining X. If you want to break it down into non-mysterious components or conditions, great. What are they? Let me know what you are talking about, and why it should be considered important.
At this point, it's not a matter of ruling anything out as incoherent. It's a matter of trying to figure out what sort of thing we are talking about when we talk about consciousness and seeing how far that label applies. There doesn't appear to be anything inherently biological about what we are talking about when we are talking about consciousness. This could be a mistake, of course: but if so, you have to show it is a mistake and why.
Replies from: Juno_Watt↑ comment by Juno_Watt · 2013-08-26T14:39:32.838Z · LW(p) · GW(p)
Accounting for qualia and starting from qualia are two entirely different things. Saying "X must have qualia" is unhelpful if we cannot determine whether or not a given thing has qualia.
We can tell that we have qualia, and our own consciousness is the natural starting point.
"Qualia" can be defined by giving examples: the way anchiovies taste, the way tomatos look, etc.
You are making heavy weather of the indefinability of some aspects of consciousness, but the flipside of that is that we all experience our own consciousness. It is not a mystery to us. So we can substitute "inner ostension" for abstract definition.
There doesn't appear to be anything inherently biological about what we are talking about when we are talking about consciousness.
OTOH, we don't have examples of non-biological consc.
comment by elharo · 2013-08-26T22:55:23.618Z · LW(p) · GW(p)
Another hope would be that if we can get all the other problems in Friendly AI right, we'll be able to trust the AI to solve consciousness for us.
That's not a hope. It's appealing to a magic genie to solve our problems. We really have to get out of the habit of doing that, or we'll never get anything done.
Replies from: ChrisHallquist↑ comment by ChrisHallquist · 2013-08-26T22:57:26.741Z · LW(p) · GW(p)
"I hope a magic genie will solve our problems" is a kind of hope, though as I say in the OP, not one I'd want to bet on. For the record, the "maybe an FAI will solve our problems" isn't so much my thought as something I anticipated that some members of the LW community might say in response to this post.
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-08-26T23:11:06.732Z · LW(p) · GW(p)
It's only legit if you can exhibit a computation which you are highly confident will solve the problem of consciousness, without being able to solve the problem of consciousness yourself.
comment by Bugmaster · 2013-08-26T22:35:10.528Z · LW(p) · GW(p)
How sure are you that brain emulations would be conscious?
I don't know and I don't care. But if you were to ask me, "how sure are you that whole brain emulations would be at least as interesting to correspond with as biological humans on Less Wrong", I'd say, "almost certain". If you were to ask me the follow-up question, "and therefore, should we grant them the same rights we grant biological humans", I'd also say "yes", though with a lower certainty, maybe somewhere around 0.9. There's a non-trivial chance that the emergence of non-biological humans would cause us to radically re-examine our notions of morality.
Replies from: JoshuaZ↑ comment by JoshuaZ · 2013-08-27T16:43:28.569Z · LW(p) · GW(p)
I don't know and I don't care. But if you were to ask me, "how sure are you that whole brain emulations would be at least as interesting to correspond with as biological humans on Less Wrong", I'd say, "almost certain".
It may be worth noting that even today, people can have fun or have extended conversations with chatbots. And people will in the right circumstances befriend a moving stick. The bar to actually have an interesting conversation is likely well below that needed for whole brain emulation.
Replies from: Bugmaster↑ comment by Bugmaster · 2013-08-28T22:06:04.405Z · LW(p) · GW(p)
Unfortunately the video you linked to is offline, but still:
The bar to actually have an interesting conversation is likely well below that needed for whole brain emulation.
Is there a chatbot in existence right now that could participate in this conversation we are having right here?
Yes, people befriend chatbots and sticks, as well as household pets, but I am not aware of any cat or stick that could pass for a human on the internet. Furthermore, most humans (with the possible exception of some die-hard cat ladies) would readily agree that cats are nowhere near as intelligent as other humans, and neither are plants.
The Turing Test requires the agent to act indistinguishably from a human; it does not merely require other humans to befriend the agent.
Replies from: JoshuaZ↑ comment by JoshuaZ · 2013-08-29T04:08:52.875Z · LW(p) · GW(p)
Sure, but there's likely a large gap between "have an interesting conversation" and "pass the Turing test". The second is likely much more difficult than the first.
Replies from: Bugmaster↑ comment by Bugmaster · 2013-08-29T04:13:23.234Z · LW(p) · GW(p)
I was using "interesting conversation" as a short-hand for "a kind of conversation that usually occurs on Less Wrong, for example the one we are having right now" (which, admittedly, may or may not be terribly interesting, depending on who's listening).
Do you believe that passing the Turing Test is much harder than fully participating in our current conversation (and doing so at least as well as we are doing right now)? If so, how?
comment by CAE_Jones · 2013-08-26T07:12:26.875Z · LW(p) · GW(p)
My initial response to the question in the title, without reading the article or any other comments:
About as sure as I am that other humans are conscious, which is to say ~98% (I tend to flinch away from thinking I'm the only conscious person in existence, but all I have to go on is that so far as I know, we're all using extremely similar hardware and most people say they're conscious, so they probably are).
The trouble is that this is an outside view; I haven't the faintest idea what the inside view would be like. If a small portion of my brain was replaced with an artificial component, I would have no idea how to predict what that would feel like from the inside, but expect that it'd look like I was still conscious from the outside. I'm personally concerned with the inside part, and that's a huge unknown.
But if we had emulations running around, I'd almost definitely think of them as people just as much as everyone else in the world, and I'd treat their reports on consciousness the same.
Replies from: Baughn↑ comment by Baughn · 2013-08-28T23:37:35.009Z · LW(p) · GW(p)
For part of that 2%, you may consider the possibility that brains do not produce consciousness/qualia, and your actual consciousness is implemented on a different sort of hardware, outside the matrix.
(Does telling you this fairly specific story in any way budge the 2% number?)
Replies from: CAE_Jones
comment by MedicJason · 2013-08-25T01:59:41.368Z · LW(p) · GW(p)
To my mind all such questions are related to arguments about solipsism, i.e. the notion that even other humans don't, or may not, have minds/consciousness/qualia. The basic argument is that I can only see behavior (not mind) in anyone other than myself. Most everyone rejects solipsism, but I don't know that there are actually many very good arguments against it, except that it is morally unappealing (if anyone knows of any, please point them out). I think the same questions hold regarding emulations, only even more so (at least with other humans we know they are physically similar, suggesting some possibility that they are mentally similar as well - not so with emulations*). Especially, I don't see how there can ever be empirical evidence that anything is conscious or experiences qualia (or that anything is not conscious!): behavior isn't strictly relevant, and other minds are non-perceptible. I think this is the most common objection to Turing tests as a standard, as well.
*Maybe this is the logic of the biological position you mention - essentially, the more something seems like the one thing I know is conscious (me), the more likelihood I assign to its also being conscious. Thus other humans > other complex animals > simple animals > other organisms > abiotics.
Replies from: Baughn↑ comment by Baughn · 2013-08-29T00:07:25.845Z · LW(p) · GW(p)
Arguments against solipsism? Well, the most obvious one would be that everyone else appears to be implemented on very nearly the same hardware I am, and so they should be conscious for the same reasons I am.
Admittedly I don't know what those reasons are.
It's possible that there are none, and that the only reason I'm conscious is that I'm not implemented on a brain but on some unknown thing by a Dark Lord of the Matrix. That would make solipsism not quite as impossible as the previous argument suggests, but it still doesn't seem likely. Even in matrix scenarios.
comment by Armok_GoB · 2013-08-24T19:29:57.977Z · LW(p) · GW(p)
More sure than I am that the original humans were conscious. (Reasoning: going through the process might remove misconceptions about it not working, thus increasing self-awareness past some threshold. This utterly dominates any probability of the uploading process going wrong.)
comment by solipsist · 2013-08-27T14:26:52.511Z · LW(p) · GW(p)
I can too easily imagine difficult-to-test possibilities like "brains need to be warm, high-entropy, and non-reversible to collect reality fluid. Cold, energy-efficient brains possess little reality fluid" for me to be confident that ems are conscious, or as conscious as humans.
comment by Shmi (shminux) · 2013-08-24T18:50:03.021Z · LW(p) · GW(p)
At least one potential approach to defining consciousness is clear: build up faithful simulated nerve cell structures, from a single cell up, and observe what happens in a simulation vs biology. Eventually something similar to consciousness will likely emerge (and ask you to please stop the torture).
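Not a defense of that proposal in any detail, but for concreteness, here is a toy sketch of the very first rung of that ladder: a single leaky integrate-and-fire cell in Python. Every parameter value here is made up for illustration; a genuinely "faithful" single-cell model would need far more biophysical detail.

```python
# Minimal sketch, under illustrative assumptions: one leaky integrate-and-fire
# neuron driven by an injected current. Parameters are nominal, not fitted to
# any real recording.
import numpy as np

def simulate_lif(input_current, dt=0.1, tau=10.0, v_rest=-65.0,
                 v_reset=-70.0, v_threshold=-50.0, resistance=10.0):
    """Return the membrane-voltage trace and spike times (ms) for a single
    leaky integrate-and-fire cell given one input-current value per time step."""
    v = v_rest
    voltages, spike_times = [], []
    for step, current in enumerate(input_current):
        # Leak toward the resting potential plus the injected-current drive.
        v += (-(v - v_rest) + resistance * current) * (dt / tau)
        if v >= v_threshold:          # threshold crossed: record a spike, reset
            spike_times.append(step * dt)
            v = v_reset
        voltages.append(v)
    return np.array(voltages), spike_times

# Example: 500 ms of constant 2 (nominal nA) input.
voltages, spikes = simulate_lif(np.full(5000, 2.0))
print(f"{len(spikes)} spikes in 500 ms")
```

The proposal above then amounts to scaling this up, comparing the simulated cell (and eventually networks of cells) against recordings from the biological original at each stage, and seeing what emerges.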
comment by elharo · 2013-08-26T22:58:07.833Z · LW(p) · GW(p)
An emulation of fire, even a perfect emulation down to the quantum mechanical level, still isn't hot. There's little reason to expect an emulation of a brain in a Turing machine to be conscious.
I'm not sure what consciousness is, but I do believe it has some interaction with the body and the physical world. I expect any artificial consciousness, biological or electronic, will have intimate relationships with the physical world far beyond anything that can plausibly be modeled as a Turing machine.