She can understand the sequence of chemical reactions that constitutes the Calvin cycle just as she can understand what neural impulses occur when red light strikes retinal rods, but she can't form the memory of either one occurring within her body.
They're computationally equivalent by hypothesis. The thesis of substrate independence is that as far as consciousness is concerned the side effects don't matter and that capturing the essential sameness of the "AND" computation is all that does. If you're having trouble understanding this, I can't blame you in the slightest, because it's that bizarre.
Yes, I agree that this kind of atomism is silly, and by implication that things like Drescher's gensym analogy are even sillier. Nonetheless, the black box needs a label if we want to do something besides point at it and grunt.
I should have predicted that somebody here was going to call me on that. I accept the correction.
Maybe this analogy is helpful: saying "qualia" isn't giving us insight into consciousness any more than saying "phlogiston" is giving us insight into combustion. However, that doesn't mean that qualia don't exist or that any reference to them is nonsensical. Phlogiston exists. However, in our better state of knowledge, we've discarded the term and now we call it "hydrocarbons".
My conclusion in the Mary's room thought experiment doesn't challenge either of these versions: something new happens when she steps outside, and there's a perfectly good purely physical explanation of what and why. It is nothing more than an artifact of how human brains are built that Mary is unable to make the same physical thing happen, with the same result, without the assistance of either red light or appropriate surgical tools. A slightly more evolved Mary with a few extra neurons leading into her hippocampus would have no such difficulty.
Can you state what that version is? Whatever it is, it's nothing I subscribe to, and I call myself a physicalist.
When she steps outside, something physical happens in her brain that has never happened before. Maybe something "non-physical" (huh?) also happens, maybe it doesn't. We have gained no insight.
She is specifically not supposed to be pre-equipped with experiential knowledge, which means her brain is in one of the physical states of a brain that has never seen colour.
Well, then when she steps outside, her brain will be put into a physical state that it's never been in before, and as a result she will feel enlightened. This conclusion gives us no insight whatsoever into what exactly goes on during that state-change or why it's so special, which is why I think it's a stupid thought-experiment.
The very premise of "Mary is supposed to have that kind of knowledge" implies that her brain is already in the requisite configuration that the surgery would produce. But if it's not already in that configuration, she's not going to be able to get it into that configuration just by looking at the right sequence of squiggles on paper. All knowledge can be represented by a bunch of 1's and 0's, and Mary can interpret those 1's and 0's as a HOWTO for a surgical procedure. But the knowledge itself consists of a certain configuration of neurons, not 1's and 0's.
To say that the surgery is required is to say that there is knowledge not conveyed by third-person descriptions, and that is a problem for sweeping claims of physicalism.
No it isn't. All it says is that the parts of our brain that interpret written language are hooked up to different parts of our hippocampus than our visual cortex is, and that no set of signals on one input port will ever cause the hippocampus to react in the same way that signals on the other port will.
I think that the "Mary's Room" thought experiment leads our intuitions astray in a direction completely orthogonal to any remotely interesting question. The confusion can be clarified by taking a biological view of what "knowledge" means. When we talk about our "knowledge" of red, what we're talking about is what experiencing the sensation of red did to our hippocampus. In principle, you could perform surgery on Mary's brain that would give her the same kind of memory of red that anyone else has, and given the appropriate technology she could perform the same surgery on herself. However, in the absence of any source of red light, the surgery is required. No amount of simple book study is ever going to impact her brain the same way the surgery would, and this distinction is what leads our intuitions astray. Clarifying this, however, does not bring us any closer to solving the central mystery, which is just what the heck is going on in our brain during the sensation of red.
Plausible? What does that mean, exactly?
What subjective probability would you assign to it?
Not every substance can perform every sub-part role in a consciousness producing computation, so there's a limit to "independence". Insofar as it means an entity comprised entirely of non-biological parts can be conscious, which is the usual point of contention, a conscious system made up of a normal computer plus mechanical parts obviously shows that, so I'm not sure what you mean.
I don't know what the "usual" point of contention is, but this isn't the one I'm taking a position in opposition to Bostrom on. Look again at my original post and how Bostrom defined substrate-independence and how I paraphrased it. Both Bostrom's definition and mine mean that xkcd's desert and certain Giant Look-Up Tables are conscious.
This sounds an awful lot like "making the same argument that I am, merely in different vocabulary". You say po-tay-to, I say po-tah-to, you say "computations", I say "physical phenomena". Take the example of the spark-plug brain from my earlier post. If the computer-with-spark-plugs-attached is conscious but the computer alone is not, do you still consider this confirmation of substrate independence? If so, then I think you're using an even weaker definition of the term than I am. How about xkcd's desert? If you replace the guy moving the rocks around with a crudely-built robot moving the rocks in the same pattern, do you think it's plausible that anything in that system experiences human-like consciousness? If you say "no", then I don't know whether we're disagreeing on anything.
The most important difference between Level 1 and Level 2 actions is that Level 1 actions tend to be additive, while Level 2 actions tend to be multiplicative. If you do ten hours of work at McDonald's, you'll get paid ten times as much as if you did one hour; the benefits of the hours add together. However, if you take ten typing classes, each one of which improves your ability by 20%, you'll be 1.2^10 = 6.2 times better at the end than at the beginning: the benefits of the classes multiply (assuming independence).
I'm trying to think of anything in life that actually works this way and I can't. If I start out being able to type at 20 WPM, taking 100 typing classes is not going to improve that to 1.6 billion WPM; neither is taking 1000 classes or 10000. These sorts of payoffs tend to be roughly logarithmic, not exponential.
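To put numbers on that (a quick back-of-the-envelope sketch; the logarithmic curve and its coefficient of 30 are purely illustrative assumptions, not data about real typing classes):

```python
import math

base_wpm = 20

for n in (1, 10, 100, 1000):
    # Multiplicative model from the parent comment: each class multiplies skill by 1.2.
    multiplicative = base_wpm * 1.2 ** n
    # Hypothetical diminishing-returns model: roughly logarithmic in the number of classes.
    logarithmic = base_wpm + 30 * math.log1p(n)
    print(f"{n:>5} classes: multiplicative ~ {multiplicative:,.0f} WPM, "
          f"logarithmic ~ {logarithmic:.0f} WPM")
```

At 100 classes the multiplicative model yields the absurd 1.6-billion-WPM figure above, while the diminishing-returns model stays in the low hundreds, which is at least the right order of magnitude for what practice actually buys you.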
Detecting the similarity of two patterns is something that happens in your brain, not something that's part of reality.
If I'm correctly understanding what you mean by "part of reality" here, then I agree. This kind of "similarity" is another unnatural category. When I made reference in my original post to the level of granularity "sufficient in order to model all the essential features of human consciousness", I didn't mean this as a binary proposition; I just meant it to be sufficient that if, while you slept, somebody made changes to your brain at any smaller level, you wouldn't wake up thinking "I feel weird".
As for how this bears on Bostrom's simulation argument: I'm not properly familiar with it, but how much of its force does it lose by not being able to appeal to consciousness-based reference classes and the like? I can't see how that would make simulations impossible; the nearest I can guess is that it harms his conclusion that we are probably in a simulation?
Right. All the probabilistic reasoning breaks down, and if your re-explanation patches things at all I don't understand how. Without reference to consciousness I don't know how to make sense of the "our" in "our experiences". Who is the observer who is sampling himself out of a pool of identical copies?
Anthropics is confusing enough to me that it's possible that I'm making an argument whose conclusion doesn't depend on its hypothesis, and that the argument I should actually be making is that this part of Bostrom's reasoning is nonsense regardless of whether you believe in qualia or not.
I'm not trying to hold you to any Platonic claim that there's any unique set of computational primitives that are more ontologically privileged than others. It's of course perfectly equivalent to say that it's NOR gates that are primitive, or that you should be using gates with three-state rather than two-state inputs, or whatever. But whatever set of primitives you settle on, you need to settle on something, and I don't think there's any such something which invalidates my claim about K-complexity when expressed in a formal language familiar to physics.
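For instance (a toy example of my own, not anything from Bostrom or from K-complexity theory proper), here is AND expressed entirely in NOR gates; translating a circuit from one complete gate set to another costs only a constant factor per gate, which is why the complexity claim doesn't care which primitives you pick:

```python
def nor(a: bool, b: bool) -> bool:
    """A single NOR gate."""
    return not (a or b)

def and_from_nor(a: bool, b: bool) -> bool:
    """AND built purely from NOR gates:
    NOT x = NOR(x, x), and AND(a, b) = NOR(NOT a, NOT b)."""
    return nor(nor(a, a), nor(b, b))

# Sanity check against the truth table of AND.
assert all(and_from_nor(a, b) == (a and b)
           for a in (False, True) for b in (False, True))
```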
There are no specifically philosophical truths, only specifically philosophical questions. Philosophy is the precursor to science; its job is to help us state our hypotheses clearly enough that we can test them scientifically. ETA: For example, if you want to determine how many angels can dance on the head of a pin, it's philosophy's job to either clarify or reject as nonsensical the concept of an angel, and then in the former case to hand off to science the problem of tracking down some angels to participate in a pin-dancing study.
Those early experimenters with electricity were still taking a position whether they knew it or not: namely, that "will this conduct?" is a productive question to ask -- that if p is the subjective probability that it will, then p(1-p) is a sufficiently large value that the experiment is worth their time.
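To spell out the heuristic (a worked example, not a formal decision-theoretic argument): p(1-p) is just the variance of a yes/no outcome with probability p, and it is largest exactly when you are most uncertain:

```python
# Value-of-experiment heuristic: p*(1-p) for a few subjective probabilities.
for p in (0.01, 0.1, 0.3, 0.5, 0.7, 0.9, 0.99):
    print(f"p = {p:>4}: p*(1-p) = {p * (1 - p):.4f}")
```

The product peaks at p = 0.5 and vanishes as p approaches 0 or 1 -- experiments whose outcomes you can already predict aren't worth running.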
I didn't list this position because it's out of scope for the topic I'm addressing. I'm not trying to address every position on the simulation hypothesis; I'm trying to address computationalist positions. If you think we are completely in the dark on the matter, you can't be endorsing computationalists, who claim to know something.
I agree, and furthermore this is a true statement regardless of whether you classify the problem as philosophical or scientific. You can't do science without picking some hypotheses to test.
I'll save my defense of these answers for my next post, but here are my answers:
- Both of them.
- Yes. The way I understand these words, this is a tautology.
- No. Actually, hell no.
- N/A
- Yes; a. I'm not quite sure how to make sense of "probability" here, but something strictly between 0 and 1; b. Yes.
- Negligibly larger than 0.
- 1, tautologically.
- For the purposes of this discussion, "No". In an unrelated discussion about epistemology, "No, with caveats."
- This question is nonsense.
- No.
- If I answered "yes" to this, it would imply that I did not think question 11 was nonsense, leading to contradiction.
I haven't read that other thread; can I ask what your opinions are? Briefly of course, and while I can't speak for everyone else, I promise to read them as thumbnails and not absolute statements to be used against you. You could point to writers (Searle? Penrose?) if you like.
Searle, to a zeroth approximation. His claims need some surgical repair, but you can do that surgery without killing the patient. See my original post for some "first aid".
I also think there is a big difference between c) "nonsensical" and c) "irrelevant".
I didn't mean to imply otherwise. I meant the "or" there as a logical inclusive or, not a claim of synonymy.
I'm not sure what you mean by an abstract machine (and please excuse me if that's a formal term)
I'd certainly regard anything defined within the framework of automata theory as an abstract machine. I'd probably accept substitution of a broader definition.
s/are not zombies/have qualia/ and you'll get a little more accurate. A zombie, supposing such a thing is possible (which I doubt for all the reasons given in http://lesswrong.com/lw/p7/zombies_zombies ), is still a real, physical object. The objects of a simulation don't even rise to zombie status.
No, rather:
A) "We are not living in a simulation" = P(living in a simulation) < ε.
B) "we cannot be living in a simulation" = P(living in a simulation) = 0.
I believe A but not B. Think of it analogously to weak vs. strong atheism. I'm a weak atheist with respect to both simulations and God.
The claim that the simulated universe is real even though its physics are independent of our own seems to imply a very broad definition of "real" that comes close to Tegmark IV. I've posted a followup to my article to the discussion section: Eight questions for computationalists. Please reply to it so I can better understand your position.
This is just a pedantic technical correction since we agree on all the practical implications, but nothing involving FSMs grows nearly as fast as Busy Beaver. The relevant complexity class for the hardest problems concerning FSMs, such as determining whether two regular expressions with a squaring operator represent the same language, is the class of EXPSPACE-complete problems. This is as opposed to R for decidable problems, and RE and co-RE for semidecidable problems like the halting problem. Those classes are way, WAY bigger than EXPSPACE.
Brains, like PCs, aren't actually Turing-equivalent: they only have finite storage. To actually be equivalent to a Turing machine, they'd need something equivalent to a Turing machine's infinite tape. There's nothing analogous to Rice's theorem or the halting theorem which holds for finite state machines. All those problems are decidable. Of course, decidable doesn't mean tractable.
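To make the decidability point concrete, here's a sketch of the standard product-construction check for whether two complete DFAs accept the same language (the encoding of a DFA as a start state, transition dict, and accepting set is just my made-up convention for this example, not any particular library's API):

```python
from collections import deque

def dfa_equivalent(start1, delta1, accept1, start2, delta2, accept2, alphabet):
    """Decide language equivalence of two complete DFAs.

    Each DFA is given by its start state, a transition dict mapping
    (state, symbol) -> state, and a set of accepting states.  We walk the
    product automaton breadth-first; the languages differ iff we reach a
    pair of states on which the two machines disagree about acceptance.
    """
    seen = {(start1, start2)}
    queue = deque(seen)
    while queue:
        s1, s2 = queue.popleft()
        if (s1 in accept1) != (s2 in accept2):
            return False  # found a reachable distinguishing state pair
        for a in alphabet:
            pair = (delta1[(s1, a)], delta2[(s2, a)])
            if pair not in seen:
                seen.add(pair)
                queue.append(pair)
    return True
```

DFA equivalence is in fact tractable; the EXPSPACE blow-up mentioned above comes from how succinctly regular expressions with squaring can describe enormous automata, not from anything hard about the automata themselves.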
I just hit reload at sufficiently fortuitous times that I was able to see all my comments drop by exactly one point within a minute or so of each other, then later see the same thing happen to exactly those comments that it didn't happen to before.
The only role that this example-of-an-idea is playing in my argument is as an analogy to illustrate what I mean when I assert that qualia physically exist in the brain without there being such thing as a "qualia cell". You clearly already understand this concept, so is my particular choice of analogy so terribly important that it's necessary to nitpick over this?
Try reading it as "the probability that we are living in a simulation is negligibly higher than zero".
Do you mean, "know enough to tell for sure whether a given complex idea is embodied in any discrete piece of the brain?". No, but we know for sure that some must exist which are not, because conceptspace is bigger than thingspace.
My own experience with analytic philosophy is that it is not particularly effective in shutting down pointless speculation.
Oh, certainly not. Not in the least. Think of it this way. Pre-analytic philosophy is like a monkey throwing darts at a dartboard. Analytic philosophy is like a human throwing them. There's no guarantee that he'll hit the board, much less the bullseye, but at least he understands where he's supposed to aim.
we cannot be in a simulation
We are not living in a simulation
These things are not identical.
I'm pretty sure you don't think that qualia are reified in the brain-- that a surgeon could go in with tongs and pull out a little lump of qualia
I do think that qualia are reified in the brain. I do not think that a surgeon could go in with tongs and remove them any more than he could go in with tongs and remove your recognition of your grandmother.
If qualia and other mental phenomena are not computational, then what are they?
They're a physical effect caused by the operation of a brain, just as gravity is a physical effect of mass and temperature is a physical effect of molecular motion. See here and here for one reason why I think the computational view falls somewhere in between problematic and not-even-wrong, inclusive.
ETA: The "grandmother cell" might have been a poorly chosen counterexample, since apparently there's some research that sort of actually supports that notion with respect to face recognition. I learned the phrase as identifying a fallacy. Feel free to mentally substitute some other complex idea that is clearly not embodied in any discrete piece of the brain.
I can't figure out whether you're trying to agree with me or disagree with me. Your comment sounds argumentative, yet you seem to be directly paraphrasing my critique of Searle.
Even granting you all of your premises, everything we know about brains and qualia we know by observing it in this universe. If this universe is in fact a simulation, then what we know about brains and qualia is false. At the very most, your argument shows that we cannot create a simulation. It does not prove that we cannot be in a simulation, because we have no idea what the physics of the real world would be like.
Like pjeby, you're attacking a claim much stronger than the one I've asserted. I didn't claim we cannot be in a simulation. I claimed that if we are in a simulation, then the simulator must be of a sort that Bostrom's argument provides us no reason to suppose is likely to exist.
In general, John Searle has some serious problems when it comes to trying to answer essentially empirical questions with a priori reasoning.
There's nothing wrong with trying to answer empirical questions with deductive reasoning if your priors are well-grounded. Deductive logic allows me to reliably predict that a banjo will fall if I drop it, even if I have never before observed a falling banjo, because I start with the empirically-acquired prior that, in general, dropped objects fall.
I don't think you read my post very carefully. I didn't claim that qualia are a phenomenon unique to human brains. I claimed that human-like qualia are a phenomenon unique to human brains. Computers might very well experience qualia; so might a lump of coal. But if you think a computer simulation of a human experiences the same qualia as a human, while a lump of coal experiences no qualia or different ones, you need to make that case to me.
The guy who downvoted that one downvoted all the rest of my comments in this thread at the same time. Actually, he downvoted most of them earlier, then picked that one up in a second sweep of those comments that I had posted since he did his first pass. So, your assumption that the downvote had anything to do with the content of that particular comment is probably misguided.
On further reflection, I'm not certain that your position and mine are incompatible. I'm a personal identity skeptic in roughly the same sense that you're a qualia skeptic. Yet, if somebody points out that a door is open when it was previously closed, and reasons "someone must have opened it", I don't consider that reasoning invalid. I just think they need to modify the word "someone" if they want to be absolutely pedantically correct about what occurred. Similarly, your skepticism about qualia doesn't really contradict my claim that the objects of a computer simulation would have no (or improper) qualia; at worst it means that I ought to slightly modify my description of what it is that those objects wouldn't have.
The interpretation that you deem uncharitable is the one I intended.
I did a better job of phrasing my question in the edit I made to my original post than I did in my reply to Sideways that you responded to. Are you able to rephrase your response so that it answers the better version of the question? I can't figure out how to do so.
I apologize if this is recapitulating earlier comments -- I haven't read this entire discussion -- and feel free to point me to a different thread if you've covered this elsewhere, but: on your view, could a simulation of me in a computer classify the things that it has (which, on your view, cannot be actual qualia) into categories like "pleasant" and "unpleasant" and "indifferent"? Could it tell me that certain (simulations of) meat tastes like chicken, and if it did, could I understand what it meant by "taste" and understand the gist of "like chicken"?
I'm not certain what you mean by "could a simulation of me do X". I'll read it as "could a simulator of me do X". And my answer is yes, a computer program could make those judgements without actually experiencing any of those qualia, just like it could make judgements about what trajectory the computer hardware would follow if it were in orbit around Jupiter, without it having to actually be there.
Philosophical speculation regarding cognition in our present state of ignorance is just about as useful as would be disputation by medieval philosophers confronted with a 21st century TV newscast - wondering whether the disembodied talking heads appearing there experience pain.
I don't think this is quite fair. The concept that medieval philosophers were missing was analytic philosophy, not cathode rays. If the works of Quine and Popper and Wittgenstein fell through a time warp, it'd be plausible that medieval philosophers could have made legitimate headway on such a question.
Ok, I've really misunderstood you then. I didn't realize that you were taking a devil's advocate position in the other thread. I maintain the arguments I've made in both threads in challenge to all those commenters who do claim that qualia are computation.
I'm trying to understand your objection, but it seems like a quibble to me. You seem to be saying that the analogy between qualia and gensyms isn't perfect because gensyms are leaky abstractions. But I don't think it has to be to convey the essential idea. Analogies rarely are perfect.
You haven't responded to the broader part of my point. If you want to claim that qualia are computations, then you either need to specify a particular computer architecture, or you need to describe them in a way that's independent of any such choice. In the first case, the architecture you want is probably "the universe", in which case you're defining an algorithm by specifying its physical implementation and you've affirmed my thesis. In the latter case, all you get to talk about is inputs and outputs, not algorithms.
But qualia are not any of those things! They are not epiphenomenal! They can be compared. I can classify them into categories like "pleasant", "unpleasant" and "indifferent". I can tell you that certain meat tastes like chicken, and you can understand what I mean by "taste", and understand the gist of "like chicken" even if the taste is not perfectly indistinguishable from that of chicken. I suppose that I would be unable to describe what it's like to have qualia to something that has no qualia whatsoever, but even that I think is just a failure of creativity rather than a theoretical impossibility -- [ETA: indeed, before I could create a conscious AI, I'd in some sense have to figure out how to provide exactly such a description to a computer.]