We are not living in a simulation

post by dfranke · 2011-04-12T01:55:42.680Z · LW · GW · Legacy · 218 comments

The aim of this post is to challenge Nick Bostrom's simulation argument by attacking the premise of substrate-independence. Quoting Bostrom in full, this premise is explained as follows:

A common assumption in the philosophy of mind is that of substrate-independence. The idea is that mental states can supervene on any of a broad class of physical substrates. Provided a system implements the right sort of computational structures and processes, it can be associated with conscious experiences. It is not an essential property of consciousness that it is implemented on carbon-based biological neural networks inside a cranium: silicon-based processors inside a computer could in principle do the trick as well.

Arguments for this thesis have been given in the literature, and although it is not entirely uncontroversial, we shall here take it as a given.

The argument we shall present does not, however, depend on any very strong version of functionalism or computationalism. For example, we need not assume that the thesis of substrate-independence is necessarily true (either analytically or metaphysically) -- just that, in fact, a computer running a suitable program would be conscious. Moreover, we need not assume that in order to create a mind on a computer it would be sufficient to program it in such a way that it behaves like a human in all situations, including passing the Turing test etc. We need only the weaker assumption that it would suffice for the generation of subjective experiences that the computational processes of a human brain are structurally replicated in suitably fine-grained detail, such as on the level of individual synapses. This attenuated version of substrate-independence is quite widely accepted.

Neurotransmitters, nerve growth factors, and other chemicals that are smaller than a synapse clearly play a role in human cognition and learning. The substrate-independence thesis is not that the effects of these chemicals are small or irrelevant, but rather that they affect subjective experience only via their direct or indirect influence on computational activities. For example, if there can be no difference in subjective experience without there also being a difference in synaptic discharges, then the requisite detail of simulation is at the synaptic level (or higher).

I contend that this premise, in even its weakest formulation, is utterly, unsalvageably false.

Since Bostrom never precisely defines what a "simulator" is, I will apply the following working definition: a simulator is a physical device which assists a human (or posthuman) observer with deriving information about the states and behavior of a hypothetical physical system. A simulator is "perfect" if it can respond to any query about the state of any point or volume of simulated spacetime with an answer that is correct according to some formal mathematical model of the laws of physics, with both the query and the response encoded in a language that is easily comprehensible to the simulator's [post]human operator. We can now formulate the substrate-independence hypothesis as follows: any perfect simulator of a conscious being experiences the same qualia as that being.

Let us make a couple of observations about these definitions. First: if the motivation for our hypothetical post-Singularity civilization to simulate our universe is to study it, then any perfect simulator should provide them with everything necessary toward that end. Second: the substrate-independence hypothesis as I have defined it is much weaker than any version which Bostrom proposes, for any device which perfectly simulates a human must necessarily be able to answer queries about the state of the human's brain, such as what synapses are firing at what time, as well as any other structural question right down to the Planck level.

Much of the ground I am about to cover has been trodden in the past by John Searle. I will explain later in this post where it is that I differ with him.

Let's consider a "hello universe" example of a perfect simulator. Suppose an essentially Newtonian universe in which matter is homogeneous at all sufficiently small scales; i.e., there are either no quanta, or quanta simply behave like billiard balls. Gravity obeys the familiar inverse-square law. The only objects in this universe are two large spheres orbiting each other. Since the two-body problem has an easy closed-form solution, it is hypothetically straightforward to program a Turing machine to act as a perfect simulator of this universe, and furthermore an ordinary present-day PC can be an adequate stand-in for a Turing machine so long as we don't ask it to make its answers precise to more decimal places than fit in memory. It would pose no difficulty to actually implement this simulator.

If you ran this simulator with Jupiter-sized spheres, it would reason perfectly about the gravitational effects of those spheres. Yet the computer would not actually produce any more gravity than it would while powered off. You would not be sucked toward your CPU and have your body smeared evenly across its surface. In order for that to happen, the simulator would have to mimic the simulated system in physical form, not merely in computational rules. That is, it would have to actually have two enormous spheres inside of it. Such a machine could still be a "simulator" in the sense that I've defined the term — but in colloquial usage, we would stop calling this a simulator and instead call it the real thing.

This observation is an instance of a general principle that ought to be very, very obvious: reasoning about a physical phenomenon is not the same as causing a physical phenomenon. You cannot create new territory by sketching a map of it, no matter how much detail you include in your map.

Qualia are physical phenomena. I dearly wish that this statement were uncontroversial. However, if you don't agree with it, then you can reject the simulation argument on far simpler grounds: if experiencing qualia requires a "nonphysical" "soul" or whatnot (I don't know how to make sense out of either of those words), then there is no reason to suppose that any man-made simulator is imbued with a soul and therefore no reason to suppose that it would be conscious. However, provided that you agree that qualia are physical phenomena, then to suppose that they are any kind of exception to the principle I've just stated is simply bizarre magical thinking. A simulator which reasons perfectly about a human being, even including correctly determining what qualia a human would experience, does not necessarily experience those qualia, any more than a simulator that reasons perfectly about high gravity necessarily produces high gravity.

Hence, the type of qualia that a simulator actually produces (if any) depends crucially on the actual physical form of that simulator. A machine which walks the way a human walks must have the form of a human leg. A machine which grips the way a human grips must have the form of a human hand. And a machine which experiences the way a human experiences must have the form of a human brain.

For an example of my claim, let us suppose, as Bostrom does, that a simulation which correctly models brain activity down to the level of individual synaptic discharges is sufficient to model all the essential features of human consciousness. What does that tell us about what would be required in order to build an artificial human? Here is one design that would work: first, write a computer program, running on (sufficiently fast) conventional hardware, which correctly simulates synaptic activity in a human brain. Then, assemble millions of tiny spark plugs, one per dendrite, into the physical configuration of a human brain. Run a cable from the computer to the spark plug array, and have the program fire the spark plugs in the same sequence in which it predicts synaptic discharges would occur in a biological human brain. As these firings occurred, the array would experience human-like qualia. The same qualia would not result if the simulator merely computed which plugs ought to fire without actually firing them.
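
The design is only a thought experiment, but its structure can be sketched in code. Below is a minimal sketch in Python, with both interfaces stubbed out as hypothetical placeholders; neither the synapse-level simulator nor the spark-plug driver corresponds to any real system:

    class SynapseLevelSimulator:
        """Stand-in for the hypothetical program that predicts, step by step,
        which synapses of a simulated brain discharge."""
        def step(self, dt):
            return []  # would return the IDs of the synapses predicted to fire

    class SparkPlugArray:
        """Stand-in for the hypothetical hardware: one plug per simulated synapse,
        assembled in the physical configuration of a human brain."""
        def fire(self, synapse_ids):
            pass  # would physically fire the corresponding plugs

    def run_artificial_brain(sim, plugs, dt, steps):
        for _ in range(steps):
            firings = sim.step(dt)  # reasoning about the brain: pure computation
            plugs.fire(firings)     # causing the phenomenon: the physical firings

    run_artificial_brain(SynapseLevelSimulator(), SparkPlugArray(), dt=1e-3, steps=10)

On the view argued here, the qualia (if any) occur in the firing plugs: delete the call to plugs.fire() and the computation is unchanged, but, by hypothesis, no experience occurs.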

Alternatively, what if granularity right down to the Planck level turned out to be necessary? In that case, the only way to build an artificial brain would be to actually build, particle-for-particle, a brain — since due to speed-of-light limitations, no other design could possibly model everything it needed to model in real time.

I think the actual requisite granularity is probably somewhere in between. The spark plug design seems too crude to work, while Planck-level correspondence is certainly overkill: if it were not, the tiniest fluctuation in our surrounding environment, such as a 0.01 degree change in room temperature, would have a profound impact on our mental state.

Now, from here on is where I depart from Searle if I have not already. Consider the following questions:

  1. If a tree falls in the forest and nobody hears it, does it make an acoustic vibration?
  2. If a tree falls in the forest and nobody hears it, does it make an auditory sensation?
  3. If a tree falls in the forest and nobody hears it, does it make a sound?
  4. Can the Chinese Room pass a Turing test administered in Chinese?
  5. Does the Chinese Room experience the same qualia that a Chinese-speaking human would experience when replying to a letter written in Chinese?
  6. Does the Chinese Room understand Chinese?
  7. Is the Chinese Room intelligent?
  8. Does the Chinese Room think?

Here is the answer key:

  1. Yes.
  2. No.
  3. What do you mean?
  4. Yes.
  5. No.
  6. What do you mean?
  7. What do you mean?
  8. What do you mean?

The problem with Searle is his lack of any clear answer to "What do you mean?". Most technically-minded people, myself included, think of 6–8 as all meaning something similar to 4. Personally, I think of them as meaning something even weaker than 4, and have no objection to describing, e.g., Google, or even a Bayesian spam filter, as "intelligent". Searle seems to want them to mean the same as 5, or maybe some conjunction of 4 and 5. But in counterintuitive edge cases like the Chinese Room, they don't mean anything at all until you assign definitions to them.

I am not certain whether or not Searle would agree with my belief that it is possible for a Turing machine to correctly answer questions about what qualia a human is experiencing, given a complete physical description of that human. If he takes the negative position on this, then this is a serious disagreement that goes beyond semantics, but I cannot tell that he has ever committed himself to either stance.

Now, there remains a possible argument that might seem to save the simulation hypothesis even in the absence of substrate-independence. "Okay," you say, "you've persuaded me that a human-simulator built of silicon chips would not experience the same qualia as the human it simulates. But you can't tell me that it doesn't experience any qualia. For all you or I know, a lump of coal experiences qualia of some sort. So, let's say you're in fact living in a simulation implemented in silicon. You're experiencing qualia, but those qualia are all wrong compared to what you as a carbon-based bag of meat ought to be experiencing. How would you know anything is wrong? How, other than by life experience, do you know what the right qualia for a bag of meat actually are?"

The answer is that I know my qualia are right because they make sense. Qualia are not pure "outputs": they feed back on the rest of the world. If I step outside on a scorching summer day, then I feel hot, and this unpleasant quale causes me to go back inside, and I am able to understand and articulate this cause and effect. If my qualia were actually those of a computer chip, then rather than feeling hot I would feel purple (or rather, some quale that no human language can describe), and if you asked me why I went back indoors even though I don't have any particular objection to purple and the weather is not nearly severe enough to pose any serious threat to my health, I wouldn't be able to answer you or in any way connect my qualia to my actions.

So, I think I have now established that to any extent we can be said to be living in a simulation, the simulator must physically incorporate a human brain. I have not precluded the possibility of a simulation in the vein of "The Matrix", with a brain-in-a-vat being fed artificial sensory inputs. I think this kind of simulation is indeed possible in principle. However, nothing claimed in Bostrom's simulation argument would suggest that it is at all likely.

ETA: A question that I've put to Sideways can be similarly put to many other commenters on this thread.  "Similar in number", i.e., two apples, two oranges, etc., is, similarly to "embodying the same computation", an abstract concept which can be realized by a wide variety of physical media.  Yet, if I replaced the two hemispheres of your brain with two apples, clearly you would become quite ill, even though similarity in number has been preserved.  If you believe that "embodying the same computation" is somehow a privileged concept in this regard -- that if I replaced your brain with something else embodying the same computation that you would feel yourself to be unharmed -- what is your justification for believing this?

218 comments

Comments sorted by top scores.

comment by [deleted] · 2011-04-12T04:36:40.399Z · LW(p) · GW(p)

Qualia are physical phenomena. I dearly wish that this statement were uncontroversial. However, if you don't agree with it, then you can reject the simulation argument on far simpler grounds: if experiencing qualia requires a "nonphysical" "soul" or whatnot (I don't know how to make sense out of either of those words), then there is no reason to suppose that any man-made simulator is imbued with a soul and therefore no reason to suppose that it would be conscious. However, provided that you agree that qualia are physical phenomena, then to suppose that they are any kind of exception to the principle I've just stated is simply bizarre magical thinking. A simulator which reasons perfectly about a human being, even including correctly determining what qualia a human would experience, does not necessarily experience those qualia, any more than a simulator that reasons perfectly about high gravity necessarily produces high gravity.

Let's replace qualia with some other phenomenon closely associated with the mind but less confusing. How about this: a poem. A really good poem, the sort of poem that we have not seen from anyone but the greatest human poets working at the peak of their art. So, let's rewrite the above but replacing qualia with really good poems.

Really good poems are physical phenomena. I dearly wish that this statement were uncontroversial. However, if you don't agree with it, then you can reject the simulation argument on far simpler grounds: if a really good poem requires a "nonphysical" "soul" or whatnot (I don't know how to make sense out of either of those words), then there is no reason to suppose that any man-made simulator is imbued with a soul and therefore no reason to suppose that really good poems could arise therein. However, provided that you agree that really good poems are physical phenomena, then to suppose that they are any kind of exception to the principle I've just stated is simply bizarre magical thinking. A simulator which reasons perfectly about really good poems, even including correctly determining what really good poems a human would write, does not necessarily create really good poems, any more than a simulator that reasons perfectly about high gravity necessarily produces high gravity.

Let's discuss this. Your argument presupposes that the item of interest (qualia in your version) is either a physical phenomenon, or else "soul" stuff (what I'll call supernatural). First of all, are really good poems either physical phenomena or supernatural? Are those our only two options? Really good poems don't have mass. They don't have velocity. They don't have a specific number of atoms making them up. You could take a really good poem written in pencil on paper and then write it again by carving it into a stone. All this suggests that we probably don't want to call really good poems "physical phenomena". But then neither do we want to call them supernatural ("soul"-based). There's nothing supernatural about really good poems. A really good poem is just a specific text, and a text is - I would personally be inclined to say - neither a physical phenomenon nor supernatural.

So then why can't the same be true of qualia? Texts seem to fall into a third category apart from physical phenomena or supernatural phenomena. Why not qualia?

Or maybe you are inclined to say that texts are physical, meaning that a specific instantiation of a text can supervene on physical phenomena. That's the problem with words like "physical" in the context of a philosophical argument: you can never quite tell what the other guy means by them. So on this alternative interpretation of "physical phenomena" I can ask: why can't the same be true of qualia? Qualia supervene on physical phenomena, but just as the exact same text can supervene on a very wide range of physical phenomena (e.g. it can be carved in stone, written on paper, spoken aloud, encrypted and sent on microwaves, and so on in enormous variety), the exact same qualia for all we know can supervene on a very wide range of physical phenomena.

Can a simulation produce a really good poem? Well, you've stipulated that the simulation can "reason perfectly" about the subject (you said qualia, which I switched to really good poems). I don't see anything barring the simulator from producing really good poems. So why not qualia?

Let's go further in your text. You write:

Hence, the type of qualia that a simulator actually produces (if any) depends crucially on the actual physical form of that simulator. A machine which walks the way a human walks must have the form of a human leg. A machine which grips the way a human grips must have the form of a human hand. And a machine which experiences the way a human experiences must have the form of a human brain.

Let's switch out "qualia" here and switch in "really good poems". Does the type of really good poem depend crucially on the actual physical form of its physical instantiation? If I take a Shakespeare sonnet, and write it once in pencil, once in ink, and once in smoke from a skywriting plane, did the type of poem change at all? Which of these three instances is not a sonnet?

Replies from: dfranke
comment by dfranke · 2011-04-12T12:08:36.987Z · LW(p) · GW(p)

The poem doesn't exist -- or, depending on what word games you want to play with "exist", it exists before it's written. Before anyone can experience the poem, you need to put it into a medium: pencil, ink, smoke, whatever. Their experience of it is different depending on what medium you choose. Your thesis seems to be that when we're talking about qualia, rather than poems, that the "information" in it is all that matters. In response I refer you to the "ETA" at the bottom of my post.

Replies from: None, Psychohistorian
comment by [deleted] · 2011-04-12T13:31:19.008Z · LW(p) · GW(p)

The poem doesn't exist -- or, depending on what word games you want to play with "exist", it exists before it's written.

The poem is the type, the specific instance of the poem is the token. Types do, in a sense, exist before the first token appears, but this hardly renders instances of poems different from, say, apples, or brains. Everything has a type. Apples have a type.

The point remains: the type "Shakespeare's first sonnet" can be instantiated in ink or in pencil. This has nothing to do with the fact that the type "Shakespeare's first sonnet" exists before it's instantiated - because all types do (in the relevant sense).

Before anyone can experience the poem, you need to put it into a medium: pencil, ink, smoke, whatever. Their experience of it is different depending on what medium you choose.

Only because these media are distinguishable. I could write the poem down in India ink, or in, say, watercolor carefully done to look exactly like India ink, and as long as the two instances of the poem are indistinguishable, the reader's experience need not be any different.

How can we tell what the written poem looks like to the reader? We can ask the reader! We can ask him, "what does it look like", and on one occasion he might say, "it looks like ink", and on another occasion he might say, "it looks like smoke". But we can do the same with the simulated person reading a simulated ink copy of Shakespeare's first sonnet. Assuming we have some way to contact him, we can ask him, "what does it look like," and he might say, "it looks like ink".

comment by Psychohistorian · 2011-04-12T16:47:37.346Z · LW(p) · GW(p)

Their experience of it is different depending on what medium you choose.

Maybe. But the central experience is the same. Maybe there's a difference between experiencing consciousness as implemented on a real brain versus consciousness as implemented inside a simulator. So long as it is possible to implement consciousness in different media, simulations make sense. If you're really a simulator's subroutine and not a physical brain, you wouldn't feel the difference, because you wouldn't know the feeling of having a real brain.

comment by orthonormal · 2011-04-12T03:52:41.593Z · LW(p) · GW(p)

A general principle: if you find that a certain premise is just so obvious that you can't reduce it any further, and yet other smart people, exposed to the same background sources, don't agree that it's obvious (or think that it's false)... that's a signal that you haven't yet reduced the premise well enough to rely on it in practical matters.

Qualia are very confusing for people to talk and think about, and so using your intuitions about them as a knock-down argument for any other conclusion is probably ill-advised.

comment by byrnema · 2011-04-12T03:24:49.820Z · LW(p) · GW(p)

Qualia are physical phenomena.

Yes, qualia are physical. But what does physical mean??

Physical means 'interacting with us in the simulation'.

To us, the simulated Jupiters are not physical -- they do not exert a real gravitational force -- because we are not there with them in the simulation. However, if you add a moon to your simulation, and simulate its motion towards the spheres, the simulated moon would experience the real, physical gravity of the spheres.

For a moment, my intuition argued that it isn't 'real' gravity because the steps of the algorithm are so arbitrary -- there are so many ways to model the motion of the moon towards the spheres, so why should any one chosen way be privileged as 'real'? But then, think of it from the point of view of the moon. However the moon's position is encoded, it must move toward the spheres, because this is hard-coded into the algorithm. From the point of view of the moon (and the spheres, incidentally) this path and this interaction are entirely immutable. This is what 'real', and what 'physical', feels like.
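
A minimal sketch of that hard-coding, assuming a single fixed sphere and a crude Euler integrator (the specific numbers are placeholders): however the moon's position happens to be encoded, the update rule below forces it toward the sphere, and from inside the simulation that rule cannot be evaded:

    import math

    G = 6.674e-11      # gravitational constant
    M = 1.898e27       # mass of the simulated sphere at the origin, kg

    x, y = 2.0e9, 0.0  # the moon's position, meters
    vx, vy = 0.0, 0.0  # the moon's velocity
    dt = 10.0          # timestep, seconds
    for _ in range(1000):
        r = math.hypot(x, y)
        ax, ay = -G * M * x / r**3, -G * M * y / r**3  # acceleration toward the sphere
        vx, vy = vx + ax * dt, vy + ay * dt
        x, y = x + vx * dt, y + vy * dt
    print(x, y)        # the moon has drifted toward the sphere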

Replies from: ArisKatsaris
comment by ArisKatsaris · 2011-04-12T10:07:40.396Z · LW(p) · GW(p)

But then, think of it from the point of view of the moon.

That would be to grant the assumption that the moon does have a point of view. That's the issue being debated, so we can't prove it by just assuming it.

To "simulate" (i.e. compute everything about) a really really simple Newtonian solar system, all we really need is knowledge of a few numbers (e.g. mass, position) and a few equations.

Does writing those numbers and equations down on paper mean that I've now created a simulated universe that has "its own point of view"? I certainly don't need a computer to simulate that system; one could do the calculations in one's head. And the moon wouldn't even need the head doing the calculations -- it would be perfectly defined by the equations and the numbers, which of course would not even need to be written down on paper at all.

This is then Tegmark IV: once you grant that a simulation of a thing has by necessity its own point of view, then that simulation doesn't need any physical component, it's sustained by the math alone.

Replies from: byrnema
comment by byrnema · 2011-04-12T13:21:53.612Z · LW(p) · GW(p)

That would be to grant the assumption that the moon does have a point of view. That's the issue being debated, so we can't prove it by just assuming it.

Oops, I didn't mean that the moon should have a point of view. I find it natural to use anthropomorphisms such as these, but don't intend them literally.

I certainly don't need a computer to simulate that system, one would be able to do the calculations of it in one's head.

Yes, this made me pause. Even while simulating the motion of a moon towards the spheres, there are so many abstract ways to model the moon's position; could they all be equally real? (In which case, each time you simulate something quite concretely, how many abstract things have you unintentionally made real?) But then I decided that even if 'position' and 'motion' are quite abstract, the motion is real ... though now I have trouble describing why without using a concept like "from the moon's point of view" or "if the moon observes", which means I was packing something into that. I should think about this more.

This is then Tegmark IV: once you grant that a simulation of a thing has by necessity its own point of view, then that simulation doesn't need any physical component, it's sustained by the math alone.

Perhaps. I'm not sure. The idea that all mathematical possibilities are real is intriguing (I saw this with the Ultimate Ensemble theory), but I have a doubt that I will describe here.

It seems to be the case, in this universe anyway, that things need to be causally entangled in order to be real. So setting up a simulation in which a moon is a position on a lattice that moves toward another position on a lattice would model 'real' motion because the motion is the causal result of the lines of code you wrote. However, there are cases when things are not causally entangled and then they are not real.

Consider the case of mental thoughts. I can imagine something that is not real: A leprechaun throws a ball up in the air and it stays up. Of course, my thoughts are real, and are causally entangled with my neurons. But the two thoughts 'he throws a ball up' and 'it stays up' are not themselves causally entangled. They are just sequential and connected by the word 'and'. I have not created a world where there is no gravity. This is reassuring, since I can also imagine mathematical impossibilities, like a Möbius strip in 2D or something inconsistent before I'm aware of the inconsistency.

comment by TheOtherDave · 2011-04-12T04:15:32.810Z · LW(p) · GW(p)

Mostly, discussions of this subject always feel to me like an exercise in redirecting attention, like doing stage magic.

Some things are computations, like calculating the product of 1234 and 5678. Computations are substrate-independent.

I am willing to grant that what the mass of Jupiter does when I'm attracted to it is not a mere computation. (I gather that people like Tegmark would disagree, but I don't even really understand what they mean by their disagreement.)

I certainly agree that if what my brain does when I experience something is not a mere computation, then the idea of instantiating that computation on a different substrate is incoherent and certainly does not reproduce what my brain does. (I don't care whether we call that thing "consciousness" or "qualia" or "pinochle.")

From that point on, it just seems that people build elaborate rhetorical structures to shift people's intuitions to the "my brain is doing something more like calculating the product of 1234 and 5678" or the "my brain is doing something more like exerting gravitational attraction on the moons of Jupiter" side.

Personally, I'm on the "more like calculating the product of 1234 and 5678" side of that particular fence, but I can totally see how that seems simply absurd to some people. And I know folks who are on the "more like gravitational attraction" side, which seems utterly unjustified to me.

I'm just not sure how any amount of rhetoric contributes anything useful to the discussion past that point.

I suspect that until someone can actually provide a real account of how my brain does what it does when I experience something -- either by giving an account of how that derives from the special physical properties of conscious/qualia-having/pinochle-playing systems, or by giving an account of how that derives from the special computational properties of conscious/qualia-having/pinochle-playing systems -- we'll just keep playing reference-class tennis.

comment by Psychohistorian · 2011-04-12T16:43:44.176Z · LW(p) · GW(p)

This proves that we cannot be in a simulation by... assuming we are not in a simulation.

Even granting you all of your premises, everything we know about brains and qualia we know by observing it in this universe. If this universe is in fact a simulation, then what we know about brains and qualia is false. At the very most, your argument shows that we cannot create a simulation. It does not prove that we cannot be in a simulation, because we have no idea what the physics of the real world would be like.

I'm also rather unconvinced as to the truth of your premises. Even if qualia are a phenomenon of the physical brain, that doesn't mean you can't generate a near-identical phenomenon in a different substrate. In general, John Searle has some serious problems when it comes to trying to answer essentially empirical questions with a priori reasoning.

Replies from: dfranke
comment by dfranke · 2011-04-12T17:49:49.568Z · LW(p) · GW(p)

Even granting you all of your premises, everything we know about brains and qualia we know by observing it in this universe. If this universe is in fact a simulation, then what we know about brains and qualia is false. At the very most, your argument shows that we cannot create a simulation. It does not prove that we cannot be in a simulation, because we have no idea what the physics of the real world would be like.

Like pjeby, you're attacking a claim much stronger than the one I've asserted. I didn't claim we cannot be in a simulation. I claimed that if we are in a simulation, then the simulator must be of a sort that Bostrom's argument provides us no reason to suppose is likely to exist.

In general, John Searle has some serious problems when it comes to trying to answer essentially empirical questions with a priori reasoning.

There's nothing wrong with trying to answer empirical questions with deductive reasoning if your priors are well-grounded. Deductive logic allows me to reliably predict that a banjo will fall if I drop it, even if I have never before observed a falling banjo, because I start with the empirically-acquired prior that, in general, dropped objects fall.

Replies from: Psychohistorian, Cyan
comment by Psychohistorian · 2011-04-12T18:17:53.927Z · LW(p) · GW(p)

I didn't claim we cannot be in a simulation.

Then the title, "We are not living in a simulation" was rather poorly chosen.

Deductive logic allows me to reliably predict that a banjo will fall if I drop it, even if I have never before observed a falling banjo, because I start with the empirically-acquired prior that, in general, dropped objects fall.

Observation gives you, "on Earth, dropped objects fall." Deduction lets you apply that to a specific hypothetical. You don't have observation backing up the theory you advance in this article. You need, "Only biological brains can have qualia." You have, "Biological brains have qualia." Big difference.

Ultimately, it seems you're trying to prove a qualified universal negative - "Nothing can have qualia, except biological brains (or things in many respects similar)." It is unbelievably difficult to prove such empirical claims. You'd need to try really hard to make something else have qualia, and then if you failed, the most you could conclude is, "It seems unlikely that it is possible for non-biological brains to have qualia." This is what I mean when I disparage Searle; many of his claims require mountains of evidence, yet he thinks he's resolved them from his armchair.

Replies from: dfranke
comment by dfranke · 2011-04-12T18:25:47.757Z · LW(p) · GW(p)

we cannot be in a simulation

We are not living in a simulation

These things are not identical.

Replies from: Cyan
comment by Cyan · 2011-04-12T18:39:41.789Z · LW(p) · GW(p)

So you would assert that we can be in a simulation, but not living in it...?

Replies from: dfranke
comment by dfranke · 2011-04-12T19:35:16.108Z · LW(p) · GW(p)

Try reading it as "the probability that we are living in a simulation is negligibly higher than zero".

Replies from: Cyan, None
comment by Cyan · 2011-04-12T21:47:36.514Z · LW(p) · GW(p)

I tried it. It didn't help.

No joke -- I'm completely confused: the referent of "it" is not clear to me. Could be the apparent contradiction, could be the title...

Here's what I'm not confused about: (i) your post only argues against Bostrom's simulation argument; (ii) it seems you also want to defend yourself against the charge that your title was poorly chosen (in that it makes a broader claim that has misled your readership); (iii) your defense was too terse to make it into my brain.

Replies from: ArisKatsaris
comment by ArisKatsaris · 2011-04-13T08:28:01.519Z · LW(p) · GW(p)

dfranke means, I think, that he considers being in a simulation possible, but not likely.

Statement A) "We are not living in a simulation": P(living in a simulation) < 50%

Statement B) "We cannot be in a simulation": P(living in a simulation) ~= 0%

dfranke believes A, but not B.

Replies from: dfranke, CuSithBell, Cyan
comment by dfranke · 2011-04-13T14:58:02.227Z · LW(p) · GW(p)

No, rather:

A) "We are not living in a simulation" = P(living in a simulation) < ε.

B) "we cannot be living in a simulation" = P(living in a simulation) = 0.

I believe A but not B. Think of it analogously to weak vs. strong atheism. I'm a weak atheist with respect to both simulations and God.

Replies from: Cyan
comment by Cyan · 2011-04-14T00:46:03.374Z · LW(p) · GW(p)

Ah, got it. Thanks.

comment by CuSithBell · 2011-04-13T14:40:19.504Z · LW(p) · GW(p)

That may be dfranke's intent, but categorically stating something to be the case generally indicates a much higher confidence than 50%. ("If you roll a die, it will come up three or higher.")

comment by Cyan · 2011-04-13T12:50:22.428Z · LW(p) · GW(p)

Thanks.

comment by [deleted] · 2011-04-12T20:08:56.458Z · LW(p) · GW(p)

That I agree with, though not for reasons brought up here.

comment by Cyan · 2011-04-12T18:21:15.830Z · LW(p) · GW(p)

I didn't claim we cannot be in a simulation.

Then it's from your title that people might get the impression you're making a stronger claim than you mean to be.

comment by PlaidX · 2011-04-12T04:25:16.910Z · LW(p) · GW(p)

If my qualia were actually those of a computer chip, then rather than feeling hot I would feel purple (or rather, some quale that no human language can describe), and if you asked me why I went back indoors even though I don't have any particular objection to purple and the weather is not nearly severe enough to pose any serious threat to my health, I wouldn't be able to answer you or in any way connect my qualia to my actions.

But in the simulation, you WOULD have an objection to purple, and you would call purple "hot", right? Or is this some haywire simulation where the simulated people act normally except they're completely baffled as to why they're doing any of it? Either what you're saying is incredibly stupid, or I don't understand it. Wait, does that mean I'm in a simulation?

Replies from: dfranke
comment by dfranke · 2011-04-12T12:01:06.392Z · LW(p) · GW(p)

Is this some haywire simulation where the simulated people act normally except they're completely baffled as to why they're doing any of it?

Yes. A simulation in which people experienced one sort of qualia but behaved as though they were experiencing another would go completely haywire.

Replies from: DSimon
comment by DSimon · 2011-04-13T14:08:39.847Z · LW(p) · GW(p)

This doesn't seem right. If they experience qualia A but react exactly as though they were experiencing qualia B... how's that practically different from just experiencing qualia B?

You might be able to tell the difference between the two qualia by somehow arranging to experience both subjective points of view through a telepathy machine or something. However, considering a single individual's viewpoint and actions, if they get "purple" when it's too hot outside and stop being "purple" when they went somewhere cool, then the person's actions are the same as if they were avoiding the qualia "hot" or "sour" or "flarglblargl", and the system doesn't go haywire at all.

comment by JGWeissman · 2011-04-12T02:12:58.538Z · LW(p) · GW(p)

Run a cable from the computer to the spark plug array, and have the program fire the spark plugs in the same sequence that it predicts that synapses would occur in a biological human brain. As these firings occurred, the array would experience human-like qualia. The same qualia would not result if the simulator merely computed what plugs ought to fire without actually firing them.

This would imply that qualia are epiphenomenal. If so, then when people talk about qualia they are accurately reporting them, without the epiphenomenal qualia causing the accurate report; where does that improbability come from?

Replies from: dfranke
comment by dfranke · 2011-04-12T02:33:11.789Z · LW(p) · GW(p)

I don't understand why you think it would imply that. The claims in my second-to-last paragraph clearly imply that they are not epiphenomenal. Where have I contradicted myself?

Replies from: Manfred, JGWeissman
comment by Manfred · 2011-04-12T03:22:48.645Z · LW(p) · GW(p)

The idea is that if you were simulated on that computer and someone asked you to describe your qualia, you could do it perfectly - despite having no qualia! This is a bit magical.

comment by JGWeissman · 2011-04-12T03:05:51.515Z · LW(p) · GW(p)

I don't understand why you think it would imply that.

The simulation does not receive any feedback from the spark plugs, and so, within the simulation, everything is the same whether the spark plugs are there or not. So the qualia are (only) in the spark plugs, and the simulation does the same thing whether the qualia exist or not, i.e. the qualia have no causal effects on the simulation, which is what I mean by saying they are epiphenomenal.

Replies from: dfranke
comment by dfranke · 2011-04-12T03:17:58.375Z · LW(p) · GW(p)

The spark doesn't have any effect on the simulator, but that doesn't mean that the simulator can't predict in advance what effect that spark would have if it occurred inside a brain and reason accordingly. You seem to be implying that the simulator can't determine what effect the spark (and its resulting qualia) would have before the spark actually occurs. This isn't the case for any other physical phenomenon -- I don't have to let go of a ball in mid-air to predict that it will fall -- so why would you suppose it to be true of qualia?

Replies from: JGWeissman
comment by JGWeissman · 2011-04-12T03:23:50.197Z · LW(p) · GW(p)

The spark doesn't have any effect on the simulator, but that doesn't mean that the simulator can't predict in advance what effect that spark would have if it occurred inside a brain and reason accordingly.

The simulator can make that prediction and apply the results within the simulation even if it is not connected to the spark plugs.

You seem to be implying that the simulator can't determine what effect the spark (and its resulting qualia) would have before the spark actually occurs.

No, I am implying that since you can make the prediction, the actual spark isn't important.

Replies from: dfranke
comment by dfranke · 2011-04-12T03:29:38.377Z · LW(p) · GW(p)

No, I am implying that since you can make the prediction, the actual spark isn't important.

Why is this different from the claim that because you can make the prediction of what gravitational field a massive sphere will produce, the actual sphere isn't important?

Replies from: JGWeissman
comment by JGWeissman · 2011-04-12T03:41:15.831Z · LW(p) · GW(p)

Within the simulation, having an actual sphere is not important: the simulator applies the same prediction to the simulation either way. If you care about effects outside the simulation, then you would need an outside-the-simulation sphere to gravitationally attract objects outside the simulation, in the same way that you would need to report a simulated person's musings about their own qualia (or other reactions to their own qualia) to me outside the simulation for their qualia to affect me in the same way I would be affected by similar musings (or other reactions) of people outside the simulation that I learn about.

Replies from: dfranke
comment by dfranke · 2011-04-12T03:53:00.836Z · LW(p) · GW(p)

I think I can justly paraphrase you as follows:

The gravity, and the qualia, are occurring inside the simulation. You only need to worry about having an actual sphere, or an actual brain, if you want to have effects outside the simulation.

If this paraphrasing is accurate, then I ask you, what does "occurring inside the simulation" mean? What is the physical locus at which the gravity and qualia are happening? I see two reasonable answers to this question: either "at the simulator", or "nowhere". In the former case, I refer you back to my previous reply. In the latter case, you concede that neither the gravity nor the qualia are real.

Replies from: Yvain, JGWeissman
comment by Scott Alexander (Yvain) · 2011-04-13T10:34:34.289Z · LW(p) · GW(p)

Your position within our universe is giving you a bias toward one side of a mostly symmetrical situation.

Let's throw out the terms "real" and "simulated" universe and call them the "parent" and "child" universe.

Gravity in the child universe doesn't affect the parent universe, true; creating a simulation of a black hole doesn't suck the simulating computer into the event horizon. But gravity in the parent universe doesn't affect the child universe either - if I turn my computer upside-down while playing SimCity, it doesn't make my Sims scream and start falling into the sky as their city collapses around them. So instead of saying "simulated gravity isn't real because it can't affect the real universe", we say "both the parent and child universes have gravity that only acts within their own universe, rather than affecting the other."

Likewise, when you say that you can't point to the location of a gravitational force within the simulation so it must be "nowhere" - balderdash. The gravitational force that's holding Sim #13335 to the ground in my SimCity game is happening on Oak Street, right between the park and the corporate tower. When discussing a child-universe gravitational force, it is only necessary to show it has a location within the child universe. For you to say it "doesn't exist" because you can't localize it in your universe is as parochial as for one of my Sims to say you don't exist because he's combed the entire city from north to south and he hasn't found any specific location with a person named "dfranke".

Replies from: JGWeissman, TheOtherDave, Sniffnoy, dfranke
comment by JGWeissman · 2011-04-13T16:38:51.835Z · LW(p) · GW(p)

if I turn my computer upside-down while playing SimCity, it doesn't make my Sims scream and start falling into the sky as their city collapses around them.

This calls for a port of SimCity to a mobile device with an accelerometer.

Replies from: None
comment by [deleted] · 2011-04-13T16:51:14.106Z · LW(p) · GW(p)

This calls for a port of SimCity to a mobile device with an accelerometer.

SimCity has been ported to a mobile device with an accelerometer. No, I don't think it uses it (at least, not in that way).

comment by TheOtherDave · 2011-04-13T14:21:38.526Z · LW(p) · GW(p)

This is a digression, but... I'm not sure it actually makes sense to claim that what holds Sim #1335 to the ground is a gravitational force, any more than it would make sense to say that what holds an astronaut connected to the outside of their shuttle via magnetic boots is a gravitational force.

What it is, exactly, I don't know -- I haven't played SimCity since the early 90s, and have no sense of how it behaves or operates. But I'd be really surprised if it were something that, if I found myself in that universe having my memories, I'd be inclined to call gravitation.

Replies from: wnoise
comment by wnoise · 2011-04-13T16:39:15.759Z · LW(p) · GW(p)

For the Sims, yes, I'd agree. For a more physical based simulation, I would not.

Replies from: TheOtherDave
comment by TheOtherDave · 2011-04-13T17:26:35.203Z · LW(p) · GW(p)

(nods) As I say, it's a digression.

comment by Sniffnoy · 2011-04-14T01:18:49.955Z · LW(p) · GW(p)

In addition it should probably be pointed out that real things in general don't need to have a location. I think we can all agree that the electromagnetic field is real, e.g., but the question "Where is the electromagnetic field?" is nonsense.

Replies from: wnoise
comment by wnoise · 2011-04-14T08:41:54.374Z · LW(p) · GW(p)

The question itself is not quite nonsense. There is a perfectly reasonable answer of "everywhere". It's just not a particularly useful question, and this is because of the hidden assumptions behind it, which are wrong and can easily lead to nonsense questions. "What is the value of the electromagnetic field at X?" is a much more interesting question that can be asked once those incorrect assumptions are removed and replaced.

Replies from: Sniffnoy
comment by Sniffnoy · 2011-04-14T22:59:51.969Z · LW(p) · GW(p)

Eh. You can force an answer in English, sure, but it's still not really the "right" answer. The electromagnetic field is a function from spacetime, to, uh, some sort of tangent bundle on it or something? My knowledge of how to formalize this sort of thing isn't so great. My point is that it's a function taking spacetime locations as inputs; it doesn't really have a location itself any more than, say, the metric of spacetime does. When we say "it's everywhere" what's meant is something more like "it's defined everywhere" or "at every location, it affects things".

Replies from: wnoise
comment by wnoise · 2011-04-17T20:40:55.050Z · LW(p) · GW(p)

The EM field is used both for the function, and the values of that function. (I think it's actually a skew-symmetric linear operator on the tangent space T_x M at a given point. This can be phrased in terms of a bivector at that point. A "bundle" TM = Union_x T_x M talks about an extended manifold connecting tangent spaces at different points.) I think it's entirely reasonable in common language to use "where" to mean "where it's non-negligible". Consider that physical objects are also fields. It's entirely reasonable to ask "where an electron is" even though the electron field is a function of spacetime. Once we're able to ask the right questions, this becomes a less-useful question, as it is only applicable in cases where the field is concentrated. The EM field case just breaks down much sooner.
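
In the standard tensor formulation, which seems to be what is being gestured at here, the field strength at each event x is the antisymmetric tensor

    F_{\mu\nu}(x) = \partial_\mu A_\nu(x) - \partial_\nu A_\mu(x), \qquad F_{\mu\nu} = -F_{\nu\mu},

i.e. a skew-symmetric bilinear form on the tangent space T_x M (equivalently, a 2-form), while "the electromagnetic field" as a whole is the assignment x -> F(x) over all of spacetime.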

comment by dfranke · 2011-04-13T13:15:02.122Z · LW(p) · GW(p)

The claim that the simulated universe is real even though its physics are independent of our own seems to imply a very broad definition of "real" that comes close to Tegmark IV. I've posted a followup to my article to the discussion section: Eight questions for computationalists. Please reply to it so I can better understand your position.

comment by JGWeissman · 2011-04-12T04:11:42.604Z · LW(p) · GW(p)

I think I can justly paraphrase you as follows:

The gravity, and the qualia, are occurring inside the simulation. You only need to worry about having an actual sphere, or an actual brain, if you want to have effects outside the simulation.

Not quite. Whereas with the simulation of the sphere you need an actual sphere or equivalent mass to produce the simulated effect outside the simulation, with a simulated person you need only the simulated output of the person, not the person (or its physical components) itself, to have the same effect outside the simulation as the output of a person from outside the simulation. The improbability of having a philosophy paper, copied from within the simulation, that describes qualia is explained by the qualia within the simulation.

comment by Alex Flint (alexflint) · 2011-04-12T17:42:49.756Z · LW(p) · GW(p)

The reason we think intelligence is substrate-independent is that the properties we're interested in (the ones we define to constitute "intelligence") do not make reference to any substrate. Can a simulation of a brain design an aeroplane? Yes. Can a simulation of a brain prove Pythagoras' theorem? Yes. Can a simulation of a brain plan strategically in the presence of uncertainty? Yes. These are the properties we mean when we say "intelligence". Under a different definition for "intelligence" that stipulates "composed of neurons" or "looks grey and mushy", intelligence is not substrate-independent. It's just a word game.

Replies from: TheOtherDave, dfranke
comment by TheOtherDave · 2011-04-12T17:59:18.944Z · LW(p) · GW(p)

Well, that's not true for everyone here, I suspect.

Eliezer, for example, does seem very concerned with whether the optimization process that gets constructed (or, at least, the process he constructs) has some attribute that is variously labelled by various people as "is sentient," "has consciousness," "has qualia," "is a real person," etc.

Presumably he'd be delighted if someone proved that a simulation of a human created by an AI can't possibly be a real person because it lacks some key component that mere simulations cannot have. He just doesn't think it's true. (Nor do I.)

comment by dfranke · 2011-04-12T18:01:10.617Z · LW(p) · GW(p)

I can't figure out whether you're trying to agree with me or disagree with me. Your comment sounds argumentative, yet you seem to be directly paraphrasing my critique of Searle.

comment by Perplexed · 2011-04-12T13:39:42.411Z · LW(p) · GW(p)

If I replaced the two hemispheres of your brain with two apples, clearly you would become quite ill, even though similarity in number has been preserved. If you believe that "embodying the same computation" is somehow a privileged concept in this regard -- that if I replaced your brain with something else embodying the same computation that you would feel yourself to be unharmed -- what is your justification for believing this?

The two apples in the head of your strawman have the same cardinality as the two hemispheres of your brain, but what is needed to permit replacement of a component without ill-effect is that the replacement have the same functionality. That is, internally in the replaced component, we only require isomorphism to the original (i.e. embodying the same computation). But at the interface we require something stronger than mere isomorphism - we require functional equivalence, which in practice usually means identical interfaces.

So, to replace a brain, we require not only that the same computation be performed, but also that the same functional connections to the nerves of the spinal column exist, that the same chemical interactions with the limbic system exist, that the blood-brain barrier remain intact, yet that oxygen and nutrients from the blood continue to be depleted, etc.

That is my response to your ETA, but how do I respond to your main argument? To be brutally honest, I don't think that your argument deserves response. Though to be fair, the Bostrom argument that you seek to refute was no better, nor was Searle's Chinese room or Descartes' confident Cogito. Philosophical speculation regarding cognition in our present state of ignorance is just about as useful as would be disputation by medieval philosophers confronted with a 21st century TV newscast - wondering whether the disembodied talking heads appearing there experience pain.

Replies from: dfranke
comment by dfranke · 2011-04-12T14:31:30.166Z · LW(p) · GW(p)

Philosophical speculation regarding cognition in our present state of ignorance is just about as useful as would be disputation by medieval philosophers confronted with a 21st century TV newscast - wondering whether the disembodied talking heads appearing there experience pain.

I don't think this is quite fair. The concept that medieval philosophers were missing was analytic philosophy, not cathode rays. If the works of Quine and Popper and Wittgenstein fell through a time warp, it'd be plausible that medieval philosophers could have made legitimate headway on such a question.

Replies from: Perplexed
comment by Perplexed · 2011-04-12T14:52:47.348Z · LW(p) · GW(p)

I sincerely don't understand what you are saying here. The most natural parsing is that a medieval philosopher could come to terms with the concept of a disembodied talking head, if only he read some Quine, Popper, and Wittgenstein first. Yet, somehow, that interpretation seems uncharitable.

If you are instead suggesting that the schoolmen would be able to understand Quine, Popper, and Wittgenstein, should their works magically be transmitted back in time, then I tend to agree. But I don't think of this 'timeless quality' as a point recommending analytic philosophy.

Replies from: dfranke
comment by dfranke · 2011-04-12T14:58:51.139Z · LW(p) · GW(p)

The interpretation that you deem uncharitable is the one I intended.

Replies from: wnoise, Perplexed
comment by wnoise · 2011-04-12T15:52:35.876Z · LW(p) · GW(p)

Community: clarifications like this are vital, and to be encouraged. Please don't downvote them.

Replies from: dfranke
comment by dfranke · 2011-04-12T15:59:36.154Z · LW(p) · GW(p)

The guy who downvoted that one downvoted all the rest of my comments in this thread at the same time. Actually, he downvoted most of them earlier, then picked that one up in a second sweep of those comments that I had posted since he did his first pass. So, your assumption that the downvote had anything to do with the content of that particular comment is probably misguided.

Replies from: thomblake, shokwave, AstroCJ
comment by thomblake · 2011-04-12T19:37:19.110Z · LW(p) · GW(p)

Where do you get such specific information about those who vote on your comments?

Replies from: dfranke
comment by dfranke · 2011-04-12T19:49:00.936Z · LW(p) · GW(p)

I just hit reload at sufficiently fortuitous times that I was able to see all my comments drop by exactly one point within a minute or so of each other, then later see the same thing happen to exactly those comments that it didn't happen to before.

Replies from: wedrifid
comment by wedrifid · 2011-04-13T10:32:53.237Z · LW(p) · GW(p)

I downvoted most of your comments in this thread too, for what it is worth. With very few exceptions I downvote all comments and posts advocating 'qualia'. Because qualia are stupid, have been discussed here excessively and those advocating them tend to be completely immune to reason. Most of the comments downvoted by this heuristic happen to be incidentally worth downvoting based on individual (lack of) merit.

comment by shokwave · 2011-04-12T16:21:28.549Z · LW(p) · GW(p)

However, wnoise's comment scored the grandparent an upvote from me, and possibly from others too!

comment by AstroCJ · 2011-04-13T07:58:09.133Z · LW(p) · GW(p)

hiss

I downvoted a fair number of your comments because they appear to me to be extremely ill-thought out; I did not downvote your clarification above.

Do not gender me male by assumption.

Edit: DVer: I can see no reason to DV that is both "self evident" and "reasonable after proper consideration". Please, feel free to be more constructive.

Replies from: wedrifid, shokwave, Alicorn
comment by wedrifid · 2011-04-13T10:41:04.095Z · LW(p) · GW(p)

I didn't downvote but suggest that the hiss probably didn't help. It gave away the intellectual high ground.

Actually, revise that, I will downvote you. Because "do not gender me male by assumption" is outright petty when you are not even named. The 'gender' was assigned to a perceived pattern in voting.

comment by shokwave · 2011-04-13T11:51:33.978Z · LW(p) · GW(p)

IIRC from surveys and such, males are overrepresented on LessWrong. If dfranke is going to assume gender at all, he's better off assuming male than assuming female. If you'd prefer he didn't assume gender at all, then say so. But I presume the gendering was not a conscious decision, but rather an artifact of comfortably expressing himself; we deal with easily identifiable genders in everyday speech, so we're used to patterns of speech that use genders, and consequently we have to make a special effort to rephrase sentences in a non-gendered fashion.

Basically, you can't be indignant about being assumed male; only about being assumed at all. This means you can't take any personal affront, because now you are criticizing someone else's style of expression, not being personally insulted or attacked.

(I submit that you are being downvoted because you took personal affront to something that you really cannot take personal affront to at all)

Replies from: NancyLebovitz, AstroCJ
comment by NancyLebovitz · 2011-04-13T15:06:27.300Z · LW(p) · GW(p)

Basically, you can't be indignant about being assumed male; only about being assumed at all. This means you can't take any personal affront, because now you are criticizing someone else's style of expression, not being personally insulted or attacked.

Telling people what they can't feel when it's obvious that they're feeling it isn't likely to have the effect you want.

Sidetrack: I thought "guy" wasn't all that strongly gendered any more, but I seem to be wrong about that.

Replies from: shokwave, Alicorn
comment by shokwave · 2011-04-13T15:16:53.309Z · LW(p) · GW(p)

Good point. I had in mind Eliezer's "the way opposes your fear / the way opposes your calm" when I wrote that part, and reading it without that specific mindset it does appear quite off-putting.

comment by Alicorn · 2011-04-13T15:10:05.850Z · LW(p) · GW(p)

I thought "guy" wasn't all that strongly gendered any more, but I seem to be wrong about that.

"Guys" as a plural in the second person isn't gendered ("you guys"). In other grammatical contexts it is quite male.

comment by AstroCJ · 2011-04-13T14:05:12.889Z · LW(p) · GW(p)

[blanked]

I have no desire to continue this upsetting conversation.

Replies from: shokwave
comment by shokwave · 2011-04-13T14:36:19.176Z · LW(p) · GW(p)

When someone makes a completely wrong assumption and a completely wrong deduction about you

But, dfranke didn't do these things. He made a completely correct assumption based on his knowledge of LessWrong's population, or on his prior for a random sample of the population. He didn't know anything about the downvoter except that they downvoted - he had to guess at the rest of their characteristics, and he chose (again, possibly not entirely consciously) to guess at their gender in order to express himself the way he wished to.

If he had known he was speaking of a transgendered downvoter, you would be justified in being angry. As he did not, you should not be angry. Note that in the past, commenters have been corrected on their usage of male gendered pronouns when explicitly referring to other posters who do not appreciate that practice, and these corrections have been upvoted - as I believe Alicorn may have mentioned.

If you wish to criticize the practice of using gendered pronouns in common communication, you may do so; LessWrong is already partial to this argument, but it's not a community norm, so indignation is not the correct response.

Replies from: JGWeissman, AstroCJ
comment by JGWeissman · 2011-04-13T16:35:16.767Z · LW(p) · GW(p)

He made a completely correct assumption

You are declaring "completely correct" an assumption that turned out to be wrong.

Replies from: None, shokwave
comment by [deleted] · 2011-04-13T16:55:00.481Z · LW(p) · GW(p)

"Correct" clearly is not, in shokwave's statement, intended to refer to its truth value. There are other forms of correctness, such as obeying the rules.

Replies from: JGWeissman
comment by JGWeissman · 2011-04-13T17:03:33.430Z · LW(p) · GW(p)

There are other forms of correctness, such as obeying the rules.

There is no rule that excuses you for being wrong because you followed the rules.

"If you fail to achieve a correct answer, it is futile to protest that you acted with propriety."

Replies from: None, shokwave
comment by [deleted] · 2011-04-13T17:47:14.826Z · LW(p) · GW(p)

I'm slightly saddened that you were so rapidly upvoted for lazily mis-applying Eliezer cites as a substitute for doing your own thinking.

I'll take the cites one by one.

The first cite (which you badly paraphrased) concerns people who fail to follow Bayesian rationality and who attempt to excuse this by saying that they followed some social-custom-based rationality. His point in this article is that rationality, true rationality, is not a matter of social rules.

You have mis-applied this. True rationality does not preclude having false factual beliefs on occasion. It is possible, for example, for the evidence to temporarily mislead an ideally rational person.

The second cite is an improvement on the first since you are providing a quote, but it still does not apply in the way you want it to. His point is that rationalists should win. This does not mean that rationalists will necessarily win every time. Rather, rationalists will win more often than non-rationalists. In Newcomb's Problem Omega has a historical record of being wrong 1 time out of 100, so even if all players have so far been making the correct, rational choice (which is the one-box choice), 1 out of 100 of them have still been losing. This does not demonstrate that those who picked the one-box and still lost were not, in fact, being rational on those occasions. That they were being rational is demonstrated by the fact that the ones who adopted their exact same strategy won 99 times out of 100.
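
To make the win-rate arithmetic concrete, here is a minimal sketch; the $1,000 and $1,000,000 payoffs are the standard Newcomb amounts and are assumed here, since they are not stated above:

```python
# Minimal sketch of the Newcomb win-rate point, assuming the standard payoffs
# ($1,000 in the transparent box, $1,000,000 in the opaque box) and a predictor
# that is wrong 1 time out of 100.

# Out of 100 one-boxers, 99 are correctly predicted and find the $1,000,000.
one_box_total = 99 * 1_000_000 + 1 * 0

# Out of 100 two-boxers, all take the $1,000, and the 1 who was mispredicted
# also finds the $1,000,000.
two_box_total = 100 * 1_000 + 1 * 1_000_000

print(one_box_total / 100)  # 990000.0 -- average winnings per one-boxer
print(two_box_total / 100)  # 11000.0 -- average winnings per two-boxer
```

On average the one-boxers come out far ahead, even though 1 in 100 of them still walks away with nothing.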

Replies from: JGWeissman
comment by JGWeissman · 2011-04-13T18:07:50.705Z · LW(p) · GW(p)

In Newcomb's Problem Omega has a historical record of being wrong 1 time out of 100, so even if all players have so far been making the correct, rational choice (which is the one-box choice), 1 out of 100 of them have still been losing. This does not demonstrate that those who picked the one-box and still lost were not, in fact, being rational on those occasions. That they were being rational is demonstrated by the fact that the ones who adopted their exact same strategy won 99 times out of 100.

There is a level of rationality where when you know what happens 99 times out of 100, you can win 99 times out of 100. And then there is a higher level, where you figure out how to predict that 1 in 100 deviation, and win all the time. The protestations that you followed the rules when you were wrong are excuses not to pursue the higher level.

Replies from: shokwave, None
comment by shokwave · 2011-04-13T18:17:52.008Z · LW(p) · GW(p)

(I'll gladly take the downvotes for this.)

what the hell is this crap

Replies from: jimrandomh, NancyLebovitz, None
comment by jimrandomh · 2011-04-13T18:51:07.083Z · LW(p) · GW(p)

what the hell is this crap

Seconded. This whole conversation appears to be the result of someone stepping on a social land mine, by using an incorrect pronoun. We've got people arguing he should have known it was there and detoured around it (presupposing gender is bad); people arguing that he acted correctly because the land mine was on the shortest path to his destination (Spivak pronouns are awkward); people arguing that it ought not to be a social land mine in the first place (the offense taken was disproportionate).

And now, it seems, we've gone meta and somehow produced an analogy to Newcomb's problem. I still don't understand that one.

Replies from: TheOtherDave
comment by TheOtherDave · 2011-04-13T19:33:22.968Z · LW(p) · GW(p)

I'm embarrassed to note that I misread "presupposing gender is bad" as (presupposing (gender is bad)) rather than ((presupposing gender) is bad), and was halfway through a comment pointing out that nobody was presupposing any such thing before I realized I was being an idiot.

I feel oddly compelled to confess to this.

Replies from: thomblake
comment by thomblake · 2011-04-13T20:26:02.355Z · LW(p) · GW(p)

I parsed it the same way, and did not even catch the mistake.

comment by NancyLebovitz · 2011-04-13T20:18:15.284Z · LW(p) · GW(p)

Where does this post fit into your ideas about community norms and indignation?

Replies from: shokwave
comment by shokwave · 2011-04-14T04:59:52.889Z · LW(p) · GW(p)

The post I replied to was so ridiculous that I was forced to be the unreasonable one in order to fully communicate my distaste to JGWeissman.

As it turned out, it appears I misjudged - my comment was not as far out of line as I had thought it would be. If I could do this again, I would not have put the parenthetical disclaimer in.

Replies from: TheOtherDave
comment by TheOtherDave · 2011-04-14T19:50:47.636Z · LW(p) · GW(p)

It would not surprise me if, had you posted it without the disclaimer, it would have been downvoted. Of course, I have no data to back that up.

Replies from: shokwave
comment by shokwave · 2011-04-15T02:04:07.402Z · LW(p) · GW(p)

I expect so too, but I would have gladly taken the downvotes for it, disclaimer or no.

comment by [deleted] · 2011-04-13T18:26:42.635Z · LW(p) · GW(p)

All in the service of bizarrely attempting to refute my point, which is an elementary point about word meanings that any English speaker should be aware of, that the word "correct" has more than one meaning, and that "factually true" is only one, and clearly not the one meant.

Did Eliezer write any post about the abuse of cleverness to promote stupidity? Seeing as citing Eliezer posts seems to be a shortcut to karma fortune.

Replies from: Desrtopa
comment by Desrtopa · 2011-04-13T18:29:34.894Z · LW(p) · GW(p)

Knowing about biases can hurt people is the first thing that comes to mind.

comment by [deleted] · 2011-04-13T18:17:16.340Z · LW(p) · GW(p)

You've retreated to the argument that there is no excuse for failing to be omniscient.

I'm sorry, but this simply does not fly in context. If you apply this in context, then the complaint that was raised against dfranke becomes a complaint that he failed to be omniscient. None of us are omniscient. It is arbitrary to single out and selectively attack dfranke for his failure to be omniscient.

Replies from: JGWeissman
comment by JGWeissman · 2011-04-13T18:23:15.159Z · LW(p) · GW(p)

I am not saying that dfranke should be forever banished to Bayesian Hell for his mistake, but he did make a mistake. Like I said:

It is true that people make mistakes, and we should be able to react by improving ourselves and moving on, but the first step in this process is to stop making excuses and admit the mistake.

Replies from: None
comment by [deleted] · 2011-04-13T18:37:47.591Z · LW(p) · GW(p)

I actually agree with you that dfranke may have made a mistake, but I disagree about the identity of the mistake. The possible mistake would be the inverse of what you have been arguing. You cited Eliezer posts to the effect that obeying social rules is no excuse for being irrational. But the purported problem here surely is that dfranke broke certain social rules - the purported rule to make no assumptions about a poster's gender when referring to them. It is the breaking of social rules, not irrationality per se, that typically causes offense. And offense is what was caused here.

Replies from: JGWeissman
comment by JGWeissman · 2011-04-13T18:53:32.985Z · LW(p) · GW(p)

And offense is what was caused here.

No, mere offense is not the problem. AstroCJ reports:

When someone calls me "he", the strong and immediate association in my mind is that they are about to verbally abuse or assault me. "He" as a default might be lovely and convenient for cisgendered men, but it's not polite to women, and it stumbles across very, very negative and visceral associations to transwomen.

It happens that transwomen who physically look like men get physically assaulted by people who identify them as (defective) men and make a point of emphasizing this identification during the assault. So when someone else identifies her as a man, she anticipates (through the representativeness heuristic) that she is about to be assaulted. This anticipation, though irrational and inaccurate and possibly even contradicted by more accurate explicit beliefs, is highly stressful. This stress is a real consequence of the misidentification, and I think we should be able to recognize this consequence as a bad thing independently of social rules.

Replies from: None
comment by [deleted] · 2011-04-13T19:07:35.224Z · LW(p) · GW(p)

I think we should be able to recognize this consequence as a bad thing independently of social rules.

All right, point taken.

Replies from: JGWeissman
comment by JGWeissman · 2011-04-13T19:24:31.346Z · LW(p) · GW(p)

If I understand correctly, you agree that dfranke made an actual mistake about what decision to make to get good consequences, rather than merely violating a social rule.

Given that context, is there anything I said in the previous discussion that you were previously confused about that you understand now, or any assertions you may have confidently made that you should now reconsider?

Replies from: None
comment by [deleted] · 2011-04-13T19:46:30.177Z · LW(p) · GW(p)

I am only agreeing to the specified point, which is that the stress caused to AstroCJ is a bad thing independently of social rules.

It does not follow that dfranke necessarily made a mistake of rationality, given what dfranke knew at the time, and even given what dfranke was responsible for knowing at the time (to take the criterion of responsibility up a notch).

Would it be a normal psychological reaction for dfranke now to feel guilt and apologize for the stress caused, even if dfranke has genuinely done nothing wrong? Maybe. Recall this post. Quoting:

Fourth, guilt sometimes occurs even when a person has done nothing wrong.

As a matter of fact - and here I'm re-introducing the idea of the social norm - it may be a social norm for dfranke now to apologize even if dfranke has done nothing wrong. Such a social norm could be built on top of the psychological regularity that Yvain pointed out.

Replies from: JGWeissman, wedrifid
comment by JGWeissman · 2011-04-13T20:11:14.120Z · LW(p) · GW(p)

Ok, so you previously said that you agree "that dfranke may have made a mistake", and you now agree that this mistake was not a violation of social rules. You still assert that it was not a "mistake of rationality".

Would you agree that it was a mistake that dfranke, and others who behave the same way, should take note of and avoid repeating in the future? Ultimately, my point is that whatever rules were correctly or incorrectly followed to lead to this bad outcome, the bad outcome should be a red flag that says we should try to understand what happened, and fix the rules or follow the rules better or whatever will work to not repeat the mistake.

The general problem with arguing that bad outcomes were not caused by a mistake is that whatever denotations you use to make it technically correct, it is bringing in connotations that there is nothing to fix, which is flat out false.

Replies from: None
comment by [deleted] · 2011-04-13T20:37:15.199Z · LW(p) · GW(p)

Ok, so you previously said that you agree "that dfranke may have made a mistake", and you now agree that this mistake was not a violation of social rules.

No, I retract entirely the claim that he may have made a social mistake. I do not substitute for it any other claim.

Replies from: JGWeissman
comment by JGWeissman · 2011-04-13T21:05:46.465Z · LW(p) · GW(p)

Let me get this straight:

  1. You thought that dfranke had made a mistake of violating a social rule.

  2. I argued that the mistake was not merely violation of a social rule.

  3. You accepted my argument, thus modifying your belief to: there was no mistake?

Tabooing "mistake", would you agree that a bad outcome occured, and that in the future we should make better decisions so that similar bad outcomes do not occur?

Replies from: None, Jonathan_Graehl
comment by [deleted] · 2011-04-13T21:36:13.718Z · LW(p) · GW(p)

You accepted my argument,

No, I did not accept your argument. I accepted a particular point that you raised. I quoted the point that I accepted.

Replies from: JGWeissman
comment by JGWeissman · 2011-04-13T21:49:46.659Z · LW(p) · GW(p)

No, I did not accept your argument. I accepted a particular point that you raised. I quoted the point that I accepted.

Fine, consider line 3 to be modified to say "You accepted a point in my argument" instead of "You accepted my argument". I still want to know: is the result that you now believe there was no mistake? If so, how did that happen? If not, what was the result?

(Was that distinction really so important that it had to take the place of responding to my questions?)

And the really important part of that comment was:

Tabooing "mistake", would you agree that a bad outcome occured, and that in the future we should make better decisions so that similar bad outcomes do not occur?

Replies from: None
comment by [deleted] · 2011-04-13T22:09:41.312Z · LW(p) · GW(p)

Tabooing "mistake", would you agree that a bad outcome occured, and that in the future we should make better decisions so that similar bad outcomes do not occur?

I think that even ideal decisionmaking will, in the face of uncertainty, occasionally produce bad outcomes. Therefore the occurrence of a single bad outcome is not proof, and may not even be strong evidence, that a bad decision was made.

I think moreover that thinking about modifying the rules in the immediate wake of a specific bad outcome can be a dangerous thing to do, because the recency of the particular event will tend to bias the result toward avoiding that class of event, at the disproportionate expense of those who are inconvenienced or bothered by the imposition of the rule. I'm pretty sure that this class of bias has been named here before, though I don't recall the name.

Replies from: JGWeissman
comment by JGWeissman · 2011-04-13T22:26:44.508Z · LW(p) · GW(p)

I think that even ideal decisionmaking will, in the face of uncertainty, occasionally produce bad outcomes.

The problem here is that what we are using is not even close to ideal. Yes, you should consider the reasons why you made the decision the way you did, and how modifying it to prevent the recent bad outcome may make you vulnerable to other bad outcomes. But that concern doesn't mean that you should avoid even considering how to improve. It may also be that after looking for ways to improve you can't figure anything out with acceptable tradeoffs. But you should still take note that there is something you are dissatisfied with and would like third alternatives for.

the recency of the particular event will tend to bias the result toward avoiding that class of event, at the disproportionate expense of those who are inconvenienced or bothered by the imposition of the rule.

In this case, if it were an available action to make everyone feel more welcome in communities where they are not the dominant gender, at the expense of making everyone accept the inconvenience of learning new pronouns, taking that action would be a no-brainer. The tradeoff is clear even before looking at the visceral physical fear our ignorance can cause in victims of those who are actively far less tolerant than ourselves.

Replies from: Costanza
comment by Costanza · 2011-04-13T22:43:10.619Z · LW(p) · GW(p)

In this case, if it were an available action to make everyone feel more welcome in communities where they are not the dominant gender, at the expense of making everyone accept the inconvenience of learning new pronouns, taking that action would be a no brainer. The tradeoff is clear...

Not without numbers. Would you prefer that one person be made to feel horribly unwelcome in an online community, or that 3^^^3 members of the community go to the trouble of using new pronouns?

Replies from: JGWeissman
comment by JGWeissman · 2011-04-13T22:50:39.624Z · LW(p) · GW(p)

Not without numbers. Would you prefer that one person be made to feel horribly unwelcome in an online community, or that 3^^^3 members of the community go to the trouble of using new pronouns?

Ok, in the real world where we are making the decisions I am talking about, there are not 3^^^3 people at all. Yes, in that world I would say fine: the trivial convenience of those 3^^^3 trumps the inclusiveness for 1 person. But in the real world, there are about 7 billion people, and a substantial fraction of them are subject to the problems of exclusiveness.

comment by Jonathan_Graehl · 2011-04-13T21:37:34.199Z · LW(p) · GW(p)

I don't see how this profits anyone. Constant has been precise enough already.

comment by wedrifid · 2011-04-13T20:01:02.695Z · LW(p) · GW(p)

Dfranke apologising would be faux pas. Or at least it would be a strategically poor social move.

Replies from: Swimmer963
comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2011-04-14T01:55:51.828Z · LW(p) · GW(p)

Dfranke apologising would be faux pas. Or at least it would be a strategically poor social move.

Really? If I unintentionally do something to offend someone, I apologize. If that holds for unintentionally bumping into someone, or spilling coffee on their shoe, then as a logical extension it holds true for things I say, whatever medium I use to say them. The relevant aspect in this case isn't what I say, it's what effect that has. If I said (or wrote) something that seemed reasonable at the time, but offended someone or hurt their feelings, then I'm sorry to have hurt their feelings. I won't necessarily censor myself forever after, or even change the things I say, but I will apologize because it's a social ritual that hopefully makes me feel less guilty and the hurt/offended party feel less offended or hurt.

Replies from: wedrifid
comment by wedrifid · 2011-04-14T10:09:31.855Z · LW(p) · GW(p)

(For the sake of abstract curiosity:)

If that holds for unintentionally bumping into someone, or spilling coffee on their shoe, then as a logical extension it holds true for things I say, whatever medium I use to say them.

I would apologise for spilling coffee on someone but not in this situation. The analogy is not a good one and definitely not one of logical deduction! Some relevant factors:

  • Astro was being obnoxious and disrespectful. (Barring a couple of exceptions that would not apply in this case) apologising to people when they are being obnoxious and disrespectful legitimises people behaving that way to you.
  • This isn't direct personal interaction going on in good faith. It's an absurd public spectacle. It's an entirely different situation and one in which people's judgement changes drastically, losing perspective. An apology here wouldn't just be
  • Give an inch and they'll take a mile. See JGWeissman's behaviour here with Constant for an illustration. An apology would be twisted into a confession of guilt. As though Dfranke actually did something wrong. (Apart from spamming the forum with qualia nonsense - I'd appreciate an apology for that!)
  • Dfranke didn't call Astro a dude - it was a guess that it was even one distinct individual, and picking an arbitrary gender for the hypothesised individual isn't saying anything about Astro at all. In fact the unknown downvoter could just as easily have been me. My voting patterns (everything by Dfranke in this thread voted down whenever I noticed it) match exactly what he described.
  • Dfranke apologising would be a (minor) slight to all those who have defended him from perceived unjust accusations. The clear consensus (by voting pattern) is that Astro was behaving inappropriately and there was a solid base of support for Dfranke at least as far as pronoun use goes. You don't undermine that without good reason.
  • Dfranke basically isn't involved in this discussion. That's a good way to be. Some people have taken it as an excuse to push their Spivak-related political agenda, but he has chosen not to try to desperately justify himself. Staying uninvolved is a wise move, and if he did choose to make a statement it would be significant primarily as a political feature, not an instrument of furthering interpersonal harmony.
  • If Dfranke did feel guilt (or, more realistically given that it would be a response to public criticism, shame) then that is a problem of miscalibrated emotions and not something to submit to. Guilt would not be serving him in this instance and he has the opportunity to release that feeling and move the stimulus response pattern (disapproval -> shame -> supplication) one step closer to extinction.
  • Even if an apology is met with approval in the moment it is not necessarily producing an overall good outcome for you. It may get an apparently encouraging response from a minority but would not lead to being treated with respect in the future either by those people doing the encouraging or by others. You apologise when you have actually done something wrong, not because someone else tries to emotionally bully you.

Replies from: None, Swimmer963
comment by [deleted] · 2011-04-14T17:52:51.997Z · LW(p) · GW(p)

See JGWeissman's behaviour here with Costanza for an illustration.

I may have missed something, but I think the bulk of the interaction was with me, though Costanza added a comment at the end. The username similarity is pure coincidence.

Replies from: wedrifid
comment by wedrifid · 2011-04-15T03:03:55.911Z · LW(p) · GW(p)

I may have missed something, but I think the bulk of the interaction was with me, though Costanza added a comment at the end. The username similarity is pure coincidence.

That's the one! Fixed.

comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2011-04-14T11:29:56.796Z · LW(p) · GW(p)

Astro was being obnoxious and disrespectful. This isn't direct personal interaction going on in good faith. It's an absurd public spectacle. It's an entirely different situation and one in which people's judgement changes drastically, losing perspective.

I guess maybe I did not read the entire comment string, since I didn't notice any 'obnoxious' comments from Astro, or much of an 'absurd public spectacle'. You may be right about that.

Dfranke basically isn't involved in this discussion. That's a good way to be. Some people have taken it as an excuse to push their spivak related political agenda but he has (wisely) chosen not to try to desperately justify himself.

Agreed!

Guilt would not be serving him in this instance and he has the opportunity to release that feeling and move the stimulus response pattern (disapproval -> shame -> supplication) one step closer to extinction.

I would still apologize. That is the person I've chosen to be (and by extension, the person I've chosen to represent myself as). It may not produce an overall 'good' outcome, but I'm not sure what you define as 'good'. I've never been treated with disrespect by people I've apologized to.

comment by shokwave · 2011-04-13T17:10:23.879Z · LW(p) · GW(p)

It was the correct assumption to make, in precisely the same way that 2 for 1 odds on a coin flip is the correct bet to take.

comment by shokwave · 2011-04-13T17:11:33.499Z · LW(p) · GW(p)

It was the correct assumption to make, in precisely the same way that 2 for 1 odds on a coin flip is the correct bet to take. That is why I included the context of "given his knowledge or priors" directly after the part you quoted.
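
For concreteness, a minimal sketch of the betting analogy, reading "2 for 1" as winning $2 for each $1 staked (that reading is an assumption):

```python
# Expected value of a $1 bet on a fair coin at 2-for-1 odds (assumed reading:
# win $2 for each $1 staked, lose the $1 stake otherwise).
p_win = 0.5
expected_value = p_win * 2.0 - (1 - p_win) * 1.0
print(expected_value)  # 0.5 -- positive before the coin ever lands
```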

Replies from: JGWeissman, Vladimir_Nesov
comment by JGWeissman · 2011-04-13T17:23:51.841Z · LW(p) · GW(p)

So what?

In any event, dfranke failed to multiply this small prior probability by the huge negative utility of bringing up associations in a transwoman of being cruelly treated as a defective male instead of the female she sees herself as. The art did not fail him in assigning low probability to the truth; he failed the art in not considering the potential consequences of low-probability possibilities.

It is true that people make mistakes, and we should be able to react by improving ourselves and moving on, but the first step in this process is to stop making excuses and admit the mistake.

Replies from: shokwave
comment by shokwave · 2011-04-13T17:54:42.139Z · LW(p) · GW(p)

Ever played poker? You can tell if a player's going to improve a lot or only a little by looking at whether they reward themselves for making the right play, win or lose, or for winning the hand, right play or no. Analogously, dfranke made the right play and got unlucky.

I can invoke selection effects, the dust specks vs torture post, and especially a failure to multiply to explain why the disutility of accidentally insulting a transgendered person appears to outweigh the disutility of adopting a different communication style but does not - but you should be doing that for yourself.

comment by Vladimir_Nesov · 2011-04-13T17:26:27.740Z · LW(p) · GW(p)

It was a reasonable assumption, but not a "completely correct" one. Certainty, for example, wouldn't be justified (but it wasn't expressed either; this sub-discussion rather refers to shokwave's "completely correct" characterization).

Replies from: shokwave
comment by shokwave · 2011-04-13T17:48:01.063Z · LW(p) · GW(p)

He completely correctly made the assumption that....

Would this phrasing illustrate the nuance I was aiming for better?

Replies from: Desrtopa, Vladimir_Nesov, Vladimir_Nesov
comment by Desrtopa · 2011-04-13T18:18:15.823Z · LW(p) · GW(p)

"Made the reasonable assumption" strikes me as most appropriate.

Replies from: shokwave
comment by shokwave · 2011-04-13T18:36:45.811Z · LW(p) · GW(p)

Interesting. I got completely stuck on shuffling around the existing words instead of looking for a substitute.

"Reasonable" may suffer from the same problem (immediately I can imagine "reasonable people don't go around Xing all the Ys") as correct, but to a lesser extent. At the very least, thanks for opening up my thought process on the matter.

comment by Vladimir_Nesov · 2011-04-13T18:08:21.153Z · LW(p) · GW(p)

Making assumptions usually trades off correctness for simplicity (which is often a good idea), raising the merely likely to the status of certain. By its nature, the making of assumptions won't be characterized by "complete correctness".

Replies from: shokwave
comment by shokwave · 2011-04-13T18:14:53.373Z · LW(p) · GW(p)

What I am aiming for is to be able to examine the process a person used in producing their assumption, compare it to a prototypical process that always produces the best possible assumption from all given knowledge, background knowledge, and prior distributions, and then be able to say "this person made the best possible assumption they could have possibly made under the circumstances".

Something similar to how you can look at a person making a bet and say whether they have made that bet correctly or not - before they win or lose.

It might be that 'correct' is simply contraindicated with 'assumption' and I have to find another way to express this.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2011-04-13T19:33:37.984Z · LW(p) · GW(p)

A 50% chance of each outcome of a coin toss would not count as an assumption about the outcome in my sense.

comment by Vladimir_Nesov · 2011-04-13T18:05:22.975Z · LW(p) · GW(p)

Works for me.

comment by AstroCJ · 2011-04-13T14:52:18.091Z · LW(p) · GW(p)

(I appreciate that you are taking the time to engage with me politely, especially after I have previously been (rightly or wrongly) impolite due to anger.)

dfranke didn't make a "correct" assumption, they[1] made an "unnecessary" assumption. I find it really quite surprising and disheartening that the Less Wrong community doesn't have an interest in making a habit of avoiding these - yes, even to the point of thinking for a tenth of a second longer when using vernacular speech. Good habits, people.

There are numerous other problems here; if the community assumes that everyone in the community is male, then the community is more likely to lose female (or third-gender) members - witness both Alicorn's and my strong irritation at being misgendered. You might choose to ignore third-gender folk, since they're not numerous, but ignoring the [potential] presence of the entire female gender is not healthy for the individual or for the community.

If I were strictly third gender and I had complained about someone referring to me as "he/she" or similar, then I think your point here would stand; the commenter would have signalled clearly that they had made no assumptions about my gender, even if they had also signalled at the same time that they had made assumptions about gender in general. I would then be being unreasonable.

Finally, "indignation is not the correct response" because "it's not a community norm". Since a good number of people are avoiding gendered assumptions whilst posting here, I think indignation might well be the only way to point out to some people just how rude they are being.

[1] Edited after Perplexed pointed out that dfranke had not explicitly identified as male.

Replies from: Perplexed, shokwave, Vladimir_Nesov, Alicorn
comment by Perplexed · 2011-04-13T17:48:19.299Z · LW(p) · GW(p)

dfranke didn't make a "correct" assumption, he made an "unnecessary" assumption.

Excuse me, I know you are not the first person to use the pronoun 'he' regarding dfranke, but are you certain it is appropriate? (Incidentally, I did notice that you avoided making that assumption in your initial complaint about being labeled a 'guy'. Has dfranke self-identified as male somewhere since then?)

Replies from: shokwave, AstroCJ
comment by shokwave · 2011-04-13T17:57:39.229Z · LW(p) · GW(p)

I'm doing it egregiously and on purpose (if you doubt this read the first paragraph of this comment :D) to satisfy my sense of irony, to (perhaps unethically) see if I could trick other commenters into using the pronoun too, and because there is no possible way in which dfranke could hold me accountable for misidentifying his or her gender, given the debate that has sprung up.

comment by AstroCJ · 2011-04-13T18:56:03.540Z · LW(p) · GW(p)

You're quite right; by paraphrasing shokwave in my rebuttal, I picked up a male pronoun. I've now edited the relevant comment to remove this. Thank you, on two levels.

EDIT: I didn't actually consciously avoid it in my first post.

comment by shokwave · 2011-04-13T17:01:17.836Z · LW(p) · GW(p)

dfranke didn't make a "correct" assumption, he made an "unnecessary" assumption.

I should have included "if he wished to gender his pronouns". I meant to communicate that the assumption he made was the correct one given his information and priors at the time; I grant that it spilled over into saying that gendering his speech was a correct choice and I did not intend that.

I find it really quite surprising and disheartening that the Less Wrong community doesn't have an interest in making a habit of avoiding these

Actually we do - as I said in the previous comment, we are partial to this practice, but it is not (yet) a community norm the way that, say, having read the Sequences, or arguing in good faith and allowing for the possibility of changing your mind, is. I fully expect it will soon become a norm.

A note on indignation: although it's a greasy social psychology point, indignation isn't the correct response unless it is a community norm. Reacting indignantly to something which is normally reacted to neutrally or ignored marks you as the unreasonable one, instead of the person that casually insulted you. Of course, this is only where "correct response" means "response that achieves the goal you want". (There's another interpretation of "correct response" that would say that indignation is a correct response, and that its failure to achieve the goal you want is a fact about the environment, not about the response.)

if the community assumes that everyone in the community is male, then the community is more likely to lose female or third-gender members

Given the concern that LessWrong already suffers from style and interest deficiencies in such respects, this is a crucial matter. I don't know how to address it other than to increase my efforts to avoid gendered speech and more often point it out to others.

Replies from: TheOtherDave
comment by TheOtherDave · 2011-04-13T17:56:24.540Z · LW(p) · GW(p)

A note on indignation: although it's a greasy social psychology point, indignation isn't the correct response unless it is a community norm.

Leaving aside semantics around "correct," I agree that getting indignant over X when most people around me think X is unobjectionable often has results I don't want.

That said, sometimes things become community norms as a consequence of the expressed indignation of individuals and the community's willingness to align with those individuals.

Predicting when that second result is likely is easy to get wrong. Sometimes it's worthwhile just to try and see.

Replies from: shokwave
comment by shokwave · 2011-04-13T18:04:03.030Z · LW(p) · GW(p)

Yes. I feel that is an extension on my parenthetical about the other interpretation of correct response - that it could lead to changing the environment.

Predicting when that second result is likely is easy to get wrong.

I'd like to put it down in writing somewhere that I predict a community norm of using nongendered speech, at least on the level of the norm of "read the Sequences", to be fully formed and applied by six months from now.

Replies from: None, TheOtherDave
comment by [deleted] · 2011-04-13T18:58:37.792Z · LW(p) · GW(p)

By nongendered do you mean ve, ver, vis? Conditional prediction: If there is a move away from "he, she, etc." I predict "they/them/their" will dominate.

Replies from: shokwave
comment by shokwave · 2011-04-14T05:01:01.861Z · LW(p) · GW(p)

By nongendered speech, I mean speech that does not indicate male or female gender. So they/them/their, ve/ver/vis, ey, or any other gender-neutral pronouns. It also includes my preferred way of avoiding gendered speech - you, the poster, and using the poster's name. Yes, it's fairly broad :P

comment by TheOtherDave · 2011-04-13T18:17:34.610Z · LW(p) · GW(p)

Huh. Confidence interval?

I totally endorse that, and I'm pretty good about it myself (at least, I think I am), but I'd be very surprised if it ever became a reliable LW community norm.

Replies from: shokwave
comment by shokwave · 2011-04-13T18:26:32.087Z · LW(p) · GW(p)

It's pretty uncalibrated but let's say 90% confidence interval of 2 months to 2 years.

Replies from: TheOtherDave
comment by TheOtherDave · 2011-04-13T19:29:54.790Z · LW(p) · GW(p)

Upvoted for being willing to put numbers to it. I'll never remember to come back and check, though.

comment by Vladimir_Nesov · 2011-04-13T17:30:03.651Z · LW(p) · GW(p)

dfranke didn't make a "correct" assumption, he made an "unnecessary" assumption.

It's not completely unnecessary; it's grammatically more convenient to use a specific gender. It's a question of priorities in deciding what to say, not of factual knowledge. You would be incorrect to argue that no a priori knowledge about your gender exists, or that it doesn't say "probably male".

Replies from: JGWeissman
comment by JGWeissman · 2011-04-13T17:40:04.780Z · LW(p) · GW(p)

it's grammatically more convenient to use a specific gender

If someone wants to avoid specifying gender, ey has options.

Replies from: JoshuaZ, Alicorn
comment by JoshuaZ · 2011-04-13T18:25:39.717Z · LW(p) · GW(p)

If someone wants to avoid specifying gender, ey have options.

Spivak pronouns look weird and are hard to read for most people who aren't used to using them. Just use the singular they. Much simpler and has been used colloquially for centuries.

Edit: This seems to be just way too much drama. Can we all just agree that English is a sucky language and that no matter what we do we're going to be using some kludge and just get along?

Replies from: JGWeissman
comment by JGWeissman · 2011-04-13T18:32:25.623Z · LW(p) · GW(p)

I got used to Spivak pronouns in less than a day. People generally are capable of learning new vocabulary, if we don't indulge their excuse that they aren't used to it.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2011-04-13T20:05:40.039Z · LW(p) · GW(p)

You can learn to use new vocabulary, but this is not the same thing as adjusting aesthetic perception of its use.

comment by Alicorn · 2011-04-13T21:18:10.369Z · LW(p) · GW(p)

It's "ey has".

Replies from: JGWeissman
comment by JGWeissman · 2011-04-13T21:22:46.762Z · LW(p) · GW(p)

Conjugation fixed.

comment by Alicorn · 2011-04-13T21:14:34.379Z · LW(p) · GW(p)

I can personally attest that dfranke is male.

comment by Alicorn · 2011-04-13T14:22:55.873Z · LW(p) · GW(p)

*hugs* I'm sorry that I haven't more effectively paved the way for you. This is a longstanding problem. Speaking from my (obviously inadequately-preparatory) experience, there are more effective ways to express this complaint.

Replies from: AstroCJ
comment by AstroCJ · 2011-04-13T14:55:39.216Z · LW(p) · GW(p)

From reading the thread you linked, it seems like things have improved an awful lot; no-one has weighed in with suggestions that I nail my gender to my name to warn innocent posters that they might be about to interact with a woman. Thank you for the hug; I do need to learn to control my responses to that stimulus.

(Edit: Pft, today is a day of typos.)

comment by Perplexed · 2011-04-12T18:36:41.490Z · LW(p) · GW(p)

OK, then. It seems we have another example of the great philosophical principle YMMV. My own experience with analytic philosophy is that it is not particularly effective in shutting down pointless speculation. I would have guessed that the schoolmen would have been more enlightened and satisfied by an analogy than by anything they might find in Quine.

"The talking head," I would explain, "is like an image seen in a reflecting pool. The image feels no pain, nor is it capable of independent action. The masters, from which the image is made are a whole man and woman, not disembodied heads. And the magic which transfers their image to the box does no more harm to the originals than would ripples in a reflecting pool."

Replies from: dfranke
comment by dfranke · 2011-04-12T18:42:18.825Z · LW(p) · GW(p)

My own experience with analytic philosophy is that it is not particularly effective in shutting down pointless speculation.

Oh, certainly not. Not in the least. Think of it this way. Pre-analytic philosophy is like a monkey throwing darts at a dartboard. Analytic philosophy is like a human throwing them. There's no guarantee that he'll hit the board, much less the bullseye, but at least he understands where he's supposed to aim.

comment by DanielLC · 2011-04-12T05:43:15.246Z · LW(p) · GW(p)

A human brain is a computer. The brain of a living human differs from a dead one in that it's running a program. If the universe is as it seems, running a program on a computer causes qualia.

If the simulation hypothesis is true, human brains are still programs; they're just running on different computers.

Unless you have some reason qualia would be more likely to occur when the program is run on one of those computers than the other, you have no evidence about the simulation hypothesis.

comment by Cyan · 2011-04-12T03:58:13.597Z · LW(p) · GW(p)

I'm confused about exactly what qualia are, but I feel reasonably sure that they are related to information processing in somewhat the same way that high gravity and things moving through space are related. Substrate independence of qualia immediately follows from this point of view without any need to assert that qualia are not physical phenomena.

comment by TimFreeman · 2011-04-12T23:14:36.313Z · LW(p) · GW(p)

The original post takes the trouble to define "simulation" but not "qualia". The argument would make much more sense to me if it offered a definition of "qualia" precise enough to determine whether a simulated being does or does not have qualia, since that's the crux of the argument. I'm not aware of a commonly accepted definition that is that precise.

As it stands, I had to make sure as I was reading it to keep in mind that I didn't know what the author meant by "qualia", and after discarding all the statements using that undefined term the remainder didn't make much sense.

Replies from: None
comment by [deleted] · 2011-04-12T23:34:45.337Z · LW(p) · GW(p)

By its very concept, only the person himself can actually observe his own qualia. Qualia are defined that way - at least, in any serious treatment that I've seen (aside, of course, from the skeptical and deprecatory ones). This is one of the key elements that make them a problematic concept.

Consciousness - as conceived by many philosophers - is also defined that way. Hence the "other minds problem" - which is the problem that only the person himself can "directly" observe his own consciousness, and other people can at best infer from his behavior, from his similarity to them, etc., that he has consciousness.

So both the concepts of consciousness and of qualia are defined in a way that makes them - by definition - problematic. As far as I know you're not going to get any qualia believer to define qualia in a way that allows you, a third-party observer, to look at something with a microscope, or telescope, or MRI, or with any other instrument real or physically possible, and personally witness that it has qualia, because qualia are by their nature, or rather by their definition, "directly" perceptible only to the person who has them.

In contrast, the neurons in my brain are no more "directly" perceptible to me than to you. I use the machinery of my brain, but I don't perceive it. You have as good access to it, in principle, as I do. If you're my neurosurgeon, then you are a better witness of the material of my brain than I am. This does not hold for qualia.

This is one of the properties of qualia - and indeed of the concept of consciousness as understood by many philosophers - that I find sufficiently faulty as to warrant rejection.

comment by knb · 2011-04-13T09:27:12.871Z · LW(p) · GW(p)

If my qualia were actually those of a computer chip, then rather than feeling hot I would feel purple (or rather, some quale that no human language can describe), and if you asked me why I went back indoors even though I don't have any particular objection to purple and the weather is not nearly severe enough to pose any serious threat to my health, I wouldn't be able to answer you or in any way connect my qualia to my actions.

If a simulation allowed life to evolve within it, and was not just an attempt to replicate something which already exists, would you expect natural selection within the simulation to produce beings with qualia that "match" the functional purpose?

If so, it seems like that would leave much of the simulation argument intact.

Replies from: Kevin, Kevin
comment by Kevin · 2011-04-13T09:31:35.045Z · LW(p) · GW(p)

Yes, dfranke's argument seems to map to "we are not living in a simulation because we are not zombies and people living in a simulation are zombies".

Replies from: dfranke
comment by dfranke · 2011-04-13T15:29:07.943Z · LW(p) · GW(p)

s/are not zombies/have qualia/ and you'll get a little more accurate. A zombie, supposing such a thing is possible (which I doubt for all the reasons given in http://lesswrong.com/lw/p7/zombies_zombies ), is still a real, physical object. The objects of a simulation don't even rise to zombie status.

Replies from: Jonathan_Graehl, Kevin, jsalvatier
comment by Jonathan_Graehl · 2011-04-14T20:18:16.389Z · LW(p) · GW(p)

A zombie ... is still a real, physical object. The objects of a simulation don't even rise to zombie status.

It's really unclear what you mean by 'zombie', 'real, physical object', and 'objects of a simulation'. But you're right that Kevin meant by 'zombie' exactly 'us without qualia'. I thought this was obvious in context.

comment by Kevin · 2011-04-14T08:57:06.277Z · LW(p) · GW(p)

What is a physical object?

comment by jsalvatier · 2011-04-14T18:17:11.375Z · LW(p) · GW(p)

If you are not arguing for zombies, I am really confused about what you're trying to argue for.

comment by Kevin · 2011-04-13T09:30:38.341Z · LW(p) · GW(p)

...unless you believe in zombies!

comment by shokwave · 2011-04-12T06:09:08.995Z · LW(p) · GW(p)

I will do a more in-depth reading when I have time, but from a quick skim:

If you're basing your argument against a simulation universe on qualia, what do you say to those of us who reject qualia?

Replies from: dfranke
comment by dfranke · 2011-04-12T10:21:16.337Z · LW(p) · GW(p)

I can think of three, maybe more, ways to unpack the phrase "reject qualia":

  1. "Qualia are not a useful philosophical concept. The things you're trying to refer to when you say 'qualia' are better understood in different terms that will provide greater clarity".

  2. "Qualia don't exist. The things you're trying to refer to when you say 'qualia' are figments of your imagination."

  3. "The very notion of qualia is inconceivable. It's like talking about dry water."

Please clarify what you mean.

Replies from: shokwave, None
comment by shokwave · 2011-04-12T15:47:33.166Z · LW(p) · GW(p)

I mean #2 precisely.

That is, qualia - the universalised experience of 'redness', of fundamental experience, or what-have-you - is a category which we dump neural firing patterns into. At the level of patterns in the brain physiology, there are only patterns, and some patterns are isomorphic to each other - that is, a slightly different pattern in a slightly different architecture nevertheless builds up to the same higher-level result.

It is a figment of your imagination because that's an easy shortcut that our brains take. In attempting to communicate ideas - to cause isomorphic patterns to arise in the other's brain - our brains may tend to create a common cause, an abstract concept that both patterns are derived from. There isn't any such platonic concept! There's just the neural firing in my head (completely simulable on a computer, no human brain needed) and the neural firing in your head (also completely simulable, no brain needed). There's nothing that, in essence, requires a human brain involved in doing the simulating, at any point.

Hmm. Qualia's come up a few times on LessWrong, and it seems like a nonzero portion of the comments accept it. I'll have to go through the literature on qualia to build a more thorough case against it. Look forward to a "No Qualia" post sometime soon (edit: including baseless speculation on why talking about it is so confusing!) - unless, in going through the literature, I change my mind about whether qualia exist.

comment by [deleted] · 2011-04-12T10:41:06.318Z · LW(p) · GW(p)

2 and 3 seem a little extreme. 1 seems about right. I am particularly sympathetic to Gary Drescher's account.

Replies from: dfranke
comment by dfranke · 2011-04-12T11:22:27.214Z · LW(p) · GW(p)

I find Drescher's account of computationalism even more nonsensical than most others. Here's why. Gensyms are a feature that is mostly exclusive to Lisp. At the machine level, they're implemented as pointers, and there you can do other things with them besides test them for equality: you can dereference them, do pointer arithmetic, etc. Surely, if you're going to compare qualia to some statement about computation, it needs to be a statement that can be expressed independently of any particular model of it. All that actually leaves you, you'll find, is functions, not algorithms. You can write "sort" on anything Turing-equivalent, but there's no guarantee that you can write "quicksort".
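
To make the leaky-abstraction point concrete, here is a minimal Python sketch; Python has no gensyms, so the anonymous object() tokens below are only stand-ins, and the example is an illustration rather than Drescher's or dfranke's own:

```python
# Gensym-like tokens: at the language level they support only identity tests.
a = object()
b = object()

print(a is a)  # True  -- the only comparison the abstraction promises
print(a is b)  # False -- distinct tokens never compare identical

# But the abstraction leaks: each token has an observable machine-level
# representation (in CPython, id() is the object's memory address), and you
# can do arithmetic on it -- the analogue of poking at the pointer behind a gensym.
print(id(a))
print(id(a) - id(b))
```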

Replies from: None
comment by [deleted] · 2011-04-12T12:26:17.876Z · LW(p) · GW(p)

I'm trying to understand your objection, but it seems like a quibble to me. You seem to be saying that the analogy between qualia and gensyms isn't perfect because gensyms are leaky abstractions. But I don't think it has to be to convey the essential idea. Analogies rarely are perfect.

Here's my understanding of the point. Let's say that I'm looking at something, and I say, "that's a car". You ask me, "how do you know it's a car?" And I say, "it's in a parking lot, it looks like a car..." You say, "and what does a car look like?" And maybe I try to describe the car in some detail. Let's say I mention that the car has windows, and you ask, "what does a window look like". I mention glass, and you ask, "what does glass look like". We keep drilling down. Every time I describe something, you ask me about one of the components of the description.

This can't go on forever. It has to stop. It stops somewhere. It stops where I say, "I see X", and you ask, "describe X", and I say, "X looks like X" - I'm no longer able to give a description of the thing in terms of component parts or aspects. I've reached the limit.

There has to be a limit, because the mind is not infinite. There have to be things which I can perceive, which I can recognize, but which I am unable to describe - except to say that they look like themselves, that I recognize them. This is unavoidable. Create for me any AI that has the ability to perceive, and we can drill down the same way with that AI, finally reaching something about which the AI says, "I see X", and when we ask the AI what X looks like, the AI is helpless to say anything but, "it looks like X".

Any finite creature (carbon or silicon) that can perceive, has some limit, where it can perceive a thing, but can't describe it except to say that it looks like itself. The creature just knows, it clearly sees that thing, but for the life of it, it can't give a description of it. But since the creature can clearly see it, the creature can say that it has a "raw feel".
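
A minimal sketch of the regress being described, with a made-up toy vocabulary standing in for a real perceiver's concepts (the parts list is purely illustrative):

```python
# Every concept is either decomposable into parts or a primitive that can only
# be named, not described further. The vocabulary below is invented for the sketch.
PARTS = {
    "car": ["window", "wheel"],
    "window": ["glass", "frame"],
    "wheel": ["tire", "rim"],
    # "glass", "frame", "tire", "rim" have no entries: they are the primitives.
}

def describe(concept, depth=0):
    indent = "  " * depth
    if concept in PARTS:
        print(f"{indent}{concept} is made of: {', '.join(PARTS[concept])}")
        for part in PARTS[concept]:
            describe(part, depth + 1)
    else:
        # The regress stops here: the only "description" left is the name itself.
        print(f"{indent}{concept} looks like... {concept}")

describe("car")
```

However deep the vocabulary goes, the recursion bottoms out somewhere, which is the sense in which a finite describer ends up with things it can point to but not unpack.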

These things are ineffable - indescribable. And it's ineffability that is one of the key properties of qualia. The four properties given by Dennett (from Wpedia) are:

  1. ineffable; that is, they cannot be communicated, or apprehended by any other means than direct experience.
  2. intrinsic; that is, they are non-relational properties, which do not change depending on the experience's relation to other things.
  3. private; that is, all interpersonal comparisons of qualia are systematically impossible.
  4. directly or immediately apprehensible in consciousness; that is, to experience a quale is to know one experiences a quale, and to know all there is to know about that quale.

As for the other three. Well, they would take a book.

Replies from: dfranke, dfranke
comment by dfranke · 2011-04-12T12:39:29.163Z · LW(p) · GW(p)

But qualia are not any of those things! They are not epiphenomenal! They can be compared. I can classify them into categories like "pleasant", "unpleasant" and "indifferent". I can tell you that certain meat tastes like chicken, and you can understand what I mean by "taste", and understand the gist of "like chicken" even if the taste is not perfectly indistinguishable from that of chicken. I suppose that I would be unable to describe what it's like to have qualia to something that has no qualia whatsoever, but even that I think is just a failure of creativity rather than a theoretical impossibility -- [ETA: indeed, before I could create a conscious AI, I'd in some sense have to figure out how to provide exactly such a description to a computer.]

Replies from: TheOtherDave
comment by TheOtherDave · 2011-04-12T14:14:57.462Z · LW(p) · GW(p)

I apologize if this is recapitulating earlier comments -- I haven't read this entire discussion -- and feel free to point me to a different thread if you've covered this elsewhere, but: on your view, could a simulation of me in a computer classify the things that it has (which, on your view, cannot be actual qualia) into categories like "pleasant" and "unpleasant" and "indifferent"? Could it tell me that certain (simulations of) meat tastes like chicken, and if it did, could I understand what it meant by "taste" and understand the gist of "like chicken"?

If not, then on your view, what would actually happen instead, if it tried? (Or, if trying is another thing that can't be a computation, then: if it simulated me trying?)

If so, then on your view, how can any of those operations qualify as comparing qualia?

Replies from: dfranke
comment by dfranke · 2011-04-12T14:46:54.720Z · LW(p) · GW(p)

I apologize if this is recapitulating earlier comments -- I haven't read this entire discussion -- and feel free to point me to a different thread if you've covered this elsewhere, but: on your view, could a simulation of me in a computer classify the things that it has (which, on your view, cannot be actual qualia) into categories like "pleasant" and "unpleasant" and "indifferent"? Could it tell me that certain (simulations of) meat tastes like chicken, and if it did, could I understand what it meant by "taste" and understand the gist of "like chicken"?

I'm not certain what you mean by "could a simulation of me do X". I'll read it as "could a simulator of me do X". And my answer is yes, a computer program could make those judgements without actually experiencing any of those qualia, just like it could make judgements about what trajectory the computer hardware would follow if it were in orbit around Jupiter, without it having to actually be there.

Replies from: pjeby, TheOtherDave
comment by pjeby · 2011-04-12T17:13:35.981Z · LW(p) · GW(p)

a computer program could make those judgements (sic) without actually experiencing any of those qualia

Just as an FYI, this is the place where your intuition is blindsiding you. Intuitively, you "know" that a computer isn't experiencing anything... and that's what your entire argument rests on.

However, this "knowing" is just an assumption, and it's assuming the very thing that is the question: does it make sense to speak of a computer experiencing something?

And there is no reason apart from that intuition/assumption, to treat this as a different question from, "does it make sense to speak of a brain experiencing something?".

IOW, substitute "brain" for every use of "computer" or "simulation", and make the same assertions. "The brain is just calculating what feelings and qualia it should have, not really experiencing them. After all, it is just a physical system of chemicals and electrical impulses. Clearly, it is foolish to think that it could thereby experience anything."

By making brains special, you're privileging the qualia hypothesis based on an intuitive assumption.

Replies from: dfranke
comment by dfranke · 2011-04-12T17:22:34.750Z · LW(p) · GW(p)

I don't think you read my post very carefully. I didn't claim that qualia are a phenomenon unique to human brains. I claimed that human-like qualia are a phenomenon unique to human brains. Computers might very well experience qualia; so might a lump of coal. But if you think a computer simulation of a human experiences the same qualia as a human, while a lump of coal experiences no qualia or different ones, you need to make that case to me.

Replies from: pjeby
comment by pjeby · 2011-04-12T21:14:28.657Z · LW(p) · GW(p)

But if you think a computer simulation of a human experiences the same qualia as a human, while a lump of coal experiences no qualia or different ones, you need to make that case to me.

Actually, I'd say you need to make a case for WTF "qualia" means in the first place. As far as I've ever seen, it seems to be one of those words that people use as a handwavy thing to prove the specialness of humans. When we know what "human qualia" reduce to, specifically, then we'll be able to simulate them.

That's a pretty good operational definition of "reduce", actually. ;-) (Not to mention "know".)

comment by TheOtherDave · 2011-04-12T15:24:31.274Z · LW(p) · GW(p)

Sure, ^simulator^simulation preserves everything relevant from my pov.

And thanks for the answer.

Given that, I really don't get how the fact that you can do all of the things you list here (classify stuff, talk about stuff, etc.) should count as evidence that you have non-epiphenomenal qualia, which seems to be what you are claiming there.

After all, if you (presumed qualiaful) can perform those tasks, and a (presumed qualialess) simulator of you also can perform those tasks, then the (presumed) qualia can't play any necessary role in performing those tasks.

It follows that those tasks can happen with or without qualia, and are therefore not evidence of qualia and not reliable qualia-comparing operations.

The situation would be different if you had listed activities, like attracting mass or orbiting around Jupiter, that my simulator does not do. For example, if you say that your qualia are not epiphenomenal because you can do things like actually taste chicken, which your simulator can't do, that's a different matter, and my concern would not apply.

(Just to be clear: it's not obvious to me that your simulator can't taste chicken, but I don't think that discussion is profitable, for reasons I discuss here.)

comment by dfranke · 2011-04-12T12:58:16.175Z · LW(p) · GW(p)

I'm trying to understand your objection, but it seems like a quibble to me. You seem to be saying that the analogy between qualia and gensyms isn't perfect because gensyms are leaky abstractions. But I don't think it has to be perfect to convey the essential idea. Analogies rarely are.

You haven't responded to the broader part of my point. If you want to claim that qualia are computations, then you either need to specify a particular computer architecture, or you need to describe them in a way that's independent of any such choice. In the first case, the architecture you want is probably "the universe", in which case you're defining an algorithm by specifying its physical implementation and you've affirmed my thesis. In the latter case, all you get to talk about is inputs and outputs, not algorithms.

Replies from: None
comment by [deleted] · 2011-04-12T13:48:51.711Z · LW(p) · GW(p)

If you want to claim that qualia are computations

You seem to be mixing up two separate arguments. In one argument I am for the sake of argument assuming the unproblematic existence of qualia and arguing, under this assumption, that qualia are possible in a simulation and therefore that we could (in principle) be living in a simulation. In the other argument (the current one) I simply answered your question about what sort of qualia skeptic I am.

So, in this argument, the current one, I am continuing the discussion where, in answer to your question, I have admitted to being a qualia skeptic more or less along the lines of Drescher and Dennett. This discussion is about my skepticism about the idea of qualia. This discussion is not about whether I think qualia are computations. It is about my skepticism.

Similarly, if I were admitting to skepticism about Santa Claus, it would not be an appropriate place to argue with me about whether Santa is a human or an elf.

Maybe you are basing your current focus on computations on Drescher's analogy with Lisp's gensyms. That's something for you to take up with Drescher. By now I've explained - at some length - what it is that resonated with me in Drescher's account and why. It doesn't depend on qualia being computations. It depends on there being a limit to perception.

Replies from: dfranke, dfranke
comment by dfranke · 2011-04-12T15:29:49.979Z · LW(p) · GW(p)

On further reflection, I'm not certain that your position and mine are incompatible. I'm a personal identity skeptic in roughly the same sense that you're a qualia skeptic. Yet, if somebody points out that a door is open when it was previously closed, and reasons "someone must have opened it", I don't consider that reasoning invalid. I just think they need to modify the word "someone" if they want to be absolutely pedantically correct about what occurred. Similarly, your skepticism about qualia doesn't really contradict my claim that the objects of a computer simulation would have no (or improper) qualia; at worst it means that I ought to slightly modify my description of what it is that those objects wouldn't have.

comment by dfranke · 2011-04-12T13:59:42.338Z · LW(p) · GW(p)

Ok, I've really misunderstood you then. I didn't realize that you were taking a devil's advocate position in the other thread. I maintain the arguments I've made in both threads in challenge to all those commenters who do claim that qualia are computation.

comment by Sideways · 2011-04-12T03:16:58.604Z · LW(p) · GW(p)

the type of qualia that a simulator actually produces (if any) depends crucially on the actual physical form of that simulator.... [to simulate humans] the simulator must physically incorporate a human brain.

It seems like the definition of "physical" used in this article is "existing within physics" (a perfectly reasonable definition). By this definition, phenomena such as qualia, reasoning, and computation are all "physical" and are referred to as such in the article itself.

Brains are physical, and local physics seems Turing-computable. Therefore, every phenomenon that a physical human brain can produce, can be produced by any Turing-complete computer, including human reasoning and qualia.

So to "physically incorporate a human brain" in the sense relative to this article, the simulator does NOT need to include an actual 3-pound blob of neurons exchanging electrochemical signals. It only needs to implement the same computation that a human brain implements.

Replies from: dfranke
comment by dfranke · 2011-04-12T03:24:01.123Z · LW(p) · GW(p)

Therefore, every phenomenon that a physical human brain can produce, can be produced by any Turing-complete computer.

You're continuing to confuse reasoning about a physical phenomenon with causing a physical phenomenon. By the Church-Turing thesis, which I am in full agreement with, a Turing machine can reason about any physical phenomenon. That does not mean a Turing machine can cause any physical phenomenon. A PC running a program which reasons about Jupiter's gravity cannot cause Jupiter's gravity.

Replies from: kurokikaze, Sideways
comment by kurokikaze · 2011-04-12T10:37:06.906Z · LW(p) · GW(p)

From inside the simulation, the simulation "reasoning" about a phenomenon cannot be distinguished from actually causing that phenomenon. From my point of view, gravity inside a two-body simulator is real for all bodies inside the simulator.

If you separate "reasoning" from "happening" only because you can tell one from the other from your point of view, why not say that all the workings of our world might be "reasoning" rather than real phenomena, if there are entities who can distinguish our "simulated workings" from their "real" universe?

Replies from: ArisKatsaris
comment by ArisKatsaris · 2011-04-12T10:51:15.369Z · LW(p) · GW(p)

gravity inside a two-body simulator is real for all bodies inside the simulator.

For a two-body simulator we can just use the Newtonian equation F = G m1 m2 / r^2, right? You aren't claiming we need any sort of computing apparatus to make gravity real for "all bodies inside the simulator"?

Replies from: kurokikaze
comment by kurokikaze · 2011-04-12T10:55:01.968Z · LW(p) · GW(p)

I don't get the question, frankly. Simulation, in my opinion, is not a single formula but a means of knowing the state of a system at a particular time. In this case, we need an "apparatus", even if it's only a piece of paper, a crayon, and our own brain. It will be a very simple simulator, yes.
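A minimal sketch of such an "apparatus" in Python, purely as illustration (the Euler time-stepping, the one-second step size, and the rough Earth-Moon figures are assumptions made for the example, not anything specified in this thread):

    # Toy two-body simulator: step Newtonian gravity (F = G*m1*m2 / r^2)
    # forward in time with simple Euler integration.
    G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

    def step(bodies, dt):
        """Advance a pair of (mass, position, velocity) tuples by one time step."""
        (m1, p1, v1), (m2, p2, v2) = bodies
        dx, dy = p2[0] - p1[0], p2[1] - p1[1]
        r = (dx ** 2 + dy ** 2) ** 0.5
        f = G * m1 * m2 / r ** 2                       # magnitude of the mutual force
        ax1, ay1 = f * dx / r / m1, f * dy / r / m1    # acceleration of body 1 toward body 2
        ax2, ay2 = -f * dx / r / m2, -f * dy / r / m2  # equal and opposite for body 2
        v1 = (v1[0] + ax1 * dt, v1[1] + ay1 * dt)
        v2 = (v2[0] + ax2 * dt, v2[1] + ay2 * dt)
        p1 = (p1[0] + v1[0] * dt, p1[1] + v1[1] * dt)
        p2 = (p2[0] + v2[0] * dt, p2[1] + v2[1] * dt)
        return [(m1, p1, v1), (m2, p2, v2)]

    # Roughly Earth and Moon; query the system's state after one simulated hour.
    bodies = [(5.97e24, (0.0, 0.0), (0.0, 0.0)),
              (7.35e22, (3.84e8, 0.0), (0.0, 1022.0))]
    for _ in range(3600):
        bodies = step(bodies, 1.0)
    print(bodies)

Whether the gravity in that loop is "real" for the simulated bodies is exactly the question at issue; the sketch only shows what the apparatus computes.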

Replies from: ArisKatsaris
comment by ArisKatsaris · 2011-04-12T11:11:17.561Z · LW(p) · GW(p)

Basically I'm asking: is gravity "real for all bodies inside the system" or "real for all bodies inside the simulator"?

If the former, then we have Tegmark IV.

If ONLY the latter, then you're saying that a system requires a means of being known by someone outside the system in order for gravity to "be real" for it. That's not substrate independence; we're no longer talking about its point of view, since it only becomes "real" when it informs our point of view, and not before.

Replies from: kurokikaze, kurokikaze
comment by kurokikaze · 2011-04-12T11:43:46.983Z · LW(p) · GW(p)

Oh, I get what you mean by "Tegmark IV" here from your other answer. Then it's more complicated and depends on our definition of "existence" (there can be many, I presume).

comment by kurokikaze · 2011-04-12T11:31:45.789Z · LW(p) · GW(p)

I think gravity is "real" for any bodies that it affects. For the person running the simulator it's "real" too, but in some other sense — it's not affecting the person physically but it produces some information for him that wouldn't be there without the simulator (so we cannot say they're entirely causally disconnected). All this requires further thinking :)

Also, English is not my main language, so there may be some misunderstanding on my part :)

Replies from: kurokikaze
comment by kurokikaze · 2011-04-12T14:36:52.551Z · LW(p) · GW(p)

Okay, I have pondered this question for some time and the preliminary conclusions are strange. Either "existence" is physically meaningless, or it should be split into at least three terms with slightly different meanings. Or "existence" is a purely subjective thing, and we can't meaningfully argue about the "existence" of things that are causally disconnected from us.

comment by Sideways · 2011-04-12T03:51:06.548Z · LW(p) · GW(p)

I'm asserting that qualia, reasoning, and other relevant phenomena that a brain produces are computational, and that by computing them, a Turing machine can reproduce them with perfect accuracy. I apologize if this was not clear.

Adding two and two is a computation. An abacus is one substrate on which addition can be performed; a computer is another.

I know what it means to compute "2+2" on an abacus. I know what it means to compute "2+2" on a computer. I know what it means to simulate "2+2 on an abacus" on a computer. I even know what it means to simulate "2+2 on a computer" on an abacus (although I certainly wouldn't want to have to actually do so!). I do not know what it means to simulate "2+2" on a computer.
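A toy sketch of that distinction, with made-up helper functions (nothing here is proposed by either commenter): computing "2+2" directly versus simulating a substrate, a single ten-bead abacus rod, as it computes "2+2".

    def compute_sum(a, b):
        # The abstract computation: no substrate is modelled at all.
        return a + b

    def simulate_abacus_sum(a, b):
        # Model the rod as ten beads, each either pushed to the counting side
        # (True) or resting (False), and slide them over one at a time.
        rod = [False] * 10
        for _ in range(a + b):
            rod[rod.index(False)] = True   # slide the next resting bead across
        return sum(rod)                    # read the answer off the rod's final state

    print(compute_sum(2, 2))          # 4
    print(simulate_abacus_sum(2, 2))  # 4, arrived at by tracking bead positions

The second function simulates "2+2 on an abacus"; it is the first, substrate-free sense of "simulating 2+2" that the comment says has no clear meaning.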

Replies from: dfranke
comment by dfranke · 2011-04-12T04:03:52.591Z · LW(p) · GW(p)

You simulate physical phenomena -- things that actually exist. You compute combinations of formal symbols, which are abstract ideas. 2 and 4 are abstract; they don't exist. To claim that qualia are purely computational is to claim that they don't exist.

Replies from: Sideways
comment by Sideways · 2011-04-12T04:11:58.567Z · LW(p) · GW(p)

"Computation exists within physics" is not equivalent to " "2" exists within physics."

If computation doesn't exist within physics, then we're communicating supernaturally.

If qualia aren't computations embodied in the physical substrate of a mind, then I don't know what they are.

Replies from: dfranke
comment by dfranke · 2011-04-12T04:21:26.839Z · LW(p) · GW(p)

Computation does not exist within physics; it's a linguistic abstraction of things that exist within physics, such as the behavior of a CPU. Similarly, "2" is an abstraction of a pair of apples, a pair of oranges, etc. To say that the actions of one physical medium necessarily have the same physical effect (the production of qualia) as the actions of another physical medium, just because they abstractly embody the same computation, is analogous to saying that two apples produce the same qualia as two oranges, because they're both "2".

This is my last reply for tonight. I'll return in the morning.

Replies from: Sideways
comment by Sideways · 2011-04-12T04:59:36.819Z · LW(p) · GW(p)

If computation doesn't exist because it's "a linguistic abstraction of things that exist within physics", then CPUs, apples, oranges, qualia, "physical media" and people don't exist; all of those things are also linguistic abstractions of things that exist within physics. Physics is made of things like quarks and leptons, not apples and qualia. I don't think this definition of existence is particularly useful in context.

As to your fruit analogy: two apples do in fact produce the same qualia as two oranges, with respect to number! Obviously color, smell, etc. are different, but in both cases I have the experience of seeing two objects. And if I'm trying to do sums by putting apples or oranges together, substituting one for the other will give the same result. In comparing my brain to a hypothetical simulation of my brain running on a microchip, I would claim a number of differences (weight, moisture content, smell...), but I hold that what makes me me would be present in either one.

See you in the morning! :)

Replies from: dfranke
comment by dfranke · 2011-04-12T10:12:01.288Z · LW(p) · GW(p)

If computation doesn't exist because it's "a linguistic abstraction of things that exist within physics", then CPUs, apples, oranges, qualia, "physical media" and people don't exist; all of those things are also linguistic abstractions of things that exist within physics. Physics is made of things like quarks and leptons, not apples and qualia. I don't think this definition of existence is particularly useful in context.

Not quite reductionist enough, actually: physics is made of the rules relating configurations of spacetime, which exist independently of any formal model of them that gives us concepts like "quark" and "lepton". But digging deeper into this linguistic rathole won't clarify my point any further, so I'll drop this line of argument.

As to your fruit analogy: two apples do in fact produce the same qualia as two oranges, with respect to number! Obviously color, smell, etc. are different, but in both cases I have the experience of seeing two objects. And if I'm trying to do sums by putting apples or oranges together, substituting one for the other will give the same result. In comparing my brain to a hypothetical simulation of my brain running on a microchip, I would claim a number of differences (weight, moisture content, smell...), but I hold that what makes me me would be present in either one.

If you started perceiving two apples identically to the way you perceive two oranges, without noticing their difference in weight, smell, etc., then you or at least others around you would conclude that you were quite ill. What is your justification for believing that being unable to distinguish between things that are "computationally identical" would leave you any healthier?

Replies from: Sideways, AstroCJ
comment by Sideways · 2011-04-12T18:05:26.420Z · LW(p) · GW(p)

I didn't intend to start a reductionist "race to the bottom," only to point out that minds and computations clearly do exist. "Reducible" and "non-existent" aren't synonyms!

Since you prefer the question in your edit, I'll answer it directly:

if I replaced the two hemispheres of your brain with two apples, clearly you would become quite ill, even though similarity in number has been preserved. If you believe that "embodying the same computation" is somehow a privileged concept in this regard -- that if I replaced your brain with something else embodying the same computation that you would feel yourself to be unharmed -- what is your justification for believing this?

Computation is "privileged" only in the sense that computationally identical substitutions leave my mind, preferences, qualia, etc. intact; because those things are themselves computations. If you replaced my brain with a computationally equivalent computer weighing two tons, I would certainly notice a difference and consider myself harmed. But the harm wouldn't have been done to my mind.

I feel like there must be something we've missed, because I'm still not sure where exactly we disagree. I'm pretty sure you don't think that qualia are reified in the brain -- that a surgeon could go in with tongs and pull out a little lump of qualia -- and I think you might even agree with the analogy that brains:hardware::minds:software. So if there's still a disagreement to be had, what is it? If qualia and other mental phenomena are not computational, then what are they?

Replies from: dfranke
comment by dfranke · 2011-04-12T18:15:15.355Z · LW(p) · GW(p)

I'm pretty sure you don't think that qualia are reified in the brain-- that a surgeon could go in with tongs and pull out a little lump of qualia

I do think that qualia are reified in the brain. I do not think that a surgeon could go in with tongs and remove them any more than he could go in with tongs and remove your recognition of your grandmother.

If qualia and other mental phenomena are not computational, then what are they?

They're a physical effect caused by the operation of a brain, just as gravity is a physical effect of mass and temperature is a physical effect of molecular motion. See here and here for one reason why I think the computational view falls somewhere in between problematic and not-even-wrong, inclusive.

ETA: The "grandmother cell" might have been a poorly chosen counterexample, since apparently there's some research that sort of actually supports that notion with respect to face recognition. I learned the phrase as identifying a fallacy. Feel free to mentally substitute some other complex idea that is clearly not embodied in any discrete piece of the brain.

Replies from: wnoise, FAWS, Sideways
comment by wnoise · 2011-04-12T20:17:21.921Z · LW(p) · GW(p)

See for instance this report on this paper, where they find apparent "Jennifer Aniston" and "Halle Berry" cells. The former is a little muddled, as it doesn't fire when a picture contains both her and Brad Pitt. The latter fires both for pictures of her and for the text of her name.

comment by FAWS · 2011-04-12T19:11:02.034Z · LW(p) · GW(p)

Feel free to mentally substitute some other complex idea that is clearly not embodied in any discrete piece of the brain.

Do we know enough to tell for sure?

Replies from: dfranke
comment by dfranke · 2011-04-12T19:14:15.678Z · LW(p) · GW(p)

Do you mean, "know enough to tell for sure whether a given complex idea is embodied in any discrete piece of the brain?". No, but we know for sure that some must exist which are not, because conceptspace is bigger than thingspace.

Replies from: gwern, FAWS
comment by gwern · 2011-04-12T23:50:49.709Z · LW(p) · GW(p)

"know enough to tell for sure whether a given complex idea is embodied in any discrete piece of the brain?".

Depending on various details, this might well be impossible. Rice's theorem comes to mind - if it's impossible to perfectly determine any interesting property for arbitrary Turing machines, that doesn't bode well for similar questions for Turing-equivalent substrates.

Replies from: dfranke
comment by dfranke · 2011-04-13T00:08:44.658Z · LW(p) · GW(p)

Brains, like PCs, aren't actually Turing-equivalent: they only have finite storage. To actually be equivalent to a Turing machine, they'd need something equivalent to a Turing machine's infinite tape. There's nothing analogous to Rice's theorem or the halting theorem which holds for finite state machines. All those problems are decidable. Of course, decidable doesn't mean tractable.
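A toy sketch of why halting is decidable for finite-state systems (the `halts` helper and the example machines are made up for illustration, not anything from the post): run the machine and watch for a repeated configuration; since there are only finitely many configurations, it must either reach a halting state or revisit one it has already seen.

    # Decide halting for a finite-state system by cycle detection.
    def halts(transition, start, halting_states):
        """transition maps each state to its successor; returns True if the
        machine reaches a halting state, False if it revisits a state (loops)."""
        seen = set()
        state = start
        while state not in halting_states:
            if state in seen:        # a configuration repeated: it will loop forever
                return False
            seen.add(state)
            state = transition[state]
        return True

    # A machine that falls into the cycle 1 -> 2 -> 3 -> 1, and one that halts.
    print(halts({0: 1, 1: 2, 2: 3, 3: 1}, start=0, halting_states={99}))  # False
    print(halts({0: 1, 1: 2, 2: 99}, start=0, halting_states={99}))       # True

For anything brain-sized the set of possible configurations, and hence the `seen` set, is astronomically large, which is the "decidable doesn't mean tractable" point above.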

Replies from: gwern
comment by gwern · 2011-04-13T00:27:56.965Z · LW(p) · GW(p)

There's nothing analogous to Rice's theorem or the halting theorem which holds for finite state machines.

It is true that you can run finite state machines until they either terminate or start looping or run past the Busy Beaver for that length of tape; but while you may avoid Rice's theorem by pointing out that 'actually brains are just FSMs', you replace it with another question, 'are they FSMs decidable within the length of tape available to us?'

Given how fast the Busy Beaver function grows, the answer is almost surely no - there is no runnable algorithm. This leads to the dilemma that either there are insufficient resources (per above), or it's impossible in principle (if there are unbounded resources, there are likely unbounded brains, and Rice's theorem applies again).

(I know you understand this because you pointed out 'Of course, decidable doesn't mean tractable.' but it's not obvious to a lot of people and is worth noting.)

Replies from: dfranke
comment by dfranke · 2011-04-13T00:40:09.590Z · LW(p) · GW(p)

This is just a pedantic technical correction since we agree on all the practical implications, but nothing involving FSMs grows nearly as fast as the Busy Beaver function. The relevant complexity class for the hardest problems concerning FSMs, such as determining whether two regular expressions with squaring represent the same language, is the class of EXPSPACE-complete problems. This is as opposed to R for decidable problems, and RE and co-RE for semidecidable problems like the halting problem. Those classes are way, WAY bigger than EXPSPACE.

comment by FAWS · 2011-04-12T19:33:46.033Z · LW(p) · GW(p)

Do you mean, "know enough to tell for sure whether a given complex idea is embodied in any discrete piece of the brain?"

Yes

No, but we know for sure that some must exist which are not, because conceptspace is bigger than thingspace.

Potential, easily accessible concept space, not necessarily actually used concept space. Even granting that the brain uses some concepts without corresponding discrete anatomy, I don't see how they can serve as a replacement in your argument when we can't identify them.

Replies from: dfranke
comment by dfranke · 2011-04-12T19:46:45.611Z · LW(p) · GW(p)

The only role that this example-of-an-idea is playing in my argument is as an analogy to illustrate what I mean when I assert that qualia physically exist in the brain without there being such thing as a "qualia cell". You clearly already understand this concept, so is my particular choice of analogy so terribly important that it's necessary to nitpick over this?

Replies from: FAWS
comment by FAWS · 2011-04-12T20:26:43.327Z · LW(p) · GW(p)

The very same uncertainty would also apply to qualia (assuming that even is a meaningful concept), only worse because we understand them even less. If we can't answer the question of whether a particular concept is embedded in discrete anatomy, how could we possibly answer that question for qualia when we can't even verify their existence in the first place?

comment by Sideways · 2011-04-12T18:54:24.171Z · LW(p) · GW(p)

They're a physical effect caused by the operation of a brain

You haven't excluded a computational explanation of qualia by saying this. You haven't even argued against it! Computations are physical phenomena that have meaningful consequences.

"Mental phenomena are a physical effect caused by the operation of a brain."

"The image on my computer monitor is a physical effect caused by the operation of the computer."

I'm starting to think you're confused as a result of using language in a way that allows you to claim computations "don't exist," while qualia do.

As to your linked comment: ISTM that qualia are what an experience feels like from the inside. Maybe it's just me, but qualia don't seem especially difficult to explain or understand. I don't think qualia would even be regarded as worth talking about, except that confused dualists try to use them against materialism.

comment by AstroCJ · 2011-04-12T13:25:50.334Z · LW(p) · GW(p)

If I have in front of me four apples that appear to me to be identical, but a specific two of them are consistently referred to as oranges by sources I normally trust, then they are not computationally identical. If everyone perceived them as apples, I doubt I would be seen as ill.

Replies from: dfranke
comment by dfranke · 2011-04-12T14:57:21.090Z · LW(p) · GW(p)

I did a better job of phrasing my question in the edit I made to my original post than I did in my reply to Sideways that you responded to. Are you able to rephrase your response so that it answers the better version of the question? I can't figure out how to do so.

Replies from: AstroCJ
comment by AstroCJ · 2011-04-13T07:49:13.357Z · LW(p) · GW(p)

Ok, I'll give a longer response a go.

You seem to me to be fundamentally confused about the separation between the (at a minimum) two levels of reality being proposed. We have a simulation, and we have a real world. If you affect things in the simulation, such as replacing Venus with a planet twice the mass of Venus, then they are not the same; the gravitational field will be different and the simulation will follow a path different to the simulation with the original Venus. These two options are not "computationally the same".

If, on the other hand, in the real world you replace your old, badly programmed Venus Simulation Chip 2000 with the new, shiny Venus Simulation Chip XD500, which does precisely the same thing as the old chip but in fewer steps so we in the real world have to sit around waiting for fewer processor cycles to end, then the simulation will follow the same path as it would have done before. Observers in the sim won't know what Venus Chip we're running, and they won't know how many processor cycles it's taking to simulate it. These two different situations are "computationally the same".

If, in the simulation world, you replaced half of my brain with an apple, then I would be dead. If you replaced half of my brain with a computer that mimicked perfectly my old meat brain, I would be fine. If we're in the computation world then we should point out that again, the gravitational field of my brain computer will likely be different from the gravitational field of my meat brain, and so I would label these as "not computationally the same" for clarity. If we are interested in my particular experiences of the world, given that I can't detect gravitational fields very well, then I would label them as "computationally the same" if I am substrate independent, and "computationally different" if not.

I grew up in this universe, and my consciousness is embedded in a complex set of systems, my human brain, which is designed to make things make sense at any cost. I feel purple whenever I go outside - that's just how I've always felt. Purple makes sense. This is fatal for your argument.

(Now, if one day soon my qualia jump from one state to another, now that would be something interesting.)

comment by gscshoyru · 2011-04-12T18:53:45.181Z · LW(p) · GW(p)

Maybe I've missed something in your original article or your comments, but I don't understand why you think a person in a perfect physics simulation of the universe would feel differently enough about the qualia he or she experiences to notice a difference. Qualia are probably a physical phenomenon, yes -- but if that physical phenomenon is simulated in exact detail, how can a simulated person tell the difference? Feelings about qualia are themselves qualia, and those qualia are also simulated by the physics simulator.

Imagine for a moment that some superbeing was able to determine the exact physical laws and initial conditions of this universe, and then construct a Turing machine that simulated our universe based on those rules and initial conditions. Or, for argument's sake, imagine instead that the initial conditions plugged into the simulation were the state of the universe an hour before you wrote this article. At what point would the simulation and the real world diverge? If this world were the simulation, would the simulated you still have written this article?

If so, then what's the difference between the "you" in the two universes? You've argued in your post that your experiences would be noticeably different -- but if you're not acting on that difference, then what is this "you", exactly, and why can't it affect your actions? Or is there no such "you" -- and in that case, how would the simulated you differ from a "zombie"? And how do you know there is such a "you", here and now?

If the simulated you would not have written this article -- well, then either there's something about qualia that can't be simulated, in which case qualia are not physical... or the physics simulation is imperfect, in which case it's not a perfect simulator by your definition, and if so, why not?

comment by Kevin · 2011-04-12T22:07:58.666Z · LW(p) · GW(p)

Qualia or not, it seems a straightforward consequence of Tegmark's mathematical universe hypothesis that we derive significant proportions of our measure from simulators.

that if I replaced your brain with something else embodying the same computation that you would feel yourself to be unharmed -- what is your justification for believing this?

I think the quantum physics sequence basically argues this in a really roundabout way.

http://lesswrong.com/lw/qx/timeless_identity/

comment by Jonathan_Graehl · 2011-04-14T21:44:00.020Z · LW(p) · GW(p)

The answer is that I know my qualia are right because they make sense. Qualia are not pure "outputs": they feed back on the rest of the world. If I step outside on a scorching summer day, then I feel hot, and this unpleasant quale causes me to go back inside, and I am able to understand and articulate this cause and effect. If my qualia were actually those of a computer chip, then rather than feeling hot I would feel purple (or rather, some quale that no human language can describe), and if you asked me why I went back indoors even though I don't have any particular objection to purple and the weather is not nearly severe enough to pose any serious threat to my health, I wouldn't be able to answer you or in any way connect my qualia to my actions.

I'll grant that people actually have something in mind when they talk about "qualia", and the primary disagreement is whether it's epiphenomenal or fundamental.

Even still, this paragraph is extremely confused. The question seems to be "can qualia be simulated?" There's no reason to believe they cannot, whether or not they're an epiphenomenon.

"But a simulation of a thing is not the same as the real thing!" - a banal tautology. Say specifically what's different (in the relations between simulated things) and fix it.

comment by James_Miller · 2011-04-12T04:03:10.918Z · LW(p) · GW(p)

reasoning about a physical phenomenon is not the same as causing a physical phenomenon. You cannot create new territory by sketching a map of it, no matter how much detail you include in your map.

But reasoning about reasoning does cause reasoning.

Replies from: dfranke
comment by dfranke · 2011-04-12T04:07:41.831Z · LW(p) · GW(p)

It's the first "reasoning", not the second, that's causing the third. Reasoning about puppies causes reasoning, not puppies.

Replies from: James_Miller
comment by James_Miller · 2011-04-12T04:19:38.418Z · LW(p) · GW(p)

Is it possible for a simulator that doesn't physically incorporate a human brain to reason just as we do?

Replies from: dfranke
comment by dfranke · 2011-04-12T10:33:14.684Z · LW(p) · GW(p)

Yes.

comment by wedrifid · 2011-04-12T07:58:34.687Z · LW(p) · GW(p)

The aim of this post is to challenge Nick Bostrom's simulation argument by attacking the premise of substrate-independence.

I read as far as the first sentence.

comment by Will_Sawin · 2011-04-12T02:14:35.487Z · LW(p) · GW(p)

"Qualia are not pure "outputs": they feed back on the rest of the world."

A sufficiently advanced simulation on any substrate would have this property - the simulated qualia would feed back on the simulated world.

Maybe the qualia of people who ACTUALLY have bodies are completely different from yours, a person who has no body.

Replies from: wedrifid, AstroCJ, dfranke
comment by wedrifid · 2011-04-12T09:15:24.964Z · LW(p) · GW(p)

Wow. You actually said that.

comment by AstroCJ · 2011-04-12T08:36:20.141Z · LW(p) · GW(p)

DV for being unconstructive.

Replies from: wedrifid, Will_Sawin
comment by wedrifid · 2011-04-12T09:18:21.224Z · LW(p) · GW(p)

(He was constructive - see the first sentence. You downvoted him because he was also rude.)

comment by Will_Sawin · 2011-04-13T02:42:55.654Z · LW(p) · GW(p)

I cannot determine the difference between my heavily downvoted comment and this one:

"http://lesswrong.com/lw/57e/we_are_not_living_in_a_simulation/3wxb"

Mine is more abridged and might be unclear, but that doesn't seem worth 13 karma points. I am confused.

Replies from: None, orthonormal
comment by [deleted] · 2011-04-13T03:17:10.022Z · LW(p) · GW(p)

I did not downvote you, but the following looked to me superficially like an insult:

Maybe the qualia of people who ACTUALLY have brains are completely different from yours, a person who has no brain.

People react badly to personal insults. I am not sure that you really intended an insult, which is one reason I didn't downvote.

comment by orthonormal · 2011-04-13T03:38:35.084Z · LW(p) · GW(p)

If you replaced "brains" with "bodies" in your comment, it would make the same point and not look like an insult.

comment by dfranke · 2011-04-12T02:40:21.859Z · LW(p) · GW(p)

A sufficiently advanced simulation on any substrate would have this property - the simulated qualia would feed back on the simulated world.

Correct, but both are still just simulated. The qualia that are actually occurring are those associated with the simulator substrate, not those associated with the simulated world, and in the context of the simulated world, they would not make sense.

Replies from: AstroCJ
comment by AstroCJ · 2011-04-12T08:37:18.632Z · LW(p) · GW(p)

they would not make sense

Proof?