Causal Reference

post by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-10-20T22:12:30.227Z · LW · GW · Legacy · 247 comments


Followup to: The Fabric of Real Things, Stuff That Makes Stuff Happen

Previous meditation: "Does your rule forbid epiphenomenalist theories of consciousness in which consciousness is caused by neurons, but doesn't affect those neurons in turn? The classic argument for epiphenomenal consciousness is that we can imagine a universe where people behave exactly the same way, but there's nobody home - no awareness, no consciousness, inside the brain. For all the atoms in this universe to be in the same place - for there to be no detectable difference internally, not just externally - 'consciousness' would have to be something created by the atoms in the brain, but which didn't affect those atoms in turn. It would be an effect of atoms, but not a cause of atoms. Now, I'm not so much interested in whether you think epiphenomenal theories of consciousness are true or false - rather, I want to know if you think they're impossible or meaningless a priori based on your rules."

Is it coherent to imagine a universe in which a real entity can be an effect but not a cause?

Well... there's a couple of senses in which it seems imaginable. It's important to remember that imagining things yields info primarily about what human brains can imagine. It only provides info about reality to the extent that we think imagination and reality are systematically correlated for some reason.

That said, I can certainly write a computer program in which there's a tier of objects affecting each other, and a second tier - a lower tier - of epiphenomenal objects which are affected by them, but don't affect them. For example, I could write a program to simulate some balls that bounce off each other, and then some little shadows that follow the balls around.
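
A minimal sketch of such a program, in Python (the dynamics and all the names here are made up purely for illustration): the balls push each other around, the shadows track the balls, and nothing ever reads the shadows back.

```python
class Ball:
    """Upper tier: balls affect each other and, downstream, the shadows."""
    def __init__(self, x, v):
        self.x, self.v = x, v

class Shadow:
    """Lower tier: tracks a ball's position; nothing in the program reads it back."""
    def __init__(self, ball):
        self.ball, self.x = ball, ball.x

    def update(self):
        # Affected by the ball...
        self.x += 0.1 * (self.ball.x - self.x)
        # ...but no line of code anywhere feeds self.x back into any Ball.

def step(balls, shadows, dt=0.1):
    for b in balls:
        b.x += b.v * dt
    # Crude 1-D "collision": swap velocities when two balls come close.
    for i in range(len(balls)):
        for j in range(i + 1, len(balls)):
            if abs(balls[i].x - balls[j].x) < 0.5:
                balls[i].v, balls[j].v = balls[j].v, balls[i].v
    for s in shadows:
        s.update()

balls = [Ball(0.0, 1.0), Ball(5.0, -1.0)]
shadows = [Shadow(b) for b in balls]
for _ in range(100):
    step(balls, shadows)
print([round(b.x, 2) for b in balls], [round(s.x, 2) for s in shadows])
```

Deleting the Shadow class would change nothing about the balls' trajectories, which is exactly the sense in which the shadows are epiphenomenal.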

But then I only know about the shadows because I'm outside that whole universe, looking in. So my mind is being affected by both the balls and shadows - to observe something is to be affected by it. I know where the shadow is, because the shadow makes pixels be drawn on screen, which make my eye see pixels. If your universe has two tiers of causality - a tier with things that affect each other, and another tier of things that are affected by the first tier without affecting them - then could you know that fact from inside that universe?

Again, this seems easy to imagine as long as objects in the second tier can affect each other. You'd just have to be living in the second tier! We can imagine, for example - this wasn't the way things worked out in our universe, but it might've seemed plausible to the ancient Greeks - that the stars in heaven (and the Sun as a special case) could affect each other and affect Earthly forces, but no Earthly force could affect them:

(Diagram omitted. The X'd arrow, pointing from Earthly forces up to the stars and Sun, stands for 'cannot affect'.)

The Sun's light would illuminate Earth, so it would cause plant growth. And sometimes you would see two stars crash into each other and explode, so you'd see they could affect each other. (And affect your brain, which was seeing them.) But the stars and Sun would be made out of a different substance, the 'heavenly material', and throwing any Earthly material at it would not cause it to change state in the slightest. The Earthly material might be burned up, but the Sun would occupy exactly the same position as before. It would affect us, but not be affected by us.

(To clarify an important point raised in the comments: In standard causal diagrams and in standard physics, no two individual events ever affect each other; there's a causal arrow from the PAST to FUTURE but never an arrow from FUTURE to PAST. What we're talking about here is the sun and stars over time, and the generalization over causal arrows that point from Star-in-Past to Sun-in-Present and Sun-in-Present back to Star-in-Future. The standard formalism dealing with this would be Dynamic Bayesian Networks (DBNs) in which there are repeating nodes and repeating arrows for each successive timeframe: X_1, X_2, X_3, and causal laws F relating X_i to X_(i+1). If the laws of physics did not repeat over time, it would be rather hard to learn about the universe! The Sun repeatedly sends out photons, and they obey the same laws each time they fall on Earth; rather than the F_i being new transition tables each time, we see a constant F_physics over and over. By saying that we live in a single-tier universe, we're observing that whenever there are F-arrows, causal-link-types, which (over repeating time) descend from variables-of-type-X to variables-of-type-Y (like present photons affecting future electrons), there are also arrows going back from Ys to Xs (like present electrons affecting future photons). If we weren't generalizing over time, it couldn't possibly make sense to speak of thingies that "affect each other" - causal diagrams don't allow directed cycles!)

A two-tier causal universe seems easy to imagine, even easy to specify as a computer program. If you were arranging a Dynamic Bayes Net at random, would it randomly have everything in a single tier? If you were designing a causal universe at random, wouldn't there randomly be some things that appeared to us as causes but not effects? And yet our own physicists haven't discovered any upper-tier particles which can move us without being movable by us. There might be a hint here at what sort of thingies tend to be real in the first place - that, for whatever reasons, the Real Rules somehow mandate or suggest that all the causal forces in a universe be on the same level, capable of both affecting and being affected by each other.
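
To illustrate how special the single-tier property is, here is a rough sketch (the variable types and the 50% arrow probability are arbitrary assumptions) that samples random sets of between-timestep arrow types and checks how often every X-to-Y arrow type is matched by some Y-to-X arrow type:

```python
import itertools, random

def is_single_tier(links):
    """True if every cross-type arrow X -> Y is matched by some arrow Y -> X."""
    return all((y, x) in links for (x, y) in links if x != y)

def random_links(types, p=0.5):
    """Sample a random set of between-timestep arrows among variable types."""
    return {(x, y) for x, y in itertools.product(types, repeat=2)
            if random.random() < p}

types = ["photon", "electron", "quark"]
samples = [random_links(types) for _ in range(100_000)]
frac = sum(is_single_tier(s) for s in samples) / len(samples)
print(f"Fraction of random arrow-sets with a single tier: {frac:.3f}")  # ~0.125
```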

Still, we don't actually know the Real Rules are like that; and so it seems premature to assign a priori zero probability to hypotheses with multi-tiered causal universes. Discovering a class of upper-tier affect-only particles seems imaginable[1] - we can imagine which experiences would convince us that they existed. If we're in the Matrix, we can see how to program a Matrix like that. If there's some deeper reason why that's impossible in any base-level reality, we don't know it yet. So we probably want to call that a meaningful hypothesis for now.

But what about lower-tier particles which can be affected by us, and yet never affect us?

Perhaps there are whole sentient Shadow Civilizations living on my nose hairs which can never affect those nose hairs, but find my nose hairs solid beneath their feet. (The solid Earth affecting them but not being affected, like the Sun's light affecting us in the 'heavenly material' hypothesis.) Perhaps I wreck their world every time I sneeze. It certainly seems imaginable - you could write a computer program simulating physics like that, given sufficient perverseness and computing power...

And yet the fundamental question of rationality - "What do you think you know, and how do you think you know it?" - raises the question:

How could you possibly know about the lower tier, even if it existed?

To observe something is to be affected by it - to have your brain and beliefs take on different states, depending on that thing's state. How can you know about something that doesn't affect your brain?

In fact there's an even deeper question, "How could you possibly talk about that lower tier of causality even if it existed?"

Let's say you're a Lord of the Matrix. You write a computer program which first computes the physical universe as we know it (or a discrete approximation), and then you add a couple of lower-tier effects as follows:

First, every time I sneeze, the binary variable YES_SNEEZE will be set to the second of its two possible values.

Second, every time I sneeze, the binary variable NO_SNEEZE will be set to the first of its two possible values.
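
A minimal sketch of that setup, with simulate_physics standing in for the whole upper-tier computation (here a sneeze is just a rare coin flip), and the two shadow variables written but never read:

```python
import random

def simulate_physics(state):
    """Stand-in for the full upper-tier physics; here a sneeze is a rare coin flip."""
    return random.random() < 0.01   # returns whether a sneeze happened this tick

# Lower tier: two write-only shadow variables, never read by anything above.
YES_SNEEZE = 0   # set to the *second* of its two values on a sneeze
NO_SNEEZE = 1    # set to the *first* of its two values on a sneeze

def tick(state):
    global YES_SNEEZE, NO_SNEEZE
    if simulate_physics(state):
        YES_SNEEZE = 1
        NO_SNEEZE = 0
    # Nothing in simulate_physics ever reads YES_SNEEZE or NO_SNEEZE, so no
    # thought inside the simulation can be causally pinned to either of them.
    return state
```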

Now let's say that - somehow - even though I've never caught any hint of the Matrix - I just magically think to myself one day, "What if there's a variable that watches when I sneeze, and gets set to 1?"

It will be all too easy for me to imagine that this belief is meaningful and could be true or false.

And yet in reality - as you know from outside the matrix - there are two shadow variables that get set when I sneeze. How can I talk about one of them, rather than the other? Why should my thought about '1' refer to their second possible value rather than their first possible value, inside the Matrix computer program? If we tried to establish a truth-value in this situation, to compare my thought to the reality inside the computer program - why compare my thought about SNEEZE_VAR to the variable YES_SNEEZE instead of NO_SNEEZE, or compare my thought '1' to the first possible value instead of the second possible value?

Under more epistemically healthy circumstances, when you talk about things that are not directly sensory experiences, you will reference a causal model of the universe that you inducted to explain your sensory experiences. Let's say you repeatedly go outside at various times of day, and your eyes and skin directly experience BRIGHT-WARM, BRIGHT-WARM, BRIGHT-WARM, DARK-COOL, DARK-COOL, etc. To explain the patterns in your sensory experiences, you hypothesize a latent variable we'll call 'Sun', with some kind of state which can change between 1, which causes BRIGHTness and WARMness, and 0, which causes DARKness and COOLness. You believe that the state of the 'Sun' variable changes over time, but usually changes less frequently than you go outside.

p(BRIGHT | Sun=1) = 0.9
p(¬BRIGHT | Sun=1) = 0.1
p(BRIGHT | Sun=0) = 0.1
p(¬BRIGHT | Sun=0) = 0.9
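
As a sketch of what the hypothesis buys you, here is a one-step Bayesian update using the numbers above (the uniform prior over 'Sun' is an extra assumption for illustration):

```python
p_bright_given_sun = {1: 0.9, 0: 0.1}   # likelihoods from the table above
prior = {1: 0.5, 0: 0.5}                # assumed uniform prior over 'Sun'

def posterior_over_sun(saw_bright):
    """Posterior over the hypothesized 'Sun' variable after one observation."""
    like = {s: p_bright_given_sun[s] if saw_bright else 1 - p_bright_given_sun[s]
            for s in (0, 1)}
    unnorm = {s: prior[s] * like[s] for s in (0, 1)}
    z = sum(unnorm.values())
    return {s: v / z for s, v in unnorm.items()}

print(posterior_over_sun(True))    # {0: 0.1, 1: 0.9}
print(posterior_over_sun(False))   # {0: 0.9, 1: 0.1}
```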

Standing here outside the Matrix, we might be tempted to compare your beliefs about "Sun = 1", to the real universe's state regarding the visibility of the sun in the sky (or rather, the Earth's rotational position).

But even if we compress the sun's visibility down to a binary categorization, how are we to know that your thought "Sun = 1" is meant to correspond to the sun being visible in the sky, rather than the sun being occluded by the Earth? Why the first state of the variable, rather than the second state?

How indeed are we to know that this thought "Sun = 1" is meant to compare to the sun at all, rather than an anteater in Venezuela?

Well, because that 'Sun' thingy is supposed to be the cause of BRIGHT and WARM feelings, and if you trace back the cause of those sensory experiences in reality you'll arrive at the sun that the 'Sun' thought allegedly corresponds to. And to distinguish between whether the sun being visible in the sky is meant to correspond to 'Sun'=1 or 'Sun'=0, you check the conditional probabilities for that 'Sun'-state giving rise to BRIGHT - if the actual sun being visible has a 95% chance of causing the BRIGHT sensory feeling, then that true state of the sun is intended to correspond to the hypothetical 'Sun'=1, not 'Sun'=0.
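
That matching step can be sketched in a few lines; the territory-side numbers (95% and 5%) are assumptions for illustration, and each hypothesized state is paired with the real state that best reproduces its conditional probability of BRIGHT:

```python
# Map side: hypothesized P(BRIGHT | 'Sun'-state), from the model above.
map_states = {"Sun=1": 0.9, "Sun=0": 0.1}
# Territory side: assumed P(BRIGHT | actual state of the real sun).
real_states = {"sun visible": 0.95, "sun occluded": 0.05}

def match(map_states, real_states):
    """Pair each map-state with the real state whose effect on BRIGHT it best matches."""
    return {m: min(real_states, key=lambda r: abs(real_states[r] - p))
            for m, p in map_states.items()}

print(match(map_states, real_states))
# {'Sun=1': 'sun visible', 'Sun=0': 'sun occluded'}
```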

Or to put it more generally, in cases where we have...

...then the correspondence between map and territory can at least in principle be point-wise evaluated by tracing causal links back from sensory experiences to reality, and tracing hypothetical causal links from sensory experiences back to hypothetical reality. We can't directly evaluate that truth-condition inside our own thoughts; but we can perform experiments and be corrected by them.

Being able to imagine that your thoughts are meaningful and that a correspondence between map and territory is being maintained, is no guarantee that your thoughts are true. On the other hand, if you can't even imagine within your own model how a piece of your map could have a traceable correspondence to the territory, that is a very bad sign for the belief being meaningful, let alone true. Checking to see whether you can imagine a belief being meaningful is a test which will occasionally throw out bad beliefs, though it is no guarantee of a belief being good.


Okay, but what about the idea that it should be meaningful to talk about whether or not a spaceship continues to exist after it travels over the cosmological horizon? Doesn't this theory of meaningfulness seem to claim that you can only sensibly imagine something that makes a difference to your sensory experiences?

No. It says that you can only talk about events that your sensory experiences pin down within the causal graph. If you observe enough protons, electrons, neutrons, and so on, you can pin down the physical generalization which says, "Mass-energy is neither created nor destroyed; and in particular, particles don't vanish into nothingness without a trace." It is then an effect of that rule, combined with our previous observation of the ship itself, which tells us that there's a ship that went over the cosmological horizon and now we can't see it any more.

To navigate referentially to the fact that the ship continues to exist over the cosmological horizon, we navigate from our sensory experience up to the laws of physics, by talking about the cause of electrons not blinking out of existence; we also navigate up to the ship's existence by tracing back the cause of our observation of the ship being built. We can't see the future ship over the horizon - but the causal links down from the ship's construction, and from the laws of physics saying it doesn't disappear, are both pinned down by observation - there's no difficulty in figuring out which causes we're talking about, or what effects they have.[2]


All righty-ighty, let's revisit that meditation:

"Does your rule forbid epiphenomenalist theories of consciousness in which consciousness is caused by neurons, but doesn't affect those neurons in turn? The classic argument for epiphenomenal consciousness is that we can imagine a universe where people behave exactly the same way, but there's nobody home - no awareness, no consciousness, inside the brain. For all the atoms in this universe to be in the same place - for there to be no detectable difference internally, not just externally - 'consciousness' would have to be something created by the atoms in the brain, but which didn't affect those atoms in turn. It would be an effect of atoms, but not a cause of atoms. Now, I'm not so much interested in whether you think epiphenomenal theories of consciousness are true or false - rather, I want to know if you think they're impossible or meaningless a priori based on your rules."

The closest theory to this which definitely does seem coherent - i.e., it's imaginable that it has a pinpointed meaning - would be if there was another little brain living inside my brain, made of shadow particles which could affect each other and be affected by my brain, but not affect my brain in turn. This brain would correctly hypothesize the reasons for its sensory experiences - that there was, from its perspective, an upper tier of particles interacting with each other that it couldn't affect. Upper-tier particles are observable, i.e., can affect lower-tier senses, so it would be possible to correctly induct a simplest explanation for them. And this inner brain would think, "I can imagine a Zombie Universe in which I am missing, but all the upper-tier particles go on interacting with each other as before." If we imagine that the upper-tier brain is just a robotic sort of agent, or a kitten, then the inner brain might justifiably imagine that the Zombie Universe would contain nobody to listen - no lower-tier brains to watch and be aware of events.

We could write that computer program, given significantly more knowledge and vastly more computing power and zero ethics.

But this inner brain composed of lower-tier shadow particles cannot write upper-tier philosophy papers about the Zombie universe. If the inner brain thinks, "I am aware of my own awareness", the upper-tier lips cannot move and say aloud, "I am aware of my own awareness" a few seconds later. That would require causal links from lower particles to upper particles.

If we try to suppose that the lower tier isn't a complicated brain with an independent reasoning process that can imagine its own hypotheses, but just some shadowy pure experiences that don't affect anything in the upper tier, then clearly the upper-tier brain must be thinking meaningless gibberish when the upper-tier lips say, "I have a lower tier of shadowy pure experiences which did not affect in any way how I said these words." The deliberating upper brain that invents hypotheses for sense data, can only use sense data that affects the upper neurons carrying out the search for hypotheses that can be reported by the lips. Any shadowy pure experiences couldn't be inputs into the hypothesis-inventing cognitive process. So the upper brain would be talking nonsense.

There's a version of this theory in which the part of our brain that we can report out loud, which invents hypotheses to explain sense data out loud and manifests physically visible papers about Zombie universes, has for no explained reason invented a meaningless theory of shadow experiences which is experienced by the shadow part as a meaningful and correct theory.  So that if we look at the "merely physical" slice of our universe, philosophy papers about consciousness are meaningless and the physical part of the philosopher is saying things their physical brain couldn't possibly know even if they were true.  And yet our inner experience of those philosophy papers is meaningful and true. In a way that couldn't possibly have caused me to physically write the previous sentence, mind you. And yet your experience of that sentence is also true even though, in the upper tier of the universe where that sentence was actually written, it is not only false but meaningless.

I'm honestly not sure what to say when a conversation gets to that point. Mostly you just want to yell, "Oh, for the love of Belldandy, will you just give up already?" or something about the importance of saying oops.

(Oh, plus the unexplained correlation violates the Markov condition for causal models.)

Maybe my reply would be something along the lines of, "Okay... look... I've given my account of a single-tier universe in which agents can invent meaningful explanations for sense data, and when they build accurate maps of reality there's a known reason for the correspondence... if you want to claim that a different kind of meaningfulness can hold within a different kind of agent divided into upper and lower tiers, it's up to you to explain what parts of the agent are doing which kinds of hypothesizing and how those hypotheses end up being meaningful and what causally explains their miraculous accuracy so that this all makes sense."

But frankly, I think people would be wiser to just give up trying to write sensible philosophy papers about lower causal tiers of the universe that don't affect the philosophy papers in any way.


Meditation: If we can only meaningfully talk about parts of the universe that can be pinned down inside the causal graph, where do we find the fact that 2 + 2 = 4? Or did I just make a meaningless noise, there? Or if you claim that "2 + 2 = 4" isn't meaningful or true, then what alternate property does the sentence "2 + 2 = 4" have which makes it so much more useful than the sentence "2 + 2 = 3"?


Mainstream status.


 [1] Well, it seems imaginable so long as you toss most of quantum physics out the window and put us back in a classical universe. For particles to not be affected by us, they'd need their own configuration space such that "which configurations are identical" was determined by looking only at those particles, and not looking at any lower-tier particles entangled with them. If you don't want to toss QM out the window, it's actually pretty hard to imagine what an upper-tier particle would look like.

 [2] This diagram treats the laws of physics as being just another node, which is a convenient shorthand, but probably not a good way to draw the graph. The laws of physics really correspond to the causal arrows F_i, not the causal nodes X_i. If you had the laws themselves - the function from past to future - be an X_i of variable state, then you'd need meta-physics to describe the F_physics arrows for how the physics-stuff X_physics could affect us, followed promptly by a need for meta-meta-physics et cetera. If the laws of physics were a kind of causal stuff, they'd be an upper tier of causality - we can't appear to affect the laws of physics, but if you call them causes, they can affect us. In Matrix terms, this would correspond to our universe running on a computer that stored the laws of physics in one area of RAM and the state of the universe in another area of RAM; the first area would be an upper causal tier and the second area would be a lower causal tier. But the infinite regress from treating the laws of determination as causal stuff makes me suspicious that it might be an error to treat the laws of physics as "stuff that makes stuff happen and happens because of other stuff". When we trust that the ship doesn't disappear when it goes over the horizon, we may not be navigating to a physics-node in the graph, so much as we're navigating to a single F_physics that appears in many different places inside the graph, and whose previously unknown function we have inferred. But this is an unimportant technical quibble on Tuesdays, Thursdays, Saturdays, and Sundays. It is only an incredibly deep question about the nature of reality on Mondays, Wednesdays, and Fridays, i.e., less than half the time.

Part of the sequence Highly Advanced Epistemology 101 for Beginners

Next post: "Proofs, Implications, and Models"

Previous post: "Stuff That Makes Stuff Happen"

247 comments

Comments sorted by top scores.

comment by CronoDAS · 2012-10-22T00:23:50.082Z · LW(p) · GW(p)

Epiphenomenal theories of consciousness are kind of silly, but here's another situation I can wonder about... some cellular automata rules, including the Turing-complete Conway's Game of Life, can have different "pasts" that can lead to the same present. From the point of view of a being living in such a universe (one in which information can be destroyed), is there a fact of the matter as to which "past" actually happened?

Replies from: Pentashagon, Eliezer_Yudkowsky, abramdemski, mfb, drethelin, hwc
comment by Pentashagon · 2012-10-22T23:16:56.435Z · LW(p) · GW(p)

I had always thought that our physical universe had this property as well, i.e. the Everett multiverse branches into the past as well as into the future.

Replies from: Eliezer_Yudkowsky, Viliam_Bur
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-10-25T08:01:33.979Z · LW(p) · GW(p)

If you take a single branch and run it backward, you'll find that it diverges into a multiverse of its own. If you take all the branches and run them backward, their branches will cohere instead of decohering, cancel out in most places, and miraculously produce only the larger, more coherent blobs of amplitude they started from. Sort of like watching an egg unscramble itself.

Replies from: SilasBarta
comment by SilasBarta · 2012-10-28T00:06:52.117Z · LW(p) · GW(p)

If you take all the branches and run them backward, their branches will cohere instead of decohering, cancel out in most places, and miraculously produce only the larger, more coherent blobs of amplitude they started from.

And the beings in them will only have memories of further-cohered (further "pastward") events, just as if you didn't run anything backwards.

comment by Viliam_Bur · 2012-10-25T07:40:09.086Z · LW(p) · GW(p)

And at the beginning of the universe we have a set of states which just point time-backwards at each other, which is why we cannot meaningfully go any further back in time.

Something like:
A1 goes with probability 1% to B1, 1% to C1, and 98% to A2.
B1 goes with probability 1% to A1, 1% to C1, and 98% to B2.
C1 goes with probability 1% to A1, 1% to B1, and 98% to C2.

So if you ask about the past of A2, you get A1, which is the part that makes intuitive sense for us. But trying to go deeper in the past just gives us that the past of A1 is B1 or C1, and the past of B1 is A1 or C1, etc. Except that the change does not clearly happen in one moment (A2 has a rather well-defined past, A1 does not), but more gradually.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-10-25T08:00:07.303Z · LW(p) · GW(p)

As I understand it, this is not how standard physics models the beginning of time.

Replies from: army1987, Viliam_Bur
comment by A1987dM (army1987) · 2012-10-25T18:26:35.946Z · LW(p) · GW(p)

I don't think anyone takes seriously the way standard physics models the beginning of time (temperature and density of the universe approaching infinity as its age approaches zero), anyway, as it's most likely incorrect due to quantum gravity effects.

Replies from: wedrifid
comment by wedrifid · 2012-10-25T22:21:14.436Z · LW(p) · GW(p)

I don't think anyone takes seriously the way standard physics models the beginning of time (temperature and density of the universe approaching infinity

This is a correct usage of terminology but the irony still made me smile.

Replies from: army1987
comment by A1987dM (army1987) · 2012-10-26T00:35:45.235Z · LW(p) · GW(p)

What?

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2013-06-09T23:10:18.921Z · LW(p) · GW(p)

I think wedrifid is pointing to the irony in saying that the 'standard' model is (on some issue) standardly rejected.

comment by Viliam_Bur · 2012-10-25T09:24:06.184Z · LW(p) · GW(p)

Oh. I tried to find something, but the only thing that partially pattern-matches it was the Hartle–Hawking state. If we mix it with the "universe as a Markov chain over particle configurations" model, it could lead to something like this. Or could not.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-10-22T00:55:49.969Z · LW(p) · GW(p)

Interesting question! I'd say that you could refer to the possibilities as possibilities, e.g. in a debate over whether a particular past would in fact have led to the present, but to speak of the 'actual past' might make no sense because you couldn't get there from there... no, actually, I take that back, you might be able to get there via simplicity. I.e. if there's only one past that would have evolved from a simply-tiled start state for the automaton.

Replies from: ialdabaoth, CCC, Pentashagon
comment by ialdabaoth · 2012-10-22T23:08:41.608Z · LW(p) · GW(p)

But does it really matter? If both states are possible, why not just say "my past contains ambiguity?"

With quantum mechanics, even though the "future" itself (as a unified wavefunction) evolves forward as a whole, the bit-that-makes-up-this-pseudofactor-of-me has multiple possible outcomes. We live with future ambiguity just fine, and quantum mechanics forces us to say "both experienced futures must be dealt with probabilistically". Even though the mechanism is different, what's wrong with treating the "past" as containing the same level of branching as the future?

EDIT: From a purely global, causal perspective, I understand the desire to be able to say, "both X and Y can directly cause Z, but in point of fact, this time it was Y." But you're inside, so you don't get to operate as a thing that can distinguish between X and Y, and this isn't necessarily an "orbital teapot" level of implausibility. If configuration Y is 10^4 more likely as a 'starting' configuration than configuration X according to your understanding of how starting configurations are chosen, then sure - go ahead and assert that it was (or may-as-well-have-been) configuration Y that was your "actual" past - but if the configuration probabilities are more like 70%/30%, or if your confidence that you understand how starting configurations are chosen is low enough, then it may be better to just swallow the ambiguity.

EDIT2: Coming from a completely different angle, why assert that one or the other "happened", rather than looking at it as a kind of path-integral? It's a cellular automaton, instead of a quantum wave-function, which means that you're summing discrete paths instead of integrating infinitesimals, but it seems (at first glance) that the reasoning is equally applicable.

Replies from: Viliam_Bur
comment by Viliam_Bur · 2012-10-25T07:59:16.185Z · LW(p) · GW(p)

If both states are possible, why not just say "my past contains ambiguity?"

Ambiguity it is, but we usually want to know the probabilities. If I tell you that whether or not you win a lottery tomorrow is "ambiguous", you would not be satisfied with such an answer, and you would ask how likely you are to win. And this question somehow makes sense even if the lottery is decided by a quantum event, so you know that each future happens in some Everett branch.

Similarly, in addition to knowing that the past is ambiguous, we should ask how likely the individual pasts are. In our universe you would want to know how likely the pasts P1 and P2 are to become NOW. Conway's Game of Life does not branch time-forward, so if you have two valid pasts, their probabilities of becoming NOW are 100% each.

But that is only a part of the equation. The other part are the prior probabilities of P1 and P2. Even if both P1 and P2 deterministically evolve to NOW, their prior probabilities influence how likely did NOW really evolve from each of them.

I am not sure what would be the equivalent of Solomonoff induction for the Conway's Game of Life. Starting with a finite number of "on" cells, where each additional "on" cell decreases the prior probability of the configuration? Starting with an infinite plane where each cell has a 50% probability to be "on"? Or an infinite plane with each cell having a p probability of being "on", where p has the property that after one step in such plane, the average ratio of "on" cells remain the same (the p being kind-of-eigenvalue of the rules)?

But the general idea is that if P1 is somehow "generally more likely to happen" than P2, we should consider P1 to be more likely the past of NOW than P2, even if both P1 and P2 deterministically evolve to NOW.
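
A minimal sketch of that bookkeeping, with made-up prior numbers: when both candidate pasts lead to NOW deterministically, the posterior over pasts is just the renormalized priors.

```python
prior = {"P1": 0.7, "P2": 0.3}          # assumed prior probabilities of the two pasts
p_now_given = {"P1": 1.0, "P2": 1.0}    # deterministic: each evolves to NOW for sure

unnorm = {p: prior[p] * p_now_given[p] for p in prior}
z = sum(unnorm.values())
print({p: v / z for p, v in unnorm.items()})   # {'P1': 0.7, 'P2': 0.3}
```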

comment by CCC · 2012-10-25T08:35:49.315Z · LW(p) · GW(p)

In the Game of Life, a single live cell with no neighbours will become a dead cell in the next step. Therefore, any possible present state that has at least one past state has an infinite number of one-step-back states (which differ from the one state merely in having one or more neighbourless cells at random locations, far enough from anything else to have no effect).

Some of these one-step-back states may end up having evolved from simpler starting tilesets than the one with no vanishing cells.
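
A quick sketch of that construction, using a standard set-based Life step written here only for illustration: the same present arises from two different pasts, one of which contains an extra neighbourless cell.

```python
from collections import Counter

def step(live):
    """One Game of Life step on a set of live (x, y) cells."""
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

block = {(0, 0), (0, 1), (1, 0), (1, 1)}   # a "block" still life
lonely = block | {(10, 10)}                # same block plus one neighbourless cell

print(step(block) == step(lonely))         # True: two distinct pasts, one present
```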

comment by Pentashagon · 2012-10-22T23:14:36.155Z · LW(p) · GW(p)

no, actually, I take that back, you might be able to get there via simplicity. I.e. if there's only one past that would have evolved from a simply-tiled start state for the automaton.

The simplest start state might actually be a program that simulates the evolution of every possible starting state in parallel. If time and space are unbounded and an entity is more complex than the shortest such program then it is more likely that the entity is the result of the program and not the result of evolving from another random state.

comment by abramdemski · 2012-11-16T05:21:05.487Z · LW(p) · GW(p)

I am unable to see the appeal of a view in which there is no fact of the matter. It seems to me that there is a fact of the matter concerning the past, even if it is impossible for us to know. This is not similar to the case where sneezing alters two shadow variables, and it is impossible for us to meaningfully refer to variable 1 as opposed to variable 2; the past has a structure, so assertions will typically have definite referents.

comment by mfb · 2012-10-25T15:10:45.082Z · LW(p) · GW(p)

The Standard Model of particle physics with MWI is time-symmetric (to be precise: CPT symmetric) and conserves information. If you define the precise state at one point in time, you can calculate the unique past which led to that state and the unique future which will evolve from that state. Note that for general states, "past" and "future" are arbitrary definitions.

Replies from: CronoDAS
comment by CronoDAS · 2012-10-25T16:44:31.103Z · LW(p) · GW(p)

(Which is why I specified a different set of laws of physics.)

comment by drethelin · 2012-10-22T21:20:32.342Z · LW(p) · GW(p)

This is actually one of the reasons I have to doubt Cryonics. You can talk about nano-tech being able to "reverse" the damage, but it's possible (and I think likely), that it's very hard to go from damaged states to the specific non-damaged state that actually constitutes your consciousness/memory.

Replies from: ialdabaoth, DaFranker
comment by ialdabaoth · 2012-10-22T23:13:28.394Z · LW(p) · GW(p)

Assuming that "you" are a point in consciousness phase-space, and not a "smear". If "you-ness" is a locus of similar-but-slightly-different potential states, then "mostly right" is going to be good enough.

And, given that every morning when you wake up, you're different-but-still-you, I'd say that there's strong evidence that "you-ness" is a locus of similar-but-slightly-different potential states, rather than a singular point.

This means, incidentally, that it may be possible to resurrect people without a physical copy of their brains at all, if enough people remember them well enough when the technology becomes available.

Of course, since it's a smear, the question becomes "where do you want to draw the line between Bob and not-Bob?" - since whatever you create will believe it's Bob, and will act the way everyone alive remembers that Bob acted, and the "original" isn't around to argue (assuming you believe in concepts like "original" to begin with, but if you do, you have some weirder paradoxes to deal with).

comment by DaFranker · 2012-10-22T21:36:00.466Z · LW(p) · GW(p)

Which is why it's better for there to be more people signed up, but not actually being frozen yet. The more money they get, and the later you get frozen, the better the odds. If immortality is something you want, this still seems like the best gamble.

comment by hwc · 2012-10-22T21:06:24.905Z · LW(p) · GW(p)

Or, a Boltzmann brain that flickered into existence with memories of a past that never happened.

Replies from: ialdabaoth
comment by ialdabaoth · 2012-10-23T00:54:18.026Z · LW(p) · GW(p)

In that particular case, "never happened" has some weird ontological baggage. If a simulated consciousness is still conscious, then isn't its simulated past still a past?

Perhaps "didn't happen" in the sense that its future reality will not conform to its memory-informed expectations, but it seems like, if those memories form a coherent 'past', then in a simulationist sense that past did happen, even if it wasn't simulated with perfect fidelity.

comment by Armok_GoB · 2012-10-21T16:25:36.454Z · LW(p) · GW(p)

Just for the Least Convenient World, what if the zombies build a supercomputer and simulate random universes, and find that in 98% of simulated universes life forms like theirs do have shadow brains, and that the programs for the remaining 2% are usually significantly longer?

Replies from: Dweomite, Liron, afeller08
comment by Dweomite · 2023-06-10T21:06:16.702Z · LW(p) · GW(p)

When you "simulate random universes," what distribution are you randomizing over?

Seems like the simulations only help if you somehow already know the true probability distribution from which the actual universe was selected.

comment by Liron · 2012-10-23T04:47:23.414Z · LW(p) · GW(p)

How can the version without shadow brains be significantly longer? Even in the worst possible world, it seems like the 2% of non-shadow-brain programs could be encoded by copying their corresponding shadow-brain programs and adding a few lines telling the computer how to garbage-collect shadows using a straightforward pruning algorithm on the causal graph.

Replies from: Armok_GoB
comment by Armok_GoB · 2012-10-23T16:45:10.708Z · LW(p) · GW(p)

By the programs being short enough in the first place that those few lines still double the length? By the universe-like part not being straightforwardly encoded, so that to distinguish anything about it you first need a long AI-like interpreter just to get there?

comment by afeller08 · 2012-10-25T01:09:40.015Z · LW(p) · GW(p)

That would strongly indicate that something caused the zombies to write a program for generating simulations that was likely to create simulated shadow brains in most of the simulations. (The compiler's built in prover for things like type checking was inefficient and left behind a lot of baggage that produced second tier shadow brains in all but 2% of simulations). It might cause the zombies to conclude that they probably had shadow brains and start talking about the possibility of shadow brains, but it should be equally likely to do that whether the shadow brains were real or not. (Which means any zombie with a sound epistemology would not give additional credence to the existence of shadow brains after the simulation caused other zombies to start talking about shadow brains than it would if the source of the discussion of shadow brains had come from a random number generator producing a very large number, and that large number being interpreted as a string in some normal encoding for the zombies producing a paper that discussed shadow brains. Shadow brains in that world should be an idea analogous to Russell's teapot, astrology, or the invisible pink unicorn in our world.)

Now, if there were some outside universe capable of looking at all of the universes and seeing some universes with shadow brains and some without, and if in the universes with shadow brains zombies were significantly more likely to produce simulations that created shadow brains than in the universes without -- if zombies with shadow brains were much more likely to create simulations that predicted shadow brains similar to their actual shadow brains -- then we would be back to seeing exactly what we see when philosophers talk about shadow brains directly: namely, the shadow brains are causing the zombies to imagine shadow brains, which means that the shadow brains aren't really shadow brains, because they are affecting the world (with probability 1).

Either the result of the simulations points to gross inefficiency somewhere (their simulations predicted something that their simulations shouldn't have been able to predict) or to the shadow brains not really being shadow brains, because they are causally impacting the world. (This is slightly more plausible than philosophers postulating shadow brains correctly for no reason, only because we don't necessarily know that there is anything driving the zombies to produce simulations efficiently; whereas we know in our world that we can assume that brains typically produce non-gibberish, because enormous selective pressures have caused brains to create non-gibberish.)

Replies from: Armok_GoB
comment by Armok_GoB · 2012-10-25T02:20:57.623Z · LW(p) · GW(p)

I was talking about the logical counter-factual, where it genuinely is true and knowably so through rationality.

It might be easier to think about it like this: there is a large number of civilizations in T4, each of which can observe that almost all the others have shadow brains, but none of which can see if they have them themselves.

comment by Shmi (shminux) · 2012-10-21T17:08:22.849Z · LW(p) · GW(p)

Is it coherent to imagine a universe in which a real entity can be an effect but not a cause?

Your favorite example of event horizons, cosmological or otherwise, is like that. GR suggests that there can be a ring singularity inside an eternal spinning black hole (but not one spun up from rest), near/around which you can go forever without being crushed. (it also suggests that there could be closed timelike curves around it, but I'll ignore this for now.) So maybe there are particles/objects/entities living there.

Stuff thrown into such a black hole can certainly affect the hypothetical entities living inside. Like a meteor shower from the outside. But the outside is not affected by anything happening inside, the horizon prevents it.

Replies from: AnthonyC, MrMind
comment by AnthonyC · 2012-10-22T01:23:22.153Z · LW(p) · GW(p)

Fair, but quantum mechanics gives us Hawking radiation, which may or may not provide information (in principle) about what went into the black hole.

Also, there are causal arrows from the black hole to everything it pulls on, and those are ultimately the sum of causal arrows from each particle in the black hole even if from the outside we can't discern the individual particles.

comment by MrMind · 2012-10-22T14:54:30.807Z · LW(p) · GW(p)

Stuff thrown into such a black hole can certainly affect the hypothetical entities living inside. Like a meteor shower from the outside. But the outside is not affected by anything happening inside, the horizon prevents it.

That is not entirely true: stuff thrown into the black hole increases the horizon area and possibly modifies its geometry, and in return the horizon affects the spatial infinity (the area around the horizon). The debate is about how much information the horizon deletes in the process. The same goes for the cosmological horizon, which is effectively just another kind of singularity.

Replies from: shminux
comment by Shmi (shminux) · 2012-10-22T14:57:07.466Z · LW(p) · GW(p)

stuff thrown into the black hole increases the horizon area and possibly modifies its geometry, and in return the horizon affects the spatial infinity (the area around the horizon).

That's outside affecting outside, not inside affecting outside.

The same is for the cosmological horizon, which is effectively just another kind of singularity.

Horizon is not a singularity.

Replies from: MrMind
comment by MrMind · 2012-10-22T15:52:56.938Z · LW(p) · GW(p)

That's outside affecting outside, not inside affecting outside.

Hmm... let's taboo "outside" and "inside". The properties of stuff within the horizon affect the properties of the horizon, which in turn affect the properties of space-matter at spatial infinity. Is this formulation more acceptable?

Horizon is not a singularity.

Right, I'll rephrase: the same goes for the cosmological horizon, which effectively 'surrounds' just another kind of singularity.

Replies from: shminux
comment by Shmi (shminux) · 2012-10-22T16:22:24.311Z · LW(p) · GW(p)

The properties of stuff within the horizon affect the properties of the horizon

Wrong. There could be tons of different things going on inside, absolutely indistinguishable from outside, which only sees mass, electric charge and angular momentum. There is no causal connection from inside to outside whatsoever, barring FTL communication.

Right, I'll rephrase: the same goes for the cosmological horizon, which effectively 'surrounds' just another kind of singularity.

Wrong again. There is no singularity of any kind behind the cosmological horizon (which is not a closed surface to begin with, so it cannot "surround" anything). Well, there might be black holes and stuff, or there might not be, but there is certainly not a requirement of anything singular being there. Consider googling the definition of singularity in general relativity.

Replies from: DaFranker, Alejandro1, MrMind
comment by DaFranker · 2012-10-22T17:03:55.039Z · LW(p) · GW(p)

Wrong. There could be tons of different things going on inside, absolutely indistinguishable from outside, which only sees mass, electric charge and angular momentum. There is no causal connection from inside to outside whatsoever, barring FTL communication.

Unless the "inside" was spontaneously materialized into existence while simultaneously a different chunk of the singularity's mass blinked out of existence in manners which defy nearly all the physics I know, then there still remains a causal connection from the "disappearance" of this stuff that's "inside" from the world "outside" at some point in outside time frames, AFAICT. This disappearance of specific pieces of matter and energy seems to more than qualify as a causal effect, when compared to counterfactual futures where they do not disappear.

Also, the causal connection [Inside -> Mass -> Outside] pretty much looks like a causal connection from inside to outside to me. There's this nasty step in the middle that blurs all the information such that under most conceivable circumstances there's no way to tell which of all possible insides is the "true" one, but combined with the above about matter disappearance can still let you concentrate your probability mass, compared to meaningless epiphenomena that cover the entire infinite hypothesis space (minus one single-dimensional line representing its interaction with anything that interacts with our reality in any way) with equal probability because there's no way it could even in principle affect us even in CTCs, FTL, timeless or n-dimensional spaces, etc.

(Note: I'm not an expert on mind-bending hypothetical edge cases of theoretical physics, so I'm partially testing my own understanding of the subject here.)

Replies from: shminux
comment by Shmi (shminux) · 2012-10-22T17:57:03.921Z · LW(p) · GW(p)

I'm partially testing my own understanding of the subject here.

Most of what you said is either wrong or meaningless, so I don't know where to begin unraveling it, sorry. Feel free to ask simple questions of limited scope if you want to learn more about black holes, horizons, singularities and related matters. The subject is quite non-trivial and often counter-intuitive.

Replies from: DaFranker
comment by DaFranker · 2012-10-22T19:56:54.876Z · LW(p) · GW(p)

Hmm, alright.

In more vague, amateur terms, isn't the whole horizon thing always the same case, i.e. it's causally linked to the rest of the universe by observations in the past and inferences using presumed laws of physics, even if the actual state of things beyond the horizon (or inside it or whatever) doesn't change what we can observe?

Replies from: shminux
comment by Shmi (shminux) · 2012-10-22T20:19:22.193Z · LW(p) · GW(p)

The event horizon in an asymptotically flat spacetime (which is not quite the universe we live in, but a decent first step) is defined as the causal past of the infinite causal future. This definition guarantees that we see no effects whatsoever from the part of the universe that is behind the event horizon. The problem with this definition is that we have to wait forever to draw the horizon accurately. Thus there are several alternative horizons which are more instrumentally useful for theorem proving and/or numerical simulations, but are not in general identical to the event horizon. The cosmological event horizon is a totally different beast (it is similar to the Rindler horizon, used to derive the Unruh effect), though it does share a number of properties with the black hole event horizon. There are further exciting complications once you get deeper into the subject.

comment by Alejandro1 · 2012-10-22T16:35:54.592Z · LW(p) · GW(p)

Wrong. There could be tons of different things going on inside, absolutely indistinguishable from outside, which only sees mass, electric charge and angular momentum.

Nitpick: this is only true for a stationary black hole. If you throw something sufficiently big in, you would expect the shape of the horizon to change and bulge a bit, until it settles down into a stationary state for a larger black hole. You are of course correct that this does not allow anything inside to send a signal to the outside.

Replies from: shminux
comment by Shmi (shminux) · 2012-10-22T16:41:07.551Z · LW(p) · GW(p)

Nitpick: this is only true for a stationary black hole.

Right, I didn't want to go into these details, MrMind seems confused enough as it is. I'd have to explain that the horizon shape is only determined by what falls in, and eventually talk about apparent and dynamical horizons and marginally outer trapped surfaces...

comment by MrMind · 2012-10-24T14:33:07.592Z · LW(p) · GW(p)

Wrong. There could be tons of different things going on inside, absolutely indistinguishable from outside, which only sees mass, electric charge and angular momentum.

Also entropy. Anyway, those are determined by the mass, electrical charge and angular momentum of the matter that fell inside. We may not want to call it a causal connection, but it's certainly a case of properties within determining properties outside.

There is no causal connection from inside to outside whatsoever, barring FTL communication.

There is no direct causal connection, meaning a worldline from the inside to the outside of the black hole. But even if the horizon screens almost all of the infalling matter's properties, it doesn't screen everything (and probably, though this is a matter of quantum gravity, screens nothing at all).

Wrong again. There is no singularity of any kind behind the cosmological horizon (which is not a closed surface to begin with, so it cannot "surround" anything). Well, there might be black holes and stuff, or there might not be, but there is certainly not a requirement of anything singular being there. Consider googling the definition of singularity in general relativity.

I'll admit to not having much knowledge about this specific theme, and I'll educate myself more properly, but in the case of my earlier sentence I used "singularity" as a mathematical term, referring to a region of spacetime in which the GR equations acquire a singular value, so not specifically to a gravitational singularity like a black hole or a domain wall. In the case of most commonplace cosmological horizons, this region is simply space-like infinity.

comment by drnickbone · 2012-10-21T10:20:45.815Z · LW(p) · GW(p)

Some thoughts about "epiphenomena" in general, though not related to consciousness.

Suppose there are only finitely many events in the entire history of the universe (or multiverse), so that the universe can be represented by a finite causal graph. If it is an acyclic graph (no causal cycles), then there must be some nodes which are effects but not causes, that is, they are epiphenomena. But then why not posit a smaller graph with the epiphenomenal nodes removed, since they don't do anything? And then that reduced graph is also finite, and also has epiphenomenal nodes.... so why not remove those?

So, is the conclusion that the best model of the universe is a strictly infinite graph, with no epiphenomenal nodes that can be removed e.g. no future big crunches or other singularities? This seems like a dubious piece of armchair cosmology.

Or are there cases where the larger finite graph (with the epiphenomenal nodes) is strictly simpler as a theory than the reduced graph (with the epiphenomena removed), so that Occam's razor tells us to believe in the larger graph? But then Occam's razor is justifying a belief in epiphenomena, which sounds rather odd when put like that!

Replies from: Eliezer_Yudkowsky, CCC, army1987
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-10-21T21:18:31.315Z · LW(p) · GW(p)

The last nodes are never observed by anyone, but they descend from the same physics, the same F(physics), that has previously been pinned down, or so I assume. You can thus meaningfully talk about them for the same reason you can meaningfully talk about a spaceship going over the cosmological horizon. What we're trying to avoid is SNEEZE_VARs or lower qualia where there's no way that the hypothesis-making agent could ever have observed, inducted, and pinned down the causal mechanism - where there's no way a correspondence between map and territory could possibly be maintained.

comment by CCC · 2012-10-21T11:26:02.900Z · LW(p) · GW(p)

Following this reasoning, if there is a finite causal state machine, then your pruning operation would eventually remove me, you, the human race, the planet Earth.

Now, from inside the universe, I cannot tell whether your hypothesis of a finite state graph universe is true or not - but I do have a certain self-interest in not being removed from existence. I find, therefore, that I am scrambling for justifications for why the finite-state-model universe nodes containing myself are somehow special, that they should not be removed (to be fair, I extend the same justifications to all sentient life).

comment by Eugine_Nier · 2012-10-21T03:24:47.947Z · LW(p) · GW(p)

That said, I can certainly write a computer program in which there's a tier of objects affecting each other, and a second tier - a lower tier - of epiphenomenal objects which are affected by them, but don't affect them.

I would like to point out that any space-like surface (technically 3-fold) divides our universe into two such tiers.

Replies from: Eliezer_Yudkowsky, CCC
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-10-21T21:27:23.907Z · LW(p) · GW(p)

Okay, I can see that I need to spell out in more detail one of the ideas here - namely that you're trying to generalize over a repeating type of causal link and that reference is pinned down by such generalization. The Sun repeatedly sends out light in individual Sun-events, electrons repeatedly go on traveling through space instead of vanishing; in a universe like ours, rather than the F(i) being whole new transition tables randomly generated each time, you see the same F(physics) over and over. This is what you can pin down and refer to. Any causal graph is acyclic and can be divided as you say; the surprising thing is that there are no F-types, no causal-link-types, which (over repeating time) descend from one kind of variable to another, without (over time) there being arrows also going back from that kind to the other. Yes, we're generalizing and inducting over time, otherwise it would make no sense to speak of thingies that "affect each other". No two individual events ever affect each other!

Replies from: army1987, Eugine_Nier
comment by A1987dM (army1987) · 2012-10-21T22:27:46.024Z · LW(p) · GW(p)

Maybe you should elaborate on this in a top-level post.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-10-25T04:29:15.631Z · LW(p) · GW(p)

I edited the main post to put it in.

Replies from: William_Quixote
comment by William_Quixote · 2012-10-29T21:55:31.529Z · LW(p) · GW(p)

I will probably reskim the post, but in general it’s not clear to me that editing content into a preexisting post is better than incorporating it into the next post where it would be appropriate. The former provides the content to all people yet to read it and all people who will reread it, while the latter provides the content to all people who have yet to read it. So you are trading the time value of getting updated content to the people who will reread this post faster at the expense of not getting updated content to those who will read the next post but not reread the present post.

I don’t have readership and rereadership stats, but this seems like an answerable question.

comment by Eugine_Nier · 2012-10-21T22:38:23.939Z · LW(p) · GW(p)

Okay, I can see that I need to spell out in more detail one of the ideas here - namely that you're trying to generalize over a repeating type of causal link and that reference is pinned down by such generalization.

So in the end, we're back at frequentism.

Also, what about unique events?

Replies from: Cyan, Eliezer_Yudkowsky
comment by Cyan · 2012-10-22T02:47:55.876Z · LW(p) · GW(p)

Somewhat tangentially, I'd like to point out that simply bringing up relative frequencies of different types of events in discussion doesn't make one a crypto-frequentist -- the Bayesian approach doesn't bar relative frequencies from consideration. In contrast, frequentism does deprecate the use of mathematical probability as a model or representation of degrees of belief/plausibility.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-10-22T00:52:47.838Z · LW(p) · GW(p)

Er, no, they're called Dynamic Bayes Nets. And there are no known unique events relative to the fundamental laws of physics; those would be termed "miracles". Physics repeats perfectly - there's no question of frequentism because there's no probabilities - and the higher-level complex events are one-time if you try to measure them precisely; Socrates died only once, etc.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2012-10-23T00:44:37.596Z · LW(p) · GW(p)

And there are no known unique events relative to the fundamental laws of physics

What about some of the things going on at the LHC?

comment by CCC · 2012-10-21T10:58:25.626Z · LW(p) · GW(p)

So... the past can affect the future, but the future cannot affect the past (barring things like tachyons, which may or may not exist in any case). Taking this in context of the article could lead to an interesting discussion on the nature of time; does the past still exist once it is past? Does the future exist yet in any meaningful way?

Replies from: army1987
comment by A1987dM (army1987) · 2012-10-21T13:07:24.824Z · LW(p) · GW(p)

I'd say "No, unless you're using the words still and yet in a weird way."

Replies from: CCC
comment by CCC · 2012-10-21T19:32:24.973Z · LW(p) · GW(p)

Consider for a moment the concept of the lack of simultaneity in special relativity.

Consider, specifically, the train and platform experiment. A train travelling at a significant fraction of the speed of light passes a platform; some time before it does so, a light bulb in the precise centre of the train flashes, once.

The observer T on the train will find that the light reaches the front and back of the train simultaneously; the observer P on the platform finds that the light hits the back of the train before it hits the front of the train.

Consider now the instant in which the train observer T says that the light is currently hitting the front and the back of the train simultaneously. At that precise instant, he glances out of the window and sees that he is right next to the platform observer P. The event "P and T pass each other" occurs at that same instant. Thus, in T's frame, all three events - the light hitting the front of the train, the light hitting the rear of the train, and P and T passing each other - are simultaneous.

Now consider P. In P's reference frame, these events are not simultaneous. They occur in the following order:

  • The light hits the rear of the train
  • T passes P
  • The light hits the front of the train

So. In the instant in which T and P pass each other, does the event "the light hits the rear of the train" exist? It is not in the past or future light cone of the event "T and P pass each other", and thus cannot be directly causally linked to that event (barring FTL travel, which causes all sorts of headaches for relativity in any case).
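
(To make the frame-dependence concrete, here is a small numerical sketch with toy values of my own choosing - the train speed, length, and the placement of T at the centre are arbitrary: Lorentz-transforming the three events from the train frame into the platform frame reproduces exactly the ordering listed above.)

```python
import math

c = 1.0          # work in units where c = 1
v = 0.6          # train speed relative to platform (toy value)
L = 1.0          # train length in its own rest frame (toy value)
gamma = 1.0 / math.sqrt(1.0 - v**2)

def to_platform_frame(t_train, x_train):
    """Lorentz transform an event from the train frame to the platform frame."""
    t = gamma * (t_train + v * x_train / c**2)
    x = gamma * (x_train + v * t_train)
    return t, x

# In the train frame, the flash happens at t=0 at the centre (x=0); the light
# reaches both ends at t = L/(2c), and T (placed at the centre) passes P then.
events_train_frame = {
    "light hits rear":  (L / (2 * c), -L / 2),
    "T passes P":       (L / (2 * c),  0.0),
    "light hits front": (L / (2 * c), +L / 2),
}

for name, (t_tr, x_tr) in events_train_frame.items():
    t_pl, _ = to_platform_frame(t_tr, x_tr)
    print(f"{name}: platform-frame time = {t_pl:.3f}")
# rear < pass < front, even though all three are simultaneous in the train frame
```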

Replies from: army1987
comment by A1987dM (army1987) · 2012-10-21T19:47:03.113Z · LW(p) · GW(p)

In the instant in which T and P pass each other, does the event "the light hits the rear of the train" exist?

The phrase "In the instant in which T and P pass each other" has a different meaning (namely, it refers to a different spacelike hypersurface) depending on what frame of reference the speaker is using. Some of those hypersurfaces include the event "the light hits the rear of the train" and others don't.

Replies from: CCC
comment by CCC · 2012-10-23T07:25:25.304Z · LW(p) · GW(p)

That is true. Nonetheless, you have two observers, T and P, who disagree at a given moment on whether the event "the light hits the rear of the train" is currently happening, or whether it has already happened. (Another observer can be introduced for whom the event has not yet happened).

So. If the present exists - that is, if everything which is on the same spacelike hypersurface as me at this current moment exists - then every possible spacelike hypersurface including me must exist. Which means that, over in the Andromeda galaxy, there must be quite a large time interval that exists all at once (everything that's not in my past/future light cone). Applying the same argument to a hypothetical observer in the Andromeda Galaxy implies that a large swath of time over here must all be in existence as well.

Now, it is possible that there is one particular spacelike hypersurface that can be considered to be the only spacelike hypersurface in existence at any given time; if this were the case, though, then I would expect that there would be some experiment that could demonstrate which spacelike hypersurface it is. That same experiment would disprove special relativity, and require an updating of that theory. On the other hand, if the past and future are in existence in some way, I would expect that there would be some way, as yet undiscovered, to affect them - some way, in short, to travel through time (or at least to send an SMS to the past). Either way, it leads to interesting possibilities for future physics.

Replies from: army1987
comment by A1987dM (army1987) · 2012-10-23T09:22:28.505Z · LW(p) · GW(p)

First of all, natural language sucks at specifying whether a statement is indexical/deictic (its referent depends on who is speaking, and where and when they are speaking, etc.) or not. The compulsory tense marking on verbs is part of the problem (“there is” is usually but not always taken to mean “there is now” as “is” is in the present tense, and that's indexical -- “now” refers to a different time in this comment than in one written three years ago), though not the only one (“it's not raining” is usually taken to mean “it's not raining here”, not “it's not raining anywhere”).

Now, it is possible that there is one particular spacelike hypersurface that can be considered to be the only spacelike hypersurface in existence at any given time

Yes, once you specify (explicitly or implicitly) what you mean by “at any given time” (i.e. what frame of reference you're using). But there's no God-given choice for that. (Well, there's the frame of reference in which the dipole anisotropy of the cosmic microwave background vanishes, but you have to “look out” to know what that is; in a closed system you couldn't tell.) IOW the phrase “at any given time” must also be taken as indexical; in everyday life that doesn't matter much because the hypersurfaces you could plausibly be referring to are only separated by tiny fractions of a second in the regions you'd normally want to talk about.

On the other hand, if the past and future are in existence in some way,

Yes, if “are” is interpreted non-indexically (i.e. not as “are now”).

I would expect that there would be some way, as yet undiscovered, to affect them - some way, in short, to travel through time (or at least to send an SMS to the past).

Why? In special relativity, “X can affect Y” is equivalent to “Y is within or on the future light cone of X”, which is a partial order relation, and that's completely self-consistent. (But in the real world, special relativity only applies locally, and even there we can't be 100% sure it applies exactly and in all conditions.)

Replies from: CCC
comment by CCC · 2012-10-23T18:12:11.010Z · LW(p) · GW(p)

Yes, once you specify (explicitly or implicitly) what you mean by “at any given time” (i.e. what frame of reference you're using).

This is where it all gets complicated. If I'm trying to talk about one instantaneous event maintaining an existence for longer than an instant - well, language just isn't structured right for that. An event can partake of many frames of reference, many of which can include me at different times by my watch (particularly if the event in question takes place in the Andromeda Galaxy). So, if there is one reference frame where an Event occurs at the same time as my watch shows 20:00, and another reference frame shows the same (distant) event happening while my watch says 21:00, then does that Event remain in existence for an entire hour?

That's basically the question I'm asking; while I suspect that the answer is 'no', I also don't see what experiment can be used to prove either a positive or a negative answer to that question (and either way, the same experiment seems likely to also prove something else interesting).

Yes, if “are” is interpreted non-indexically (i.e. not as “are now”).

I meant it as "are now".

I would expect that there would be some way, as yet undiscovered, to affect them - some way, in short, to travel through time (or at least to send an SMS to the past).

Why? In special relativity, “X can affect Y” is equivalent to “Y is within or on the future light cone of X”, which is a partial order relation, and that's completely self-consistent.

Because if it is now in existence, then I imagine that there is now some way to affect it; which in this case would imply time travel (and therefore at least some form of FTL travel)

Replies from: army1987
comment by A1987dM (army1987) · 2012-10-24T13:22:40.953Z · LW(p) · GW(p)

First of all, Thou Shalt Not use several frames of reference at once unless you know what you're doing or you risk being badly confused. (Take a look at the Special Relativity section of the Usenet Physics FAQ, especially the discussion of the Twin Paradox.) Possibly, get familiar with spacetime diagrams (also explained in that FAQ).

According to special relativity, the duration of the set of instants B in your life such that there exists an inertial frame of reference in which B is simultaneous with a fixed event A happening in Andromeda is 2L/c, where L is the distance from you to Andromeda. (Now you do need experiments to tell whether special relativity applies to the real world, but any deviation from it --except due to gravitation-- must be very small or only apply to certain circumstances, or we would have seen it by now.)
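
(As a rough worked number, assuming the usual figure of about 2.5 million light-years for the distance to Andromeda, that interval 2L/c comes out to roughly five million years.)

```python
# Rough worked number; assumes L is about 2.5 million light-years (the usual figure for Andromeda).
L_in_light_years = 2.5e6
duration_in_years = 2 * L_in_light_years   # 2L/c, with c = 1 light-year per year
print(duration_in_years)                   # ~5 million years
```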

I meant it as "are now".

I'd say that the concept of “now” needs a frame of reference to be specified (or implicit from the context) to make sense.

Because if it is now in existence, then I imagine that there is now some way to affect it; which in this case would imply time travel (and therefore at least some form of FTL travel)

I think you are trying to apply to Minkowski spacetime an intuition that only applies to Galilean spacetime (and even then, it's not an intuition that everyone shares; IIRC, for centuries before Einstein came along there were people who found instantaneous action at a distance counterintuitive and a reason to suspect that Newtonian physics is not the whole story).

Replies from: CCC
comment by CCC · 2012-10-25T08:23:36.860Z · LW(p) · GW(p)

First of all, Thou Shalt Not use several frames of reference at once unless you know what you're doing or you risk being badly confused.

I think that this is important; I have come to suspect that I am somewhat confused.

I think you are trying to apply to Minkowski spacetime an intuition that only applies to Galilean spacetime

This is more than likely correct. I would also note that I have been applying, over very long (intergalactic) distances, the assumption that there is no expansion, which is clearly wrong. I suspect that I should probably look more into General Relativity before continuing along this train of thought.

Replies from: army1987
comment by A1987dM (army1987) · 2012-10-25T08:48:49.244Z · LW(p) · GW(p)

I would also note that I have been applying, over very long (intergalactic) distances, the assumption that there is no expansion, which is clearly wrong.

Andromeda is nowhere near so far away that the expansion of the universe is important. (In fact, according to Wikipedia it's being blueshifted, meaning that its gravitational attraction to us is winning over the expansion of space.)

comment by dspeyer · 2012-10-21T01:59:53.971Z · LW(p) · GW(p)

Can anyone explain why epiphenomenalist theories of consciousness are interesting? There have been an awful lot of words on them here, but I can't find a reason to care.

Replies from: CarlShulman, drethelin, AnotherIdiot, Eliezer_Yudkowsky, bryjnar, TAG
comment by CarlShulman · 2012-10-21T20:25:33.117Z · LW(p) · GW(p)

It seems that you get similar questions as a natural outgrowth of simple computational models of thought. E.g. if one performs Solomonoff induction on the stream of camera inputs to a robot, what kind of short programs will dominate the probability distribution over the next input? Not just programs that simulate the physics of our universe: one would also need additional code to "read off" the part of the simulated universe that corresponded to the camera inputs. That additional code looks like epiphenomenal mind-stuff. Using this framework you can pose questions like "if the camera is expected to be rebuilt using different but functionally equivalent materials, will this change the inputs Solomonoff induction predicts?" or "if the camera is about to be duplicated, which copy's inputs will be predicted by Solomonoff induction?"
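
(A toy sketch of this factoring - my own illustration, since actual Solomonoff induction is uncomputable and every name below is invented: the candidate program splits into a physics simulator that evolves the whole world, plus extra "readout" code that extracts one particular camera's inputs from that world. The readout is affected by the simulated physics but never feeds back into it, which is what makes it look like an epiphenomenal bridge law.)

```python
from typing import Callable, List

World = List[int]  # stand-in for a full simulated world state

def step_physics(world: World) -> World:
    """Evolve the whole world one tick; knows nothing about cameras."""
    return [(x + 1) % 256 for x in world]

def make_readout(camera_index: int) -> Callable[[World], int]:
    """Extra code that picks out which part of the world is 'my' camera."""
    return lambda world: world[camera_index]

def predict_inputs(world: World, readout: Callable[[World], int], ticks: int) -> List[int]:
    observations = []
    for _ in range(ticks):
        world = step_physics(world)           # physics affects physics
        observations.append(readout(world))   # readout is affected, affects nothing
    return observations

print(predict_inputs([0, 10, 20], make_readout(1), ticks=3))  # [11, 12, 13]
```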

If we go beyond Solomonoff induction to allow actions, then you get questions that map pretty well to debates about "free will."

Replies from: Matt_Simpson, Giles
comment by Matt_Simpson · 2012-10-28T21:48:43.048Z · LW(p) · GW(p)

one would also need additional code to "read off" the part of the simulated universe that corresponded to the camera inputs. That additional code looks like epiphenomenal mind-stuff.

I don't understand why the additional code looks like epiphenomenal mind-stuff. Care to explain?

Replies from: gwern, CarlShulman
comment by gwern · 2012-10-28T23:12:00.878Z · LW(p) · GW(p)

I take Carl to mean that: the program corresponding to 'universe A simulating universe B and I am in universe B' is strictly more complex than 'I am in universe B' while also predicting all the same observations, and so the 'universe A simulating universe B' part of the program makes no difference in the same way that mental epiphenomena make no difference - they predict you will make the same observations, while being strictly more complex.

Replies from: CarlShulman, SilasBarta, Matt_Simpson
comment by CarlShulman · 2012-10-29T21:18:44.338Z · LW(p) · GW(p)

This seems to be talking about something entirely different.

comment by SilasBarta · 2012-10-28T23:59:15.634Z · LW(p) · GW(p)

the program corresponding to 'universe A simulating universe B and I am in universe B' is strictly more complex than 'I am in universe B' while also predicting all the same observations, and so the 'universe A simulating universe B' part of the program makes no difference in the same way that mental epiphenomena make no difference - they predict you will make the same observations, while being strictly more complex.

True, but, just as a reminder, that's not the position we're in. There are other (plausibly necessary) parts of our world model that could give us the implication "universe A simulates us" "for free", just as "the electron that goes beyond our cosmological horizon keeps existing" is an implication we get "for free" from minimal models of physics.

In this case (per the standard Simulation Argument), the need to resolve the question of "what happens in civilizations that can construct virtual worlds indistinguishable from non-virtual worlds" can force us to posit parts of a (minimal) model that then imply the existence of universe A.

comment by Matt_Simpson · 2012-10-28T23:32:38.612Z · LW(p) · GW(p)

Ah, ok, that makes sense. Thanks!

comment by CarlShulman · 2012-10-29T21:17:30.604Z · LW(p) · GW(p)

The code simulating a physical universe doesn't need to make any reference to which brain or camera in the simulation is being "read off" to provide the sensory input stream. The additional code takes the simulation, which is a complete picture of the world according to the laws of physics as they are seen by the creatures in the simulation, and outputs a sensory stream. This function is directly analogous to what dualist/epiphenomenalist philosopher of mind David Chalmers calls "psychophysical laws."

Replies from: Gust
comment by Gust · 2012-12-26T04:15:24.673Z · LW(p) · GW(p)

I don't know if this insight is originally yours or not, but thank you for it. It's like you just gave me a piece of the puzzle I was missing (even if I still don't know where it fits).

comment by Giles · 2012-10-28T20:59:33.576Z · LW(p) · GW(p)

Oh wow... I had been planning on writing a discussion post on essentially this topic. One quick question - if you have figured out the shortest program that will generate the camera data, is there a non-arbitrary way we can decide which parts of the program correspond to "physics of our universe" and which parts correspond to "reading off camera's data stream within universe"?

comment by drethelin · 2012-10-21T02:20:18.175Z · LW(p) · GW(p)

Pretty much the same reason religion needs to be talked about. If no one had invented it, it wouldn't be useful to dispute notions of god creating us for a divine purpose, but because many people think this indeed happened you have to talk about it. It's especially important for reasonable discussions of AI.

Replies from: dspeyer
comment by dspeyer · 2012-10-21T06:47:27.410Z · LW(p) · GW(p)

Religion and epiphenomenalogy differ in three important ways:

  • Religion is widespread. Almost everyone knows what it is. Most people have at least some religious memes sticking in their heads. A significant fraction of people have dangerous religious memes in their heads so decreasing those qualifies as raising the sanity waterline. Epiphenomenalogy is essentially unknown outside academic philosophy, and now the lesswrong readership.
  • Religion has impact everywhere. People have died because of other people's religious beliefs, and not just from violence. Belief in epiphenomenalogy has almost no impact on the lives of non-believers.
  • Religious thought patterns re-occur. Authority, green/blue, and "it is good to believe" show up over and over again. The sort of thoughts that lead to epiphenomenalogy are quite obscure.
Replies from: hairyfigment, Kaj_Sotala
comment by hairyfigment · 2012-10-21T18:31:10.433Z · LW(p) · GW(p)

The word "epiphenomenalogy" is rare. The actual theory seems like an academic remnant of the default belief that 'You can't just reduce everything to numbers, people are more than that.'

So your last point seems entirely wrong. Zombie World comes from the urge to justify religious dualism or say that it wasn't all wrong (not in essence). And the fact that someone had to take it this far shows how untenable dualism seems in a practical sense, to educated people.

Replies from: TAG, TAG
comment by TAG · 2020-03-03T10:19:31.031Z · LW(p) · GW(p)

The most famous arguments for epiphenomenalism and zombies have nothing to do with religion. And we don't actually have reductive explanations of qualia, as you can tell from the fact that we can't construct qualia -- we can't write code that sees colours or tastes flavours. Construction is reduction in reverse.

comment by TAG · 2020-03-03T10:20:43.074Z · LW(p) · GW(p)
comment by Kaj_Sotala · 2012-10-21T08:47:39.709Z · LW(p) · GW(p)

Epiphenomenalogy is essentially unknown outside academic philosophy, and now the lesswrong readership.

I'd say it's more widespread than that. Some strands of Buddhist thought, for instance, seem to strongly imply it even if they didn't state it outright. And it feels like it'd be the most intuitive way of thinking about consciousness for many of the people who'd think about it at all, even if they weren't familiar with academic philosophy. (I don't think I got it from academic philosophy, though I can't be sure of that.)

comment by AnotherIdiot · 2012-10-21T02:16:45.428Z · LW(p) · GW(p)

Because epiphenomenalist theories are common but incorrect, and the goal of LessWrong is at least partially what its name implies.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-10-21T06:13:02.063Z · LW(p) · GW(p)

It's good practice for seeing how the rules play out; and having a clear mental visualization of how causality works, and what sort of theories are allowed to be meaningful, is actually reasonably important both for anti-nonsense techniques and AI construction. People who will never need any more skill than they currently have in Resist Nonsense or Create Friendly AI can ignore it, I suppose.

comment by bryjnar · 2012-10-21T10:30:50.400Z · LW(p) · GW(p)

Philosophers have a tendency to name pretty much every position that you can hold by accepting/refusing various "key" propositions. Epiphenomenalism tends to be reached by people frantically trying to hold on to their treasured beliefs about the way the mind works. Then they realise they can consistently be epiphenomenalists and they feel okay because it has a name or something.

Basically, it's a consistent position (well, Eliezer seems to think it's meaningless!), and so you want to go to some effort to show that it's actually wrong. Plus it's a good exercise to think about why it's wrong.

Replies from: RichardChappell
comment by RichardChappell · 2012-10-25T23:10:27.577Z · LW(p) · GW(p)

In my experience, most philosophers are actually pretty motivated to avoid the stigma of "epiphenomenalism", and try instead to lay claim to some more obscure-but-naturalist-friendly label for their view (like "non-reductive physicalism", "anomalous monism", etc.)

comment by TAG · 2020-03-03T10:31:22.333Z · LW(p) · GW(p)

People don't like epiphenomenalism per se, they feel they are forced into it by other claims they find compelling. Usually some combination of

  1. Qualia exist in some sense

  2. Qualia can't be explained reductively

  3. The physical world is causally closed.

In other words, 1 and 2 jointly imply that qualia are non-physical, 3 means that physical explanations are sufficient, so non-physical qualia must be causally idle.

The rationalist world doesn't have a clear refutation of the above. Some try to refute 1, the Dennett approach of qualia denial. Others try to refute 2, in ways that fall short of providing a reductive explanation of qualia. Or just get confused between solving the easy problem and the hard problem.

comment by mfb · 2012-10-25T15:12:00.802Z · LW(p) · GW(p)

I think the question "does consciousness affect neurons?" is as meaningful as "does the process of computation in a computer affect bits?".

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-10-26T05:57:35.982Z · LW(p) · GW(p)

In other words, "Yes"?

Replies from: mfb
comment by mfb · 2012-10-26T16:06:21.515Z · LW(p) · GW(p)

Bit modifications are part of the process of computation. I wouldn't say they are "affected by" that (they depend causally on the input which started the process of computation, however). In a similar way, individual humans are not affected by the concept of "mankind", which is just a label for all of them.

comment by RichardChappell · 2012-10-22T22:43:22.925Z · LW(p) · GW(p)

FWIW, my old post 'Zombie Rationality' explores what I think the epiphenomenalist should say about the worry that "the upper-tier brain must be thinking meaningless gibberish when the upper-tier lips [talk about consciousness]"

One point to flag is that from an epiphenomenalist's perspective, mere brains never really mean anything, any more than squiggles of ink do; any meaning we attribute to them is purely derivative from the meaning of appropriately-related thoughts (which, on this view, essentially involve qualia).

Another thing to flag is that epiphenomenalism needn't imply that our thoughts are causally irrelevant, but merely their experiential component. It'd be a mistake to identify oneself with just one's qualia (as Eliezer seems to attribute to the epiphenomenalist). It's true that our qualia don't write philosophy papers about consciousness. But we, embodied conscious persons, do write such papers. Of course, the causal explanation of the squiggles depends only on our physical parts. But the fact that the squiggles are about consciousness (or indeed anything at all) depends crucially upon the epiphenomenal aspects of our minds, in addition.

Replies from: novalis
comment by novalis · 2012-10-25T20:42:15.341Z · LW(p) · GW(p)

Where in the causal diagram does "meaning" go?

Replies from: RichardChappell, ialdabaoth
comment by RichardChappell · 2012-10-25T23:21:37.847Z · LW(p) · GW(p)

Meaning doesn't seem to be a thing in the way that atoms and qualia are, so I'm doubtful that the causal criterion properly applies to it (similarly for normative properties).

(Note that it would seem rather self-defeating to claim that 'meaning' is meaningless.)

Replies from: Benito, novalis
comment by Ben Pace (Benito) · 2012-10-26T19:18:35.466Z · LW(p) · GW(p)

What exactly do you mean by "mean"?

Replies from: RichardChappell
comment by RichardChappell · 2012-10-27T18:17:44.788Z · LW(p) · GW(p)

I couldn't help one who lacked the concept. But assuming that you possess the concept, and just need some help in situating it in relation to your other concepts, perhaps the following might help...

Our thoughts (and, derivatively, our assertions) have subject-matters. They are about things. We might make claims about these things, e.g. claiming that certain properties go together (or not). When I write, "Grass is green", I mean that grass is green. I conjure in my mind's eye a mental image of blades of grass, and their colour, in the image, is green. So, I think to myself, the world is like that.

Could a zombie do all this? They would go "through the motions", so to speak, but they wouldn't actually see any mental image of green grass in their mind's eye, so they could not really intend that their words convey that the world is "like that". Insofar as there are no "lights on inside", it would seem that they don't really intend anything; they do not have minds.

If you can understand the above two paragraphs, then it seems that you have a conception of meaning as a distinctively mental relation (e.g. that holds between thoughts and worldly objects or states of affairs), not reducible to any of the purely physical/functional states that are shared by our zombie twins.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-10-28T05:53:26.147Z · LW(p) · GW(p)

(From "The Simple Truth", a parable about using pebbles in a bucket to keep count of the sheep in a pasture.)

“My pebbles represent the sheep!” Autrey says triumphantly. “Your pebbles don’t have the representativeness property, so they won’t work. They are empty of meaning. Just look at them. There’s no aura of semantic content; they are merely pebbles. You need a bucket with special causal powers.”

“Ah!” Mark says. “Special causal powers, instead of magic.”

“Exactly,” says Autrey. “I’m not superstitious. Postulating magic, in this day and age, would be unacceptable to the international shepherding community. We have found that postulating magic simply doesn’t work as an explanation for shepherding phenomena. So when I see something I don’t understand, and I want to explain it using a model with no internal detail that makes no predictions even in retrospect, I postulate special causal powers. If that doesn’t work, I’ll move on to calling it an emergent phenomenon.”

“What kind of special powers does the bucket have?” asks Mark.

“Hm,” says Autrey. “Maybe this bucket is imbued with an about-ness relation to the pastures. That would explain why it worked – when the bucket is empty, it means the pastures are empty.”

“Where did you find this bucket?” says Mark. “And how did you realize it had an about-ness relation to the pastures?”

“It’s an ordinary bucket,” I say. “I used to climb trees with it… I don’t think this question needs to be difficult.”

“I’m talking to Autrey,” says Mark.

“You have to bind the bucket to the pastures, and the pebbles to the sheep, using a magical ritual – pardon me, an emergent process with special causal powers – that my master discovered,” Autrey explains.

Autrey then attempts to describe the ritual, with Mark nodding along in sage comprehension.

“And this ritual,” says Mark, “it binds the pebbles to the sheep by the magical laws of Sympathy and Contagion, like a voodoo doll.”

Autrey winces and looks around. “Please! Don’t call it Sympathy and Contagion. We shepherds are an anti-superstitious folk. Use the word ‘intentionality’, or something like that.”

“Can I look at a pebble?” says Mark.

“Sure,” I say. I take one of the pebbles out of the bucket, and toss it to Mark. Then I reach to the ground, pick up another pebble, and drop it into the bucket.

Autrey looks at me, puzzled. “Didn’t you just mess it up?”

I shrug. “I don’t think so. We’ll know I messed it up if there’s a dead sheep next morning, or if we search for a few hours and don’t find any sheep.”

“But -” Autrey says.

“I taught you everything you know, but I haven’t taught you everything I know,” I say.

Mark is examining the pebble, staring at it intently. He holds his hand over the pebble and mutters a few words, then shakes his head. “I don’t sense any magical power,” he says. “Pardon me. I don’t sense any intentionality.”

“A pebble only has intentionality if it’s inside a ma- an emergent bucket,” says Autrey. “Otherwise it’s just a mere pebble.”

“Not a problem,” I say. I take a pebble out of the bucket, and toss it away. Then I walk over to where Mark stands, tap his hand holding a pebble, and say: “I declare this hand to be part of the magic bucket!” Then I resume my post at the gates.

Autrey laughs. “Now you’re just being gratuitously evil.”

I nod, for this is indeed the case.

“Is that really going to work, though?” says Autrey.

I nod again, hoping that I’m right. I’ve done this before with two buckets, and in principle, there should be no difference between Mark’s hand and a bucket. Even if Mark’s hand is imbued with the elan vital that distinguishes live matter from dead matter, the trick should work as well as if Mark were a marble statue.

(The moral: In this sequence, I explained how words come to 'mean' things in a lawful, causal, mathematical universe with no mystical subterritory. If you think meaning has a special power and special nature beyond that, then (a) it seems to me that there is nothing left to explain and hence no motivation for the theory, and (b) I should like you to say what this extra nature is, exactly, and how you know about it - your lips moving in this, our causal and lawful universe, the while.)

Replies from: RichardChappell, Psy-Kosh
comment by RichardChappell · 2012-10-28T15:56:30.501Z · LW(p) · GW(p)

It's a nice parable and all, but it doesn't seem particularly responsive to my concerns. I agree that we can use any old external items as tokens to model other things, and that there doesn't have to be anything "special" about the items we make use of in this way, except that we intend to so use them. Such "derivative intentionality" is not particularly difficult to explain (nor is the weak form of "natural intentionality" in which smoke "means" fire, tree rings "signify" age, etc.). The big question is whether you can account for the fully-fledged "original intentionality" of (e.g.) our thoughts and intentions.

In particular, I don't see anything in the above excerpt that addresses intuitive doubts about whether zombies would really have meaningful thoughts in the sense familiar to us from introspection.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-10-29T00:36:23.847Z · LW(p) · GW(p)

“I toss in a pebble whenever a sheep passes,” I point out.

“When a sheep passes, you toss in a pebble?” Mark says. “What does that have to do with anything?”

“It’s an interaction between the sheep and the pebbles,” I reply.

“No, it’s an interaction between the pebbles and you,” Mark says. “The magic doesn’t come from the sheep, it comes from you. Mere sheep are obviously nonmagical. The magic has to come from somewhere, on the way to the bucket.”

I point at a wooden mechanism perched on the gate. “Do you see that flap of cloth hanging down from that wooden contraption? We’re still fiddling with that – it doesn’t work reliably – but when sheep pass through, they disturb the cloth. When the cloth moves aside, a pebble drops out of a reservoir and falls into the bucket. That way, Autrey and I won’t have to toss in the pebbles ourselves.”

Mark furrows his brow. “I don’t quite follow you… is the cloth magical?”

I shrug. “I ordered it online from a company called Natural Selections. The fabric is called Sensory Modality.” I pause, seeing the incredulous expressions of Mark and Autrey. “I admit the names are a bit New Agey. The point is that a passing sheep triggers a chain of cause and effect that ends with a pebble in the bucket."

Replies from: RichardChappell
comment by RichardChappell · 2012-10-29T03:50:59.677Z · LW(p) · GW(p)

And this responds to what I said... how?

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-10-29T03:57:32.228Z · LW(p) · GW(p)

I can build an agent that tracks how many sheep are in the pasture using an internal mental bucket, and keeps looking for sheep until they're all returned. From an outside standpoint, this agent's mental bucket is meaningful because there's a causal process that correlates it to the sheep, and this correlation is made use of to steer the world into futures where all sheep are retrieved. And then the mysterious sensation of about-ness is just what it feels like from the inside to be that agent, with a side order of explicitly modeling both yourself and the world so that you can imagine that your map corresponds to the territory, with a side-side order of your brain making the simplifying assumption that (your map of) the map has a primitive intrinsic correspondence to (your map of) the territory.

In actuality this correspondence is not the primitive and local quality it feels like; it's maintained by the meeting of hypotheses and reality in sense data. A third party or reflecting agent would be able to see the globally maintained correspondence by simultaneously tracing back actual causes of sense data and hypothesized causes of sense data, but this is a chain property involving real lattices of causal links and hypothetical lattices of causal links meeting in sense data, not an intrinsic quality of a single node in the lattice considered in isolation from the senses and the hypotheses linking it to the senses.
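
(A minimal sketch of such an agent, in toy code of my own with invented names: the internal count "refers to" the sheep only via the causal process that keeps it correlated with them, plus the way that correlation is used to decide when to stop searching.)

```python
class ShepherdAgent:
    """Tracks sheep with an internal counter; the counter is 'about' the sheep only
    in the sense that a causal process keeps it correlated with them and the agent
    uses that correlation to steer toward futures where all sheep are retrieved."""

    def __init__(self):
        self.mental_bucket = 0  # internal count of sheep currently out in the pasture

    def observe_sheep_leaving(self):
        self.mental_bucket += 1   # sense data: a sheep passed the gate outward

    def observe_sheep_returning(self):
        self.mental_bucket -= 1   # sense data: a sheep passed the gate inward

    def should_keep_searching(self):
        return self.mental_bucket > 0  # behaviour driven by the maintained correlation

agent = ShepherdAgent()
for _ in range(5):
    agent.observe_sheep_leaving()
agent.observe_sheep_returning()
print(agent.should_keep_searching())  # True: the count still says sheep are missing
```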

So far as I can tell, there's nothing left to explain.

--

“At exactly which point in the process does the pebble become magic?” says Mark.

“It… um…” Now I’m starting to get confused. I shake my head to clear away cobwebs. This all seemed simple enough when I woke up this morning, and the pebble-and-bucket system hasn’t gotten any more complicated since then. “This is a lot easier to understand if you remember that the point of the system is to keep track of sheep.”

Replies from: Benito
comment by Ben Pace (Benito) · 2012-10-29T21:49:22.099Z · LW(p) · GW(p)

I agree with all of this... I would personally ask one question though, as I'm quite confused here... I think (pardon me if I'm putting words in anyone's mouth) that the epiphenomenalist should agree that it's all related causally, and when the decision comes to say that "I've noticed that I've noticed that I'm aware of a chair", or something, it comes from causal relations. But that hasn't located the... "subjective" or "first person" "experience" (whatever any of those words 'mean').

I observe (through photons and my eyes and all the rest) the five sheep going through the gate, even though I miss a sixth, and I believe that the world is how I think it is, and I believe my vision is an intrinsic property of me in the world, mistakenly of course. Actually, when I say I've seen five sheep go through the gate, loads of processes that are below the level the conscious/speaking me is aware of, are working away, and are just making the top level stuff available - the stuff that evolution has decided would be beneficial for me to be able to talk about. That doesn't mean I'm not conscious of the sheep, just that I'm mistaken about what my consciousness is, and what exactly it's telling me. Where does the 'aware' bit come in? The 'feeling'? The 'subjective'?

(My apologies if I've confused a well argued discussion)

comment by Psy-Kosh · 2012-10-28T07:03:49.300Z · LW(p) · GW(p)

How, precisely, does one formalize the concept of "the bucket of pebbles represents the number of sheep, but it is doing so inaccurately" - i.e., that it's a model of the number of sheep rather than about something else, but a bad/inaccurate model?

I've fiddled around a bit with that, and I find myself passing a recursive buck when I try to precisely reduce that one.

The best I can come up with is something like "I have correct models in my head for the bucket, pebbles, sheep, etc, individually except that I also have some causal paths linking them that don't match the links that exist in reality."

Replies from: fubarobfusco
comment by fubarobfusco · 2012-10-28T07:28:47.716Z · LW(p) · GW(p)

See this thread for a discussion. A less buck-passing model is: "This bucket represents the sheep ... plus an error term resulting from this here specific error process."

For instance, if I systematically count two sheep exiting together as one sheep, then the bucket represents the number of sheep minus the number of sheep-pairs erroneously detected as one sheep. It's not enough to say the sheep-detector is buggy; to have an accurate model of what it does (and thus, what its representations mean) you need to know what the bug is.
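
(A tiny sketch of that corrected model, with illustrative numbers of my own: once the specific error process is modelled, the buggy pebble count is an accurate representation of "sheep, minus pairs merged at the gate".)

```python
def pebbles_dropped(exit_groups):
    """Buggy detector: each bunch of sheep leaving together drops only one pebble."""
    return sum(1 for group in exit_groups if group > 0)

def sheep_from_pebbles(pebble_count, pairs_merged):
    """Corrected model: pebbles represent 'sheep minus erroneously merged pairs'."""
    return pebble_count + pairs_merged

exit_groups = [1, 2, 1, 2, 1]            # two pairs left the gate side by side
pebbles = pebbles_dropped(exit_groups)   # 5 pebbles for 7 sheep
print(sheep_from_pebbles(pebbles, pairs_merged=2))  # 7
```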

comment by novalis · 2012-10-26T00:10:12.106Z · LW(p) · GW(p)

I'm trying to figure out what work "meaning" is doing. Eliezer says brains are "thinking" meaningless gibberish. You dispute this by saying,

... mere brains never really mean anything, any more than squiggles of ink do; any meaning we attribute to them is purely derivative from the meaning of appropriately-related thoughts ...

But what are brains thinking, if not thoughts?

And then

But the fact that the squiggles are about consciousness (or indeed anything at all) depends crucially upon the epiphenomenal aspects of our minds, in addition.

This implies that "about"-ness and "meaning" have roughly the same set of properties. But I don't understand why anyone believes anything about "meaning" (in this sense). If it doesn't appear in the causal diagram, how could we tell that we're not living in a totally meaningless universe? Let's play the Monday-Tuesday game: on Monday, our thoughts are meaningful; on Tuesday, they're not. What's different?

Replies from: RichardChappell
comment by RichardChappell · 2012-10-26T00:36:34.595Z · LW(p) · GW(p)

But what are brains thinking, if not thoughts?

Right, according to epiphenomenalists, brains aren't thinking (they may be computing, but syntax is not semantics).

If it doesn't appear in the causal diagram, how could we tell that we're not living in a totally meaningless universe?

Our thoughts are (like qualia) what we are most directly acquainted with. If we didn't have them, there would be no "we" to "tell" anything. We only need causal connections to put us in contact with the world beyond our minds.

Replies from: novalis
comment by novalis · 2012-10-26T00:43:18.384Z · LW(p) · GW(p)

So if we taboo "thinking" and "computing", what is it that brains are not doing?

Replies from: RichardChappell
comment by RichardChappell · 2012-10-26T01:12:42.116Z · LW(p) · GW(p)

You can probably give a functionalist analysis of computation. I doubt we can reductively analyse "thinking" (at least if you taboo away all related mentalistic terms), so this strikes me as a bedrock case (again, like "qualia") where tabooing away the term (and its cognates) simply leaves you unable to talk about the phenomenon in question.

Replies from: novalis, Peterdjones
comment by novalis · 2012-10-26T15:31:47.188Z · LW(p) · GW(p)

It sounds like "thinking" and "qualia" are getting the special privilege of being irreducible, even though there have been plenty of attempts to reduce them, and these attempts have had at least some success. Why can't I pick any concept and declare it a bedrock case? Is my cat fuzzy? Well, you could talk about how she is covered with soft fur, but it's possible to imagine something fuzzy and not covered with fur, or something covered with fur but not fuzzy. Because it's possible to imagine these things, clearly fuzziness must be non-physical. It's maybe harder to imagine a non-fuzzy cat than to imagine a non-thinking person, but that's just because fuzziness doesn't have the same aura of the mysterious that thinking and experiencing do.

Replies from: Peterdjones, RichardChappell
comment by Peterdjones · 2012-10-26T16:19:00.252Z · LW(p) · GW(p)

I don't believe anyone has regarded thinking as causally irreducible for at least a century. Could you cite a partially successful reduction of qualia?

Replies from: pragmatist, novalis, None
comment by pragmatist · 2012-10-26T16:32:53.464Z · LW(p) · GW(p)

I don't believe anyone has regarded thinking as causally irreducible for at least a century.

Read the parent of the comment you're responding to.

comment by novalis · 2012-10-26T16:22:22.808Z · LW(p) · GW(p)

Dennett: Consciousness Explained.

Replies from: Peterdjones
comment by Peterdjones · 2012-10-26T16:23:52.840Z · LW(p) · GW(p)

That was elimination.

Replies from: novalis
comment by novalis · 2012-10-26T16:47:28.383Z · LW(p) · GW(p)

Yes, that sometimes happens when you reduce something; it turns out that there's nothing left. Nobody would say that there is no reductionist account of phlogiston.

Replies from: None, Peterdjones
comment by [deleted] · 2012-10-26T17:29:21.146Z · LW(p) · GW(p)

That may be so (though I agree with Peter, that reduction and elimination are different), but regardless Dennett's actual argument is not a reduction of qualia to more simple terms. He argued (mostly on conceptual grounds) that the idea of qualia is incoherent. Even if elimination (in the manner of phlogiston) were reduction, Dennett's argument wouldn't be a case of either.

Replies from: novalis
comment by novalis · 2012-10-26T18:14:22.734Z · LW(p) · GW(p)

OK, I think I agree with this view of Dennett. I hadn't read the book in a while, and I conflated his reduction of consciousness (which is, I think, a genuine reduction) with his explanation of qualia.

comment by Peterdjones · 2012-10-26T17:14:45.945Z · LW(p) · GW(p)

I would. Reduction and elimination are clearly different. Heat was reduced, phlogiston was eliminated. There is heat. There is no phlogiston.

Replies from: novalis, thomblake, DaFranker
comment by novalis · 2012-10-26T18:34:51.860Z · LW(p) · GW(p)

So in this case, in your view, subjective experiences would be reduced, while qualia would be eliminated?

Replies from: Peterdjones
comment by Peterdjones · 2012-10-30T02:56:27.641Z · LW(p) · GW(p)

I am not saying that all posits are doomed to elimination, only that what is eliminated tends to be a posit rather than a prima facie phenomenon. How could you say that there is no heat? I also don't agree that qualia are posits...but Dennett of course needs to portray them that way in order to eliminate them.

Replies from: novalis
comment by novalis · 2012-10-30T04:09:51.050Z · LW(p) · GW(p)

I don't think I understand what you think is and isn't a "posit". "Cold" is a prima facie phenomenon as well, but it has been subsumed entirely into the concept of "heat".

Replies from: Peterdjones
comment by Peterdjones · 2012-10-30T18:51:32.292Z · LW(p) · GW(p)

The prima-facie phenomenon of "cold" (as in "your hands feel cold") has been subsumed under the scientific theory of heat-as-random-molecular-motion. That's reduction. It was never eliminated in favour of the prima-facie phenomenon of heat, as in "This soup is hot".

comment by thomblake · 2012-10-26T18:28:54.467Z · LW(p) · GW(p)

Reduction and elimination are clearly different.

Only minorly. We could just as well still talk about phlogiston, which is just negative oxygen. The difference between reduction and elimination is just that in the latter, we do not think the concept is useful anymore. If there are different "we"s involved, you might have the same analysis result in both.

Replies from: Peterdjones
comment by Peterdjones · 2012-10-27T16:14:23.221Z · LW(p) · GW(p)

Only minorly. We could just as well still talk about phlogiston, which is just negative oxygen.

Not very meaningfully. What does that mean in terms of modern physics? Negatively ionised oxygen? Anti-oxygen? Negatively massive oxygen?

The difference between reduction and elimination is just that in the latter, we do not think the concept is useful anymore

Well, that's a difference.

Only minorly.

Is it a minority opinion that reductive materialism and eliminative materialism are different positions?

"The reductive materialist contrasts the eliminativist more strongly, arguing that a mental state is well defined, and that further research will result in a more detailed, but not different understanding.[3]"--WP

comment by DaFranker · 2012-10-26T17:23:33.696Z · LW(p) · GW(p)

Heat was reduced, phlogiston was eliminated. There is heat. There is no phlogiston.

That is the reductionist account of phlogiston. The grandparent didn't claim that everyone would agree that there is a reduction of phlogiston that makes sense. The result of reduction is that phlogiston was eliminated. Which sometimes happens when you try to reduce things.

This is what the grandparent was saying. You were in agreement already.

Replies from: Peterdjones
comment by Peterdjones · 2012-10-26T18:02:02.647Z · LW(p) · GW(p)

That is the reductionist account of phlogiston

It's an elimination. If it were a reduction, there would still be phlogiston, as there is still heat. The reductive explanation of combustion did not need phlogiston as a posit, so it was eliminated. Note the difference between phlogiston, a posit, and heat/combustion, which are prima-facie phenomena. Nobody was trying to reductively explain phlogiston, they were trying to explain heat with it.

You were in agreement already.

I disagree.

Replies from: None
comment by [deleted] · 2012-10-26T18:08:43.212Z · LW(p) · GW(p)

Please, just read this.

comment by [deleted] · 2012-10-26T16:54:26.033Z · LW(p) · GW(p)

It depends on what you mean by 'thinking', but I think the view is pretty widespread that rational relations (like the relation of justification between premises and a conclusion) are not reducible to any physical relation in such a way that explains or even preserves the rational relation.

I'm thinking of Donald Davidson's 'Mental Events' as an example at the moment, just to illustrate the point. He would say that while every token mental state is identical to a token physical state, and every token mental causal relation (like a relation of inference or justification) is identical to a token physical causal relation...

...nevertheless, types of mental states, like the thought that it is raining, and types of mental causal relations, like the inference that if it is raining, and I don't want to get wet, then I should bring an umbrella, are not identical to types of physical states or types of physical causal relations.

This has the result that 1) we can be assured that the mind supervenes on the brain in some way, and that there's nothing immaterial or non-physical going on, but 2) there are in principle no explanations of brain states and relations which suffice as explanations of anything like thinking, reasoning, inferring etc..

Replies from: Peterdjones
comment by Peterdjones · 2012-10-31T19:28:02.955Z · LW(p) · GW(p)

Davidson's views are widely known rather than widely accepted, I think. I don't recall seeing them being used for a serious argument for epiphenomenalism, but I can see how they could be, if you tie causality to laws. OTOH, I can see how you could argue in the opposite direction: if mental events are identical to physical events, then, by Leibniz's law, they have the same causal powers as physical events.

comment by RichardChappell · 2012-10-27T18:30:19.716Z · LW(p) · GW(p)

Well, you could talk about how she is covered with soft fur, but it's possible to imagine something fuzzy and not covered with fur, or something covered with fur but not fuzzy. Because it's possible to imagine these things, clearly fuzziness must be non-physical.

Erm, this is just poor reasoning. The conclusion that follows from your premises is that the properties of fuzziness and being-covered-in-fur are distinct, but that doesn't yet make fuzziness non-physical, since there are obviously other physical properties besides being-covered-in-fur that it might reduce to. The simple proof: you can't hold ALL the other physical facts fixed and yet change the fuzziness facts. Any world physically identical to ours is a world in which your cat is still fuzzy. (There are no fuzz-zombies.) This is an obvious conceptual truth.

So, in short, the reason why you can't just "pick any concept and declare it a bedrock case" is that competent conceptual analysis would soon expose it to be a mistake.

Replies from: novalis
comment by novalis · 2012-10-28T03:22:38.632Z · LW(p) · GW(p)

No, I'm saying that you could hold all of the physical facts fixed and my cat might still not be fuzzy. This is somewhat absurd, but I have a tremendously good imagination; if I can imagine zombies, I can imagine fuzz-zombies.

Replies from: RichardChappell
comment by RichardChappell · 2012-10-28T05:12:24.495Z · LW(p) · GW(p)

This is somewhat absurd

More than that, it's obviously incoherent. I assume your point is that the same should be said of zombies? Probably reaching diminishing returns in this discussion, so I'll just note that the general consensus of the experts in conceptual analysis (namely, philosophers) disagrees with you here. Even those who want to deny that zombies are metaphysically possible generally concede that the concept is logically coherent.

Replies from: novalis
comment by novalis · 2012-10-28T16:06:49.050Z · LW(p) · GW(p)

This is somewhat absurd

More than that, it's obviously incoherent. I assume your point is that the same should be said of zombies?

On reflection, I think that's right. I'm capable of imagining incoherent things.

I'll just note that the general consensus of the experts in conceptual analysis (namely, philosophers) disagrees with you here

I guess I'm somewhat skeptical that anyone can be an expert in which non-existent things are more or less possible. How could you tell if someone was ever correct -- let alone an expert? Wouldn't there be a relentless treadmill of acceptance of increasingly absurd claims, because nobody wants to admit that their powers of conception are weak and they can't imagine something?

comment by Peterdjones · 2012-10-26T16:44:29.538Z · LW(p) · GW(p)

I doubt we can reductively analyse "thinking"

If we can't even get a start on that, how did we get a start on building AI?

Replies from: RichardChappell
comment by RichardChappell · 2012-10-27T18:20:51.798Z · LW(p) · GW(p)

I'm not sure I follow you. Why would you need to analyse "thinking" in order to "get a start on building AI"? Presumably it's enough to systematize the various computational algorithms that lead to the behavioural/functional outputs associated with intelligent thought. Whether it's really thought, or mere computation, that occurs inside the black box is presumably not any concern of computer scientists!

Replies from: Peterdjones
comment by Peterdjones · 2012-10-30T02:41:27.535Z · LW(p) · GW(p)

I'm not sure I follow you. Why would you need to analyse "thinking" in order to "get a start on building AI"?

Because thought is essential to intelligence. Why would you need to analyse intelligence to get a start on building artificial intelligence? Because you would have no idea what you were trying to do if you didn't.

Presumably it's enough to systematize the various computational algorithms that lead to the behavioural/functional outputs associated with intelligent thought.

I fail to see how that is not just a long-winded way of saying "analysing thought".

comment by ialdabaoth · 2012-10-25T22:24:17.262Z · LW(p) · GW(p)

Before we answer that question, we need to pick a meaning for "meaning" that means something.

(parse that a few times with your Gödel hat on)

Replies from: Eugine_Nier
comment by Eugine_Nier · 2012-10-26T01:49:14.901Z · LW(p) · GW(p)

You use the word "mean" twice in that sentence outside quotes, so whatever you mean by it.

comment by betterthanwell · 2012-10-21T03:21:36.534Z · LW(p) · GW(p)

Mainstream status points to /Eliezer_Yudkowsky-drafts/ (Forbidden: You aren't allowed to do that.)

Replies from: beoShaffer
comment by beoShaffer · 2012-10-21T04:47:12.236Z · LW(p) · GW(p)

So does the Meditation link.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-10-20T08:47:54.695Z · LW(p) · GW(p)

Mainstream status:

I haven't yet happened to run across a philosophical position which says that meaningful correspondences between hypotheses and reality can only be pinned down by following Pearl-style causal links inferred as the simplest explanation of observed experiences, and that only this can allow an agent to consistently believe that its beliefs are meaningful.

In fact, I haven't seen anything at all about referential meaningfulness requiring cause-and-effect links with the phenomenon, just like I haven't seen anything about a universe being a connected fabric of causes and effects; but it wouldn't surprise me if either of those ideas were out there somewhere.

(There's a "causal theory of reference" listed in Wikipedia but it doesn't seem to be about remotely the same subject matter; the theory's tenets seem to be that "a name's referent is fixed by an original act of naming", and that "later uses of the name succeed in referring to the referent by being linked to that original act via a causal chain".)

EDIT: Apparently causal theories of reference have been used to argue against Zombie Worlds so I stand corrected on this point. See below.

Replies from: pragmatist, bryjnar
comment by pragmatist · 2012-10-21T18:39:05.267Z · LW(p) · GW(p)

As bryjnar points out, all the stuff you say here (subtracting out the Pearl stuff) is entailed by the causal theory of reference. The reason quick summaries of that view will seem unfamiliar is that most of the early work on the causal theory was primarily motivated by a different concern -- accounting for how our words acquire their meaning. Thus the focus on causal chains from "original acts of naming" and whatnot. However, your arguments against epiphenomenalism all hold in the causal theory.

It is true that nobody (that I know of) has developed an explicitly Pearlian causal theory of reference, but this is really accounted for by division of labor in philosophy. People working on reference will develop a causal theory of reference and use words like "cause" without specifying what they mean by it. If you ask them what they mean, they will say "Whatever the best theory of causation is. Go ask the people working on causation about that." And among the people working on causation, there are indeed philosophers who have built on Pearlian ideas. Christopher Hitchcock and James Woodward, for instance.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-10-21T21:23:12.552Z · LW(p) · GW(p)

However, your arguments against epiphenomenalism all hold in the causal theory.

Has anyone made them? I ask because every use I've seen of the 'causal theory of reference' is, indeed, about "accounting for how our words acquire their meaning", something of a nonproblem from the standpoint of somebody who thinks that words don't have inherent meanings.

Replies from: pragmatist
comment by pragmatist · 2012-10-21T21:44:48.707Z · LW(p) · GW(p)

The issue is broached by Chalmers himself in The Conscious Mind (p. 201). He says:

... it is sometimes said that reference to an entity requires a causal connection to that entity; this is known as the causal theory of reference. If so, then it would be impossible to refer to causally irrelevant experiences.

He goes on to reject the causal theory of reference.

Here is a relevant excerpt from the SEP article on zombies:

But, arguably, it is a priori true that phenomenal consciousness, whether actual or possible, involves being able to refer to and know about one's qualia. If that is right, any zombie-friendly account faces a problem. According to the widely accepted causal theory of reference — accepted by many philosophers — reference and knowledge require us to be causally affected by what is known or referred to (Kripke 1972/80); and it seems reasonable to suppose that this too is true a priori if true at all. On that basis, in those epiphenomenalistic worlds whose conceivability seems to follow from the conceivability of zombies — (worlds where qualia are inert) — our counterparts cannot know about or refer to their qualia. That contradicts the assumption that phenomenal consciousness involves being able to refer to qualia, from which it follows that such epiphenomenalistic worlds are not possible after all. Hence zombies are not conceivable in the relevant sense either, since their conceivability leads a priori to a contradiction. To summarize: if zombies are conceivable, so are epiphenomenalistic worlds. But by the causal theory of reference, epiphenomenalistic worlds are not conceivable; therefore zombies are not conceivable.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-10-21T22:24:38.958Z · LW(p) · GW(p)

Thanks! I stand corrected.

comment by bryjnar · 2012-10-21T10:44:31.856Z · LW(p) · GW(p)

You've got a more sophisticated notion of causality, but otherwise this is very similar to the causal theory of reference. For example, the way they would describe what's going on with the shadow sneeze variables is that when you named "SNEEZE_VAR", there was no causal link that allowed you to pick out the actual SNEEZE_VAR: there would need to be a causal arrow going the other way for that to be possible. (And then any later uses of "SNEEZE_VAR" would have to be linked causally to your earlier naming: if we wiped your brain and rebooted it with random noise that happened to be the same, then you wouldn't succeed in referring.) I'm pretty sure I've seen someone use a similar kind of example where you can't decide which of two possible things you're referring to because of the lack of a causal link of the right kind.

They also use pretty similar examples: a classic one is to think of a being on the other side of the galaxy thinking about Winston Churchill. Even if they have the right image, and even happen to think the right things about where he lived, what he did, etc., it seems that they don't actually succeed in referring to him because of the lack of a causal link. It's just a coincidence.

With that in mind, there are probably arguments made against the causal theory of reference that may apply to you too, but I don't know any off the top of my head.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-10-21T21:24:38.569Z · LW(p) · GW(p)

They also use pretty similar examples: a classic one is to think of a being on the other side of the galaxy thinking about Winston Churchill. Even if they have the right image, and even happen to think the right things about where he lived, what he did, etc., it seems that they don't actually succeed in referring to him because of the lack of a causal link.

(They could be referring to all objects of class CHURCHILL, but not to our own, particular Churchill, although he happens to be such an object.)

comment by Kaj_Sotala · 2012-10-21T08:01:37.416Z · LW(p) · GW(p)

EDIT: After thinking things through, I concluded that Eliezer was right, and that epiphenomalism was indeed confused and incoherent. Leaving this comment here as a record of how I came to agree with that conclusion.

The closest theory to this which definitely does seem coherent - i.e., it's imaginable that it has a pinpointed meaning - would be if there was another little brain living inside my brain, made of shadow particles which could affect each other and be affected by my brain, but not affect my brain in turn. This brain would correctly hypothesize the reasons for its sensory experiences - that there was, from its perspective, an upper tier of particles interacting with each other that it couldn't affect. Upper-tier particles are observable, i.e., can affect lower-tier senses, so it would be possible to correctly induct a simplest explanation for them. And this inner brain would think, "I can imagine a Zombie Universe in which I am missing, but all the upper-tier particles go on interacting with each other as before." If we imagine that the upper-tier brain is just a robotic sort of agent, or a kitten, then the inner brain might justifiably imagine that the Zombie Universe would contain nobody to listen - no lower-tier brains to watch and be aware of events.

Positing "another little brain" to act as the epiphenomenal component sounds unnecessarily complicated to me. You mentioned earlier the possibility of programming a simulation with balls that bounce off each other, and then some little shadows that follow the balls around. It is not obvious to me why the "balls that bounce off each other" couldn't be the physical activity of the neurons, and the "little shadows that follow the balls around" couldn't be the qualia produced as a side-effect of the neurons' physical activity.

I think - though I might have misunderstood - that you are trying to exclude this possibility with the "How could you possibly know about the lower tier, even if it existed?" question, and the suggestion that we could only know about the shadows because we can look at the simulation from the outside, and the notion that there is no flow of causality in our world from which we could infer the existence of the "shadow tier".

I'm not convinced that this is right. We know that there are qualia, for the obvious reason that we are not p-zombies: and we know that consciousness is created via neurons in the brain, for we can map a correspondence between qualia and brain states. E.g. there's a clear shift in both brain activity and subjective experience when we fall asleep, or become agitated, or drink alcohol. So we know that there is an arrow from "changes in the brain" to "qualia/subjective experience", because we correctly anticipate that if somebody changed our brain chemistry, our subjective experience would change.

But there's nothing in our observed physical world that actually explains consciousness in the sense of explaining why there couldn't just be a physical world utterly devoid of consciousness. Yes, you can develop sophisticated theories of strange loops and of how consciousness is self-representation and all... which explains why there could be symbols and dynamics within a cognitive system which would make an entity behave as if it had consciousness. A sufficiently sophisticated programmer can make a program behave as if it had anything at all.

But that's still only an explanation of why it behaved as if it had a consciousness. Well, that's not quite right. It would actually have a consciousness in the sense that it had all the right information-processing dynamics which caused it to have some internal state which would serve the functional role of "sadness" or "loneliness" or "excitement" in influencing its information-processing and behavior. And it would have a pattern recognition engine which analyzed its own experience, which would notice that those kinds of internal states repeated themselves and had predictable effects on its behavior and information-processing.

So it would assign those states labels and introduce symbols corresponding to those labels into its reasoning system, so it could ask itself questions like "every now and then I get put into a state that I have labeled as 'being sad', when does that happen and what does it do?". And as it collected more and more observations and created increasingly sophisticated symbols to represent ever-more-complex concepts, then it seems entirely conceivable that there happened something like... like it noticing that all the symbols it had collected for its own internal states shared the property of being symbols for its own internal states, and its explanation-seeking-mechanism would do what it always did, namely to ask the question of "why do I have this set of states in the first place, instead of having nothing". And then, because that was an ill-defined question in the first place, it would get stumped and fail to answer it, and then it could very well write philosophical papers that tried to attack the question but made no progress.

... and when I started writing this comment, I was originally going to end this by saying "and that explains why it would behave as if it was conscious, but it still wouldn't explain why it couldn't do all of that without having any subjective experience". Except that, upon writing that out, I started feeling like I might just have dissolved the question and that there was nothing left to explain anymore. Um. I'll have to think about this some more.

Replies from: drnickbone
comment by drnickbone · 2012-10-21T09:52:58.929Z · LW(p) · GW(p)

We know that there are qualia, for the obvious reason that we are not p-zombies: and we know that consciousness is created via neurons in the brain, for we can map a correspondence between qualia and brain states

Some questions here. How do you know that other people are not p-zombies? Presumably you believe them when they say they have qualia! But then those speech acts are caused by brain states, and if qualia are epiphenomenal, the speech acts are not caused by qualia. Similarly, the correspondence you describe is between brain states and the qualia reported by other people: I doubt you've ever managed to map your own brain states to your own qualia.

Relatedly, how do you know that you were not a p-zombie every day of your life up to yesterday? Or, if you had qualia yesterday, how do you know that you didn't have a green quale when looking at red (stop) traffic lights? Well, because you remember having qualia, and you remember them being the same as the qualia you have today! But then, aren't those memories encoded in brain states (neural connections and synaptic strengths)? How could qualia cause those memories to become encoded if they were epiphenomenal to brain states?

Stuff like this makes me pretty sure that epiphenomenalism is false.

Replies from: Kaj_Sotala, None, torekp
comment by Kaj_Sotala · 2012-10-21T13:41:00.920Z · LW(p) · GW(p)

How could qualia cause those memories to become encoded if they were epiphenomenal to brain states?

You have it the wrong way around. In epiphenomenalism, brain states cause qualia, qualia don't cause brain states. When my brain was in a particular past state, the computation of that state produced qualia and also recorded information of having been in that state; and recalling that memory, by emulating the past state, sensibly also produces qualia which are similar to the past state. I can't know for sure that the memory of the experience I have now accurately matches the experience I actually had, of course... but then that problem is hardly unique to epiphenomenalist theories, or even particularly implied by the epiphenomenalist theory.

In general, most of the questions in your comment are valid, but they're general arguments for solipsism or extreme skepticism, not arguments against epiphenomenalism in particular. (And the answer to them is that "consistency is a simpler explanation than some people being p-zombies and some not, or people being p-zombies at certain points of time and not at other points")

Replies from: drnickbone, endoself
comment by drnickbone · 2012-10-21T14:07:37.633Z · LW(p) · GW(p)

How could qualia cause those memories to become encoded if they were epiphenomenal to brain states?

You have it the wrong way around. In epiphenomenalism, brain states cause qualia, qualia don't cause brain states.

The question was rhetorical of course... the point is that if your qualia truly are epiphenomenal, then there is no way you can remember having had them. So you're left with an extremely weak inductive argument from just one data point, basically "my brain states are creating qualia right now, so I'll infer that they always created the same qualia in the past, and that similar brain states in other people are creating similar qualia". It doesn't take extreme skepticism to suspect there is a problem with that argument.

Replies from: khafra, Kaj_Sotala
comment by khafra · 2012-10-22T16:46:12.069Z · LW(p) · GW(p)

Still seems like Occam's Razor would rule against past versions of me and all versions of other people--all of which seem to behave like I do, for the reasons I do--doing so without the qualia I have.

comment by Kaj_Sotala · 2012-10-21T14:26:41.287Z · LW(p) · GW(p)

the point is that if your qualia truly are epiphenomenal, then there is no way you can remember having had them.

I don't see how this follows. Or rather, I don't see how "if qualia are epiphenomenal, there is no way you can remember having had them" is any more or less true than "there is no way you can remember having had qualia, period".

Replies from: drnickbone
comment by drnickbone · 2012-10-21T14:33:32.670Z · LW(p) · GW(p)

So you reject this schema: "I can remember X only if X is a cause of my memories"? Interesting.

Replies from: Kaj_Sotala, Kaj_Sotala, Kawoomba, Gust, CCC
comment by Kaj_Sotala · 2012-10-23T09:45:01.990Z · LW(p) · GW(p)

After pondering both Eliezer's post and your comments for a while, I concluded that you were right, and that my previous belief in epiphenomenalism was incoherent and confused. I have now renounced it, for which I thank you both.

comment by Kaj_Sotala · 2012-10-21T16:49:39.023Z · LW(p) · GW(p)

Hmm. I tried to write a response, but then I noticed that I was confused. Let me think about that for a while.

comment by Kawoomba · 2012-10-22T12:40:10.826Z · LW(p) · GW(p)

Lots of memories are constructed and modified post hoc, sometimes confabulating about events that you cannot have witnessed, or that you cannot have formed memories from. (Two famous examples: memory of seeing both twin towers collapse one after the other as it happened (when in fact the latter was shown only after a large gap), and memory of being born / being in the womb.)

I'm not positing that you can have causeless memories, but there is a large swath of evidence indicating that the causal experience does not have to match your memory of it.

As a thought experiment, imagine implanted memories. They do have a cause, but certainly their content need not mirror the causal event.

comment by Gust · 2012-12-26T04:25:12.150Z · LW(p) · GW(p)

Well, you really wouldn't be able to remember qualia, but you'd be able to recall brain states that evoke the same qualia as the original events they recorded. In that sense, "to remember" means your brain enters states that are in some way similar to those of the moments of experience (and, in a world where qualia exist, these remembering-brain-states evoke qualia accordingly). So, although I still agree with other arguments against epiphenomenalism, I don't think this one refutes it.

comment by CCC · 2012-10-21T19:35:13.193Z · LW(p) · GW(p)

I have, on occasion, read really good books. As I read the descriptions of certain scenes, I imagined them occurring. I remember some of those scenes.

The scene, as I remembered it, is not a cause of my memory because the scene as I remember it did not occur. The memory was, rather, caused by a pattern of ink on paper. But I remember the scene, not the pattern of ink.

Replies from: drnickbone
comment by drnickbone · 2012-10-21T20:28:31.643Z · LW(p) · GW(p)

Well, presumably the X here for you is "my imagining a scene from the book", and that act of imagination was the cause of your memory. So I'm not sure it counts as a counter-example, though if you'd somehow forgotten it was a fictional scene, and become convinced it really happened, then it could be argued to be a counter-example.

I said "Interesting" in response to Kaj, because I'd also started to think of scenarios based on mis-remembering or false memory syndrome, or even dream memories. I'm not sure these examples of false memory help the epiphenomenalist much...

comment by endoself · 2012-10-21T14:39:22.554Z · LW(p) · GW(p)

You have it the wrong way around. In epiphenomenalism, brain states cause qualia, qualia don't cause brain states.

If qualia don't cause brain states, what caused the brain state that caused your hands to type this sentence? In order for the actual material brain to represent beliefs about qualia, there has to be an arrow from the qualia to the brain.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2012-10-21T16:45:40.627Z · LW(p) · GW(p)

See my original comment. It's relatively easy (well, at least it is if you accept that we could build conscious AIs in the first place) to construct an explanation of why an information-processing system would behave as if it had qualia and why it would even represent qualia internally. But that only explains why it behaves as if it had qualia, not why it actually has them.

Replies from: endoself
comment by endoself · 2012-10-21T17:15:31.793Z · LW(p) · GW(p)

I did read that before commenting, but I misinterpreted it, and now I still find myself unable to understand it. The way I read it, it seems to equivocate between knowing something as in representing it in your physical brain and knowing something as in representing it in the 'shadow brain'. You know which one is intended where, but I can't figure it out.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2012-10-23T09:50:37.647Z · LW(p) · GW(p)

Never mind.

Replies from: khafra
comment by khafra · 2012-10-24T13:11:25.955Z · LW(p) · GW(p)

Can you describe the qualia associated with going from epiphenomenalism to functionalism/physicalism/wherever you went?

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2012-10-26T10:46:00.033Z · LW(p) · GW(p)

Not entirely sure what you're asking, but nothing too radical. I just thought about it and realized that my model was indeed incoherent about whether or not it presumed the existence of some causal arrows. My philosophy of mind was already functionalist, so I just dropped the epiphenomenalist component from it.

A bigger impact was that I'll need to rethink some parts of my model of personal identity, but I haven't gotten around to that yet.

comment by [deleted] · 2012-10-21T13:41:53.247Z · LW(p) · GW(p)

You might enjoy one of Eliezer's presuppositional arguments against epiphenomenalism.

Replies from: drnickbone
comment by drnickbone · 2012-10-21T14:17:03.288Z · LW(p) · GW(p)

Funny, thanks.

comment by torekp · 2012-10-21T13:31:07.411Z · LW(p) · GW(p)

Even the mental sentence, "I am seeing the apple as red", occurs shortly after the experience that warranted it. The fact that a qualitatively identical experience is happening while I affirm the mental sentence, is a separate fact. So even knowing what I'm feeling right now requires non-epiphenomenal qualia.

But couldn't the mental sentences also be part of the lower-tier shadow realm? Not my mental sentences. My thoughts are the ones I'm typing, and the ones that I act on.

comment by Manfred · 2012-10-21T09:31:15.307Z · LW(p) · GW(p)

Well we do have one-way causal arrows. You just need to draw them through the (dun dun dun) Fourth Dimensionnnnn.

comment by SarahNibs (GuySrinivasan) · 2012-10-20T23:02:56.211Z · LW(p) · GW(p)

I'm not convinced I'm keeping my levels of reference straight, but if I can knowingly, consistently, accurately talk about epiphenomena, doesn't the structure or contents of the uncausing stuff cause me to think in this way rather than that way? I'm not sure how to formalize this intuition to tell if it's useful or trivial.

Replies from: Benito, EphemeralNight, Nisan, CCC
comment by Ben Pace (Benito) · 2012-10-21T08:29:22.750Z · LW(p) · GW(p)

That's the point, I believe. To call something 'real', and yet say in principle it couldn't affect us causally, is to contradict yourself. If you're contained in a set of nodes that are causally linked, then that is your reality. Any other sets of nodes that can't contact you just don't matter. Even if there was, say, a causal structure beside our universe, of puppies and kittens running around, it wouldn't matter in the slightest, if in principle we could never interact with them. If we could deduce from the laws of physics that they had to exist, then we would be causally linked. The point I am (and perhaps Eliezer is) trying to emphasise is that our reality is everything that can, and does, affect us.

comment by EphemeralNight · 2012-10-20T23:27:46.417Z · LW(p) · GW(p)

...doesn't the structure or contents of the uncausing stuff cause me to...

Um...

...the uncausing stuff cause me...

-.-

Replies from: AlexSchell
comment by AlexSchell · 2012-10-21T04:12:54.398Z · LW(p) · GW(p)

Try reading this charitably as expressing confusion about how we can (knowingly, consistently) talk about epiphenomena, since they (obviously, duh) don't cause us to think in this way rather than that way.

comment by Nisan · 2012-10-21T16:47:24.804Z · LW(p) · GW(p)

It seems to me that the structure of your epiphenomenal model causes you to think about it in the same way that the structure of arithmetic "causes" you to think about arithmetic. So you can infer the existence of an epiphenomenal self as a sort of Platonic form. If you take modal realism seriously, maybe you should infer the "existence" of the epiphenomenal self.

comment by CCC · 2012-10-21T11:00:22.708Z · LW(p) · GW(p)

Then it's not uncausing. It has caused you to think in this way rather than that way.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-10-20T08:45:56.500Z · LW(p) · GW(p)

Meditation:

If we can only meaningfully talk about parts of the universe that can be pinned down inside the causal graph, where do we find the fact that 2 + 2 = 4? Or did I just make a meaningless noise, there? Or if you claim that "2 + 2 = 4" isn't meaningful or true, then what alternate property does the sentence "2 + 2 = 4" have which makes it so much more useful than the sentence "2 + 2 = 3"?

Replies from: None, chaosmosis, Bundle_Gerbe, endoself, Larks, The_Duck, somervta, selylindi, CCC, AnotherIdiot, Eugine_Nier
comment by [deleted] · 2012-10-21T04:00:24.888Z · LW(p) · GW(p)

PA proves "2 + 2 = 4" using the associative property. PA does not prove "2 + 2 = 3". "2 + 2 = 4" is actually shorthand for "((1+1) + (1+1)) = (((1+1)+1)+1)". Moving stuff next to other stuff in our universe happens to follow the associative property; this is why the belief is useful.

Replies from: kpreid, Eliezer_Yudkowsky, ArisKatsaris, chaosmosis
comment by kpreid · 2012-10-21T16:32:04.151Z · LW(p) · GW(p)

I have myself usually seen Peano arithmetic described with 0 and the successor operation (such as in the context of actually implementing it in a computer). In this case,

  S(S(0)) + S(S(0))
= S(S(S(0))) + S(0)
= S(S(S(S(0)))) + 0
= S(S(S(S(0))))

where the two theorems needed are that x + S(y) = S(x) + y and that x + 0 = x. I find this to have less incidental complexity (given that we are interested in working up from axioms, not down from conventional arithmetic) perhaps because the tree of the final expression has no branches. The first theorem can be looked at as expressing that “moving stuff results in the same stuff”, i.e. a conservation law; note that the expression has precisely the same number of nodes.
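
For concreteness, here's one way to render those two theorems as rewrite rules in a tiny Python sketch (the representation and names are my own, purely illustrative) and check the derivation mechanically:

```python
# Unary numerals: ZERO is "0", and S(n) wraps a numeral in a successor tag.
def S(n):
    return ("S", n)

ZERO = "0"

def add(x, y):
    if y == ZERO:
        return x                 # x + 0 = x
    return add(S(x), y[1])       # x + S(y) = S(x) + y

two = S(S(ZERO))
four = S(S(S(S(ZERO))))
assert add(two, two) == four     # 2 + 2 = 4, using only the two rules above
```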

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-10-21T06:14:34.555Z · LW(p) · GW(p)

(I like that! The idea that it follows just from the associative property and no other features of PA is quite elegant.)

comment by ArisKatsaris · 2012-10-28T00:13:11.482Z · LW(p) · GW(p)

This post feels surprisingly insight-bringing to me because the "associative property" can in a sense be considered the "insignificance of parentheses"... and hence both the insignificance of groupings, and the lack of the need to define a starting point of calculation... and in turn these concepts feel connected to the concepts of reductionism and relativity...

comment by chaosmosis · 2012-10-27T03:54:23.378Z · LW(p) · GW(p)

I would go further and say that without the associative property the concept of numbers, math, and "2 + 2 = 4" does not make sense.

Replies from: MixedNuts
comment by MixedNuts · 2012-10-27T13:28:12.944Z · LW(p) · GW(p)

Octonion multiplication is not associative. Exponentiation isn't either; (2^2)^3 = 4^3 = 64, 2^(2^3) = 2^8 = 256. There's likely some kind of useful math with numberlike objects where no interesting operation is associative.

Replies from: chaosmosis
comment by chaosmosis · 2012-10-27T19:14:56.403Z · LW(p) · GW(p)

How can this math refer to the real world without the associative property? If you can't count the initial "2" then you can't multiply it or anything else, right? Plus, how could we arrive at a concept of exponentiation that didn't entail the concept of the associative property?

The pure math might just be too much of an inferential leap for me. I need to see how the math would be created from observations in the real world before I can really understand what you are saying.

Replies from: MixedNuts
comment by MixedNuts · 2012-10-27T19:40:25.689Z · LW(p) · GW(p)

Well, addition of positive integers is associative and has an obvious real-world analogue, so the associophobic math isn't a good choice for describing reality. But if you lived in Bejeweled, addition wouldn't make much sense as a concept - sometimes pushing a thing close to another thing yields two things, sometimes zero. The most fundamental operation would be "flip a pair of adjacent things", which is not associative. (It's sort of a transposition, which would give you group theory, which is full of associative operations, but I don't think you can factor in disappearing rows while preserving associativity - it destroys the bijectivity.)

Replies from: chaosmosis
comment by chaosmosis · 2012-10-27T21:51:02.041Z · LW(p) · GW(p)

Cool, great example.

comment by chaosmosis · 2012-10-20T23:29:54.414Z · LW(p) · GW(p)

2+2=4 isn't a cause. It's a tautological description. Describing things is useful, though.

Replies from: None
comment by [deleted] · 2012-10-27T06:31:26.901Z · LW(p) · GW(p)

I agree... PA was invented based on our observations; our observations aren't just magically predicted by some arbitrary set of rules. PA has only existed since 1889; reality existed long before that.

comment by Bundle_Gerbe · 2012-10-21T23:57:12.662Z · LW(p) · GW(p)

I think this example brings out how Pearlian causality differs from other causal theories. For instance, in a counterfactual theory of causation, since the negation of a mathematical truth is impossible, we can't meaningfully think of mathematical truths as causes.

But in the Pearlian causality it seems that mathematical statements can have causal relations, since we can factor our uncertainty about them, just as we can other statements. I think endoself's comment argues this well. I would add that this is a good example of how causation can be subjective. Before 1984, the Taniyama-Shimura-Weil conjecture and Fermat's last theorem existed as conjectures, and some mathematicians presumably knew about both, but as far as I know they had no clue that they were related. Then Frey conjectured and Ribet proved that the TSW conjecture implies FLT. Then mathematicians' uncertainty was such that they would have causal graphs with TSW causing FLT. Now we have a proof of TSW (mostly by Wiles) but any residual uncertainty is still correlated. In the future, maybe there will be many independent proofs of each, and whatever uncertainty is left about them will be (nearly) uncorrelated.

I also think there can be causal relations between mathematical statements and statements about the world. For instance, maybe there is some conjecture of fluid dynamics, which if true would cause us to believe a certain type of wave can occur in certain circumstances. We can make inferences both ways, for instance, if we observe the wave we might increase our credence in the conjecture, and if we prove the conjecture, we might believe the wave can be observed somehow. But it seems that the causal graph would have the conjecture causing the wave. Part of the graph would be:

~~[Proof of conjecture -> conjecture -> wave <- (fluid dynamics applies to water) ]~~

[Proof of conjecture <- conjecture -> wave <- (fluid dynamics applies to water) ]

Replies from: endoself, Peterdjones
comment by endoself · 2012-10-23T17:04:53.883Z · LW(p) · GW(p)

Then mathematicians' uncertainty was such that they would have causal graphs with TSW causing FLT.

Well the direction of the arrow would be unspecified. After all, not FLT implies not TSW is equivalent to TSW implies FLT, so there's a symmetry here. This often happens in causal modelling; many causal discovery algorithms can output that they know an arrow exists, but they are unable to determine its direction.

Also, conjectures are the causes of their proofs rather than vice versa. You can see this from the fact that your degrees of belief in the correctness of purported proofs are independent given that the conjecture is true (or false), but dependent when the truth-value of the conjecture is unknown.

Apart from this detail, I agree with your comment and I find it to be similar to the way I think about the causal structure of math.

Replies from: None, Bundle_Gerbe
comment by [deleted] · 2012-10-29T08:32:51.483Z · LW(p) · GW(p)

This is very different from how I think about it. Could you expand a little? What do you mean by "when the truth-value of the conjecture is unknown"? That neither C nor ¬C is in your bounded agent's store of known theorems?

your degrees of belief in the correctness of purported proofs are independent given that the conjecture is true (or false),

Let S1, S2 be purported single-conclusion proofs of a statement C.

If I know C is false, the purported proofs are trivially independent because they're fully determined to be incorrect?

Why is S1 independent of S2 given C is true? Are you saying that learning S2⊢C puts C in our theorem bank, and knowing C is true can change our estimation that S1⊢C , but proofs aren't otherwise mutually informative? If so, what is the effect of learning ⊨C on P(S1⊢C)? And why don't you consider proofs which, say, only differ after the first n steps to be dependent, even given the truth of their shared conclusion?

Replies from: endoself
comment by endoself · 2012-10-30T18:39:47.999Z · LW(p) · GW(p)

What do you mean by "when the truth-value of the conjecture is unknown"? That neither C nor ¬C is in your bounded agent's store of known theorems?

I meant that the agent is in some state of uncertainty. I'm trying to contrast the case where we are more certain of either C or ¬C with that where we have a significant degree of uncertainty.

If I know C is false, the purported proofs are trivially independent because they're fully determined to be incorrect?

Yeah, this is just the trivial case.

Why is S1 independent of S2 given C is true?

I was talking about the simple case where there are no other causal links between the two proofs, like common lemmas or empirical observations. Those do change the causal structure by adding extra nodes and arrows, but I was making the simplifying assumption that we don't have those things.
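
To make the simple case concrete, here's a minimal sketch (the numbers are made up) of the fork C -> S1, C -> S2: given the conjecture's truth-value the two purported proofs are independent, while with C unknown one proof is evidence about the other:

```python
from itertools import product

p_C = 0.7                                 # prior that conjecture C is true (made up)
p_S_given_C = {True: 0.6, False: 0.05}    # chance a purported proof is correct, given C

def joint(c, s1, s2):
    # Factorization for the fork C -> S1, C -> S2.
    pc = p_C if c else 1 - p_C
    ps = lambda s: p_S_given_C[c] if s else 1 - p_S_given_C[c]
    return pc * ps(s1) * ps(s2)

def prob(event):
    return sum(joint(c, s1, s2)
               for c, s1, s2 in product([True, False], repeat=3)
               if event(c, s1, s2))

# Given C, learning S2 tells you nothing more about S1:
p_s1_given_c = prob(lambda c, s1, s2: c and s1) / prob(lambda c, s1, s2: c)
p_s1_given_c_s2 = prob(lambda c, s1, s2: c and s1 and s2) / prob(lambda c, s1, s2: c and s2)
assert abs(p_s1_given_c - p_s1_given_c_s2) < 1e-9

# With C unknown, S2 is evidence about S1 (they are dependent):
p_s1 = prob(lambda c, s1, s2: s1)
p_s1_given_s2 = prob(lambda c, s1, s2: s1 and s2) / prob(lambda c, s1, s2: s2)
assert p_s1_given_s2 > p_s1
```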

comment by Bundle_Gerbe · 2012-10-23T21:58:04.029Z · LW(p) · GW(p)

Hmm, you are right. Thanks for the correction!

comment by Peterdjones · 2012-10-25T08:24:52.893Z · LW(p) · GW(p)

But in the Pearlian causality it seems that mathematical statements can have causal relations, since we can factor our uncertainty about them, just as we can other statements.

There may be uncertainty about causal relations and about mathematical statements, but that does not mean mathematics is causal.

I think endoself's comment argues this well. I would add that this is a good example of how causation can be subjective.

The transition probabilities on a causal diagram may be less than 1, but that represents levels of subjective confidence--epistemology--not causality per se. You can't prove that the universe is indeterministic by writing out a diagram.

Before 1984, the Taniyama-Shimura-Weil conjecture and Fermat's last theorem existed as conjectures, and some mathematicians presumably knew about both, but as far as I know they had no clue that they were related. Then Frey conjectured and Ribet proved that the TSW conjecture implies FLT. Then mathematicians' uncertainty was such that they would have causal graphs with TSW causing FLT. Now we have a proof of TSW (mostly by Wiles) but any residual uncertainty is still correlated. In the future, maybe there will be many independent proofs of each, and whatever uncertainty is left about them will be (nearly) uncorrelated.

Yes, you can write out a diagram with transitions indicating logical relationships and probabilities representing subjective confidence. But the nodes aren't spatio-temporal events, so it isn't a causal diagram. It is another kind of diagram which happens to have the same structure.

I also think there can be causal relations between mathematical statements and statements about the world.

Causal relations hold between events, not statements.

For instance, maybe there is some conjecture of fluid dynamics, which if true would cause us to believe a certain type of wave can occur in certain circumstances.

What causes us to believe is evidence, not abstract truth.

We can make inferences both ways, for instance, if we observe the wave we might increase our credence in the conjecture, and if we prove the conjecture, we might believe the wave can be observed somehow.

The production of a proof, which is a spatio-temporal event, can cause a change in belief-state, which is a spatio-temporal event, which causes changes in behaviour... mathematical truth is not involved. Truth without proof causes nothing. If we don't have reason to believe in a conjecture, we don't act on it, even if it is true.

Replies from: endoself
comment by endoself · 2012-10-30T18:43:11.175Z · LW(p) · GW(p)

Taboo spatio-temporal. Why is it a good idea to give one category of statements the special name 'events' and to reason about them differently than you would reason about other statements?

Also, what about Newcomb's problem?

Replies from: Peterdjones, Peterdjones
comment by Peterdjones · 2012-10-30T18:47:49.286Z · LW(p) · GW(p)

"Events" aren't a kind of statement. However a subset of statements is about events. The point of separating them out is that this discussion is about causality, and, uncontentiously, causality links events. If something is a Non-event (or a statement is not about an event), that is a good argument for not granting it (or what is is about) causal powers.

Replies from: endoself
comment by endoself · 2012-10-30T18:50:27.980Z · LW(p) · GW(p)

What is an event? What properties do events have that statements do not?

Replies from: Peterdjones
comment by Peterdjones · 2012-10-30T18:54:11.445Z · LW(p) · GW(p)

Are you a native English speaker?

Replies from: endoself
comment by endoself · 2012-10-30T18:55:36.877Z · LW(p) · GW(p)

Yes, I'm trying to get you to reduce the concept rather than take it as primitive. I know what an event is, but I think that the distinction between events and statements is fuzzy, and I think that events are best understood as a subcategory of statements.

Replies from: Peterdjones
comment by Peterdjones · 2012-10-30T19:02:07.385Z · LW(p) · GW(p)

Reduce it to what? You've already "tabooed" spatio-temporal. I can't communicate anything to you without some set of common meanings. It's a cheap trick to complain that someone can't define something from a basis of shared-nothing, since no one can do that with any term.

I know what an event is, but I think that the distinction between events and statements is fuzzy, and I think that events are best understood as a subcategory of statements.

The difference is screamingly obvious. Statements are verbal communications. If one asteroid crashes into another, that is an event but not a statement. Statements are events, because they happen at particular places and times, but most events are not statements. You've got it the wrong way round.

Replies from: endoself
comment by endoself · 2012-10-30T19:16:10.561Z · LW(p) · GW(p)

I meant 'statement' in the abstract sense of what is stated rather than things like when it is stated and who it is stated by. 'Proposition' has the meanings that I intend without any others, so it would better convey my meaning here.

Reduce it to what? You've already "tabooed" spatio-temporal.

The point of rationalist taboo is to eliminate all the different phrasings we can use to mention a concept without really understanding it and force us to consider how the phenomenon being discussed actually works. Your wording presumes certain intuitions about what the physical world is and how it should work by virtue of being "physical", intuitions that are not usually argued for or even noticed. When you say you can't explain what an "event" or something "spatio-temporal" is without reference to words that really just restate the concept, that is giving a mysterious answer. Things work a certain way, and we can determine how.

Replies from: Peterdjones
comment by Peterdjones · 2012-10-30T19:23:50.533Z · LW(p) · GW(p)

I have no idea what "work" means, please explain...

If you are a native English speaker, you will have enough of an understanding of "event" to appreciate my point. You expect me to understand terms like "work" without your going through the process of giving a semantic bedrock definition, beyond the common one.

comment by Peterdjones · 2012-10-30T18:56:50.555Z · LW(p) · GW(p)

Newcomb's problem is an irrelevant-to-everything Waste Of Money Brains And Time, AFAIC.

comment by endoself · 2012-10-21T02:20:15.472Z · LW(p) · GW(p)

It is meaningful to talk about mathematical facts causing other mathematical facts. For example, if I knew the complete laws of physics but did not have enough computing power to determine all their consequences (which would be impossible anyways, as I'm living inside of them), my uncertainty about what is going to happen in the universe would be described by the exact same probability distribution as my uncertainty about the mathematical consequences of the laws of physics, and so both distributions would satisfy the causal Markov condition for the same causal graph* (modulo any uncertainty about whether the laws that I believe to be correct actually do describe the universe).

This works the same way with any other set of mathematical facts. I believe that if the abc conjecture is true, then Szpiro's conjecture is also true and I believe that if the abc conjecture is false, then Shinichi Mochizuki's proof of it is flawed. All of these facts can be put into one probability distribution which can then be factored over a Bayesian network. There is no need to separate the mathematical from the nonmathematical.

* Depending on how exactly you phrase the question, I would even say that these distributions are describing my uncertainty about the same thing, but that isn't necessary here.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2012-10-21T03:20:47.437Z · LW(p) · GW(p)

The thing is that mathematics seems to have an additional causal structure that is (at least partially) independent of the proof structure.

Replies from: endoself
comment by endoself · 2012-10-21T13:58:48.579Z · LW(p) · GW(p)

I agree with this. I didn't mean to give the impression that the causal structure is the same as the proof structure.

comment by Larks · 2012-10-21T15:13:52.203Z · LW(p) · GW(p)

While it seems viable to say that mathematical truths exert causal (albeit time-invariant) influence on the universe, you might think it prima facie unlikely that we could causally influence them. However, perhaps we can causally affect the outputs of decision algorithms.

comment by The_Duck · 2012-10-21T04:55:59.840Z · LW(p) · GW(p)

On its own, mathematics is just the study of various systems of rules for symbol manipulation. The "unreasonable effectiveness of mathematics" in more practical areas like physics allows us to conclude something about the structure of physical law. Whatever the true laws of physics are, we conclude, they seem to be well described by certain systems of rules for symbol manipulation.

In our causal models, this is an arrow between "laws of physics" and "observations about the behavior of collections of objects" [subject to Eliezer's caveat about a "laws of physics" node]. Once we include this arrow, we can do neat tricks like counting the numbers of rocks in piles A and B, encoding those numbers in symbols, manipulating those symbols, and finally using the result to correctly predict how many rocks we will have once we combine the two piles.
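
A toy sketch of that trick, just to make the arrow explicit (pile sizes are arbitrary): count the piles, manipulate the symbols, then check the prediction against actually combining them:

```python
pile_a = ["rock"] * 3
pile_b = ["rock"] * 4

predicted = len(pile_a) + len(pile_b)   # symbol manipulation: 3 + 4 = 7
combined = pile_a + pile_b              # physically combining the piles
assert predicted == len(combined)       # the prediction matches the territory
```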

Replies from: Gust
comment by Gust · 2012-12-26T03:17:23.265Z · LW(p) · GW(p)

I think you've taken EY's question too literally. The real question is about the status of statements and facts of formal systems ("systems of rules for symbol manipulation") in general, not arithmetic, specifically. If you define "mathematics" to include all formal systems, then you can say EY's meditation is about mathematics.

comment by somervta · 2012-10-29T04:29:28.629Z · LW(p) · GW(p)

Unfortunately I accidentally looked at some of the other answers first, but I think my answer would have been this anyway:

The fact "2+2=4" is a fact about the products of certain systems of thought and computation. It is also an abstract description of a certain property of objects in reality (namely, that taking two sets of two objects and combining the sets results in a set of four objects), at least at the classical level.

comment by selylindi · 2012-10-22T14:34:19.262Z · LW(p) · GW(p)

There seem to be all sorts of reasons that our distant ancestors' development of a number sense was useful enough to be evolutionarily favored. Do we still have everyone in the tribe? Do we outnumber the enemies? How many predators were chasing me? Are these all of my children? A number sense that told us "2+2=3" could be quite maladaptive.

comment by CCC · 2012-10-21T11:08:22.531Z · LW(p) · GW(p)

I see it as being in the causal graph - not as a node, but as an arrow (actually, a whole class of arrows). If I have two stones, and I put two more stones with them, then this will cause me to have four stones. Note that this doesn't apply in all cases - if I have two piles of sand and I put them together with two more piles of sand, the result is one really big pile of sand and not four piles - but it applies in enough cases that the cases in which it does not apply can be considered exceptions for various reasons.

Replies from: endoself
comment by endoself · 2012-10-21T14:12:01.742Z · LW(p) · GW(p)

The problem with this is that something can both be caused and appear in a lot of causal interactions. For example, if I launch a giant mirror into space to block out the sun, all of the arrows from the sun to brightness everywhere have to be changed. In AI, this is often represented using plate notation, where a rectangle (the plate) is drawn around a group of variables that repeat and an arrow from outside the plate affects every instance of the variables in the plate.
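
A minimal sketch of what the plate buys you (node names are made up): the single arrow out of the sun node stands for an arrow into every repeated brightness node inside the plate, so one intervention rewires all of them at once:

```python
N = 5  # number of repeated brightness variables inside the plate

edges = [("sun_shining", f"brightness_at[{i}]") for i in range(N)]

def block_the_sun(edges):
    # The giant-mirror intervention: cut every arrow out of "sun_shining".
    return [(src, dst) for src, dst in edges if src != "sun_shining"]

assert block_the_sun(edges) == []  # a single intervention changes all N instances
```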

comment by AnotherIdiot · 2012-10-20T23:14:52.807Z · LW(p) · GW(p)

'2+2=4' can be causally linked to reality. If you take 2 objects, and add 2 others, you've got 4, and this can be mapped back to the concept of '2+2=4'. Computers, and your brain, do it all the time.

This argument falls when we start talking about things which don't seem to actually exist, like fractions when talking about indivisible particles. But numbers can be mapped to many things (that's what abstracting things tends to do), so even though fractions don't exist in that particular case, they do when talking about pies, so fractions can be mapped back to reality.

But this second argument seems to fall when talking about things like infinities, which can't be mapped back to reality, as far as I know (maybe when talking about the number of points in a distance?). But in that case, we are just extrapolating rules which we have already mapped from the universe into our models. We know how the laws of physics work, so when we see the spaceship going off into the distance, where we'll never be able to interact with it again, we know it's still there, because we are extrapolating the laws of physics to outside the observable universe. Likewise, when confronted with infinity, mathematicians extrapolated certain known rules, and from that inferred properties about infinities, and because their rules were correct, whenever computations involving infinities were resolved to more manageable numbers, they were consistent with everything else.

So our representations of numbers are a map of the territory (actually, many territories, because numbers are abstract).

comment by Eugine_Nier · 2012-10-21T22:33:12.630Z · LW(p) · GW(p)

Mathematics is a lower, actually the lowest, tier.

Replies from: Gust
comment by Gust · 2012-12-26T03:11:00.078Z · LW(p) · GW(p)

Actually, if you think of it as affecting us, but not being affected by us, then, in EY's terms, mathematics is the higher tier. We would be "shadows" influenced by the higher tier, but unable to affect it.

But I don't really think this line of reasoning leads anywhere.

comment by bryjnar · 2012-10-21T10:56:10.681Z · LW(p) · GW(p)

It's not clear to me why you're allowing the possibility of an upper-tier universe. In particular, a similar kind of example to one you give suggests that it's not meaningful to talk about being in a sufficiently good simulation.

Suppose we're wondering whether we're in a simulation or not. But there are a couple of possibilities. In one of them, the alien simulation-runner's mug of coffee is green, and in the other it's blue. But both of these situations have (to abuse notation somewhat) the same causal arrows pointing into our experience, and so we can't meaningfully talk about them (if I understand your argument).

That reminds me a lot of Putnam's argument that we can't be "brains in vats", which also proceeds from similar thoughts about reference. (The above is a butchered translation of it!) I'd recommend reading it if you haven't - it's not the clearest thing ever, but he's riffing on the same themes. But I think the conclusion tells against the argument: we surely can talk meaningfully about things like being "brains in a vat".

comment by David_Gerard · 2012-10-21T08:48:54.025Z · LW(p) · GW(p)

How does "effectively epiphenomenal" affect these considerations? e.g. the Andromeda Galaxy may affect me, but my effect on the Andromeda Galaxy is the sort of thing we tag "epsilon".

Replies from: Pentashagon
comment by Pentashagon · 2012-10-23T00:18:14.092Z · LW(p) · GW(p)

but my effect on the Andromeda Galaxy is the sort of thing we tag "epsilon"

For now.

comment by Gust · 2012-12-26T05:02:15.445Z · LW(p) · GW(p)

Great post as usual.

It brings to mind and fits in with some thoughts I have on simulations. Why isn't this two-layered system you described analogous to the relation between a simulated universe and its simulator? I mean: the simulator sees and, therefore, is affected by whatever happens in the simulation. But the simulation, if it is just the computation of a mathematical structure, cannot be affected by the simulator: indeed, if I, simulator, were to change the value of some bits during the simulation, the results I would see wouldn't be the results of the original mathematical structure I was computing. I would be observing a new object, instead of changing the object I was observing, I think. The simulator, then, is in an epiphenomenal lower level in relation to the simulation.

The main problem is that it seems weird to give the simulated stuff (the mathematical sub-pattern that behaves analogously to a kid kicking a ball and having fun) the same status as the simulator stuff (the electrons implementing the computer). This relates, I think, to the problem of the existence or truth of mathematical facts or statements, and of reality of interpretations of patterns.

Of course, if you think the existence of some universe depends on the fact that its mathematical structure is being computed somewhere (and that that universe has some spark of base-level existence), then this "epiphenomenalism" goes away.

Also, related to gwern's comment here.

comment by incariol · 2012-11-02T11:29:35.854Z · LW(p) · GW(p)

"Mass-energy is neither created nor destroyed..." It is then an effect of that rule, combined with our previous observation of the ship itself, which tells us that there's a ship that went over the cosmological horizon and now we can't see it any more.

It seems to me that this might be a point where logical reasoning takes over from causal/graphical models, which in turn suggests why there are some problems with thinking about the laws of physics as nodes in a graph or even arrows as opposed to... well, I'm not really sure what specifically - or perhaps I'm just overapplying a lesson from the nature of logic, where AI researchers tried to squeeze all the variety of cognitive processes into a logical reasoner and spectacularly failed at it.

Causal models, being as powerful as they are, represent a similar temptation as logic did, and we should be wary not to make the same old (essentially "hammer & nail") mistake, I think.

(Just thought I'd mention this so I don't forget this strange sense of something left not-quite-completely explained.)

comment by afeller08 · 2012-10-25T00:52:24.338Z · LW(p) · GW(p)

Still, we don't actually know the Real Rules are like that; and so it seems premature to assign a priori zero probability to hypotheses with multi-tiered causal universes.

Maybe I'm misunderstanding something. I've always supposed that we do live in a multi-tiered causal universe. It seems to me that the "laws of physics" are a first tier which affects everything in the second tier (the tier with all of the matter including us), but that there's nothing we can do here in the matter tier to affect the laws of physics. I've also always assumed that this was how practically everyone who uses the phrase 'laws of physics' uses it.

(I realize you were talking about lower tiers in the line that I quoted, and I certainly agree with the arguments and conclusions you made regarding lower tiers. I just found the remark surprising because I place a very high probability on the belief that we are living in a multi-tier causal universe, and I think that that assignment is highly consistent with everything you said.)

I don't know if I'm nitpicking or if I missed a subtlety somewhere. Either way, I found the rest of this article and this sequence persuasive and insightful. My previous definition of "'whether X is true' is meaningful" was "There is something I might desire to accomplish that I would approach differently if X were true than if X were false," and my justification for it was "Anything distinguishably true or false which my definition omits doesn't matter to me." Your definition and justification seem much more sound.

Replies from: None
comment by [deleted] · 2012-10-25T01:35:12.226Z · LW(p) · GW(p)

Maybe I'm misunderstanding something. I've always supposed that we do live in a multi-tiered causal universe. It seems to me that the "laws of physics" are a first tier which affects everything in the second tier (the tier with all of the matter including us), but that there's nothing we can do here in the matter tier to affect the laws of physics. I've also always assumed that this was how practically everyone who uses the phrase 'laws of physics' uses it.

So you mean we live in a multitier universe with no bridging laws and the higher tiers are predictable fully from the lower tiers? Why not just call it a single tier universe then? Especially because your hypothesis is not distinguishable from the single-tier, which is simpler, so you have no good reason to ever have encountered it. "Such and such is true, but that has no causal consequences, but its truth is still somehow correlated with my belief". (Note that that statement violates the Markov-whatsit assumption and breaks causality.)

Forgive me if I misunderstood.

Replies from: afeller08, Vaniver, Eugine_Nier
comment by afeller08 · 2012-10-25T07:21:23.813Z · LW(p) · GW(p)

You're right. My hypothesis is not really distinguishable from the single tier. I'm pretty sure the division I made was a vestige from the insanely complicated hacked-up version of reality I constructed to believe in back when I devised a version of simulationism that was meant to allow me to accept the findings of modern science without abandoning my religious beliefs (back before I'd ever heard of rationalism or Baye's theorem, when I was still asking the question "Does the evidence permit me to believe, and, if not, how can I re-construe it so that it does?" because that once made sense to me.)

When I posted my question, the distinction between 'laws of physics' and 'everything else' was obvious to me. But obvious or not, the distinction is meaningless. Thanks for pointing that out.

Replies from: None
comment by [deleted] · 2012-10-25T19:37:25.768Z · LW(p) · GW(p)

Baye's

His name was Bayes, not Baye. FYI

Congradulations on throwing out bad religious beliefs.

Replies from: shminux
comment by Shmi (shminux) · 2012-10-25T19:58:42.348Z · LW(p) · GW(p)

His name was Bayes, not Baye. FYI

Congradulations [sic]

Muphry's law strikes again!

Replies from: DaFranker
comment by DaFranker · 2012-10-25T20:11:59.453Z · LW(p) · GW(p)

Sometimes I feel like there should be separate tagvote buttons instead of linear up/down, for things like "+Insightful", "+Well Worded", and "+INSANELY FUNNY".

This is not one of those times. The parent qualifies for all three.

comment by Vaniver · 2012-10-25T01:46:24.100Z · LW(p) · GW(p)

It seems sensible to have something like the permittivity of free space as a node in your map of the universe, which causes various relationships in electromagnetism (which then cause behavior of individual entities), but whose value is invariant, and thus does not have any inputs. (Your estimate of that node, of course, has inputs- but it doesn't seem reasonable to claim your estimate has any causal influence on the actual value.)

This becomes especially meaningful when you consider something like the fine structure constant, which we're not quite certain is a constant! If it's a node, then you can easily have arrows going into it or not, depending on what the experiments show.

EDIT:

Why not just call it a single tier universe then? Especially because your hypothesis is not distinguishable from the single-tier, which is simpler, so you have no good reason to ever have encountered it.

The tiers are a topological property of the graph - you can find a subset of nodes where the causal influences only flow out. The a priori statement that you expect to find that topological property does require special knowledge - but once you have a graph, noticing that the subset exists and identifying it as special doesn't require special knowledge.

Note also that this interpretation plays a big part in the question of "does something outside of my future selves' past light cones exist?", since the argument that conservation of energy applies everywhere rests on the premise that conservation of energy doesn't have any pertinent incoming causal links. It could be the case that there's an arrow from "is it in my future selves' past light cones?" to "conservation of energy," but we think that's implausible because "conservation of energy" is in this special subset of nodes that don't appear to take inputs from the physical universe where I and my future selves reside.
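
A minimal sketch of that topological property (toy edge list, node names made up): a subset counts as a "top tier" exactly when no arrow enters it from outside:

```python
edges = [("laws_of_physics", "sun"), ("laws_of_physics", "earth"),
         ("sun", "earth"), ("earth", "sun")]

def is_top_tier(subset, edges):
    # True if causal influences only flow out of the subset, never into it.
    return not any(dst in subset and src not in subset for src, dst in edges)

assert is_top_tier({"laws_of_physics"}, edges)   # only outgoing arrows
assert not is_top_tier({"sun"}, edges)           # the earth affects the sun
```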

Replies from: None
comment by [deleted] · 2012-10-25T01:50:07.638Z · LW(p) · GW(p)

You mean physical maybe-constants like the FSC should go into the map as nodes, but the rest of physics shouldn't?

  1. I don't understand this causality business enough to know how physics factors in or out.

  2. I'm confused about the relevance of this.

Replies from: Vaniver
comment by Vaniver · 2012-10-25T02:03:08.718Z · LW(p) · GW(p)

I realized the grandparent was not quite on target, and so edited in a more relevant bit.

You mean physical maybe-constants like the FSC should go into the map as nodes, but the rest of physics shouldn't?

I think that having the laws of physics in as nodes could work - with those nodes then pointing to basically every physical node. Another way to view it would be the laws of physics as a program you could use to generate causal graphs, or as a set of checks that a causal graph must pass to be considered 'reasonable.' The latter view is probably closer to how humans behave, but I haven't thought about it enough to endorse it.

comment by Eugine_Nier · 2012-10-26T02:01:08.464Z · LW(p) · GW(p)

As I mentioned here, if you want the property that correlated things have a common cause to hold, you need to add nodes for the laws of physics, and for mathematics.

Replies from: None
comment by [deleted] · 2012-10-27T03:12:41.901Z · LW(p) · GW(p)

Good point. Simply stated. That clears that up.

comment by Tenoke · 2012-10-31T10:18:36.647Z · LW(p) · GW(p)

I know it doesn't work exactly like that, but I couldn't help but think of dark matter and energy as something which could plausibly affect us, but not be affected by us, although it's probably the case that the gravity and weak force of normal matter affects dark matter just as much as vice versa.

comment by johnsonmx · 2012-10-28T19:59:47.245Z · LW(p) · GW(p)

We can speak of different tiers of stuff, interacting (or not) through unknown causal mechanisms, but Occam's Razor would suggest these different tiers of stuff might actually be fundamentally the same 'stuff', just somehow viewed from different angles. (This would in turn suggest some form of panpsychism.)

In short, I have trouble seeing how we make these metaphysical hierarchizations pay rent. Perhaps that's your point also.

comment by falenas108 · 2012-10-21T13:57:15.001Z · LW(p) · GW(p)

Even if the shadow brain doesn't affect the upper level brain, couldn't there be a third link between upper and lower levels which points to the level connections?

E.g., we discover that physics tells us that for every particle, there is a corresponding shadow particle that has no effect on regular ones.

Replies from: DaFranker
comment by DaFranker · 2012-10-22T17:40:34.972Z · LW(p) · GW(p)

E.g., we discover that physics tells us that for every particle, there is a corresponding shadow particle that has no effect on regular ones.

And how, exactly, would we discover that?

If we discovered this meaningfully, then it means that at least one bit in the entire universe is different than it would be if there were no shadow particles. In this case, the existence of shadow particles is inevitably causally linked to that one bit. As such, they are no longer epiphenomenal, because they do have an effect on that one bit of data, which has its own effect on the rest of the universe.

If we discover this without any such bit being different, then AFAICT it's a meaningless discovery, because with no information we could just as well "discover" absolutely anything.

If you send data somewhere and it disappears once it affects the lower level, then that is an interaction from the lower level to the upper level, since the upper level would have been different if the lower level weren't there (the data would not have disappeared). Then again, I'm not entirely sure about this. Maybe this is how you'd build a p-zombie detector: find the ones that don't have random bits of data blinking out of existence.

Replies from: Kindly, nshepperd, Eugine_Nier
comment by Kindly · 2012-10-23T03:16:16.863Z · LW(p) · GW(p)

And how, exactly, would we discover that?

It turns out that what you've thought of as consciousness or self-awareness is a process in the shadow-particle world. The reason you find yourself talking about your experiences is that the real world contains particles that duplicate the interactions of your shadow particles. They do not actually interact with your thoughts, but because of the parallel structure maintained in the real world and the shadow-particle world, you don't notice this. Think of the shadow particles as your soul, which corresponds exactly to the real-world particles in your brain, with the only difference being that the shadow-particle interactions are the only ones you actually experience.

You conduct a particularly clever physics experiment that somehow manages to affect the shadow-particle world but not the real-particle world. Suddenly the shadow particles that make up your soul diverge from the real-world particles that make up your brain! This is a novel experience, but you find yourself unable to report it. It is the brain that determines your body's actions, and for the first time in your life, this actually matters. The brain acts as though the experiment had done nothing.

Once your brain and soul diverge, the change never cancels out and you find yourself living a horrific existence. Because real-world particles do affect shadow particles, you still receive sensory input from your body. However, your brain is now thinking subtly different thoughts from your soul. To you, this feels as though something has hijacked your body, leaving you unable to cry out for help.

Of course, you never have, and never could have, found out about shadow particles. But you are a brilliant physicist, so your soul eventually figures out what happened. Your brain never does, of course; it lives in the real world, where your clever experiment had absolutely no effect, and was written off as a failure.

Replies from: Eliezer_Yudkowsky, Armok_GoB, DaFranker, ciphergoth
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-10-26T11:08:55.466Z · LW(p) · GW(p)

That actually happened to me last Tuesday!

Replies from: MugaSofer
comment by MugaSofer · 2012-10-26T14:20:59.422Z · LW(p) · GW(p)

Cannot upvote this enough.

comment by Armok_GoB · 2012-10-26T18:30:31.175Z · LW(p) · GW(p)

After concluding that it had no harmful results, your brain incorporates the effect into a consumer device in which it is mildly useful; the device becomes nearly ubiquitous and spreads all over the world.

This'd have the makings of a great SCP if not for the obvious problem.

Replies from: Snowyowl
comment by Snowyowl · 2012-11-05T10:39:46.760Z · LW(p) · GW(p)

I'd say it would make a better creepypasta than an SCP. Still, if you're fixed on the SCP genre, I'd try inverting it.

Say the Foundation discovers an SCP which appears to have mind-reading abilities. Nothing too outlandish so far; they deal with this sort of thing all the time. The only slightly odd part is that it's not totally accurate. Sometimes the thoughts it reads seem to come from an alternate universe, or perhaps the subject's deep subconscious. It's only after a considerable amount of testing that they determine the process by which the divergence is caused - and it's something almost totally innocuous, like going to sleep at an altitude of more than 40,000 feet.

comment by DaFranker · 2012-10-23T13:50:17.625Z · LW(p) · GW(p)

That's an awesome response.

I figured it was impossible for anyone to make any "discoveries" like this in the sense of the concept and knowledge being spread out, but this was outside of my expectations.

comment by nshepperd · 2012-10-25T16:07:01.663Z · LW(p) · GW(p)

We could discover that the characteristics of real particles are such that they are best (read: most simply) explained by some process that starts with simple particles and splits them into multiple levels with different properties, some of which are epiphenomenal with respect to others.

comment by Eugine_Nier · 2012-10-23T00:48:02.884Z · LW(p) · GW(p)

And how, exactly, would we discover that?

Using reasoning similar to that Eliezer uses to argue for many worlds.

Replies from: Peterdjones
comment by Peterdjones · 2012-10-25T15:32:59.664Z · LW(p) · GW(p)

Spot on! Every decoherent branch is epiphenomenal with respect to any other. And "bits of information" are pretty irrelevant, because it's all about the best explanation of the data, not the data itself.

Replies from: Eliezer_Yudkowsky, nshepperd
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-10-26T11:10:04.227Z · LW(p) · GW(p)

There's no epiphenomenal type of stuff in QM. There's just a causal type of stuff, some of which got far enough away that under the standard and observed rules we can't see it anymore. It's no more epiphenomenal than a photon transmitted into space or a ship that went over the horizon.

Deducing an epiphenomenal type of stuff would be more difficult, and AFAICT would basically have to rely on there being structure in the observed laws and types of your world's physics. For example, let's say you're in the seventh layer of a universe with at least seven causal layers. The first layer has seven laws connecting it to the layer below, the second layer has six laws connecting it to a layer below, and then you're in the seventh layer, connected by two laws to the layer above. You might suspect that there's an eighth layer below you, and that the single remaining law is the one required to match the pattern of the seven layers you know about.
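As a purely arithmetic restatement of that pattern (my own illustration, not part of the original comment): if layer k is connected to the layer below by 8 - k laws, then the observed layers 1 through 6 account for 7 down to 2 laws, and extrapolating predicts a single unobserved law running from your layer 7 down to a hidden eighth layer.

```python
# Pattern extrapolation in the layered-universe example (illustrative only).
observed_laws_down = {k: 8 - k for k in range(1, 7)}  # layers 1..6: 7, 6, 5, 4, 3, 2
predicted_layer7_laws_down = 8 - 7                    # one remaining law, into a hidden 8th layer
print(observed_laws_down, predicted_layer7_laws_down)
```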

Of course, what you're actually doing in this case is almost exactly akin to knowing about a ship that went over the horizon - you observed the Laws of Physics Factory, or the code-factory for your Matrix, generalized, and deduced an effect of the previously observed Factory which the generalization says you shouldn't be able to see. You can navigate to the law-data by following a causal reference to the link down from a law-Factory you've previously observed.

Replies from: Peterdjones
comment by Peterdjones · 2012-11-14T19:36:14.937Z · LW(p) · GW(p)

There's no epiphenomenal type of stuff in QM.

Why is that important? The difference between an epiphenomenal type of stuff (= never interacts) and quasi-epiphenomenal causality (= rarely interacts) isn't necessarily an observable difference. If branches of the multiverse only interact once every billion years, then multiversal theory predicts effectively nothing about expected future experience. (I don't personally have a problem with saying multiversal epiphenomenalism is better than substance epiphenomenalism, but that is because I am not committed to the prediction of expected observations [warmed-over LP] over and above Best Explanation and even good old-fashioned metaphysics.)

And why bring up substance anyway? Contemporary epiphenomenalism doesn't focus on substance; it focuses on properties (Jackson, at one time; Chalmers, maybe) or laws (Davidson).

There's just a causal type of stuff, some of which got far enough away that under the standard and observed rules we can't see it anymore. It's no more epiphenomenal than a photon transmitted into space or a ship that went over the horizon.

OK. So, you are willing to countenance theories that don't pay their way in expected observations so long as they pay their way in other ways...

Deducing an epiphenomenal type of stuff would be more difficult, and AFAICT would basically have to rely on there being structure in the observed laws and types of your world's physics. For example, let's say you're in the seventh layer of a universe with at least seven causal layers. The first layer has seven laws connecting it to the layer below, the second layer has six laws connecting it to a layer below, and then you're in the seventh layer, connected by two laws to the layer above. You might suspect that there's an eighth layer below you, and that the single remaining law is the one required to match the pattern of the seven layers you know about.

That was cast pretty much entirely in terms of laws, although the contemporary arguments lean much more heavily on types: on what things are, on what their natures are.

A typical argument would go:

1. Physical brain states (or at least the physical properties of brain states) are sufficient to explain observable behaviour.

2. Consciousness (or at least qualia) cannot be directly identified with the physical properties of brain states; they are different types of thing, their natures are different.

3. Therefore, qualia are not needed to generate behaviour; they are extraneous and idle.

I don't see how causal diagrams help. If you feel that conscious states can be identified with brain states, you would draw a causal diagram with nodes that are psychophysical, and if you feel that they can't, you would draw a diagram with a physical network and a conscious network in parallel. I don't see how causal diagrams tell you how to identify and classify nodes; they rather assume that that has already been sorted out, somehow.

comment by nshepperd · 2012-10-25T16:03:35.439Z · LW(p) · GW(p)

Every decoherent branch is epiphenomenal with respect to any other.

Not really true. The continuity of the Schrödinger equation means that eventually the existence of any branch should have some effect on the evolution of any other branch. However, it might be impossible to measure in any way as far as a human is concerned, for the simple reasons of the complexity of the math and the tiny size of the effect.
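To put a rough number on "tiny" (a back-of-the-envelope sketch of my own, not part of the comment): for two normalized unit-width Gaussian amplitudes exp(-(x - a)^2 / 2), the overlap between branches centred a distance d apart is exp(-d^2 / 4), which never reaches exactly zero but drops below anything measurable almost immediately.

```python
import math

# Overlap of two unit-width Gaussian amplitudes centred a distance d apart
# is exp(-d**2 / 4); work in log space to avoid floating-point underflow.
for d in [1, 10, 100, 1000]:
    log10_overlap = (-d**2 / 4) / math.log(10)
    print(f"separation {d:>4}: overlap ~ 10^{log10_overlap:.1f}")
```

Formally nonzero, practically unobservable, which is why "rarely interacts" and "never interacts" are so hard to tell apart from the inside.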

Replies from: DaFranker
comment by DaFranker · 2012-10-26T14:22:57.077Z · LW(p) · GW(p)

(...) impossible (...) for the simple reason of (...) complexity of the math (...) tiny size.

While you might be right this time, empirically we've observed that whenever someone said something was impossible because it was too mathematically complex or involved sizes too small, someone came up with Radio Waves or a Theory of General Relativity.

Just something that made me chuckle.