Strong substrate independence: a thing that goes wrong in my mind when exposed to philosophy

post by HonoreDB · 2011-02-18T01:40:18.733Z · LW · GW · Legacy · 46 comments

Certain kinds of philosophy and speculative fiction, including kinds that get discussed here all the time, tend to cause a ridiculous thing to happen: I start doubting the difference between existence and non-existence.  This bothers me, because it's clearly a useless dead end.  Can anyone help with this?

The two concepts that tend to do it for me are

* Substrate independence/strong AI: The idea that a simulation of my mind is still me.  That I could survive the process of uploading myself into a computer running Windows, a cellular automaton run by this guy, or even something that didn't look like a computer, mind, or universe at all to anyone in the outside world.  That we could potentially create or discover a simulated universe that we could have ethical obligations towards.  This is all pretty intuitive to me and largely accepted by the sort of people who think about these things.

* Multiverses: The idea that the world is bigger than the universe.

My typical line of thought goes something like this: suppose I run a Turing Machine that encodes a universe containing conscious beings.  That universe now exists as a simulation within my own.  It's just as real as mine, just more precarious because events in my reality can mess with its substrate.  If I died and nobody knew how it worked, it would still be real (so I should make provisions for that scenario).  Okay, but Turing Machines are simple.  A Turing Machine simulating a coherent universe containing conscious beings can probably arise naturally, by chance.  In that case, those beings are still real even if nobody on the outside, looking at the substrate, realizes what they're looking at.  Okay, but now consider Turing-complete systems like John Conway's Fractran, whose programs are encoded as ordered lists of rational numbers and run by multiplication.  I think it's fair to say that rational numbers and multiplication occur naturally, everywhere.  Arithmetic lives everywhere.  But furthermore, arithmetic lives *nowhere*.  It's not just substrate-independent; it's independent of whether or not there is a substrate.  2+2=4 no matter whether two bottlecaps are being combined with two other bottlecaps to make four bottlecaps.  So every Turing-computable reality already exists to the extent that math itself does.
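For readers who haven't met Fractran: here is a minimal interpreter sketch (my own illustration, not from the post). A program is just an ordered list of fractions; each step multiplies the current integer by the first fraction that yields an integer, and the program halts when none does.

```python
from fractions import Fraction

def run_fractran(program, n, max_steps=10000):
    """Run a Fractran program: repeatedly replace n with n*f for the
    first fraction f in the program such that n*f is an integer."""
    program = [Fraction(f) for f in program]
    for _ in range(max_steps):
        for f in program:
            if (n * f).denominator == 1:
                n = int(n * f)
                break
        else:
            return n  # no fraction applies: the program halts
    return n

# Conway's one-fraction adder: starting from 2^a * 3^b,
# the program [3/2] halts at 3^(a+b).
result = run_fractran(["3/2"], 2**3 * 3**2)  # a=3, b=2
print(result)  # 243, i.e. 3**5
```

Nothing in the loop looks like a "computer" from the outside; it is just repeated multiplication of rationals, which is the point of the argument above.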

I think this is stupid.  Embarrassingly stupid.  But I can't stop thinking it.

Comments sorted by top scores.

comment by JoshuaZ · 2011-02-18T01:47:13.096Z · LW(p) · GW(p)

This is essentially the Tegmark ensemble multiverse which some very smart people take very seriously. You don't need to consider yourself stupid for taking this seriously.

Replies from: HonoreDB
comment by HonoreDB · 2011-02-18T01:53:41.501Z · LW(p) · GW(p)

Maybe what I should really be asking is "how do I accept that I'm in a level 4 Tegmark world and still care about things getting created and destroyed within my visible universe?" The concept of measure might be an answer, and I haven't studied it in detail, but I just intuitively doubt it's going to add up to sanity if I go that route.

Replies from: Perplexed, ata, JoshuaZ, Dreaded_Anomaly, MichaelHoward
comment by Perplexed · 2011-02-18T05:12:28.512Z · LW(p) · GW(p)

Two mantras that work for me (YMMV):

  • "Stop worrying about existence vs non-existence, and start worrying about accessible vs inaccessible".

  • "Regardless of how many branches reality has, the one I live in is the one containing the consequences of my actions."

Replies from: cousin_it
comment by cousin_it · 2011-02-18T06:08:18.853Z · LW(p) · GW(p)

the one I live in

The next realization is that you probably live in many branches at once. After all, there's not enough information in your mind-state to single out one branch precisely, so I guess you inhabit the "packet" of branches that differ by little enough that your mind-state is exactly the same in all of them.

Replies from: Vladimir_Nesov, Manfred, AlephNeil
comment by Vladimir_Nesov · 2011-02-18T11:32:51.295Z · LW(p) · GW(p)

And further, you should act according to your own understanding of which possible worlds you influence, even if in fact you influence a much smaller number of them; you just don't know which ones.

comment by Manfred · 2011-02-18T17:26:18.866Z · LW(p) · GW(p)

Well yeah. Branches have branches have branches. "Branches" are only a convenient approximation to a reality that is continuous along some dimensions.

comment by AlephNeil · 2011-02-18T12:15:00.127Z · LW(p) · GW(p)

Perhaps this is tangential, but I'm not keen on the idea that there's a "mind-state" that can be either 'exactly the same' or not, because the boundaries of a mind are always going to be 'fuzzy'. How far down from the diencephalon to the spinal cord do we have to go before "mind-state" supervenes on "brain state"? Surely any answer one gives here is arbitrary.

Some philosophers even speculate that you need to take into account Leonard Shelby's tattoos in order to uniquely determine his mental state.

Does Leonard Shelby inhabit only those branches where his tattoos are as they are, or all branches containing 'his body' but perhaps with different tattoos? Suppose we say 'well when he asks himself this, his tattoos aren't part of the computation, so he belongs to all such branches'. That's all well and good, but almost none of what we think of as "Shelby's mind" is "part of the computation". So perhaps 'he' is really everyone in the universe who has ever had that train of thought? But what exactly is 'that train of thought'?

It gets difficult...

Replies from: Armok_GoB
comment by Armok_GoB · 2011-02-18T16:33:57.350Z · LW(p) · GW(p)

Yea, everyone who has the same train of thought is "the same person" according to many valid definitions. If they weren't, Eliezer's solution to the prisoner's dilemma wouldn't work. I don't see any problem with this.

Replies from: AlephNeil
comment by AlephNeil · 2011-02-19T03:39:00.206Z · LW(p) · GW(p)

I don't see any problem with [the idea that everyone who has the same train of thought are "the same person"].

The problem is that the boundaries of a 'train of thought' (by which I really mean "the criteria for determining when two beings share the same train of thought") are, if anything, even more perplexing than the boundaries of a mind.

Perhaps we can ignore these difficulties in particular decision problems by reasoning 'updatelessly', but answering the simple question "what am I?" ("what is a mental state?" "when does a system contain a 'copy of my mind'?") seems hopelessly out of reach.

Replies from: Armok_GoB
comment by Armok_GoB · 2011-02-19T16:13:03.114Z · LW(p) · GW(p)

It's fuzzy and subjective; there is no "what is actually part of your mind", just "things that you consider part of your mind". I don't see a problem with this either.

Replies from: AlephNeil
comment by AlephNeil · 2011-02-19T16:23:12.270Z · LW(p) · GW(p)

I entirely agree with you, but notice what follows from this: Person X's decision procedure (and his assignments of subjective probabilities, if we're serious about the latter) ought not to have a "discontinuity" depending on whether some numerically distinct being Y is either "exactly the same" or "ever so slightly different" from X.

Replies from: Armok_GoB
comment by Armok_GoB · 2011-02-19T17:03:49.671Z · LW(p) · GW(p)

Sounds right; in most cases only the broadest strokes of the algorithm matter. For simple things with a low number of possible states, like almost all game-theory examples, the notion of personhood does not really have any use. There are, however, computations that use things like large swaths of your memory or your entire visual field simultaneously, and those tend to be the ones where the concepts do matter.

comment by ata · 2011-02-18T03:16:15.382Z · LW(p) · GW(p)

I think it's safe to say that the measure problem is still pretty wide open — not just at the compelling-but-speculative level IV, but even at lower levels that are pretty well-established as actually existing, as in the Born probabilities in MWI. Until that's solved comprehensively enough that the level IV multiverse itself either adds up to normality or adds up to confusion that can be dissolved, I don't recommend deriving any apparently normality-defying conclusions from it, particularly sanity-impacting and morality-impacting ones. For now it's just a fascinating problem to solve, and the kind of thinking that keeps leading people to invent the level IV multiverse appears to be a step in the right direction, but it's by no means a complete solution — existence is still decidedly a confusing problem, and the nature and implications of a level IV multiverse are not yet clear enough that there really is any bullet to bite.

comment by JoshuaZ · 2011-02-18T01:58:41.814Z · LW(p) · GW(p)

For what it is worth, if one does accept level 4 Tegmark, it isn't clear which measure is the correct one. It isn't clear that it adds up to normality, much less sanity.

comment by Dreaded_Anomaly · 2011-02-18T03:27:53.902Z · LW(p) · GW(p)

I think you are forgetting Egan's Law.

If the universe is correctly described by Tegmark's mathematical universe hypothesis, it's still the universe.

Edit to add: Considering a philosophy should worry you only if you suspect that your motivation for caring about your visible universe is based on some principle or idea that is likely to be invalidated by that philosophy, if it proves correct. I don't think such motivations have a large amount of crossover with multiverse-type theories.

comment by MichaelHoward · 2011-02-18T20:26:20.927Z · LW(p) · GW(p)

how do I accept that... and still care about things getting created and destroyed within my visible universe?

OK, I'll give it a shot... :-)

There will be mental entities in other universes also thinking about this, some using a similar decision process to you. Many of them will also have realized that there are others. Welcome to this club.

Those using the same decision theory will tend to make the same decision. So if you can commit to caring about your universe and trying to make it a better place, and go through with it, it's more likely that other members of the club will too.

Save a googolplex universes here, a googolplex there, and eventually you're talking real utility.

comment by David_Gerard · 2011-02-18T11:28:31.639Z · LW(p) · GW(p)

I feel that way at times. I increasingly wonder if "consciousness" is actually a useful concept, the belief in which will make any difference to anticipated outcomes. (Does anyone have a handy list of anticipated outcomes that "consciousness" makes a difference to?) This is like saying "I don't exist", but that doesn't automatically make it false.

Replies from: TheOtherDave, Manfred
comment by TheOtherDave · 2011-02-21T19:59:12.931Z · LW(p) · GW(p)

If a non-conscious system is understood to be kind of like a p-zombie, then no, of course not.

If a non-conscious system is understood to be more like my arm for a few days after my stroke, where it would do things that were clearly related to various motivations that I had, but where I was not aware of myself as directing it to do those things, then I expect it to make a difference to motivation.

For example, I found it much easier to do PT exercises with that arm during that period, because I was in some sense not aware of it as my arm. I could form the desire to do exercises with that arm, and the arm would do those exercises, and I would experience fatigue and pain from the arm, but I didn't experience the same connection between those sensations and a reduced desire to keep performing the exercise as I would with my uninjured arm.

I have no idea whether what I just described makes any sense to anyone else, though. It was a very surreal experience.

I also don't know whether what I experienced is at all related to what you mean to refer to by "consciousness".

Replies from: David_Gerard
comment by David_Gerard · 2011-02-21T22:48:17.596Z · LW(p) · GW(p)

With some thought over the past few days, I think I'm saying I am increasingly treating people as more or less predictable systems based on past behaviour and stimulus/response, and ignoring the noises that come out of their mouths. I used to treat the noises as having much to do with what the people do, but the evidence is scant.

So it's conversational cynicism-signaling, but with a slightly useful point ;-)

Replies from: TheOtherDave
comment by TheOtherDave · 2011-02-21T22:52:17.614Z · LW(p) · GW(p)

Ah! Yeah, agreed that the stuff people say is often unrelated to our other behaviors, and in particular our accounts of why we do what we do are often simply false. In fact, often the narratives we accept as accounts of why we do what we do aren't any such thing in the first place, even false ones.

comment by Manfred · 2011-02-18T17:22:50.185Z · LW(p) · GW(p)

"Are you conscious?" "Why did you do what you just did?" "Who is that person in the mirror?"

Consciousness helps with all sorts of useful things. The problem is that they're so basic we take them for granted, which makes consciousness tough to define.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2011-02-18T03:14:59.346Z · LW(p) · GW(p)

"Permutation City", followed by http://www.fanfiction.net/s/5389450/1/The_Finale_of_the_Ultimate_Meta_Mega_Crossover

Replies from: JGWeissman
comment by JGWeissman · 2011-02-18T03:37:02.386Z · LW(p) · GW(p)

Will those make readers more or less confused?

Replies from: HonoreDB
comment by HonoreDB · 2011-02-18T03:53:35.441Z · LW(p) · GW(p)

I've read them both, which sort of answers that.

Edit: Not to be dismissive! I enjoyed them both, as fiction and as well-reasoned explorations of the subject. In fact, it's possible the first time I started thinking about this was when reading the short story Permutation City is based on.

Replies from: JenniferRM
comment by JenniferRM · 2011-02-19T06:12:19.859Z · LW(p) · GW(p)

Interesting. I was going to link to Permutation City but found this thread when checking to make sure I wouldn't duplicate someone else's comment... but then it didn't work to clarify things for you, which leaves me kind of stumped, and furthermore blocked from exploring how I'm stumped by problems with spoilers.

Actually this comes up often enough that I think maybe it would be worth making a discussion area to talk about it... how about right here :-)

comment by Armok_GoB · 2011-02-18T16:40:49.524Z · LW(p) · GW(p)

These thoughts are not stupid. In fact, that you can't stop thinking them shows that you have the virtue of not being able to ignore obvious truths due to them seeming weird and having unpleasant consequences. You should remove the stupid tag from these ideas and instead apply it to whatever heuristic incorrectly marked them as stupid in the first place!

This is the best summary I know of the ideas: http://lesswrong.com/lw/1zt/the_mathematical_universe_the_map_that_is_the/ Here is something useful for living with them: http://lesswrong.com/lw/15m/towards_a_new_decision_theory/

comment by David_Allen · 2011-02-18T14:34:47.910Z · LW(p) · GW(p)

Arithmetic lives everywhere. But furthermore, arithmetic lives nowhere. It's not just substrate-independent; it's independent of whether or not there is a substrate. 2+2=4 no matter whether two bottlecaps are being combined with two other bottlecaps to make four bottlecaps.

No. This is a wrong idea that seems to have been accepted on this site.

Arithmetic is an abstraction. It is a useful way to carve meaning out of our perceptions. It exists in systems implementing that abstraction, such as our mind. It has no meaning or existence beyond that.

This becomes evident when you consider perspective. What perspective are you adopting when you say "two bottlecaps are being combined with two other bottlecaps to make four bottlecaps"? You have adopted the perspective of somebody who can see, identify, and count the bottle caps in some particular area. You are in fact modeling this perspective in your mind. In this case 2+2=4 isn't in the arrangement of bottle caps, it is in your mind.

Replies from: RobinZ, TheOtherDave, AlephNeil
comment by RobinZ · 2011-02-18T20:44:33.900Z · LW(p) · GW(p)

I don't know if this is agreement, but the way I have thought about it for a while now is that mathematics, including arithmetic, is more like a game than anything else. Rules like the Peano Postulates exist simply because those are rules that we think are appropriate to the game - they handle cases like accumulating bottlecaps in an elegant fashion - not because they have a separate, Platonic reality.
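The "game" view can be made concrete (a minimal sketch of my own, not RobinZ's code): encode numbers as nested successors and define addition purely by the two Peano recursion rules, a + 0 = a and a + S(b) = S(a + b). Then 2+2=4 falls out of mechanical rule-following.

```python
# Peano numerals: ZERO is the empty tuple, S(n) wraps n one level deeper.
ZERO = ()

def S(n):
    """Successor: S(n) represents n + 1."""
    return (n,)

def add(a, b):
    """Addition by the two Peano rules only."""
    if b == ZERO:            # a + 0 = a
        return a
    return S(add(a, b[0]))   # a + S(b') = S(a + b')

def to_int(n):
    """Decode a Peano numeral for display."""
    return 0 if n == ZERO else 1 + to_int(n[0])

two = S(S(ZERO))
print(to_int(add(two, two)))  # 4
```

No bottlecaps are consulted anywhere; the result is forced by the rules of the game alone, which is exactly the question at issue in this thread.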

comment by TheOtherDave · 2011-02-18T16:27:52.736Z · LW(p) · GW(p)

If squirrel A puts a nut in an empty nook in a tree, and then later puts another nut there, and does that twice more, and nothing removes nuts from that nook, and later a squirrel B comes along and eats all the nuts in that nook, my expectation is that squirrel B will eat four nuts.

That expectation depends on certain facts about the world, and among those facts is something that can be expressed as "1+1+1+1=4," which I would label a statement of mathematics. The squirrels don't necessarily have access to that fact, and certainly don't have access to that expression of it.

What is your expectation of how many nuts B eats? Do you agree that "1+1+1+1=4" expresses something that's related in some way to that expectation? If so, do you agree that that's a statement of mathematics?

If the thing "1+1+1+1=4" expresses exists only in minds, whose mind does it exist in, in this case? Squirrel A? Squirrel B? Somewhere else? What would be different if it existed somewhere else?

Replies from: David_Allen
comment by David_Allen · 2011-02-18T18:27:12.632Z · LW(p) · GW(p)

The statement "1+1+1+1=4" exists in your mind. Your mind generated this expression as a result of contemplating your observations (from the story). This statement is an abstraction of your observations, specified in the terms of arithmetic. Your observations were formed based on your sensory input combined with your prior experience. Your sensory input depended on your relative context, your body's physical capabilities, and the physical laws of the universe.

The statement is only related to the physical reality of the nuts through a chain of inference. The statement does not represent anything about the nuts directly; it only represents something about the state of your mind.

Even the identification that nuts are individual items that can be counted is an abstraction that you hold in your mind. If you consider the nuts from the perspective of a single photon, the nut abstraction vanishes. With a single photon's perspective we can't tell a nut from a bear, much less count the number of nuts.

Replies from: TheOtherDave
comment by TheOtherDave · 2011-02-18T19:18:28.768Z · LW(p) · GW(p)

Ah, I see what you mean now.

Agreed that the notion that arithmetic primitives relate to a relationship between nuts is unintelligible from a perspective that does not allow for nuts, or objects in general, or relationships among objects, in the first place.

And, yes, the existence of objects and relationships among them is an accepted idea on this site, which makes that perspective pretty much incompatible with most discussion here.

Replies from: David_Allen
comment by David_Allen · 2011-02-18T20:17:50.654Z · LW(p) · GW(p)

And, yes, the existence of objects and relationships among them is an accepted idea on this site, which makes that perspective pretty much incompatible with most discussion here.

I am not arguing for the non-existence of objects and relationships among them. Actually, the nature of relationships is key to my arguments.

I am arguing against the idea that arithmetic has an existence outside of any meaningful context. I am arguing that arithmetic is an abstraction that only exists in the contexts of its actual implementations. Arithmetic isn't occurring when sheep wander or squirrels store nuts. Arithmetic occurs when we interpret our observations of those circumstances.

Replies from: TheOtherDave
comment by TheOtherDave · 2011-02-18T21:53:04.008Z · LW(p) · GW(p)

There exists a relationship between how many nuts squirrel B eats, and how many times squirrel A deposited a nut in the tree.

That relationship does not depend on my observations.

"1+1+1+1=4" is a statement of arithmetic that expresses one aspect of that relationship; specifically, the aspect of it related to counting.

"1+1+1=3" is a different statement of arithmetic that expresses the same aspect of a different relationship, one that could be implemented in a different story, and likely was.

"1000+1000+1000=3000" is yet another statement of arithmetic that expresses the same aspect of a different relationship, one that has probably never been implemented in terms of nuts and squirrels, although in principle it could be.

"1+1+1=4" expresses the same aspect of yet another relationship, one which probably has never been implemented that way, and which probably can't be.

And there are other kinds of relationships, implementable and otherwise, which can be expressed by other kinds of statements of mathematics.

None of those relationships depend on my observations, either. And you say that none of those relationships are arithmetic relationships, precisely because they don't involve us interpreting our observations.

For convenience, let's call them X, instead. You aren't denying the existence of X, merely asserting that X isn't arithmetic.

Well, OK. I'm not sure what I would expect to experience differently if those relationships were or weren't arithmetic, so I don't know how to evaluate the truth or falsehood of that statement.

But I will say that if that's true, then arithmetic isn't very interesting, except perhaps linguistically. Sure, maybe arithmetic only occurs in minds, or in human minds, or in English-speaking minds. I can't see why I ought to care much about that.

The interesting thing is X.

Replies from: David_Allen
comment by David_Allen · 2011-02-19T01:03:37.958Z · LW(p) · GW(p)

Thanks for sticking with this, I am trying to hone my arguments on this topic and you are helping.

There exists a relationship between how many nuts squirrel B eats, and how many times squirrel A deposited a nut in the tree.

That relationship does not depend on my observations.

Yes it does.

You are implying that there is some sense of reality that is independent of how we think about it. I agree with that. But your statement adopts a "human mind" centric interpretation which makes it false.

For example, from the perspective of the universe at the level of quarks, the reality within the story's space-time is unchanged by our later observations of the written story. It is independent of our observations.

However, the relationship that you identified has no meaning from the quark perspective. We wouldn't know if a squirrel ate a nut or if a nut ate a squirrel. At that level, there are no concepts for squirrels and nuts -- or counting; those are higher level abstractions.

For convenience, let's call them X, instead. You aren't denying the existence of X, merely asserting that X isn't arithmetic.

The relationship you identified is real and it has meaning; but that meaning is found within the context of your mind and does not describe some intrinsic property of the universe, it describes an interpretation of your observations.

But I will say that if that's true, then arithmetic isn't very interesting, except perhaps linguistically. Sure, maybe arithmetic only occurs in minds, or in human minds, or in English-speaking minds. I can't see why I ought to care much about that.

The interesting thing is X.

Here is why you should care:

Here at LW we are working toward rationality. We want to improve the correspondence between our map and the territory. We want to know what the truth is and how to carve reality at its joints. We want to make ourselves immune to obvious fallacies such as the mind projection fallacy.

My claim is that the context principle -- that all meaning is context dependent -- is essential to understanding existence, truth and knowledge; it provides traction for solving problems and achieving our goals.

Replies from: TheOtherDave
comment by TheOtherDave · 2011-02-19T02:23:05.616Z · LW(p) · GW(p)

Consider a particular system, S1, of a squirrel eating a nut.

S1 can be described in a lot of different ways. The way I just described it is, I agree with you, a human-mind-centric description.

But I could also, equally accurately, describe it as a particular configuration, C1, of cells. Or a particular configuration, A1, of atoms. Or a particular configuration, Q1, of quarks.

Those aren't particularly human-mind-centric descriptions, but they nevertheless describe the same system. Q1 is, in fact, a description of a squirrel eating a nut, even though there's no way I could tell from analyzing Q1 whether it describes a squirrel eating a nut, or a nut eating a squirrel, or a bushel of oranges.

That I am using a human-level description to refer to it does not make it somehow an exclusively human-level as opposed to quark-level system, any more than the fact that I'm using an English-language description to refer to it makes it an English-language-level system.

And Q1 continues to be a quark-level description of a system comprising a squirrel eating a nut even if nobody observes it.

Replies from: David_Allen
comment by David_Allen · 2011-02-19T20:17:43.996Z · LW(p) · GW(p)

Essentially you are saying that Q1=S1. This is certainly not true.

Clearly Q1 and S1 are related. If we could vanish a large contiguous chunk of Q1, we might see a chunk of squirrel disappear in S1; so they have some time-space context in common.

But Q1 describes a system of quarks and S1 describes a system of a squirrel and a nut. They are represented in different "languages"; to compare them you must convert them to a common "language". The relationship between Q1 and S1 is this process of language conversion -- it is the layered process of interactions and interpretations that result in S1, for some context that includes Q1.

The process that generates S1 -- in part from observations ultimately derived from Q1 -- includes the recognition of squirrels and nuts; and that part of the process occurs within the human mind.

But I could also, equally accurately, describe it as a particular configuration, C1, of cells. Or a particular configuration, A1, of atoms. Or a particular configuration, Q1, of quarks.

No. In general you are not guaranteed "equally accurate" descriptions when you convert from one language to another, from one perspective to another, from one domain abstraction to another. For example, the fraction 1/9 is exact, but its decimal representation limited to three decimal places, 0.111, is only approximate.
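The 1/9 example is easy to check directly (a small illustration of my own, not from the comment): exact rational arithmetic versus a three-decimal-place "translation" of the same quantity.

```python
from fractions import Fraction

exact = Fraction(1, 9)                 # the exact value 1/9
approx = round(float(exact), 3)        # translated to three decimal places
print(approx)                          # 0.111

# The translation is lossy: 111/1000 is not 1/9, and the
# residual error is exactly 1/9000.
print(Fraction(111, 1000) == exact)    # False
print(exact - Fraction(111, 1000))     # 1/9000
```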

Q1 is, in fact, a description of a squirrel eating a nut

I addressed this above. Q1 is a system of quarks that is part of the context that led to S1, it is not S1.

That I am using a human-level description to refer to it does not make it somehow an exclusively human-level as opposed to quark-level system, any more than the fact that I'm using an English-language description to refer to it makes it an English-language-level system.

For the purpose of efficient communication, mixing perspectives in this way is generally fine. To answer certain questions about existence and meaning -- for example, to identify whether arithmetic has an existence that is independent of humans and our artifacts -- we need to be more careful.

Replies from: TheOtherDave
comment by TheOtherDave · 2011-02-19T22:40:37.088Z · LW(p) · GW(p)

You seem to be failing to attend here to the difference between descriptions and the systems they describe.

I'm not saying Q1=S1. That's a category error; Q1 is a description of S1. The map is not the territory.

I am saying that Q1 and "a squirrel eating a nut" are two different descriptions of the same system, and that although "a squirrel eating a nut" depends on a human mind to generate it, the system it describes (which Q1 also describes) does not depend on a human mind to generate it.

Agreed that there are gains and losses in going from one form of representation to another. But the claim "'a squirrel eating a nut' is a description of that system over there" is just as accurate as the claim "Q1 is a description of that system over there." So I stand by the statement that I can as accurately make one claim as the other.

Replies from: David_Allen
comment by David_Allen · 2011-02-23T23:09:07.178Z · LW(p) · GW(p)

The map is not the territory. ... I am saying that Q1 and "a squirrel eating a nut" are two different descriptions of the same system...

The map and territory perspective is effective when pointing out that the map is not the territory. A map of Texas is not Texas. However, it would be wrong to conclude that a road map of Texas describes the same territory as an elevation map of Texas. Although both maps have a similar geographic constraint, they are not based on the same source data. They do not describe the same territory.

Consider this case. We show a picture E (evidence) to Frank and Glen. Frank's response is "cat". Glen's response is "cute".

By your prior statements I assume that you would say that "cat" and "cute" are both accurate descriptions of E, the picture.

Then Frank says "No, Glen is wrong -- that funny looking cat is ugly!"

Glen responds, "No, Frank is wrong -- that is a small fluffy dog!"

This conflict is caused by a false belief -- not by a false belief about E -- but by a false belief about what "cat" and "cute" actually describe.

Frank's response "cat" describes F(E) -- Frank's interpretation of the evidence. Glen's response "cute" describes G(E) -- Glen's interpretation of the evidence. Both statements are correct in that they are reasonable expressions of personal belief. From this perspective there is no conflict.

It is wrong to arbitrarily split out E and claim that any high level interpretation describes it.

Let's say that Frank and Glen talk, and then they both conclude that E is a picture of a "cute dog". Are they now describing E? No -- and they are still not describing the same thing. When Frank says "cute dog" he is thinking about how he finds small dogs cute. When Glen says "cute dog" he is thinking about how he finds fluffy animals cute. So even though they have both encoded their conclusion to the same phrase "cute dog", they do not mean the same thing.

Back to squirrels and quarks.

The chain of inference that leads to Q1 and the chain that leads to "a squirrel eating a nut" are different, even if at some level they share similar time-space constraints. Therefore Q1 and "a squirrel eating a nut" are not two different descriptions of the same system -- they are different descriptions of different systems.

I know that this perspective violates our common understanding of the world, but it is our understanding that is wrong.

although "a squirrel eating a nut" depends on a human mind to generate it, the system it describes (which Q1 also describes) does not depend on a human mind to generate it.

We seem to agree that some stuff doesn't need the human mind to exist -- but perhaps we disagree on how to carve the world into what does and what doesn't.

For clarity on this problem, let's formalize it a bit: Let S1 refer to the description "a squirrel eating a nut". Let Z refer to the system that S1 describes.

You claim that Z does not depend on a human mind to generate it; however Z necessarily includes portions of the human mind and body, including intermediate mind generated meanings. This human body/mind portion is everything from the moment that photons start entering the eye to the point where we come to the conclusion "hey, that's a squirrel eating a nut". So Z does depend in part on a human mind.

To deal with this, let's split Z into two parts: Let Ze refer to the part of Z that is entirely outside of the human body -- the environment. Let Zh refer to the rest of Z -- the part that occurs within the human body.

Also, existence requires context. There are reasonable normative contexts that we could assume for this case, but let's be specific: Let R refer to the physical reality of the universe (whatever that is).

From this perspective I think we can agree: both Ze and Zh exist within R, and the existence of Ze within R does not depend in any way on the processing that occurs within Zh. For that matter, the existence of Zh within R doesn't depend on the processing that occurs within Zh.

Replies from: TheOtherDave
comment by TheOtherDave · 2011-02-24T01:05:04.902Z · LW(p) · GW(p)

I'm not really following your overall line of reasoning, so here's a few responses to specific points:

  • Agreed that F(E) = "an ugly funny-looking cat" and G(E) = "a cute small fluffy dog" are both descriptions of E.

  • Not agreed that they are accurate descriptions. E is neither a cat nor a dog; E is a picture.

  • Agreed that to claim that F(E), or G(E), or any other "high-level interpretation" of E, fully describes E, is simply false. But I would say that F(E) and G(E) are (incomplete) descriptions of E. I understand that we disagree on this point.

  • I'm not at all sure what you mean by "arbitrarily splitting out E" in this example.

  • Agreed that if F2(E)="a picture of a cute-by-virtue-of-being-small dog", and G2(E)="a picture of a cute-by-virtue-of-being-a-fluffy-animal dog," then F2(E) != G2(E) -- that is, Frank and Glen don't actually agree. It helps to not confuse their internal descriptions (F2 and G2), which are different, with their utterances ("E is a picture of a cute dog"), which are the same.

  • So, agreed that they "do not mean the same thing" -- that is, their descriptions are not identical. But, again, I say that they are describing the same thing (E), although their descriptions (F2(E) and G2(E)) are different. Again, I understand that we disagree on this point.

  • I agree that the chain of inference that leads to formulating Q1 and the chain that leads to formulating "a squirrel eating a nut" are different. I don't see how it follows that "they are [..] descriptions of different systems."

Let S1 refer to the description "a squirrel eating a nut". Let Z refer to the system that S1 describes.

  • OK, though I want to point out explicitly that S1 now refers to something different from what S1 previously referred to in this discussion.

  • I don't think Z necessarily includes portions of the human mind and body, including intermediate mind generated meanings. But I agree that Z can include that and still be described by S1. And I agree that Z as you've defined it depends on a human mind.

  • But you seem to be asserting that (the old value of) S1 is the same system that Z is, and I disagree with that. (Old) S1 doesn't include any photons or human eyes or human conclusions, and Z does.

  • I agree that Ze and Zh exist within R (although I don't see how that expresses anything different than saying that Ze and Zh exist), and that Ze doesn't depend on Zh. I also agree that the existence of Zh doesn't depend on the specific processing performed by Zh, probably, though if we wanted to build on that statement it would likely be worthwhile to phrase it in a less confusing way.

comment by AlephNeil · 2011-02-18T18:33:37.266Z · LW(p) · GW(p)

Why does arithmetic have to "exist" "in" places and times?

You can meaningfully say that 2+2=4 (considered as an abstract number-theoretic proposition rather than a string of symbols) is true, that it incorporates a two-place function, that it's quantifier-free, or that such-and-such is a proof of 2+2=4 in Peano Arithmetic.
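The "such-and-such is a proof of 2+2=4" claim can be exhibited directly; as a sketch (in Lean 4 syntax, using its natural-number type as a stand-in for the Peano naturals):

```lean
-- With addition defined by recursion on the successor, 2 + 2 reduces
-- to 4 definitionally, so reflexivity closes the goal.
example : 2 + 2 = 4 := rfl

-- Spelled out with successors: 2 = S(S(0)), and
-- S(S(0)) + S(S(0)) = S(S(S(S(0)))).
example : Nat.succ (Nat.succ 0) + Nat.succ (Nat.succ 0)
        = Nat.succ (Nat.succ (Nat.succ (Nat.succ 0))) := rfl
```

This is exactly the sense in which the statement is a syntactic fact about a formal system, independent of any question about where it "exists".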

But asking whether 2+2=4 is "only true in your mind" (or whether it was "true before people existed") is like asking whether an octopus is true or false.

Replies from: David_Allen
comment by David_Allen · 2011-02-18T18:55:42.258Z · LW(p) · GW(p)

But asking whether 2+2=4 is "only true in your mind" (or whether it was "true before people existed") is like asking whether an octopus is true or false.

Absolutely -- but what is missing is a discussion of context. It isn't enough to just say that 2+2=4 is true, or that a particular octopus is false; we need to know what the context of evaluation is.

Why does arithmetic have to "exist" "in" places and times?

We need context for the same reason that Bayes' theorem needs priors. Without context we don't have meaning. In many cases we assume context in a way that is transparent to us; but whether explicit or implicit, a context is still in use.

Replies from: AlephNeil
comment by AlephNeil · 2011-02-18T19:23:54.746Z · LW(p) · GW(p)

Absolutely -- but what is missing is a discussion of context. It isn't enough to just say that 2+2=4 is true, or that a particular octopus is false; we need to know what the context of evaluation is.

2+2=4 has a standard context, namely the natural numbers N. "2+2=4" (without qualification) asserts that N satisfies 2+2=4. So the fact that one can imagine a non-standard context where "2+2=4" means something false (like "Paris is the capital of the UK") doesn't really have any bearing.

In my use of the expression "2+2=4" I refer not merely to a function that maps contexts to propositions, but to one specific proposition, which has meaning in and of itself. (That's basically what a proposition is -- a little chunk of semantics.)

And about that proposition it is meaningless to affirm or deny that it exists only in people's minds. To be fair, I think it's equally meaningless to say that the proposition "exists in" physical processes where someone puts two nuts next to two other nuts and then has four nuts.

Replies from: David_Allen
comment by David_Allen · 2011-02-18T22:20:25.117Z · LW(p) · GW(p)

2+2=4 has a standard context, namely the natural numbers N...

Agreed. For efficiency in communication we often assume normative contexts. For the statement "2+2=4" it makes sense for us to rely on its implicit context. To make sense of a statement like "Is that octopus true or false?", we will need to make the context of evaluation explicit.

In my use of the expression "2+2=4" I refer not merely to a function that maps contexts to propositions, but to one specific proposition, which has meaning in and of itself.

I'm not certain I understand this as you mean it, so I'll respond generally and see how you reply.

The idea that something can have "a meaning in and of itself" is false. This is equivalent to "objective truth". All meaning is relative to some context.

You can certainly have a conception of "a proposition that has meaning in and of itself", but that conception exists within the context of your mind, and the proposition with that nature is non-existent.

Perhaps you believe in dualism?

comment by XiXiDu · 2011-02-18T09:20:12.363Z · LW(p) · GW(p)

I think this is stupid. Embarrassingly stupid. But I can't stop thinking it.

Many people are thinking about it; it is fun to think about. I don't think it is stupid. The worst-case scenario is that it is bad science fiction but good fantasy.

Steven Landsburg seems to be taking the idea very seriously as well. The basic tenet of his new book seems to be that mind is biology, biology is chemistry, chemistry is physics, and physics is math. Mind perceives math; thus the universe exists physically. Erase the “baggage” and all that’s left is math.

All sorts of people think about this idea:

There is something almost mystical about this: any sequence of digits, for example, randomly conceived in the mind, must correspond to a sequence of digits in the unknowable expansion of Pi (in that realm over 10^1000 digits into the expansion), based on the laws of probability.

— Garth Kroeker, Irrational Numbers Metaphor

comment by Jonathan_Graehl · 2011-02-18T17:30:48.665Z · LW(p) · GW(p)

Maybe many things other than this universe exist, but they don't exist in a way that matters to me.

comment by Alex Flint (alexflint) · 2011-02-18T09:36:58.978Z · LW(p) · GW(p)

A side note: you should expect yourself to be confused when contemplating these ideas. Concepts foreign to our evolutionary environment, particularly deeply foreign concepts such as these, are very difficult to get a firm steady grasp on, and that's exactly as we should expect. Do the math, factor that into your decisions as much as possible, but don't be surprised if your intuition has to be dragged kicking and screaming.