The Cacophony Hypothesis: Simulation (If It is Possible At All) Cannot Call New Consciousnesses Into Existence

post by TheTripleAffirmative · 2019-04-14T21:20:43.522Z · LW · GW · 14 comments


Epistemic Status: The following seems plausible to me, but it's complex enough that I might have made some mistakes. Moreover, it goes against the beliefs of many people much smarter than myself. Thus caution is advised, and commentary is appreciated.

I.

In this post, I aim to make a philosophical argument that we (or anyone) cannot use simulation to create new consciousnesses (or, for that matter, to copy existing people's consciousnesses so as to give them simulated pleasure or pain). I here make a distinction between "something that acts like it is conscious" (e.g. what is commonly known as a 'p-zombie') and "something that experiences qualia." Only the latter is relevant to what I mean when I say something is 'conscious' throughout this post. In other words, consciousness here refers to the quality of 'having the lights on inside', and as a result it relates as well to whether or not an entity is a moral patient (i.e. can it feel pain? Can it feel pleasure? If so, it is important that we treat it right).

If my argument holds, then this would be a so-called 'crucial consideration' for those who are concerned about simulation. It would mean that no one can make the threat of hurting us in some simulation, nor can anyone promise to reward us in such a virtual space. However, we ourselves might still exist in some higher world's simulation (in a manner similar to what is described in SlateStarCodex's 'The View from the Ground Level'). Finally, since one consequence of my conclusion is that there is no moral downside to simulating beings that suffer, one might prefer to level a Pascal's Wager-like argument against me and say that under conditions of empirical and moral uncertainty, the moral consequences of accepting this argument (i.e. treating simulated minds as not capable of suffering) would be extreme, whereas granting simulated minds too much respect has fewer downsides.


Without further ado...


II.

Let us first distinguish two possible worlds. In the first, simulating consciousnesses [in any non-natural state] is simply impossible. That is to say, the only level on which consciousnesses may exist is the real, physical level that we see around us. No other realms may be said to 'exist'; all other spaces are mere information -- they are fiction, not real. Nature may have the power to create consciousnesses, but not us: No matter how hard we try, we are forever unable to instantiate artificial consciousnesses. If this is the world we live in, then the Cacophony Hypothesis is already trivially true.


So let us say that we live in the second type of world: One where consciousnesses may exist not merely in what is directly physical, but may be instantiated also in the realm of information. Ones and zeroes by themselves are just numbers, but if you represent them with transistors and interpret them with the right rules, then you will find that they define code, programs, models, simulations --- until, finally, the level of detail (or complexity, or whatever is required) is so high that consciousnesses are being simulated.


In this world, what is the right substrate (or input) on which this simulation may take place? And what are the rules by which it may be calculated?

Some hold that the substrate is mechanical: ones and zeroes, embedded on copper, lead, silicon, and gold. But the Church-Turing thesis tells us that all sufficiently advanced computers are equivalent in what they can compute. What may be simulated on ones and zeroes may be simulated as well by combinations of colours, or gestures, or anything that has some manner of informational content. The effects -- that is, the computations that are performed -- would remain the same. The substrate may be paint, or people, or truly anything in the world, so long as it is interpreted in the right way. (See also Max Tegmark's explanation of this idea, which he calls Substrate-Independence.)
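
To make the substrate-independence point concrete, here is a minimal sketch (my own, purely illustrative; the encodings and function names are arbitrary choices): the same abstract computation is carried out on a string of transistor states and on a sequence of paint colours, each read through its own interpretation rule.

```python
def computation(bits):
    """The abstract computation: is there an odd number of 1s?"""
    return sum(bits) % 2 == 1

def interpret_transistors(states):
    """Substrate 1: transistor states written as the characters '0' and '1'."""
    return [1 if s == "1" else 0 for s in states]

def interpret_colours(patches):
    """Substrate 2: patches of paint, with red standing for 1 and blue for 0."""
    return [1 if p == "red" else 0 for p in patches]

transistor_input = "1101"
colour_input = ["red", "red", "blue", "red"]

# Read through their respective interpretation rules, both substrates realise
# the same computation and yield the same result.
assert computation(interpret_transistors(transistor_input)) == \
       computation(interpret_colours(colour_input))
```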

And whatever makes a simulation run -- the functions that take such inputs and turn them into alternate simulated realities where consciousnesses may reside -- who says that the only way this could happen is by interpreting a string of bits in the exact way that a computer would interpret it? How small is the chance that, out of infinitely many possible functions, the only function that actually works is exactly the one that we've arbitrarily chosen to apply to computers, and which we commonly accept as having the potential for success?


III.

There are innumerably many interpretations of a changing string of ones and zeroes, of red and blue, of gasps and sighs. Computers have one consistent ruleset which tells them how to interpret bits; we may call this ruleset 'R'. However, surely we might have chosen many other rulesets. Simple ones, like "11 means 1 and 00 means 0, and interpret the result of this with R", are (by the Church-Turing thesis) equally powerful insofar as their ability to eventually create consciousnesses goes. Slightly more complex ones, such as "0 means 101 and 1 means 011, and interpret the result of this with R", may also be consistent, provided that we unpack the input in this manner. And we need not limit ourselves to rulesets that make use of R: Any consistent ruleset, no matter how complex, may apply. What about the rule, "1 simulates the entirety of Alice, who is now a real simulated person"? Is this a valid function? Is there any point at which increasing the complexity of an interpretation rule, given some input, makes it lose the power to simulate? Or may anything that a vast computer network can simulate be encoded into a single bit and unpacked from this, provided that we read it with the right interpretation function? Yes -- of course that is the case: All complexity that may be contained in some input data 'X' may instead be off-loaded into a function which says "Given any bit of information, I return that data 'X'."
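
As a toy illustration of the rulesets above (my own sketch; R is stood in for by a trivial interpreter that reads a bit string as a binary number, and ALICE is a placeholder for any data X), note in particular the last rule, where all of the complexity has been moved out of the input and into the interpretation function:

```python
def R(bits):
    """Stand-in for the computer's native ruleset: read a bit string as a binary number."""
    return int(bits, 2) if bits else 0

def ruleset_pairs(bits):
    """'11 means 1 and 00 means 0, and interpret the result of this with R.'"""
    decoded = "".join("1" if bits[i:i + 2] == "11" else "0" for i in range(0, len(bits), 2))
    return R(decoded)

def ruleset_expand(bits):
    """'0 means 101 and 1 means 011, and interpret the result of this with R.'"""
    decoded = "".join("011" if b == "1" else "101" for b in bits)
    return R(decoded)

ALICE = "an arbitrarily detailed description of Alice"  # placeholder for any data X

def ruleset_alice(bits):
    """A rule that ignores its input: the complexity of X lives entirely in the function."""
    return ALICE

print(ruleset_pairs("1100"))  # decodes to "10", which R reads as 2
print(ruleset_expand("10"))   # decodes to "011101", which R reads as 29
print(ruleset_alice("1"))     # a single bit "unpacks" into the whole of X
```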


We are thus led to an inexorable conclusion:

  1. Every possible combination of absolutely anything that exists is valid input.
  2. Any set of functions is a valid set of functions -- and the mathematical information space of all possible sets of functions is vast indeed.
  3. As such, an infinite number of simulations of all kinds are happening constantly, all around us. After all, if one function (R) can take one type of input (ones and zeroes, encoded on transistors) and return a simulation-reality, then who is to say that there do not, for every input, exist infinitely many functions that operate on it to this same effect?

Under this view, the world is a cacophony of simulations, of realities all existing in information space, invisible to our eyes until we access them through the right interpretation functions.

IV.

This leads us to the next question: What does it mean for someone to run a simulation, now?


In Borges' short story, "The Library of Babel," there exists a library containing every book that could ever be: It is a physical representation of the vast information space that is all combinations of letters, punctuation marks, and special characters. It is now nonsensical to say that a writer creates a book: The book has always existed, and the writer merely gives us a reference to some location within this library at which the book may be found.


In the same way, all simulations already exist. Simulations are after all just certain configurations of information, interpreted in certain informational ways -- and all information already exists, in the same realm that e.g. numbers (which are themselves information) inhabit. One does not create a simulation; one merely gives a reference to some simulation in information space. The idea of creating a new simulation is as nonsensical as the idea of creating a new book, or a new number; all these structures of information already exist; you cannot create them, only reference them.
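
As a small illustration of this 'reference, not creation' point (my own sketch, using a toy three-letter alphabet): in a fixed enumeration of all strings, writing a book adds nothing to the enumeration; it only computes the index at which that book was already sitting.

```python
ALPHABET = "ab "  # a toy alphabet; Borges' library uses a richer one

def index_of(book):
    """Position of `book` in the shortlex enumeration of all strings over ALPHABET."""
    # All strings strictly shorter than `book` come first...
    idx = sum(len(ALPHABET) ** n for n in range(len(book)))
    # ...then add `book`'s rank among the strings of its own length.
    for i, ch in enumerate(book):
        idx += ALPHABET.index(ch) * len(ALPHABET) ** (len(book) - i - 1)
    return idx

print(index_of("ab a"))  # "writing" this book just locates it: index 55
```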


But could not consciousnesses, like books, be copied? Here we run into the classical problem of whether there can exist multiple instances of a single informational object. If there cannot, and all copies of a consciousness are merely pointers to a single 'real' consciousness, in the same way that all copies of a book may be understood to be pointers to a single 'real' book, then this is not a problem. We would then end up with the conclusion that any kind of simulation is powerless: Whether you simulate some consciousness or not, it (and indeed everything!) is already being simulated.


So suppose instead that multiple real, valid copies of a consciousness may exist. That is to say: the difference between there being one copy of Bob, and there being ten copies of Bob, is that in the latter situation, there exists more pain and joy -- namely that which the simulated Bobs are feeling -- than there is in the former situation. Could we then not still conclude that running simulations creates consciousnesses, and thus the act of running a simulation is one that has moral weight?


To refute this, a thought experiment. Suppose that a malicious AI shows you that it is running a simulation of you, and threatens to hurt sim!you if you don't do X. What power does it now have over you? What differences are there between the situation where it hurts sim!you, and the one where it rewards sim!you?

The AI is using one stream of data and interpreting it in one way (probably with ruleset R); this combination of input and processing rules results in a simulation of 'you'. In particular, because it has access to both the input and the interpretation function, it can view the simulation and show it to you. But on that same input there acts, invisibly to us, another set of rules (specified here out of infinitely many sets of rules, all of which are simultaneously acting on this input), which results in a slightly different simulation of you. This second set of rules is different in such a way that if the AI hurts sim!you (an act which, one should note, changes the input; ruleset R remains the same), then in the second simulation, based on this same input, you are rewarded, and vice versa. Now there are two simulations ongoing, both real and inhabited by a simulated version of you, both running on a single set of transistors. The AI cannot change the fact that in one of these two simulations you are hurt and in the other you are not; it can only change which one it chooses to show you.


Indeed: For every function which simulates, on some input, a consciousness that is suffering, there is another function which, on this same input, simulates that same consciousness experiencing pleasure. Or, more generally and more formally stated: Whenever the AI decides to simulate X, then for any other possible consciousness or situation Y that is not X, there exists a function which takes the input of "The AI is simulating X", and which subsequently simulates Y. (Incidentally, the function which takes this same input, and which then returns a simulation of X, is exactly that function that we usually understand to be 'simulation', namely R. However, as noted, R is just one out of infinitely many functions.) 
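
A minimal sketch of the thought experiment (illustrative only; the two rulesets are invented stand-ins): whatever the AI does to the input, one of the two interpretations shows harm exactly when the other shows reward, so the AI can only choose which of the two it displays.

```python
def R(input_bits):
    """Stand-in for the AI's own ruleset: the first bit is read as sim!you's state."""
    return "hurt" if input_bits[0] == "1" else "rewarded"

def R_mirror(input_bits):
    """One of the infinitely many other rules acting on the same input,
    differing from R only in that it swaps the two outcomes."""
    return "rewarded" if input_bits[0] == "1" else "hurt"

for action in ("0", "1"):  # whatever the AI does to the input...
    # ...one interpretation shows harm and the other shows reward.
    assert {R(action), R_mirror(action)} == {"hurt", "rewarded"}
    print(action, R(action), R_mirror(action))
```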


V.

As such, in this second world, reality is currently running uncountable billions of copies of any simulation that one may come up with, and any attempt to add one simulation-copy to reality results instead in a new reality-state in which every simulation-copy has been added. Do not fret, you are not culpable: after all, any attempt to do anything other than adding a simulation-copy also results in this same new reality-state. This is because any possible input, when given to the set of all possible rules or functions, yields every possible result; thus it does not matter what input you give to reality, whether that is running simulation X, or running simulation Y, or even doing act Z, or not doing act Z.


Informational space is infinite. Even if we limit our physical substrate to transistors set to ones or zeroes, we may still come up with limitless functions besides R that together achieve the above result. In running computations, we don't change what is being simulated, and we don't change what 'exists'. We merely open a window onto some piece of information. In mathematical space, everything already exists. We are not actors, but observers: We do not create numbers, or functions, or even applications of functions on numbers; we merely calculate, and view the results.


To summarize:

  1. If simulation is possible on some substrate with some rule, then it is possible on any substrate with any rule. Moreover, simulation space, like Borges' Library and number space, exists as much as it is ever going to exist; all possible simulations are already extant and running.
  2. Attempting to run 'extra' simulations on top of what reality is already simulating is useless, because your act of simulating X is interpreted by reality as input on which it simulates X and everything else, and your act of not simulating X is also interpreted by reality as input on which it simulates X and everything else.

It should be noted that simulations are still useful, in the same way that doing any kind of maths is useful: Amidst the infinite expanses of possible outputs, mathematical processes highlight those outputs which you are interested in. There are infinitely many numbers, but the right function with the right input can still give you concrete information. In the same way, if someone is simulating your mind, then even though they cannot cause any pain or reward that would not already 'exist' anyway, they can now nonetheless read your mind, and from this gain much information about you.


Thus simulation is still a very powerful tool.


But the idea that simulation can be used to conjure new consciousnesses into existence, seems to me to be based on a fundamental misunderstanding of what information is.


[A note of clarification: One might argue that my argument does not successfully make the jump from physically-defined inputs, such as a set of transistors representing ones and zeroes, to symbolically-defined meta-physical inputs, such as "whether or not X is being simulated." This would be a pertinent argument, since my line of reasoning depends crucially on this second type of input. To this hypothetical argument, I would counter that any such symbolic input has to exist fully in natural, physical reality in some manner: "X is being simulated" is a statement about the world which we might, given the tools (and knowing for each function what input to search for -- this is technically computable), physically check to be true or false, in the same way that one may physically check whether a certain set of transistors currently encodes some given string of bits. The second input is far more abstract, and more complex to check, than the first; but I do not think they exist on qualitatively different levels. Finally, one would not need infinite time to check the statement "X is being simulated"; just pick the function "Given the clap of one's hands, simulate X", and then clap your hands.]


VI.

Four final notes, to recap and conclude:

  1. My argument in plain English, without rigour or reason, is this: If having the right numbers in the right places is enough to make new people exist (proposition A), then anything is enough to make anything exist (B). It follows that if we accept A, which many thinkers do, then everything -- every possible situation -- currently exists. It is moreover of no consequence to try to add a new situation to this 'set of all possible situations, infinite times', because your new situation is already in there infinitely many times, and furthermore, abstaining from adding this new situation counts as 'anything' and thus, by B, would also add the new situation to this set.
  2. You cannot create a book or a number; you're merely providing a reference to some already extant book in Babel's Library, or to some extant number in number space. In the same way, running a simulation, the vital part of which (by the Church-Turing thesis) has to be entirely based on non-physical information, should no longer be seen as the act of creating some new reality; it merely opens a window into a reality that was already there.
  3. The idea that every possible situation, including terrible, hurtful ones, is real, may be very stressful. To people who are bothered by this, I offer the view that perhaps we do live in the ground level, and simulating artificial, non-natural consciousnesses may be impossible: Our own world may well be all that there is. This post is not meant to establish that the idea of "reality is a cacophony of simulations" is necessarily true; rather, it was written to argue that if we accept that some kind of simulation is possible, then it would be strange to also deny that every other kind of simulation is possible.
  4. A secondary aim is to re-center the discussion around simulation: To go from a default idea of "Computation is the only method through which simulation may take place," to the new idea, which is "Simulations may take place everywhere, in every way." The first view seems too neat, too well-suited to an accidental reality, strangely and unreasonably specific; we are en route to discovering one type of simulation ourselves, and thus it was declared that this was the only type, the only way. The second view -- though my bias should be noted! -- strikes me as being general and consistent; it is not formed specifically around the 'normal', computer-influenced ideas of what forms computation takes, but rather allows for all possible forms of computation to have a role in this discussion. I may well be wrong, but it seems to me that the burden of proof should not be on those who say that "X may simulate Y"; it should be on those who say "X may only be simulated by Z." The default understanding should be that inputs and functions are valid until somehow proven invalid, rather than the other way around. (Truthfully, to gain a proof either way is probably impossible, unless we were to somehow find a method to measure consciousness -- and this would have to be a method that recognizes p-zombies for what they are.)

Thanks go to Matthijs Maas for helping me flesh out this idea through engaging conversations and thorough feedback.

14 comments


comment by nshepperd · 2019-04-16T07:41:05.793Z · LW(p) · GW(p)

This idea is, as others have commented, pretty much Dust theory.

The solution, in my opinion, is the same as the answer to Dust theory: namely, it is not actually the case that anything is a simulation of anything. Yes, you can claim that (for instance) the motion of the atoms in a pebble can be interpreted as a simulation of Alice, in the sense that anything can be mapped to anything... but in a certain more real sense, you can't.

And that sense is this: an actual simulation of Alice running on a computer grants you certain powers - you can step through the simulation, examine what Alice does, and determine certain facts such as Alice's favourite ice cream flavour (these are logical facts, given the simulation's initial state). If the simulation is an upload of your friend Alice, then by doing so you learn meaningful new facts about your friend.

In comparison, a pebble "interpreted" as a simulation of Alice affords you no such powers, because the interpretation (mapping from pebble states to simulation data) is entirely post-hoc. The only way to pin down the mapping---such that you could, for instance, explicitly write it down, or take the pebble's state and map it to an answer about Alice's favourite ice cream---is to already have carried out the actual simulation, separately, and already know these things about Alice.

In general, "legitimate" computations of certain logical facts (such as the answers one might ask about simulations of people) should, in a certain sense, make it easier to calculate those logical facts then doing so from scratch.

A specific formalization of this idea would be that a proof system equipped with an oracle (axiom schema) describing the states of the physical system which allegedly computed these facts, as well as its transition rule, should be able to find proofs for those logical facts in fewer steps than one without such axioms.

Such proofs will involve first coming up with a mapping (such as interpreting certain electrical junctions as nand gates), proving it valid using the transition rules, then using induction to jump to "the physical state at timestep t is X, therefore Alice's favourite ice cream flavour is Y". Note that the requirement that these proofs be short naturally results in these "interpretations" being simple.

As far as I know, this specific formalization of the anti-Dust idea is original to me, though the idea that "interpretations" of things as computations ought to be "simple" is not particularly new.

Replies from: Bunthut
comment by Bunthut · 2019-04-17T21:39:52.973Z · LW(p) · GW(p)

I think you've given a good analysis of "simulation", but it doesn't get around the problem OP presents.

The only way to pin down the mapping---such that you could, for instance, explicitly write it down, or take the pebble's state and map it to an answer about Alice's favourite ice cream---is to already have carried out the actual simulation, separately, and already know these things about Alice.

It's also possible to do those calculations during the interpretation/translation. You may have meant that, I can't tell.

Your idea that the computation needs to happen somewhere is good, but in order to make it work you need to specify a "target format" in which the predictions are made. "1" doesn't really simulate Alice because you can't read the predictions it makes, even when they are technically "there" in a mathematical sense, and the translation into such a format involves what we consider the actual simulation.

This means, though, that whether something is a simulation is only on the map, and not in the territory. It depends on what that "target format" is. For example, a description in Chinese is in a sense not a real description to me, because I can't process it efficiently. Someone else, however, may, and to them it is a real description. Similarly, one could write a simulation in a programming language we don't know, and if they don't leave us a compiler or docs, we would have a hard time noticing. So whether something is a simulation can depend on the observer.

If we want to say that simulations are conscious and ethically relevant, this seems like something that needs to be addressed.

Replies from: nshepperd
comment by nshepperd · 2019-04-21T22:33:33.034Z · LW(p) · GW(p)

That's not an issue in my formalization. The "logical facts" I speak of in the formalized version would be fully specified mathematical statements, such as "if the simulation starts in state X at t=0, the state of the simulation at t=T is Y" or "given that Alice starts in state X, then <some formalized way of categorising states according to favourite ice cream flavour> returns Vanilla". The "target format" is mathematical proofs. Languages (as in English vs Chinese) don't and can't come into it, because proof systems are language-ignorant.

Note, the formalized criterion is broader than the informal "could you do something useful with this simulation IRL" criterion, even though the latter is the 'inspiration' for it. For instance, it doesn't matter whether you understand the programming language the simulation is written in. If someone who did understand the language could write the appropriate proofs, then the proofs exist.

Similarly, if a simulation is run under homomorphic encryption, it is nevertheless a valid simulation, despite the fact that you can't read it if you don't have the decryption key. Because a proof exists which starts by "magically" writing down the key, proving that it's the correct decryption key, then proceeding from there.

An informal criterion which maybe captures this better would be: If you and your friend both have (view) access to a genuine computation of some logical facts X, it should be possible to convince your friend of X in fewer words by referring to the alleged computation (but you are permitted unlimited time to think first, so you can reverse engineer the simulation, bruteforce some encryption keys, learn Chinese, whatever you like, before talking). A bit like how it's more efficient to convince your friend that 637265729567*37265974 = 23748328109134853258 by punching the numbers into a calculator and saying "see?" than by handing over a paper with a complete long multiplication derivation (assuming you are familiar with the calculator and can convince your friend that it calculates correctly).
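
(As a sanity check, the arithmetic in this example is indeed correct; a one-line verification:)

```python
assert 637265729567 * 37265974 == 23748328109134853258
```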

Replies from: Bunthut
comment by Bunthut · 2019-04-24T12:40:50.854Z · LW(p) · GW(p)

In that case, the target format problem shows up in the formalisation of the physical system.

A specific formalization of this idea would be that a proof system equipped with an oracle (axiom schema) describing the states of the physical system which allegedly computed these facts, as well as its transition rule, should be able to find proofs for those logical facts in less steps than one without such axioms.
Such proofs will involve first coming up with a mapping (such as interpreting certain electrical junctions as nand gates), proving them valid using the transition rules, then using induction to jump to "the physical state at timestep t is X therefore Alice's favourite ice cream colour is Y".

How do you "interpret" certain electrical junctions as nand gates? Either you already have

a proof system equipped with an axiom schema describing the states of the physical system, as well as its transition rule

or this is not a fully formal step. Odds are you already have one (your theory of physics). But then you are measuring proof shortness relative to that system. And you could be using one of countless other formal systems which always make the same predictions, but relative to which different proofs are short and long. To steal someone else's explanation:

Let us imagine a white surface with irregular black spots on it. We then say that whatever kind of picture these make, I can always approximate as closely as I wish to the description of it by covering the surface with a sufficiently fine square mesh, and then saying of every square whether it is black or white. In this way I shall have imposed a unified form on the description of the surface. The form is optional, since I could have achieved the same result by using a net with a triangular or hexagonal mesh. Possibly the use of a triangular mesh would have made the description simpler: that is to say, it might be that we could describe the surface more accurately with a coarse triangular mesh than with a fine square mesh (or conversely), and so on.

And which of these empirically indistinguishable formalisations you use is of course a fact about the map. In your example:

A bit like how it's more efficient to convince your friend that 637265729567*37265974 = 23748328109134853258 by punching the numbers into a calculator and saying "see?" than by handing over a paper with a complete long multiplication derivation (assuming you are familiar with the calculator and can convince your friend that it calculates correctly).

The assumption (including that it takes in and puts out in Arabic numerals, and uses "*" as the multiplication command, and that buttons must be pressed,... and all the other things you need to actually use it) includes that.

Replies from: nshepperd
comment by nshepperd · 2019-04-25T17:19:39.715Z · LW(p) · GW(p)

Yes, you need to have a theory of physics to write down a transition rule for a physical system. That is a problem, but it's not at all the same problem as the "target format" problem. The only role the transition rule plays here is it allows one to apply induction to efficiently prove some generalization about the system over all time steps.

In principle a different, more distinguished, concise description of the system's behaviour could play a similar role (perhaps the recording of the states of the system + the shortest program that outputs the recording?). Or perhaps there's some way of choosing a distinguished "best" formalization of physics. But that's rather out of scope of what I wanted to suggest here.

But then you are measuring proof shortness relative to that system. And you could be using one of countless other formal systems which always make the same predictions, but relative to which different proofs are short and long.

It would be an O(1) cost to start the proof by translating the axioms into a more convenient format. Much as Kolmogorov complexity is "language dependent" but not asymptotically, because any particular universal Turing machine can be simulated in any other for a constant cost.
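
For reference, the standard invariance theorem being alluded to here (my gloss): for any two universal machines U and V there is a constant c_{U,V}, independent of the string x, such that

$$K_U(x) \;\le\; K_V(x) + c_{U,V} \quad \text{for all } x.$$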

The assumption (including that it takes in and puts out in Arabic numerals, and uses “*” as the multiplication command, and that buttons must be pressed,… and all the other things you need to actually use it) includes that.

These are all things that can be derived from a physical description of the calculator (maybe not in fewer steps than it takes to do long multiplication, but certainly in fewer steps than less trivial computations one might do with a calculator). There's no observer dependency here.

Replies from: Bunthut
comment by Bunthut · 2019-04-26T11:57:26.597Z · LW(p) · GW(p)
It would be an O(1) cost to start the proof by translating the axioms into a more convenient format. Much as Kolmogorov complexity is "language dependent" but not asymptotically, because any particular universal Turing machine can be simulated in any other for a constant cost.

And the thing that isn't O(1) is to apply the transition rule until you reach the relevant time step, right? I think I understand it now: The calculations involved in applying the transition rule count towards the computation length, and the simulation should be able to answer multiple questions about the thing it simulates. So if object A simulates object B, we make a model X of A, prove it equivalent to the one in our theory of physics, then prove it equivalent to your physics model of B, then calculate forward in X, then translate the result back into B with the equivalence. And then we count the steps all this took. Before I ask any more questions, am I getting that right?

comment by Charlie Steiner · 2019-04-15T08:38:27.943Z · LW(p) · GW(p)

Comment status: long.

Before talking about your (quite fun) post, I first want to point out a failure mode exemplified by Scott's "The View From Ground Level." Here's how he gets into trouble (or begins trolling): first he is confused about consciousness. Then he postulates a unified thing - "consciousness" proper - that he's confused about. Finally he makes an argument that manipulates this thing as if it were a substance or essence. These sorts of arguments never work. Just because there's a cloud, doesn't mean that there's a thing inside the cloud precisely shaped like the area obscured by the cloud.

Okay, on to my reaction to this post.

When trying to ground weird questions about point-of-view and information, one useful question is "what would a Solomonoff inductor think?" The really short version of why we can take advice from a Solomonoff inductor is that there is no such thing as a uniform prior over everything - if you try to put a uniform prior over everything, you're trying to assign each hypothesis a probability of 1/infinity, which is zero, which is not a good probability to give everything. (You can play tricks that effectively involve canceling out this infinite entropy with some source of infinite information, but let's stick to the finite-information world). To have a probability distribution over infinite hypotheses, you need to play favorites. And this sounds a lot like Solomonoff's "hypotheses that are simple to encode for some universal Turing machine should be higher on the list."
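
In symbols (my gloss, for concreteness): assigning every one of countably many hypotheses the same weight ε gives a total of 0 (if ε = 0) or ∞ (if ε > 0), never 1, so no uniform prior exists; a complexity-weighted prior does normalize, because for prefix-free programs p the Kraft inequality gives

$$\sum_{p} 2^{-\ell(p)} \;\le\; 1,$$

so hypotheses with longer descriptions receive exponentially less weight.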

So what would a Solomonoff inductor think about themselves? Do they think they're the "naive encoding," straightforwardly controlling a body in some hypothesized "real world?" Or are they one of the infinitely many "latent encodings," where the real world isn't what it seems and the inductor's perceptions are instead generated by some complicated mapping from the state of the world to the memories of the inductor?

The answer is that the Solomonoff inductor prefers the naive encoding. We're pretty sure my memories are (relatively) simple to explain if you hypothesize my physical body. But if you hypothesize that my memories are encoded in the spray from a waterfall, the size of the Turing machine required to translate waterfall-spray into my memories gets really big. One of the features of Solomonoff inductors that's vital to their nice properties is that hypotheses become more unlikely faster than they become more numerous. There are an infinite number of ways that my memories might be encoded in a waterfall, or in the left foot of George Clooney, or even in my own brain. But arranged in order of complexity of the encoding, these infinite possibilities get exponentially unlikely, so that their sum remains small.

So the naive encoding comes out unscathed when it comes to myself. But what about other people? Here I agree the truth has to be unintuitive, but I'd be a bit more eliminativist than you. You say "all those experiences exist"; I'd say "in that sense, none of them exist."

From the point of view of the Solomonoff inductor, there is just the real world hypothesized to explain our data. Other people are just things in the real world. We presume that they exist because that presumption has explanatory power.

You might say that the Solomonoff inductor is being hypocritical here. It assumes that my body has some special bridging law to some sort of immaterial soul, some Real Self that is doing the Solomonoff-inducting, but it doesn't extend that assumption to other people. To be cosmopolitan, you'd say, we should speculate about the bridging laws that might connect experiences to our world like hairs on a supremely shaggy dog.

I'd say that maybe this is the point where the Solomonoff inductor and I part ways, because I don't think I actually have an immaterial soul; it's just a useful perspective to take sometimes. I'd like to think I'm actually doing some kind of naturalized induction that we don't quite know how to formalize yet, one that allows for the fact that the thing doing the inducting might actually be part of the real world, not floating outside it, attached only by an umbilical cord.

I don't just care about people because I think they have bridging laws that connect them to their Real Experiences; any hypotheses about Real Experiences in my description of the world are merely convenient fictions that could be disposed of if only I was Laplace's demon.

I think that in the ultimate generalization of how we care about things, the one that works even when all the weirdnesses of the world are allowed, things that are fictional will not be made fundamental. Which is to say, the reason I don't care about all the encodings of me that could be squeezed into every mundane object I encounter isn't because they all cancel out by some phenomenal symmetry argument, it's because I don't care about those encodings at all. They are, in some deep sense, so weird I don't care about them, and I think that such a gradient that fades off into indifference is a fundamental part of any realistic account of what physical systems we care about.

comment by Dagon · 2019-04-14T23:09:13.202Z · LW(p) · GW(p)

This seems to hinge on something you haven't defined or attempted to measure: consciousness. You've left out some possibilities:

  • perhaps nothing is conscious.
  • perhaps only you are conscious and the rest of those you perceive are automata.
  • perhaps everything is conscious. every calculation does indeed feel.
  • perhaps consciousness and qualia aren't even close to what we've considered.

Basically, why would you expect that ANY consciousness exists, if a simulation/calculation doesn't have it? All the creatures you see are just neural calculations/reactions, aren't they? I'll grant that you may have special knowledge that you yourself exist, but what makes you believe anyone else does?

comment by Slider · 2019-04-16T22:11:34.305Z · LW(p) · GW(p)

If people are biological computers and simulation can't spring up new consciousness, doesn't that mean that a baby can't have a consciousness? In a way, a baby isn't meant to simulate anything, but I do think that its internal world shouldn't be designated as illusory. That is, we don't need or ask for detailed brain-state histories of babies to designate them as conscious.

comment by Nebu · 2019-05-02T05:36:13.755Z · LW(p) · GW(p)

I see some comments hinting towards this pseudo-argument, but I don't think I saw anyone make it explicitly:

Say I replace one neuron in my brain with a little chip that replicates what that neuron would have done. Say I replace two, three, and so on, until my brain is now completely artificial. Am I still conscious, or not? If not, was there a sudden cut-off point where I switched from conscious to not-conscious, or is there a spectrum along which I was gradually becoming less and less conscious as this transformation occurred?

If I am still conscious, what if we remove my artificial brain, put it in a PC case, and just let it execute? Is that not a simulation of me? What if we pause the chips, record each of their exact states, and instantiate those same states in another set of chips with an identical architecture?

If consciousness is a spectrum instead of a sudden cut-off point, how confident are we that "simulations" of the type that you're claiming are "not" (as in 0) conscious, aren't actually 0.0001 conscious?

comment by countingtoten · 2019-04-16T08:46:37.422Z · LW(p) · GW(p)

Comment status: I may change my mind on a more careful reading.

Other respondents have mentioned the Mathematical Macrocosm Hypothesis. My take differs slightly, I think. I believe you've subtly contradicted yourself. In order for your argument to go anywhere you had to assume that an abstract computation rule exists in the same sense as a real computer running a simulation. This seems to largely grant Tegmark's version of the MMH (and may be the first premise I reject here). ETA: the other branch of your dilemma doesn't seem to engage with the functionalist view of qualia, which says that the real internal behavior or relationships within a physical system are what matter.

Now, we're effectively certain that our world is fundamentally governed by mathematical laws of physics (whether we discover the true laws or not). Dualist philosophers like Chalmers seem to grant this point despite wanting to say that consciousness is different. I think Chalmers freely grants that your consciousness - despite being itself non-physical, on his view - is wholly controlled by physical processes in your brain. This seems undisputed among serious people. (You can just take certain chemicals or let yourself get hungry, and see how your thoughts change.)

So, on the earlier Tegmark IV premise, there's no difference between you and a simulation. You are a simulation within an abstract mathematical process, which exists in exactly the same way as an arithmetical sequence or the computational functions you discuss. You are isomorphic to various simulations of yourself within abstract computations.

Chalmers evidently postulates a "bridging law" in the nature of reality which makes some simulations conscious and not others. However, this seems fairly arbitrary, and in any case I also recall Chalmers saying that a person uploaded (properly) to a computer would be conscious. I certainly don't see anything in his argument to prevent this. If you don't like the idea of this applying to more abstract computations, I recommend you reject Tegmark and admit that the nature of reality is still technically an open problem.

comment by a gently pricked vein (strangepoop) · 2019-04-15T21:13:08.842Z · LW(p) · GW(p)

Responses to your four final notes:

1. This is, as has been remarked in another comment, pretty much Dust theory. See also Moravec's concise take on the topic, referenced in the Dust theory FAQ. Doing a search for it on LW might also prove helpful for previous discussions.

2. "that was already there"? What do you mean by this? Would you prefer to use the term 'magical reality fluid' instead of "exists"/"extant"/"real"/"there" etc, to mark your confusion [LW · GW] about this? If you instead feel like you aren't confused about these terms, please provide (a link to) a solution. You can find the problem statement in The Anthropic [LW · GW] Trilemma [LW · GW].

3. Eliezer deals with this using average utilitarianism, depending on whether or not you agree with rescuability (see below).

4. GAZP vs GLUT talks about the difference between a cellphone transmitting information of consciousness vs the actual conscious brain on the other end, and generalizes it to arbitrary "interpretations". That is, there are parts of the computation that are merely "interpreting", informing you about consciousness, and others that are "actually" instantiating. It may not be clear what exactly the crucial difference is yet, but I think it might be possible to rescue the difference, even if you can construct continuums to mess with the notion. This is of course deeply tied to 2.

----

It may seem that my takeaway from your post is mostly negative; this is not the case. I appreciate this post: it was very well organized despite tackling some very hairy issues, which made it easier to respond to. I do feel like LW could solve this somewhat satisfactorily; perhaps some people already have, and don't bother pointing it out to the rest of us / are lost in the noise?

Replies from: strangepoop
comment by a gently pricked vein (strangepoop) · 2019-04-15T21:19:13.437Z · LW(p) · GW(p)

To further elaborate 4: your example of the string "1" being a conscious agent because you can "unpack" it into an agent really feels like it shouldn't count: you're just throwing away the "1" and replaying a separate recording of something that was conscious. This sounds about as much of a non-sequitur as "I am next to this pen, so this pen is conscious".

We could, however, make it more interesting by making the computation depend "crucially" on the input. But what counts?

Suppose I have a program that turns noise into a conscious agent (much like generative models can turn a noise vector into a face, say). If we now seed this with a waterfall, is the waterfall now a part of the computation, enough to be granted some sentience/moral patienthood? I think the usual answer is "all the non-trivial work is being done by the program, not the random seed", as Scott Aaronson seems to say here. (He also makes the interesting claim of "has to participate fully in the arrow of time to be conscious", which would disqualify caching and replaying.)

But this can be made a little more confusing, because it's hard to tell which bit is non-trivial from the outside: suppose I save and encrypt the conscious-generating-program. This looks like random noise from the outside, and will pass all randomness tests. Now I have another program with the stored key decrypt it and run it. From the outside, you might disregard the random-seed-looking-thingy and instead try to analyze the decryption program, thinking that's where the magic is.

I'd love to hear about ideas to pin down the difference between Seeding and Decrypting in general, for arbitrary interpretations. It seems within reach, and like a good first step, since the two lie on roughly opposite ends of a spectrum of "cruciality" when the system breaks down into two or more modules.

comment by avturchin · 2019-04-14T22:24:31.358Z · LW(p) · GW(p)

Your idea, in a nutshell, looks to me like the idea of Boltzmann brains in the mathematical universe. That is, if simulation is possible, then random minds should dominate. But as I am not a random chaotic mind, random minds are not dominating, and thus simulation is impossible.

The weak point of it, in my opinion, is that one can't prove that she is not a random chaotic mind, and moreover, even random observer-moments could form "chains", as was described by Egan in his dust theory, by Wei Dai, and in Mueller's article "Law without law".

In other words, we could now be inside a simulation without a creator, just a random book in the Babel library, and there is no way we could prove that we are not: the world we observe could be completely random, but our thinking process is also random, so we can conclude that the world is not random in 50 per cent of cases.