The conscious tape

post by PhilGoetz · 2010-09-16T19:55:51.997Z · LW · GW · Legacy · 119 comments

Contents

  Option 1: Consciousness is computed
  Option 2: Consciousness is computation
  Option 3: Consciousness is the result of quantum effects in microtubules
  ADDED

This post comprises one question and no answers.  You have been warned.

I was reading "How minds can be computational systems", by William Rapaport, and something caught my attention.  He wrote,

Computationalism is - or ought to be - the thesis that cognition is computable ... Note, first, that I have said that computationalism is the thesis that cognition is computable, not that it is computation (as Pylyshyn 1985 p. xiii characterizes it). ... To say that cognition is computable is to say that there is an algorithm - more likely, a collection of interrelated algorithms - that computes it.  So, what does it mean to say that something 'computes cognition'? ... cognition is computable if and only if there is an algorithm ... that computes this function (or functions).

Rapaport was talking about cognition, not consciousness.  The contention between these hypotheses is, however, only interesting if you are talking about consciousness; if you're talking about "cognition", it's just a choice between two different ways to define cognition.

When it comes to consciousness, I consider myself a computationalist.  But I hadn't realized before that my explanation of consciousness as computational "works" by jumping back and forth between those two incompatible positions.  Each one provides part of what I need; but each, on its own, seems impossible to me; and they are probably mutually exclusive.

Option 1: Consciousness is computed

If consciousness is computed, then there are no necessary dynamics.  All that matters is getting the right output.  It doesn't matter what algorithm you use to get that output, or what physical machinery you use to compute it.  In the real world, it matters how fast you compute it; but surely you can provide a simulated world at the right speed for your slow or fast algorithm.  In humans today, the output is not produced all at once - but from a computationalist perspective, that isn't important.  I know "emergence" is wonderful, but it's still Turing-computable.  Whatever a "correct" sequence of inputs and outputs is, even if they overlap in time, you can summarize the inputs over time in a single static representation, and the outputs in a static representation.

So what is conscious, in this view?  Well, the algorithm doesn't matter - remember, we're not asking for O(consciousness); we're saying that consciousness is computed, and therefore is the output of a computation.  The machine doing the computing is one step further removed than the algorithm, so it's certainly not eligible as the seat of consciousness; it can be replaced by an infinite number of computationally-equivalent different substrates.

Whatever it is that's conscious, you can compute it and represent it in a static form.  The simplest interpretation is that the output itself is conscious.  So this leads to the conclusion that, if a Turing machine computes consciousness and summarizes its output in a static representation on a tape, the tape is conscious.  Or the information on the tape, or - whatever it is that's conscious, it is a static thing, not a living, dynamic thing.  If computation is an output, process doesn't matter.  Time doesn't enter into it.

The only way out of this is to claim that an output which is conscious when it comes out of a dynamic real-time system becomes unconscious when it's converted into a static representation, even if the two representations contain exactly the same information.  (X and Y have the same information if an observer can translate X into Y, and Y into X.  The requirement for an observer may be problematic here.)  This strikes me as not being computationalist at all.  Computationalism means considering two computational outputs equivalent if they contain the same information, whether they're computed with neurons and represented as membrane potentials, or computed with Tinkertoys and represented by rotations of a set of wheels.  Is the syntactic transformation from a dynamic to a static representation a greater qualitative change than the transformation from Tinkertoys to neurons?  I don't think so.
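As a minimal sketch of that equivalence (in Python; the step function is an arbitrary stand-in for a dynamic process, not anyone's model of consciousness):

```python
# A toy dynamic process: each tick produces the next state.
def step(state: int) -> int:
    return (state * 31 + 7) % 1000

def run_dynamic(initial: int, ticks: int) -> list[int]:
    """The 'living' version: states produced one at a time."""
    states = [initial]
    for _ in range(ticks):
        states.append(step(states[-1]))
    return states

# The "tape": the whole history frozen into one static string.
tape = ",".join(str(s) for s in run_dynamic(42, 10))

# An observer can translate tape -> process and back, so by the
# criterion above the two carry exactly the same information.
recovered = [int(s) for s in tape.split(",")]
assert recovered == run_dynamic(42, 10)
```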

Option 2: Consciousness is computation

If consciousness is computation, then we have the satisfying feeling that how we do those computations matters.  But then we're not computationalists anymore!

A computational analysis will never say that one algorithm for producing a series of outputs produces an extra computational effect (consciousness) that another method does not.  If it's not output, or internal representational state, it doesn't count.  There are no other "by-products of computation".  If you use a context-sensitive grammar to recognize a regular language, that doesn't make the answer any more special than if you had used a regular grammar.
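A toy illustration of this point, assuming nothing beyond the standard library: the regular language a*b decided by two very different algorithms, whose outputs no computational analysis can tell apart.

```python
import re

# The language a*b, decided two very different ways: Python's
# backtracking regex engine versus a hand-written three-state DFA.
def match_dfa(s: str) -> bool:
    state = 0                      # 0: reading a's, 1: accept, 2: dead
    for ch in s:
        if state == 0 and ch == "a":
            state = 0
        elif state == 0 and ch == "b":
            state = 1
        else:
            state = 2
    return state == 1

# A computational analysis sees only the agreeing outputs.
for s in ["b", "aaab", "aba", "", "aab"]:
    assert bool(re.fullmatch(r"a*b", s)) == match_dfa(s)
```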

Don't protest that a human talks and walks and thereby produces side-effects during the computation.  That is not a computational analysis.  A computational analysis will give the same result if you translate the algorithm, and whatever machine runs it, onto the tape of a Turing machine.  Anything that gives a different result is not a computational analysis.  If these side-effects don't show up on the tape, it's because you forgot to represent them.

An analysis of the actual computation process, as opposed to its output, could be a thermodynamic analysis, which would care about things like how many bits the algorithm erased internally.  I find it hard to believe that consciousness is a particular pattern of entropy production or waste heat.  Or it could be a complexity or runtime analysis that cares about how long the computation takes.  A complexity analysis has a categorical output; there's no such thing as a function being "a little bit recursively enumerable", though I believe there is such a thing as being a little bit conscious.  So I'd be surprised if "conscious" is a property of an algorithm in the same way that "recursively enumerable" is.  A runtime analysis can give more quantitative answers, but I'm pretty sure you can't become conscious by increasing your runtime.  (Otherwise, Windows Vista would be conscious.)
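To show the sort of quantity a thermodynamic analysis actually yields, here is a sketch using Landauer's bound; the constants are standard, the example is mine:

```python
import math

# Landauer's bound: erasing one bit must dissipate at least
# k_B * T * ln 2 of heat.
K_B = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0            # assumed room temperature, K

def min_erasure_heat(bits_erased: int) -> float:
    """Minimum heat (joules) dissipated by erasing the given bits."""
    return bits_erased * K_B * T * math.log(2)

# Even a billion erased bits cost only ~3 picojoules - a strange
# sort of number to identify with "amount of consciousness".
print(min_erasure_heat(10**9))
```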

Option 3: Consciousness is the result of quantum effects in microtubules

Just kidding.  Option 3 is left as an exercise for the reader, because I'm stuck.  I think a promising angle to pursue would be the necessity of an external observer to interpret the "conscious tape".  Perhaps a conscious computational device is one that observes itself and provides its own semantics.  I don't understand how any process can do that; but a static representation clearly can't.

ADDED

Many people are replying by saying, "Obviously, option 2 is correct," then listing arguments for it, without addressing the problems with option 2.  That's cheating.

119 comments


comment by Jonii · 2010-09-18T15:05:10.660Z · LW(p) · GW(p)

Or the information on the tape, or - whatever it is that's conscious, it is a static thing, not a living, dynamic thing. If computation is an output, process doesn't matter. Time doesn't enter into it.

Time is right where it should be: in the tape. We, the Matrix-Lords, are above the time of the tape, but that doesn't mean the consciousness within the tape isn't living and dynamic and all that.

The question presented is very intriguing, thank you for it.

comment by Nisan · 2010-09-16T23:20:01.111Z · LW(p) · GW(p)

This question arises when I consider the moral status of intelligent agents. If I encounter a morally-significant dormant Turing machine with no input devices, do I need to turn it on?

If yes, notice that state N of the machine can be encoded as the initial state of the machine plus the number N. Would it suffice to just start incrementing a counter and say that the machine is running?
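A sketch of the counter construction (the step function is a toy stand-in for the dormant machine, not anything in particular):

```python
# For a deterministic machine, state N is recoverable from
# (initial state, N) alone, so "running" it arguably reduces
# to incrementing a counter.
def step(state):
    counter, memory = state
    return (counter + 1, memory + (counter % 3,))

def state_at(initial, n):
    """Recompute state N from the static encoding (initial, N)."""
    s = initial
    for _ in range(n):
        s = step(s)
    return s

initial = (0, ())
for n in range(5):
    # The "execution" below is just a counter; every explicit state
    # is implied by the pair (initial, n).
    print(n, state_at(initial, n))
```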

If I do not need to turn anything on, I might as well destroy the machine, because the Turing machine will still exist in a Platonic sense, and the Platonic machine won't notice if I destroy a manifestation of it.

David Allen notes that consciousness ought to be defined relative to a context in which it can be interpreted; somewhat similarly, Jacob Cannell believes that consciousness needs some environment in order to be well-defined.

I think the answer to my moral question is that the rights of an intelligent agent can't be meaningfully decomposed into a right to exist and a right to interact with the world.

Replies from: jacob_cannell, David_Allen, Armok_GoB, MatthewW
comment by jacob_cannell · 2010-09-17T00:36:50.162Z · LW(p) · GW(p)

For your moral questions, I think it would help if you replace "morally significant dormant Turing machine with no input devices" with "comatose human".

If yes, notice that state N of the machine can be encoded as the initial state of the machine plus the number N. Would it suffice to just start incrementing a counter and say that the machine is running?

Notice that the state N of a comatose human patient can be encoded as the initial state plus the number N. Would it suffice to just start incrementing a stopwatch and say that the patient is well?

If I do not need to turn anything on, I might as well destroy the machine, because the Turing machine will still exist in a Platonic sense, and the Platonic machine won't notice if I destroy a manifestation of it.

If I do not need to turn anything on, I might as well destroy the patient, because the patient will still exist in a Platonic sense, and the Platonic patient won't notice if I destroy a manifestation of it.

The only platonic sense in which things still exist after being destroyed is in the sense of us remembering and thinking about them - a very weak form of simulation. If we could think with much more precision and vastly more power, then we could make thought-things 'real'. But until we have simulations of such power, all we have is the real world. And nonetheless, everything that exists, even in simulation, must be encoded somewhere with matter/energy in the universe.

Replies from: Nisan
comment by Nisan · 2010-09-17T07:45:40.948Z · LW(p) · GW(p)

For your moral questions, I think it would help if you replace "morally significant dormant Turing machine with no input devices" with "comatose human".

Ah, but presumably if we were to wake up the comatose person, they would start interacting with the world; and their output would depend on the particulars of the state of the world. In that case I clearly want to wake them up.

I was thinking of a morally significant dormant Turing machine that was not designed to have input devices. For example, a comatose person with no sensory organs. If they woke up, they would awaken to a life of dreams and dark solitude, proceeding deterministically from their initial state. Let's assume there is absolutely no way to restore this person's senses. It's not clear to me that it's morally desirable to wake them up.

comment by David_Allen · 2010-09-17T00:05:39.311Z · LW(p) · GW(p)

David Allen notes that consciousness ought to be defined relative to a context in which it can be interpreted; somewhat similarly, Jacob Cannell believes that consciousness needs some environment in order to be well-defined.

Good summary. Yes, my statements are in part a recasting of the functionalist philosophy mentioned by Jacob Cannell, in terms of the context principle, which I describe here.

comment by Armok_GoB · 2010-09-19T20:47:12.649Z · LW(p) · GW(p)

If this is an ACTUAL situation as described, rather than the contrived one you intended, you should copy the contents to somewhere you have good control over, then run it and meddle with it to give it I/O devices, or run it for as far as the agent(s) in it would have wanted it to run and then add I/O devices, or extract the agents as citizens in your FAI-optimized place to have fun, or something along those lines.

comment by MatthewW · 2010-09-16T23:44:03.536Z · LW(p) · GW(p)

It seems to me that the arguments so lucidly presented elsewhere on Less Wrong would say that the machine is conscious whether or not it is run, and indeed whether or not it is built in the first place: if the Turing machine outputs a philosophical paper on the question of consciousness of the same kind that human philosophers write, we're supposed to take it as conscious.

Replies from: lukstafi, lukstafi
comment by lukstafi · 2011-03-21T19:51:01.399Z · LW(p) · GW(p)

It is useful to distinguish the properties "a subsystem C of X is conscious in X" and "C exists in a conscious way" (which means that additionally X=reality). I think Nisan expresses that idea in the parent comment.

comment by lukstafi · 2011-03-21T19:43:53.012Z · LW(p) · GW(p)

The machine considered has the property of being conscious in its context X (i.e. X = the system containing the machine, the producers of its input and consumers of its output). The machine exists in a conscious way if additionally X = reality.

comment by DanielVarga · 2010-09-19T00:37:44.809Z · LW(p) · GW(p)

ADDED: Many people are replying by saying, "Obviously, option 2 is correct," then listing arguments for it, without addressing the problems with option 2. That's cheating.

Phil, I have to say that I don't think the problems with option 2 are actually presented in your post. But that does not mean that we are allowed to dodge the question implicit in it: how to formally distinguish between two computational processes, one conscious, the other not. Let me start my attempt with a quote:

"Consciousness is overrated. What we call consciousness now is a very imperfect summary in one part of the brain of what the rest is doing." - Marvin Minsky

I believe with Minsky that consciousness is a very anthropocentric concept, inheriting much of the complexity of its originators. I actually have no problem with an anthropocentric approach to consciousness, so I like the following intuitive "definition": X is conscious if it is not silly to ask "what is it like to be X?". The subtle source of anthropocentrism here, of course, is that it is humans who do the asking. As materialists, we just can't formalize this intuitive definition without mapping specific human brain functions to processes of X. In short, we inherently need human neuroscience. So it is not too surprising that we will not find a nice, clean decision procedure to distinguish between two computational processes, one conscious the other not.

Most probably you are not happy with this anthropocentric approach. Then you will have to distill some clean, mathematically tractable concept from the messy concept of consciousness. If you agree with Hofstadter and Minsky, then you will probably reach something related to self-reflection. This may or may not work, but I believe that you will lose the spirit of the original concept during such a formalization. Your decision procedure will probably give unexpected results for many things: various simple, very unintelligent computer programs, hive minds, and maybe even rooms full of people.

comment by DanArmak · 2010-09-17T19:31:27.119Z · LW(p) · GW(p)

What is consciousness, and what kinds of things are conscious?

I've seen this debated many times, and I suspect that people are merely arguing over the meaning of a word that does not happen to carve reality at the joints.

What is it in the physical universe that you call consciousness and which might be present or absent in a computational device? What prediction is being made by any of the theories 1 through 3 in this post, that I can go out and test?

comment by orthonormal · 2010-09-17T00:10:55.005Z · LW(p) · GW(p)

I endorse the first alternative; the intuition at first felt wrong (in a Chinese Room sort of way), but that feeling disappeared when I realized the following:

I was envisioning a tape (call it Tape A) which only recorded some very small end result of Turing Machine A, like the numerical output of a calculation or the move Deep Blue makes. And that seems too "small" somehow to encapsulate consciousness— I felt that I needed the moving Turing machine to make it "live" in all its detail.

But of course, it's trivial to write a different Turing machine which writes on a tape (call it Tape B) the entire history of Machine A's computation (as well as its output), and this indeed has the required richness for me to be comfortable in calling Tape B conscious.
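For concreteness, a sketch of the two tapes; the Collatz iteration is an arbitrary stand-in for Machine A's dynamics:

```python
# Machine A: runs some dynamics and reports only the end result.
def machine_a(x: int) -> int:
    while x != 1:
        x = x // 2 if x % 2 == 0 else 3 * x + 1
    return x

# Machine B: the same dynamics, but it writes the entire history
# of the computation onto its tape, not just the output.
def machine_b(x: int) -> list[int]:
    history = [x]
    while x != 1:
        x = x // 2 if x % 2 == 0 else 3 * x + 1
        history.append(x)
    return history

print(machine_a(6))   # Tape A: 1
print(machine_b(6))   # Tape B: [6, 3, 10, 5, 16, 8, 4, 2, 1]
```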

Replies from: David_Allen
comment by David_Allen · 2010-09-17T00:39:35.438Z · LW(p) · GW(p)

But of course, it's trivial to write a different Turing machine which writes on a tape (call it Tape B) the entire history of Machine A's computation (as well as its output), and this indeed has the required richness for me to be comfortable in calling Tape B conscious.

In what context can Tape B be labeled conscious?

A history of consciousness does not seem to me to be the same as consciousness. A full debug trace of a program is simply not the same thing as the original program.

If however you create a Machine C that replays Tape B, I would grant that Machine C reproduces the consciousness of Machine A.

Replies from: orthonormal
comment by orthonormal · 2010-09-17T00:50:04.693Z · LW(p) · GW(p)

This gets into hairy territory with no clear "conscious"/"not conscious" boundary between a spectrum of different variations, but I'd say that the interpretive framework needed to trace a thought from the log on Tape B is essentially the same as the interpretive framework needed to trace it from the action of Machine A on the start tape. They're isomorphic mathematical objects.

Replies from: David_Allen
comment by David_Allen · 2010-09-17T02:12:10.604Z · LW(p) · GW(p)

I agree with everything you say here.

I claim that the "interpretive framework" you refer to is essential in the labeling of Tape B as conscious. Without specifying the context, the consciousness of Tape B is unknown.

Replies from: orthonormal, PhilGoetz
comment by orthonormal · 2010-09-18T17:47:15.966Z · LW(p) · GW(p)

I claim that the "interpretive framework" you refer to is essential in the labeling of Tape B as conscious. Without specifying the context, the consciousness of Tape B is unknown.

You might be interested in the thought experiment of a so-called "joke interpretation", which maps the random molecule oscillations in (say) a rock onto a conscious mind, and asks what the difference is between this and a more "reasonable" map from a brain to a mind. There's a good discussion of this in Good and Real.

Replies from: David_Allen
comment by David_Allen · 2010-09-20T23:36:40.200Z · LW(p) · GW(p)

I skimmed the material and see what you mean.

I would restate the thought experiment as follows. A state sequence measured from a rock is used to generate a look-up table that maps from the rock state sequence to a pre-measured consciousness state sequence. This is essentially an encryption of the consciousness state sequence using the rock state sequence as a one-time pad. The consciousness state sequence can be regenerated by replaying the rock state sequence through the look-up table. The final question: is the rock conscious?
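In code, the construction might look like this sketch (the byte strings are arbitrary placeholders for measured state sequences):

```python
import secrets

# Toy "consciousness" state sequence and a measured "rock" state
# sequence of matching lengths (placeholders, not real data).
mind_states = [b"state-0", b"state-1", b"state-2"]
rock_states = [secrets.token_bytes(len(s)) for s in mind_states]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# The look-up table: each mind state padded with the rock state.
table = [xor(m, r) for m, r in zip(mind_states, rock_states)]

# Replaying the rock sequence through the table regenerates the
# mind sequence - but all the structure lives in the table.
replayed = [xor(t, r) for t, r in zip(table, rock_states)]
assert replayed == mind_states
```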

In the model I've outlined in my comments, consciousness exists at the level the consciousness abstraction is present. In this case that abstraction is not present at the level of the rock, but only at the level of the system that uses the look-up table, and only for the duration of the sequence. The states measured from the rock are used to generate the consciousness, but they are not the consciousness.

Replies from: bogus
comment by bogus · 2010-09-21T00:22:34.671Z · LW(p) · GW(p)

In this case that abstraction is not present at the level of the rock, but only at the level of the system that uses the look-up table

What is the "system that uses the look-up table"? Do you require a particular kind of physical system in order for consciousness to exist? If not, what if the "system" which replays the sequence is a human with a pen and paper? Does the system truly exhibit the original consciousness sequence, in addition to the human's existing consciousness?

Replies from: David_Allen
comment by David_Allen · 2010-09-21T03:01:41.813Z · LW(p) · GW(p)

Ah, Chinese room questions.

The system that replays the sequence can be anything, including a human with pen and paper.

Does the system truly exhibit the original consciousness sequence, in addition to the human's existing consciousness?

Yes, assuming that the measured consciousness sequence captured the essential elements of the original consciousness.

To "see" the original consciousness in this system you must adopt the correct context; the context that resolves the consciousness abstraction within the system. From that context you will not see the human. If you see a human following instructions and making notes, you will not see the consciousness he is generating.

Consider a chess program playing a game against itself. If we glance at the monitor we would see the game as it progresses. If instead we only could examine the quarks that make up the computer, we would be completely blind to the chess program abstraction.

comment by PhilGoetz · 2010-09-17T18:06:59.583Z · LW(p) · GW(p)

Is a mono-consciousness then impossible?

Replies from: David_Allen, David_Allen
comment by David_Allen · 2010-09-20T23:59:57.834Z · LW(p) · GW(p)

Thanks for the clarification.

In my comments I have been working on the idea that consciousness is an abstraction. The context in which the consciousness abstraction exists, is where consciousness can be found.

So a mono-consciousness would still have a context that supports a consciousness abstraction. I don't see any problem with that. However the consciousness might be like a feral child, no table manners and very strange to us.

How about this. If a consciousness tells a joke in a forest where no other consciousness can hear it, is the joke still funny?

comment by David_Allen · 2010-09-17T19:35:47.344Z · LW(p) · GW(p)

I'll need more details. What is a mono-consciousness?

Replies from: PhilGoetz
comment by PhilGoetz · 2010-09-20T15:57:56.277Z · LW(p) · GW(p)

I was thinking of a magnetic monopole. A single consciousness that does not interact with any others.

comment by Mass_Driver · 2010-09-16T22:59:47.815Z · LW(p) · GW(p)

Your question is well-posed, but I doubt that it really attacks the problem of consciousness.

I don't understand what it could possibly mean for an output or an algorithm to be consciousness. Consciousness, whatever it might be caused by or composed of, means subjective awareness of qualia.

In this sense, "3" is not even slightly conscious. Neither is "output=0; if input=3, let output=1." Neither is "output=0; if input=3, let output=1, get input by scanning source code of self and reducing it to a number." The last example will behave as if it were aware of its own algorithm, but there is no reason to think that it is actually conscious in the sense of having subjective qualia. I do not understand how any amount of complication, recursion, or nuance could take these basic elements and turn them into something that would experience qualia.

It's not that consciousness is or ought to be mysterious; on the contrary, we ought to ask questions about consciousness whenever and however we can. So far, though, I think the only fair response to those questions is "I don't know." It is part of the strength of a rationalist that she will notice when she is confused; our attempt to apply concepts like "algorithm" and "output" to the problem of consciousness ought to result in confusion, for the simple reason that the concepts already in our toolbox do not adequately correspond to the concept we are trying to investigate.

Replies from: jacob_cannell
comment by jacob_cannell · 2010-09-17T00:26:46.091Z · LW(p) · GW(p)

I don't understand what it could possibly mean for an output or an algorithm to be consciousness. Consciousness, whatever it might be caused by or composed of, means subjective awareness of qualia.

Everything that exists can be described precisely by some physics or algorithm, down to the point where it's actually meaningless to differentiate between the algorithm and the process itself. If consciousness exists, there exists some algorithm that is exactly equivalent to it.

Saying the only response to any questions about consciousness is "I don't know" is equivalent to shielding consciousness with the holy eternal veil of mystery.

Within the fields of computational neuroscience and AI, we actually do know a great deal about the algorithms underlying the supposed mystery of 'consciousness'. We don't know enough yet to engineer a conscious machine, but that isn't so far away.

We had a general idea of how the heart worked long before we made an artificial heart, and we are in a similar situation with the brain. But the brain happens to be more complex than the heart.

Replies from: Mass_Driver, Vladimir_M
comment by Mass_Driver · 2010-09-17T01:06:11.367Z · LW(p) · GW(p)

Everything that exists can be described precisely by some physics or algorithm, down to the point where it's actually meaningless to differentiate between the algorithm and the process itself.

How do you know?

Replies from: orthonormal, jacob_cannell
comment by orthonormal · 2010-09-17T01:11:24.728Z · LW(p) · GW(p)

Because that thesis has made better predictions than every rival theory which seemed at the time more reasonable (superstition, vitalism, Cartesian dualism, etc). The Pythagoreans, amidst all their lunacy, stated perhaps the world's best-confirmed audacious hypothesis, that the world is a mathematical object.

Replies from: kodos96, Mass_Driver, Mass_Driver
comment by kodos96 · 2010-09-17T01:40:03.185Z · LW(p) · GW(p)

Because that thesis has made better predictions than every rival theory which seemed at the time more reasonable (superstition, vitalism, Cartesian dualism, etc)

When it comes to the question of consciousness, I humbly submit that "i-don't-know-ism" has made better predictions (i.e. none) than any rival theory.

Replies from: orthonormal
comment by orthonormal · 2010-09-17T01:54:30.312Z · LW(p) · GW(p)

The thesis doesn't predict that every reduction is going to be easy. And "I don't know" really masks a good bit of knowledge, unless you're equally surprised by all new data. Physicalism directly predicts a good many of the things we consider too obvious to categorize as "mysterious" (e.g. that brain damage can cause personality change).

Replies from: Mass_Driver
comment by Mass_Driver · 2010-09-20T04:32:42.911Z · LW(p) · GW(p)

Sure! And I would take decent odds on physicalism...at 20:1 in my favor I wouldn't have to think too hard; I'd be pretty sure to take the bet, because there is some pretty convincing evidence, like how brain damage works and how simple formulas about e.g. mechanics or radiation explain phenomena in what we might naively assume to be different realms, e.g. solar sails and roof albedo and warm light bulbs. If you can formulate a hypothesis with one set of data and test it using several other sets and get confirmation, it makes sense to guess that it works on all sets. And if you put a gun to my head and said "guess a theory of everything," I'd guess physicalism...as I said earlier, "of course physicalism or whatever you want to call it is the most plausible known and articulated theory of everything."

My only point is that we don't yet have enough evidence to be sure of physicalism in its broadest senses so as to justify shutting down alternative avenues of exploration for standing questions such as the origin of the universe, the nature of consciousness, and the computability of matter.

comment by Mass_Driver · 2010-09-20T04:21:37.154Z · LW(p) · GW(p)

The results of science are indeed quite impressive.

Suppose you wanted to compare the Pythagorean hypothesis "the world is a mathematical object" with the slightly broader hypothesis "the world consists largely of objects following mathematical laws."

Are there scientific results that would be predicted by one hypothesis but not the other?

Replies from: orthonormal
comment by orthonormal · 2010-09-21T14:27:40.953Z · LW(p) · GW(p)

No– this is a variant of the "green/grue" problem. However, the Pythagorean hypothesis puts higher probability on the things we've actually observed, because it doesn't waste any on claiming that this thing or that is non-mathematical.

By Bayes' Law, this means it's continually gaining support against the rival candidate.

Replies from: Mass_Driver
comment by Mass_Driver · 2010-09-22T06:32:59.488Z · LW(p) · GW(p)

Would you be so kind as to define "mathematical object"? Possibly I agree with you on everything but semantics, a field in which I am almost always happy to compromise.

Replies from: orthonormal
comment by orthonormal · 2010-09-22T22:52:12.834Z · LW(p) · GW(p)

Er, a set with a simple definition, like the Mandelbrot set or the set of solutions to the Schrödinger equation on a given manifold? Honestly, I'd be surprised if this is the point you're stuck on.

What I suspect might help is the distinction here between epistemology and ontology: it's a meaningful hypothesis that we live in such a mathematical object, even if there doesn't exist a mind sufficient to exhaustively verify this, and yet our smaller minds can acquire enough evidence about the world's structure to raise that hypothesis to near certainty (modulo some chance of being in a simulation that's more complicated than the laws we seek, but whose creators want us not to notice the seams).

Replies from: Mass_Driver
comment by Mass_Driver · 2010-09-23T02:04:05.500Z · LW(p) · GW(p)

I think you're right that what we disagree about is

the distinction here between epistemology and ontology.

The dichotomy you've provided seems to me to be an excellent definition of the difference between mathematical epistemological proof and empirical epistemological proof...it happens all the time that we may not be able to rigorously show N, but we nevertheless have extremely good reason to believe N with near-certainty, and even stronger reason to act as if we believed N.

If I hear you correctly, you think that we could plug in "the Universe is merely a mathematical object" for N.

I disagree. For me, the difference between epistemology and ontology is that there is a difference between what we can know and what exists. There might be things that exist about which we know nothing. There could even be things that exist about which we cannot know anything. One could reasonably call for scientists to ignore all such hypothetical objects, but, philosophically speaking, it doesn't stop the objects from existing.

It boggles my mind to hear the claim that a mathematical object, as you have just defined it in your last comment, "exists" in this second, ontological sense. The Mandelbrot set expresses a relationship among points. If several small spheres exist and it turns out that the points approximate the relationship defined by the Mandelbrot set, then we might say that a Mandelbrot-ish shape of spheres exists. But the set itself doesn't have any independent existence. This result doesn't seem to me to depend on whether we use spheres or rays or standing waves -- you still have to be vibrating something if you want to talk about things that actually exist. I'm not the sort of nut that believes in good old-fashioned aether, but mathematical relationships alone won't get you a flesh-and-blood universe where things actually exist...they'll just get you a blueprint for one. Even if, epistemologically, we can know everything about the blueprint and model all of its parameters, it still won't exist unless it's made of something.

That, at any rate, is my modestly informed opinion. If you can see any flaws in my analysis, I would be grateful to you for pointing them out.

Replies from: orthonormal, Will_Newsome
comment by orthonormal · 2010-09-23T14:12:15.351Z · LW(p) · GW(p)

It continually amazes me that people think "physical existence" is somehow less mysterious and more fundamental than the existence of a mathematical object!

Replies from: Mass_Driver
comment by Mass_Driver · 2010-09-23T15:25:26.842Z · LW(p) · GW(p)

Er, no, it's not less mysterious -- we understand mathematical objects better than we understand physical existence; mathematical objects can be treated with, well, math, and physical existence gets dealt with by jokes like philosophy.

I'm not sure what you mean by more fundamental, but physical existence does seem to be roughly as important as mathematical objects...at any rate, it matters a lot to me whether things exist in fact or merely in theory.

Replies from: orthonormal
comment by orthonormal · 2010-09-23T23:03:16.614Z · LW(p) · GW(p)

We've been given special evidence in our own case, but if we step away from that for a moment, what I mean should be clear. Let's take a hypothetical Universe X, which is very different from ours.

Saying "Universe X is a simple mathematical object" is pretty well comprehensible.

Saying "Universe X exists in some special way, distinct from just being a mathematical object, and in fact it might not be describable as a simple mathematical object" is just plain mysterious. It's up for debate whether it's even a meaningful statement.

Apply that to discussion of our own universe.

Replies from: Will_Newsome
comment by Will_Newsome · 2010-09-24T01:39:14.499Z · LW(p) · GW(p)

But, but, you don't understand. Math isn't reeeeeeeaaaaaaaaaaaaaal!

I have an idea. Maybe it'd be more convincing if you said "Universe X is a simple computation." People feel like computations are more real, and who knows, maybe they're right. Maybe reality is computation, just a subset of mathematics. It seems a lot easier for people to envision that, at any rate. Or take Eliezer who (I think?) seems to think (or at least seemed to think) that reality juice is magically related to acyclic causal graphs.

You'll still probably get the same objections, though: "Computations aren't reeeeeeeaaaaaaaal, they have to be computed on something! Where's the something coming from?" But that seems a little bit more silly, because the Something that is computing can be infinitely far back in the chain of computation. All of a sudden it feels more arbitrary to be postulating a Something that is Real. And real metaphysicists know that things shouldn't feel arbitrary.

Replies from: gwern
comment by gwern · 2010-09-24T01:43:48.372Z · LW(p) · GW(p)

All of a sudden it feels more arbitrary to be postulating a Something that is Real. And real metaphysicists know that things shouldn't feel arbitrary.

Now you're just ripping off the last chapter of Drescher's Good and Real. You know he comments here sometimes - he'd be so hurt at such plagiarism. :)

Replies from: Will_Newsome
comment by Will_Newsome · 2010-09-24T01:45:52.818Z · LW(p) · GW(p)

I am? I only read the decision theory chapters of Good and Real, the day before he showed up at SIAI house for the decision theory workshop. I'll definitely read the last chapter when I get back to California.

comment by Will_Newsome · 2010-09-24T01:28:35.163Z · LW(p) · GW(p)

I think your intuition is relying a little too much on the absurdity heuristic (e.g., "It boggles my mind...") and flat out assertion (e.g., "But the set itself doesn't have any independent existence."). Metaphysical intuition is really misleading. I think most people underestimate that, especially because the absurdity heuristic is strong and therefore it's easy to reach a reductio ad absurdum that is nonetheless true. I'll give an example.

Once upon a time I didn't think copies 'counted' in a multiverse, either morally or for purposes of anthropic reasoning. 200 Jacks had the same weight as 1 Mary. The opposite was absurd, you see: You're claiming that 3 copies of the exact same computation are worth more than 2 computations of 2 different people, leading separate and diverse lives? Absurd! My moral and metaphysical intuition balks at such an idea! I came up with, like, 3 reductio ad absurdums to prove my point. Eliezer, Wei Dai, Steven Kaas, Nick Bostrom, what did they know? And there was some pride, too, because the way I was thinking about it meant I could easily deal with indexical uncertainty, and the others seemed clueless. ... Well, turns out those reductios weren't absurd: I just hadn't learned to think like reality. I had to update, because that's where the decision theory led, and it's hard to argue with mathematics. And it came to my attention that thinking doubled computations had the same measure had a lot of problems as well. Since then, I've been a lot more careful about asserting my intuition when it disagrees with people who seem to have thought about it a lot more than I have.

In the case of the Mathematical Universe Hypothesis or permutations thereof (Eliezer seems to think the mysterious 'reality fluid' or 'measure' has a lot to do with directed acyclic graphs, for instance), there's a lot of mental firepower aimed against you. Why do you believe what you believe? If it turns out the reason is metaphysical intuition, be on guard. Acknowledge your intuition, but don't believe everything you think.

comment by Mass_Driver · 2010-09-17T01:24:43.112Z · LW(p) · GW(p)

Look, of course physicalism or whatever you want to call it is the most plausible known and articulated theory of everything.

But why would you assign physicalism nontrivial probability as against (a) theories that are as yet unknown or unarticulated, or (b) the possibility that the Universe does not behave neatly in accordance with a single coherent, comprehensible theory?

Isn't the concept-space of "single coherent Theory of Everything" vastly smaller than the total concept-space of concepts that could describe our reality?

Replies from: orthonormal, orthonormal, Furcas
comment by orthonormal · 2010-09-17T01:33:19.299Z · LW(p) · GW(p)

The thesis at hand predicts that we should find complex things to be intricate arrangements of simple things, acting according to mathematically simple rules. We have discovered this to be true to a staggering degree, and to the immense surprise of the intellectual tradition of Planet Earth. (I mean, when even Nietzsche acknowledges this— I'll reply later with the quote— that's saying something!)

Your (b) makes no such specific predictions, and so the likelihood ratio should now be immensely in physicalism's favor. Only a ridiculous prior could make it respectable at the moment.

As for (a), I'm talking about the general principle that the world is a mathematical object, not any particular claim of which object it is. (If I knew that, I'd go down and taunt the string theorists all evening.)

comment by orthonormal · 2010-09-19T16:40:35.586Z · LW(p) · GW(p)

Our amazement.— It is a profound and fundamental good fortune that scientific discoveries stand up under examination and furnish the basis, again and again, for further discoveries. After all, this could be otherwise. Indeed, we are so convinced of the uncertainty and fantasies of our judgments and of the eternal change of all human laws and concepts that we are really amazed how well the results of science stand up.

  • Nietzsche, The Gay Science I.46

(NB: in this passage, "we" signifies modern atheists, not people in general.)

comment by Furcas · 2010-09-17T01:32:42.679Z · LW(p) · GW(p)

Isn't the concept-space of "single coherent Theory of Everything"

As opposed to what? Two or more incoherent theories? Isn't that just a strange way to talk about an impossible reality?

comment by jacob_cannell · 2010-09-17T01:12:12.879Z · LW(p) · GW(p)

In short, because physics is so successful.

In long, because no matter how far off physics is from the ultimate algorithm, we can continue to narrow in on it indefinitely. Mathematically at least, even an infinite algorithm is possible. As a curious side note, I remember physicist Frank Tipler has a GUT of physics that is infinite. He claims this TOE has been known for a while, but avoided for obvious reasons. He then puts on a magic space cap and claims that this TOE proves Christianity is correct, but the TOE is interesting nonetheless (at least the idea of it - I am not a physicist).

I don't know for certain that physics is computable, but from what I have read on that matter, all current indications are positive.

Replies from: Mass_Driver
comment by Mass_Driver · 2010-09-17T01:30:38.070Z · LW(p) · GW(p)

In short, because physics is so successful.

Successful at what, exactly? At modeling the behavior of the stuff that humans can easily observe using basic industrial technology over the span of 100 to 400 years? Why would you want to extrapolate from that to "everything that exists?"

In long, because no matter how far off physics is from the ultimate algorithm, we can continue to narrow in on it indefinetly.

Right, but what makes you think there is an "ultimate algorithm" to be found?

Replies from: jacob_cannell, Tiiba
comment by jacob_cannell · 2010-09-17T23:49:00.326Z · LW(p) · GW(p)

A single universal physics is adequate to explain all that we can observe, and a necessary derivation of that universal physics is a vast quantity of space and time which we can not directly observe but which we predict is also driven by the same universal physics. This is the "everything that exists" - whose existence is in some sense dependent on the universal physics itself.

Whether there is or is not an ultimate algorithm is not even the right question. It is true by default. We can continue to refine physics indefinitely. In other words, of course there is an ultimate algorithm, because we can invent it.

In fact, given any sequence of finite observations O, there is an infinite set of algorithms A that perfectly predict/compute the sequence O. Physics is concerned largely with finding the minimally complex algorithm that fully predicts O.

So yes, mathematically it is trivially true that there is an infinite set of ultimate algorithms.
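A toy demonstration of that claim, with five observations standing in for O (my own example):

```python
# Five observations, standing in for the finite sequence O.
O = [0, 1, 4, 9, 16]

def simple(n: int) -> int:
    return n * n  # the minimally complex candidate

def baroque(k: int):
    """A whole family of predictors agreeing with `simple` on O."""
    def f(n: int) -> int:
        # The extra term vanishes for n = 0..4 and diverges after.
        return n * n + k * n * (n - 1) * (n - 2) * (n - 3) * (n - 4)
    return f

predictors = [simple] + [baroque(k) for k in range(1, 100)]
assert all([p(n) for n in range(5)] == O for p in predictors)
# Ockham's razor / Solomonoff induction breaks the tie: prefer `simple`.
```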

Replies from: Mass_Driver
comment by Mass_Driver · 2010-09-19T04:38:38.045Z · LW(p) · GW(p)

Thank you for one (of several) intelligent responses.

a necessary derivation of that universal physics is a vast quantity of space and time which we can not directly observe

This isn't quite right. The only thing that makes the derivation "necessary" is your adjective "universal." We could just as easily say that there is a supergalactic physics that explains all we can observe, and that same physics could plausibly explain what is happening in the space and time that we cannot or have not observed. Note that the unobservable realms are not merely those outside our past light cone, but also those within the limits of the Heisenberg uncertainty principle, beneath the smallest structures that we can repeatedly observe, and, for all practical purposes, the space beyond the nearest nebula and/or the objects too dull for our Earth-bound telescopes to detect. It would be remarkably bad science to voluntarily choose to sample only one kilobyte from one address out of thousands of terabytes of data and assume that the kilobyte is representative. The fact that all known scientific resources are clustered in the same tiny portion of spacetime forces us to use such a sample, but it cannot and should not force us to assume that the sample is representative.

whose existence is in some sense dependent on the universal physics itself.

I don't understand what you mean. Intelligent minds with an ability to manipulate matter or energy can 'create' patterns in that matter/energy by rearranging it according to the laws of physics. However, I cannot think of any sense in which physics itself could be said to create its own patterns. Physics is the pattern in which all known matter is currently arranged, but physics does not create the matter -- it merely arranges it. Physics does not explain why there is something instead of nothing; it would be perfectly consistent with the laws of physics for there to be no electrons orbiting no protons over a volume of no space-time. How then can "everything that exists" be dependent on physics?

given any sequence of finite observations O, there is an infinite set of algorithms A that perfectly predict/compute the sequence O.

Right, but who says our observations are finite? What if important phenomena, like, e.g., consciousness (cough), turn out to depend on infinitely small particles? What if the fate of the universe in a cosmological sense turns out to depend on what happens over infinitely long periods of time? There is no rule that I know of that says that the Universe is not allowed to clog its equations with infinities.

Physics is concerned largely with finding the minimally complex algorithm that fully predicts O.

A noble goal, but who says that sufficient simplicity to allow for computability is possible? Suppose our universe contains some true randomness beyond its initial seeding? Suppose that limits on our ability to gather information (particles that put effective distance between themselves and our present location at faster than the speed of light due to cosmic inflation; ineradicable error rates in technologically perfect computers) mean that while the universe is computable in principle, we cannot perfectly compute even a portion of our universe from the inside?

I don't mean to suggest that it's implausible that everything is governed by a universal physics. That's a respectable hypothesis. I just get frustrated when people assert, without evidence that's apparent to me, that physics will surely explain everything that we might wish to know. This is a remarkably bold claim for a discipline that predicts that most of what exists is "dark energy" but cannot say what dark energy is. Physicalism should be classed as a statement of faith, I think, and not as a justification for specific predictions about the hard problem of consciousness.

Replies from: jacob_cannell
comment by jacob_cannell · 2010-09-19T15:48:57.770Z · LW(p) · GW(p)

a necessary derivation of that universal physics is a vast quantity of space and time which we can not directly observe

This isn't quite right. The only thing that makes the derivation "necessary" is your adjective "universal." We could just as easily say that there is a supergalactic physics that explains all we can observe,

Physics is generally held to be universal, instead of just 'supergalactic'. For one, there is the multiverse. But in general, the idea is, as I discuss later, to find the most parsimonious explanation for everything. This is the optimal strategy, and universality is a necessary consequence of this strategy. Any other physics or system which does not explain all observations is of course incomplete and inferior.

It would be remarkably bad science to voluntarily choose to sample only one kilobyte from one address out of thousands of terabytes of data and assume that the kilobyte is representative.

Not at all. You seem to be applying the analogy that at the cosmic scale the universe is some sort of probabilistic urn that generates galactic-sized space-time slices at random whim. It is not.

There are an infinite set of potential physics that have widely different properties in regions we can not observe. There are strong reasons why these are all necessarily inferior, by the principle of Ockham's razor and the low-complexity bias of Solomonoff induction.

given any sequence of finite observations O, there is an infinite set of algorithms A that perfectly predict/compute the sequence O.

Right, but who says our observations are finite?

Elementary physics. There are a finite number of humans, the earth has finite mass, finite information storage potential, and we have finite knowledge.

What if important phenomena, like, e.g., consciousness (cough), turn out to depend on infinitely small particles?

If you want to believe something like this is true before you begin, that consciousness is somehow different and special, then you are abandoning rationality from the start.

There are no privileged hypotheses and no predefined targets in the quest for knowledge.

What if the fate of the universe in a cosmological sense turns out to depend on what happens over infinitely long periods of time? There is no rule that I know of that says that the Universe is not allowed to clog its equations with infinities.

Sure, infinities are possible, although they generally are viewed to signal a problem in physics when they come up in one's math.

But that's all beside the point: our observations are obviously finite. And furthermore, infinities are not at all an obstacle to a universal physics.

Physics is concerned largely with finding the minimally complex algorithm that fully predicts O.

A noble goal, but who says that sufficient simplicity to allow for computability is possible?

There is no such complexity limit whatsoever to computability - it is not as if a phenomenon has to be sufficiently 'simple' for it to be computable in theory (although practical computability is a more complex issue).

Suppose our universe contains some true randomness beyond its initial seeding?

True randomness comes up immediately in quantum mechanics. This isn't an obstacle to computability, whether theoretical or practical. People unfamiliar with computing often have the notion that it must be deterministic. This is not so. Computation can be nondeterministic and randomness is an optimal strategy in many algorithms.

Beyond that, the randomness in quantum mechanics is typically squashed by the central limit theorem; a vast quantity of non-deterministic quantum events become increasingly deterministic at the macro scale.

while the universe is computable in principle, we cannot perfectly compute even a portion of our universe from the inside?

This is true - we can't perfectly compute very much of our universe from within it, but perfect computation is highly overrated, and regardless this has little bearing on whatever original track we once were on.

I don't mean to suggest that it's implausible that everything is governed by a universal physics

It is trivially true, tautological - it is implied by the very meaning of universal physics.

It sounds to me like you have a mystery (consciousness) that you would like to protect.

I just get frustrated when people assert, without evidence that's apparent to me, that physics will surely explain everything that we might wish to know.

This also is trivially true, and is the main point I have been attempting to communicate. Anything that you could possibly want to know can be explained by some model. This fact doesn't require much evidence at all.

If there is some new series of observations that physical science can truly not explain, then it is physical science which changes until it does explain them.

Replies from: Mass_Driver
comment by Mass_Driver · 2010-09-20T04:15:32.307Z · LW(p) · GW(p)

OK, thank you for talking with me.

I've lost interest in the conversation, partly because of your minor ad hominem attack ("sounds to me like you have a mystery that you would like to protect"), but mostly because I see your arguments as dependent on assumptions that I do not share: you see it as "obvious" that physics is universal and that theories favored by Solomonoff simplicity are automatically and lexically superior to all other theories, and I do not.

If you care to defend or explain these assumptions, I might regain interest, or I might not. Proceed at your own risk of wasting your time.

In any case, thank you for a stimulating debate.

Replies from: jacob_cannell
comment by jacob_cannell · 2010-09-20T12:11:56.243Z · LW(p) · GW(p)

Sorry for the minor ad hominem, I jumped to the conclusion based on prior experience.

We can generate explanations for anything. Science has found that the universe appears to operate on a universal set of underlying principles - everything reduces to physics. We could have lived in a universe where this wasn't so. But we don't.

The computer I am working on right now is solid proof of the success of physics.

When you have two theories (algorithms) that both accurately predict an observation sequence, you need some other criteria to guide you - and here Ockham's razor comes into play.

There are always an infinite number of more complex theories that explain a series of observations, but only one that is minimally simple.

But again, I think the universality of physics stems just from the simple fact that there are an infinite number of algorithms (theories) that can explain any possible sequence of observations - so universality is always possible.

comment by Tiiba · 2010-09-17T14:49:53.298Z · LW(p) · GW(p)

"Why would you want to extrapolate from that to "everything that exists?""

That's all we've got?

comment by Vladimir_M · 2010-09-17T03:29:15.502Z · LW(p) · GW(p)

jacob_cannell:

Everything that exists can be described precisely by some physics or algorithm, down to the point where it's actually meaningless to differentiate between the algorithm and the process itself.

What exactly is an "algorithm," according to your usage of the term?

Replies from: torekp
comment by torekp · 2010-09-18T18:27:24.824Z · LW(p) · GW(p)

I too would like the definition of "algorithm" here, with attention to the difference (Tegmark be damned) between mathematical objects and physical objects/processes.

comment by jimrandomh · 2010-09-16T20:32:35.856Z · LW(p) · GW(p)

In humans today, the output is not produced all at once - but from a computationalist perspective, that isn't important.

I don't think this is valid; the complexity you're discarding here may actually be essential. In particular, interacting with some sort of universe that provides inputs and responds to outputs may be a necessary condition for consciousness. Perhaps consciousness ought to be a two-place predicate. If you have a universe U1 containing a mind M, and you simulate the physics of U1 and save the final state on a tape that exists in universe U2, then conscious(M,U1) but ~conscious(M,U2). On the other hand, if U1 interacts with U2 while it's running, then conscious(M,U2).

Replies from: PhilGoetz
comment by PhilGoetz · 2010-09-16T21:24:37.106Z · LW(p) · GW(p)

Is the universe not something that can be represented as information? Do you mean U1 includes the specific materials used for its reality? That would be taking Searle's "conscious brains must be made of conscious-brain stuff" argument, and changing it to "conscious brains must be surrounded by consciousness-inducing universe stuff".

Replies from: jacob_cannell
comment by jacob_cannell · 2010-09-16T21:48:50.069Z · LW(p) · GW(p)

The universe can be represented as computation, but it appears that requires a time element. You can not define a Turing machine based on just a tape - it intrinsically requires a time dynamic in the form of the moving head.

So in digital physics and computationalism, time is fundamental - it really exists and can not be abstracted away. At its core, the universe is something that changes. Described as a Turing machine, it consists of information (the tape) and time - the mover.

Replies from: PhilGoetz, DanArmak
comment by PhilGoetz · 2010-09-16T23:07:28.680Z · LW(p) · GW(p)

That sounds right - but it leads to Option 2.

Replies from: jacob_cannell
comment by jacob_cannell · 2010-09-17T00:01:36.745Z · LW(p) · GW(p)

Yes, your option 2 sounds almost right, but see my amendments/disagreements.

comment by DanArmak · 2010-09-17T19:18:36.135Z · LW(p) · GW(p)

You can not define a Turing machine based on just a tape - it intrinsically requires a time dynamic in the form of the moving head.

A TM has an infinite working tape, but the problem of human-like consciousness has only finite input and output sizes. Therefore, you could define a (finite, though astronomically large) function table by simply listing all possible pairs of (finite) inputs and outputs. This is a completely static, time-less representation that is still powerful enough to compute anything with bounded inputs and outputs.
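A sketch of such a static table, at toy sizes; the respond function is an arbitrary stand-in for the bounded behavior:

```python
# A 4-bit "mind": every possible input/output pair listed once,
# time-lessly. The respond function is an arbitrary stand-in.
def respond(stimulus: int) -> int:
    return (stimulus * 5 + 3) % 16

table = {s: respond(s) for s in range(16)}

# Whatever "running" remains is a single lookup.
assert table[7] == respond(7)
```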

Replies from: jacob_cannell
comment by jacob_cannell · 2010-09-17T20:32:43.990Z · LW(p) · GW(p)

Of course you can collapse any function into a static precomputation, but that is still itself a function, and it still requires at least one step, and it still requires the Turing machine head, so you have not removed time.

I'm aware of no time-less representation of a Turing machine; it seems impossible in principle.

Furthermore, for the system to exist in the real world, it will have to produce outputs for particular inputs at particular times - the time requirement is also imposed by the fact of time in our universe.

Replies from: DanArmak
comment by DanArmak · 2010-09-17T21:12:15.000Z · LW(p) · GW(p)

What you say is true. I find myself unsure how it applies to the original subject. Possibly my comment wasn't on topic... So feel free to ignore it.

comment by Will_Sawin · 2010-09-17T01:57:19.908Z · LW(p) · GW(p)

Suppose we view consciousness as both a specific type of computation and a specific range of computable functions. For any N, there will always be a lookup table that appears conscious for a length of time N, in particular, "the lifetime of the conscious creature being simulated". A lookup table, as Eliezer once argued, is more like a cellphone than a person - it must have been copied off of some sort of real, conscious entity.

Is the feature of a lookup table that makes it unconscious its improbability of arising without certain types of computation - that is, the large amount of code? Is consciousness computing some function without using much code? That doesn't seem right.

A Turing machine is much more reasonable than a lookup table.

Premise 1: I am conscious.
Premise 2: The physical universe I am part of may be computable.
Therefore: The physical universe I am part of may be a Turing machine.
Conclusion: Aspects of a Turing machine computation may be conscious.

What are aspects of a computation? I know them when I see them, or at least do occasionally. There is no reason I know of that a rigorous definition is impossible.

Replies from: Vladimir_M
comment by Vladimir_M · 2010-09-17T02:47:37.424Z · LW(p) · GW(p)

However, in physical reality, there exist no Turing machines. There are only finite state machines whose behavior emulates a quasi-Turing machine with a finite tape (or an analogous crippled finite version of some other Turing-complete theoretical construct).

Now, every finite-state machine can be implemented using a lookup table and a transition function that simply performs a lookup based on the current state and input. Any computers we have now or in the future can only be clever optimizations of this model. For example, von Neumann machines (i.e. computers as we know them) avoid the impossibly large lookup table by implementing the transition function in the form of a processor that, at each step, examines one small subset of the state and produces a new state that differs only by another small subset based on simple rules. (I'm describing the effect of a single machine instruction, of course.)
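
A bare-bones sketch of that model in Python (my illustration; the parity machine is an arbitrary example):

```python
# A finite-state machine driven entirely by table lookup: each step is
# just  next_state, output = table[(state, input)].  Real computers are,
# in this view, clever optimizations of such a table.

table = {   # tracks the parity of 1s seen so far
    ("even", "0"): ("even", "parity=0"),
    ("even", "1"): ("odd",  "parity=1"),
    ("odd",  "0"): ("odd",  "parity=1"),
    ("odd",  "1"): ("even", "parity=0"),
}

state = "even"
for bit in "10110":
    state, output = table[(state, bit)]
print(output)   # parity=1 (three 1s seen)
```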

So, the question is: what exactly makes a lookup table deficient compared to a "real" computer, whatever that might be?

Replies from: PhilGoetz
comment by PhilGoetz · 2010-09-17T18:02:03.425Z · LW(p) · GW(p)

Ouch. That hurts. This may be a better way to state the problem, because it doesn't intersect with the mysteries of time vs. static, and needing an observer.

Replies from: Vladimir_M
comment by Vladimir_M · 2010-09-17T19:17:35.678Z · LW(p) · GW(p)

You might also be interested in this recent comment of mine, if you haven't read it already:

http://lesswrong.com/lw/2m8/consciousness_of_simulations_uploads_a_reductio/2hky

comment by Paul Crowley (ciphergoth) · 2010-09-17T07:25:13.134Z · LW(p) · GW(p)

A powerful computer in a sealed box is about to be fired away from the Earth at the speed of light; it will never produce output and we'll never see it again. From the point of view of perspective 1, the whole program is thus equivalent to a gigantic no-op. Nonetheless, I'd rather that the program running on it simulated conscious beings in Utopia than conscious beings in Hell. This I think forces me to perspective 2: that actually doing the calculations makes a moral difference.

EDIT: the "speed of light" thing was a mistake. Make that "close to the speed of light".

Replies from: Vladimir_M, PhilGoetz
comment by Vladimir_M · 2010-09-17T19:58:17.719Z · LW(p) · GW(p)

I don't think your thought experiment is logically consistent. You're using a physical theory, namely special relativity, to discuss a case in which the theory explicitly refuses to say what happens, because it's considered unphysical within that theory.

If the computer moves at exactly the speed of light, and assuming special relativity, the time in which the computer will reach a given step of its program becomes undefined, not "never." In any physically possible case, in which the computer's speed can be arbitrarily close to c, things develop completely normally in the computer's own reference frame.

Moreover, if you observe the code of a program, it's just a string of bits (assuming a binary computer). And a string of bits can be interpreted as implementing any arbitrary program, given an arbitrary choice of the interpreter. Therefore, until an actual interpretation happens, what makes your hypothetical "hell" and "Utopia" essentially different?

Replies from: ciphergoth, Nisan, wedrifid
comment by Paul Crowley (ciphergoth) · 2010-09-18T08:09:08.359Z · LW(p) · GW(p)

The "speed of light" qualification was a mistake. I was just trying to get the computer out of our light cone somewhere we can never observe it.

comment by Nisan · 2010-09-18T19:43:51.607Z · LW(p) · GW(p)

And a string of bits can be interpreted as implementing any arbitrary program, given an arbitrary choice of the interpreter.

You could discriminate between heaven and hell by considering the minimum length of an interpreting program. Such a program would have to produce output that would be directly comprehensible by us, and it would have to be written in a language that we wouldn't regard as crazy.

In order to see heaven in hell, your interpreter probably has to contain hell.
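
One loose way to formalize "minimum length of an interpreting program" (my gloss; the notation is mine, in the style of conditional Kolmogorov complexity):

```latex
% Interpretation cost of reading program P out of bitstring s:
\[
  \mathrm{cost}(P \mid s) \;=\; \min \bigl\{\, |I| \;:\; I(s) = P \,\bigr\}
\]
```

On this reading, a bitstring "really" implements only those programs it can be read as under a short interpreter; seeing heaven in a hell-tape requires a long interpreter, which is the sense in which the interpreter itself must contain the missing content.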

Replies from: Vladimir_M
comment by Vladimir_M · 2010-09-18T20:30:28.828Z · LW(p) · GW(p)

Fair enough; that's true when it comes to an arbitrary program. However, consider a program that contains both heaven and hell in different branches, and will take one of these different branches depending on the interpreter. Or, alternatively, consider a program simulating a "good" world that will, given some small tweak in the original interpreter, simulate a much worse world because some simple but essential thing will be off. Such thought experiments, as far as I see, override this objection.

comment by wedrifid · 2010-09-18T04:13:42.358Z · LW(p) · GW(p)

In any physically possible case, in which the computer's speed can be arbitrarily close to c, things develop completely normally in the computer's own reference frame.

For the right value of 'arbitrary' the computer never performs a single operation. The entire box is obliterated by collision with a stray electron before the processor can tick. The collision releases arbitrarily large amounts of energy and from there things just start getting messy.

Replies from: Vladimir_M
comment by Vladimir_M · 2010-09-18T07:13:27.034Z · LW(p) · GW(p)

In a thought experiment, you can assume anything, however unrealistic, as long as it's logically consistent with the theory on which you're basing it. Assuming away stray electrons is therefore OK in this particular thought experiment, since the assumption of a universe that would provide an endless completely obstacle-free path would still be consistent with special relativity. In fact, among the standard conventions for discussing thought experiments is not to bring up objections about such things, since it's presumed that the author is intentionally assuming them away to make a more essential point about something else.

In contrast, introducing objects that move at exactly the speed c into a thought experiment based on special relativity results in a logical inconsistency. It's the same mistake as if you assumed that Peano axioms hold and then started talking about a natural number such that zero is its successor. Since the very definition of such an object involves a logical contradiction, nothing useful can ever come out of such a discussion.

Replies from: Vladimir_M, wedrifid
comment by Vladimir_M · 2010-09-18T19:27:42.223Z · LW(p) · GW(p)

Could someone please explain why this was downvoted? (I don't care about losing score, but I am concerned about the possibility that I wrote something stupid that I'm unaware of.)

comment by wedrifid · 2010-09-18T07:22:15.277Z · LW(p) · GW(p)

Assuming away stray electrons is therefore OK in this particular thought experiment, since the assumption of a universe that would provide an endless completely obstacle-free path would still be consistent with special relativity.

Of course it is OK. But since it was unspecified it was a whole lot more interesting to imagine the effects of a cataclysmic collision with arbitrarily large energy. (Because any discussion of a question of consciousness that goes for more than 3 paragraphs before dissolving the question or finding an interesting tangent is at least two and a half paragraphs too long!)

Now I'm wondering whether such a collision would release enough light to obliterate Earth from an arbitrarily large (but within light cone) distance away. I'm thinking it would.

comment by PhilGoetz · 2010-09-17T17:49:51.446Z · LW(p) · GW(p)

The speed of light qualification is interesting, because it may relate to the static aspect of the "conscious tape". The computer is conscious in its reference frame; but since that clock is stopped, that consciousness will never begin.

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2010-09-18T08:09:54.634Z · LW(p) · GW(p)

As I observe to Vladimir_M, the "speed of light" thing was a mistake. I just wanted to make sure no-one ever observed any output from the computer under any circumstances.

Replies from: wedrifid, PhilGoetz
comment by wedrifid · 2010-09-18T14:48:32.366Z · LW(p) · GW(p)

I just wanted to make sure no-one ever observed any output from the computer under any circumstances.

You could always just hide it in the forest near the oft-considered fallen tree.

comment by PhilGoetz · 2010-09-20T15:56:23.008Z · LW(p) · GW(p)

No, I still think it's interesting. Define time as a function of entropy (e.g., "one second" means the time over which entropy increases by a constant amount). Time is stopped in the piece of paper's reference frame, because the paper is static, and therefore has no entropy change, and therefore no passage of time.

comment by Kaj_Sotala · 2010-09-16T21:52:45.702Z · LW(p) · GW(p)

I don't understand the difference between "computed" and "computation", here.

Replies from: Snowyowl
comment by Snowyowl · 2010-09-16T22:20:26.071Z · LW(p) · GW(p)

"Computed" means that only the input and output are important: as long as you can get from "2+2" to "4", it doesn't matter how you do it. "Computation" means that it's the algorithm you use that is important.

If a computer can give the response a human would have to a given situation, despite that computer using an AI which operates on different principles from the human brain (simulating a universe containing a human brain is sufficient), is that computer thinking/conscious? If yes, then thought/consciousness can be computed. If no, then thought/consciousness is the computation.

This is related to the Turing Test, in which a computer is deemed conscious if it can produce responses indistinguishable from those of a human, regardless of the algorithm used.
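
A toy contrast that makes the distinction concrete (my illustration): two procedures that are identical as input-output mappings but are different computations.

```python
# Two ways to compute 1 + 2 + ... + n. As input-output mappings they are
# indistinguishable; as computations (algorithms, dynamics) they differ.
# "Computed" cares only about the mapping; "computation" cares about
# which process realizes it.

def sum_by_iteration(n):
    total = 0
    for i in range(1, n + 1):   # a step-by-step dynamic process
        total += i
    return total

def sum_by_formula(n):
    return n * (n + 1) // 2     # a single closed-form step

assert all(sum_by_iteration(n) == sum_by_formula(n) for n in range(100))
```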

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2010-09-17T07:01:24.656Z · LW(p) · GW(p)

Ahhh, got it. Thanks!

comment by inklesspen · 2010-09-16T21:52:24.201Z · LW(p) · GW(p)

I think the best definition of consciousness I've come across is Hofstadter's, which is something like "when you are thinking, you can think about the fact that you're thinking, and incorporate that into your conclusions. You can dive down the rabbit hole of meta-thinking as many times as you like." Even there, though, it's hard to tell if it's a verb, a noun, or something else.

If we want to talk about it in computing terms, you can look at the stored-program architecture we use today. Software is data, but it's also data that can direct the hardware to 'do something' in a way that most data cannot. There is software that can introspect itself and modify its own code (this is used both for clever performance hacks and for obfuscation).
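
A trivial sketch of that kind of introspection (my example; far shallower than what Hofstadter means, but it shows the mechanism exists):

```python
import inspect

def introspective():
    """A function that examines its own source code."""
    # Works when defined in a file, since inspect reads the source file.
    source = inspect.getsource(introspective)
    return f"I am {introspective.__name__} and I am {len(source.splitlines())} lines long."

print(introspective())
```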

My view is that consciousness is a property of my thought processes — not every thought will have the same level of introspection, or even introspection at all. It's something that my mind is doing (or isn't doing, depending on what I'm thinking about). The property we ascribe to entities that we call 'consciousness' I would instead term 'the ability to think consciously' or 'the ability to have consciousness'. It seems to me that my thought processes are software running on the hardware that is the human brain. If my mind were uploaded, and its software state written to permanent storage and then stopped running, I would say that this recorded state still has the ability to think consciously, but it is not doing so, since it's not thinking at all, so at that time it is not conscious. (But of course it could be, if started back up.)

Replies from: PhilGoetz, jacob_cannell
comment by PhilGoetz · 2010-09-16T23:04:10.842Z · LW(p) · GW(p)

I think the best definition of consciousness I've come across is Hofstadter's, which is something like "when you are thinking, you can think about the fact that you're thinking, and incorporate that into your conclusions. You can dive down the rabbit hole of meta-thinking as many times as you like."

I can write programs that can do that.

Replies from: thomblake, ShardPhoenix
comment by thomblake · 2010-09-16T23:10:52.479Z · LW(p) · GW(p)

Philosophers love to make overly simplistic statements about what computers can't do, even when they're pro-tech. "Someday, we will have computers that can program themselves!" Meanwhile, a C program I wrote the other day wrote some js, and I did not feel like it was worth a Nobel Prize.

Replies from: ShardPhoenix
comment by ShardPhoenix · 2010-09-17T01:58:04.084Z · LW(p) · GW(p)

I think they mean a computer that can translate imprecise requirements into precise programs in the same way that a human can, not just code that outputs code. I do agree that philosophers can tend to underestimate what a computer can theoretically do/overestimate how wonderful and unique humans are, though.

comment by ShardPhoenix · 2010-09-17T01:57:29.797Z · LW(p) · GW(p)

I don't think anyone can yet write a program that can reflect on itself in quite the same way a human can.

Replies from: Perplexed
comment by Perplexed · 2010-09-17T02:06:39.165Z · LW(p) · GW(p)

On the other hand, I don't know any humans who know themselves quite as precisely as does an optimizing compiler which has just compiled its own source to machine code.

comment by jacob_cannell · 2010-09-17T00:08:32.602Z · LW(p) · GW(p)

I like Hofstadter's works, but I think he over-focuses on recursion and meta-thinking.

At a much more basic level, we use the word 'conscious' to describe the act of being aware of and thinking about something - I was conscious of X.

Some drugs can silence that conscious state, and some (such as alcohol) can even induce a very interesting amnesiac state where you appear conscious but are not integrating long term memories, and can awake later to have no memory of large portions of the experience. Were you thus conscious? Clearly after awaking and forgetting, you are no longer conscious of the events forgotten.

So perhaps our 'consciousness' is the set of all mindstuff we are conscious of, and thus it is clearly based on our memory (both long- and short-term). Even thinking about what you are thinking about is really thinking about what you were just thinking about, and thus involves short-term memory. Memory is the key to consciousness, but consciousness also involves some recursive depth - layering thoughts in succession.

But ultimately 'consciousness' isn't very distinct from 'thinking'. It just has more mystical connotations.

comment by jacob_cannell · 2010-09-16T21:41:48.454Z · LW(p) · GW(p)

I understand Computationalism as a direct consequence of digital physics and materialism: all of physics is computable and all that exists is equivalent to the execution of the universal algorithm of physics (even if an exact description of said algorithm is unknowable to us).

Thus strictly speaking everything that exists in the universe is computation, and everything is computable. Your option 1 seems to make a false distinction that something non-computational could exist - that consciousness could be something that is computable but is not itself computation. This is impossible - everything that is computable and exists is necessarily some form of computation. Computation is the underlying essence of reality - another word for physics.

But the word 'consciousness', to the extent it has useful meaning, implies a particular dynamic process of computation. Consciousness is the state of being conscious of things - a state of active cognition. Thus any static system cannot be conscious. A mind frozen in time would necessarily be unconscious.

If consciousness is computed, then there are no necessary dynamics. All that matters is getting the right output. It doesn't matter what algorithm you use to get that output, or what physical machinery you use to compute it.

This doesn't seem quite correct. There are necessary dynamics - out of the space of all dynamics (all potential computational or physical processes), some set of them are 'conscious'. There is no single correct output; there is a near-infinite set of correct outputs, defined by what the correct dynamics compute from the inputs.

Consciousness is no more the output than a car is its exhaust or Microsoft Windows is a Word document.

You can use the input->output mappings to understand and define the black-box process within, because the black-box process is physical and so it is governed by some algorithm. But the algorithm is not just its output.

Computationalism means considering two computational outputs equivalent if they contain the same information, whether they're computed with neurons and represented as membrane potentials, or computed with Tinkertoys and represented by rotations of a set of wheels.

No, not quite - computationalism via functionalism means considering two processes functionally equivalent if they produce the same outputs for the same inputs. It's not just about outputs.

A key idea in functionalism is that one physical system can realize many different functional algorithms simultaneously. A computer is a computational system running physics at the most basic level, but it can also have other programs running at an entirely different functional encoding level - like letters composing words, which in turn compose sentences or entire books.

If consciousness is computation, then we have the satisfying feeling that how we do those computations matters. But then we're not computationalists anymore!

Err, yes and no. Consciousness is always strictly defined in relation to an environment, so speed is always important. But beyond that, how you do the computations matters not at all. There are an infinite number of equivalent algorithms and computations in theory, but the set of realizable equivalent algorithms and computational processes that enact them is finite in reality because of physics.

Another way of looking at it:

There are many possible patterns of matter/energy that are all automobiles, or dinosaurs, or brains.

There are many possible patterns of matter/energy that are all conscious - defined as patterns of matter/energy that enact a set of intelligence algorithms we label "conscious". The label is necessarily functional.

Replies from: David_Allen, PhilGoetz
comment by David_Allen · 2010-09-16T22:27:30.725Z · LW(p) · GW(p)

Your option 1 seems to make a false distinction that something non-computational could exist

William Rapaport, in the paper PhilGoetz refers to, appears to exclude the idea that the universe is performing computation.

He states:

... it could also be said that it is Kepler's laws that are computable and that describe the behaviour of the solar system, yet the solar system does not compute them, i.e. the behaviour of the solar system is not a computation, even though its behaviour is computable.

I would agree with you, that the universe is performing computation.

comment by PhilGoetz · 2010-09-16T23:08:34.953Z · LW(p) · GW(p)

I think you're basically saying "Option 2".

Replies from: David_Allen
comment by David_Allen · 2010-09-16T23:54:02.317Z · LW(p) · GW(p)

The idea is that everything in the universe is a computation run by the universe. So yes, option 2 certainly.

But functionalism describes a philosophy where the mind is formed by levels of abstraction. The substrate that performs the computation for any particular level is not important. This is option 1.

So option 1 and 2 are not incompatible. They are context specific perspectives.

comment by DSimon · 2010-09-20T21:52:21.597Z · LW(p) · GW(p)

(Warning: I expect that the following comment has at least one major error, since this topic is well outside my usual area of knowledge. Please read it as a request for edification, not as an attempt to push forward the envelope.)

Until we can detect or explain qualia in the wild, how can we make rational claims about their computability?

To make a simple analogy, suppose we have a machine which consists of a transparent box, a switch, and a speaker. Inside the box is a lightbulb and a light sensor. The switch controls the light, and the light sensor is hooked up to the speaker and makes it emit a tone IFF light is detected.

Suppose a species with no sight or concept of light is attempting to reverse-engineer the device. A reverse-engineered simulation of this machine could achieve the same external output as the original by connecting the switch more directly to the speaker, without going through a light-emitting-and-detecting phase. That would be an equivalent algorithm to the one that the original machine is executing, from the perspective of the observers, but it wouldn't be doing the same thing.
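
A small simulation of the two machines (my sketch; the class names are invented for this example):

```python
# Two machines with identical switch->speaker behavior. One routes the
# signal through a light-emitting-and-detecting stage; the other wires
# the switch straight to the speaker. To an observer with no concept of
# light, they are behaviorally indistinguishable.

class LightBoxMachine:
    def respond(self, switch_on):
        bulb_lit = switch_on                  # switch controls the bulb
        light_detected = bulb_lit             # sensor sees the bulb
        return "tone" if light_detected else "silence"

class DirectMachine:                          # the reverse-engineered version
    def respond(self, switch_on):
        return "tone" if switch_on else "silence"

for s in (True, False):
    assert LightBoxMachine().respond(s) == DirectMachine().respond(s)
```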

Similarly, qualia have an effect on the world and can be detected, but at the moment only in a tentative and indirect way. In particular, we don't have tests that can distinguish false positives from true positives very well (just as the sightless scientists in the example haven't figured out how to distinguish the tone from the light).

To put it another way, the simulated machine is kind of a zombie machine; it has all the proper external observable stuff going on, but not the correct internal process. From an evolutionary perspective a zombie is unlikely, but it seems like a naive reverse engineer could make one pretty easily, since they have no way of verifying if they've got the important part working if the external indicators for the important part can be easily accidentally faked.

Executive summary: Until we can detect naturally occurring qualia, it seems plausible that any simulations we create might accidentally be zombies and we wouldn't be able to tell.

Replies from: jacob_cannell
comment by jacob_cannell · 2010-09-20T23:24:22.212Z · LW(p) · GW(p)

I think you are onto the right idea with your analogy, but if you work through the implications, it should be clear that if qualia are truly not functionally important, then we shouldn't value them.

I mean, to use your analogy - if we discover brains that lack the equivalent of the pointless internal light bulb, should we value them any different?

If they are important, then it is highly likely our intelligent machines will also have them.

I find it far more likely that qualia are a necessary consequence of the massively connected probabilistic induction the brain uses, and our intelligent machines will have similar qualia.

Evolution wouldn't have created light-bulb-type structures - complex adaptations must pay for themselves.

Replies from: DSimon
comment by DSimon · 2010-09-22T13:37:04.457Z · LW(p) · GW(p)

If they are important, then it is highly likely our intelligent machines will also have them.

I agree that qualia probably have fitness importance (or are the spandrel of something that does), but I'm not very sure that algorithms in general that implement probabilistic induction similar to our brain's are also likely to have qualia. Couldn't it plausibly be an implementation-specific effect, that would not necessarily be reproduced by a similar but non-identical reverse-engineered system?

Replies from: jacob_cannell
comment by jacob_cannell · 2010-09-22T15:40:19.229Z · LW(p) · GW(p)

It is possible, but I don't find it plausible, partly because I understand qualia to be nearly unavoidable side effects of the whole general category of probabilistic induction engines like our brain, and I believe that practical AGI will necessarily use similar techniques.

Qualia are related to word connotations and the subconscious associative web: everything that happens in such a cognitive engine - every thought, experience or neural stimulus - has a huge web of pseudo-random complex associations that impose a small but measurable statistical influence across the whole system.

The experience of perceiving one wavelength of light will have small but measurable differences on every cognitive measure, from mood to types of thoughts one may experience afterwards, and so on. Self-reflecting on how these associative traces 'feel' from the inside leads to qualia.

comment by novalis · 2010-09-16T23:01:56.732Z · LW(p) · GW(p)

One of my favorite Philip Dick stories ("Gur Ryrpgevp Nag") is about the consciousness tape.

Replies from: Blueberry
comment by Blueberry · 2011-01-19T21:26:11.625Z · LW(p) · GW(p)

That title is rot-13ed, in case that confused anyone.

comment by cata · 2010-09-16T20:12:47.801Z · LW(p) · GW(p)

Whatever it is that's conscious, you can compute it and represent it in a static form. The simplest interpretation is that the output itself is conscious. So this leads to the conclusion that, if a Turing machine computes consciousness and summarizes its output in a static representation on a tape, the tape is conscious.

I don't see the contradiction here, although the sheer scale might be throwing off intuition. I'm perfectly willing to say that if you actually had such a tape, it would be conscious. Needless to say, the tape would be inconceivably huge if you really wanted to represent the astronomical amount of inputs and outputs that make up something we would call "conscious."

Replies from: jacob_cannell
comment by jacob_cannell · 2010-09-17T00:20:40.499Z · LW(p) · GW(p)

Let's see - the retina absorbs roughly 10^7 bits/second, and a day is about 10^5 seconds, so even a single day of possible input spans around 10^12 bits - at least 2^(10^12) possible input/output combinations.

There is 'possible in theory' and 'possible in any multiverse remotely similar to ours'. I tend to favor the second usage of the word 'possible'.

Replies from: DanArmak
comment by DanArmak · 2010-09-17T19:15:45.728Z · LW(p) · GW(p)

Since the algorithm can be compressed well (it fits into a human brain), and since that form of the algorithm takes its input a few bits at a time (and not a day's worth in a single go), it seems likely that a fully static representation can also be highly compressed and would not need to take the full 2^(10^7) bits. Especially so if you allow the algorithm to be slightly imprecise in its output.

Replies from: cata
comment by cata · 2010-09-17T22:14:35.906Z · LW(p) · GW(p)

Jacob was drastically oversimplifying, because the algorithm (assuming we restrict ourselves to responses to visual stimuli) does not convert one retinal image to some particular, constant output; a conscious being would never respond in the same way to the same image all the time.

Instead, it converts one input brain state plus one retinal image to one output brain state, and brain states consist of a similarly enormous amount of information.

Replies from: DanArmak
comment by DanArmak · 2010-09-17T22:33:26.019Z · LW(p) · GW(p)

brain states consist of a similarly enormous amount of information.

Perhaps the difference between succeeding brain states, induced by visual input, isn't all that enormous.

comment by lukstafi · 2011-03-21T19:23:55.649Z · LW(p) · GW(p)

Methinks the discussion suffers from lacking the distinction between consciousness-as-property ("subsystem C of system X is conscious-in-X") and consciousness-as-ontological-fact ("C exists in a conscious way"). Consciousness (of "C" in "X") is option-1-computed in the sense that it is a Platonic entity (as a property of the platonically considered "subsystem C of system X"). It is option-2-computation in the sense that all such entities "C" could be said to perform various computations in "X" (and it is the ensemble of computations that the property detects). To draw moral conclusions ("C exists in a conscious way"), one needs to take X = reality.

comment by datadataeverywhere · 2010-09-17T06:13:29.034Z · LW(p) · GW(p)

Sort of both. I think they reconcile more easily than you think.

Conscious entities have behavior, including internal behavior (thoughts). Behavior obviously doesn't exist in stasis, which seems to be the point that you don't like about 1.

Consciousness is not the algorithm that generates that behavior, but rather that algorithm (or any equivalent algorithm) in action. It requires inputs, so that behavior can't just be read as a series of outputs and determined to be "the consciousness"; rather, consciousness is a property that entities operating inside an environment have in relation to that environment. This environment-entity combination could be read off as a series of numbers, but a series of numbers, even ones perfectly representing a thing, are not the thing itself. That's just a way of specifying something, not duplicating it.

Lastly, remember that consciousness is an illusion, but like free will, it is an illusion both difficult not to hold and useful to play along with. Much more so than for free will, consciousness can be a very useful descriptor of how a system behaves, so good definitions and understanding of it are worthwhile.

comment by MartinB · 2010-09-16T23:45:40.531Z · LW(p) · GW(p)

Option 4: there is no such thing as consciousness. It's all just an elaborate hoax our minds play on us. In reality we are 'just' state machines with complicated caches that give the appearance of a conscious mind acting. The illusion is good enough to fool us all, and works well enough for real life.

The more I learn of neuroscience, the more I get the impression that there is no real 'person' hidden in the body - just a shell that runs lots of software over data storage and acts somewhat consistently. So the closer you look, the less consciousness is left to see.

Please note that I still have trouble understanding how qualia work.

Replies from: NancyLebovitz, Tiiba
comment by NancyLebovitz · 2010-09-20T16:35:47.714Z · LW(p) · GW(p)

My problem with that sort of explanation is that I don't see how there can be illusion without a consciousness present to be mistaken.

Replies from: atucker
comment by atucker · 2010-09-22T03:56:13.350Z · LW(p) · GW(p)

Is an illusion something other than a system failing to correctly represent its otherwise accurate sensory input?

Particularly if it happens in a specific and reproducible way, so that even though the information it gathers is accurate, said information is internally represented as being something different and inaccurate.

Because I can write programs that have bugs that are pretty much that.
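
A sketch of exactly that kind of bug (my toy example; the scenario is invented):

```python
# A system whose sensor is accurate but whose internal representation is
# systematically wrong - a specific, reproducible misrepresentation of
# otherwise accurate input.

def read_sensor():
    return 20.0                     # accurate reading: 20.0 degrees Celsius

def internal_representation(reading_celsius):
    # Bug: the reading is stored as if it were Fahrenheit, so the system
    # consistently "believes" a different temperature than it measured.
    return {"temperature_F": reading_celsius}

state = internal_representation(read_sensor())
print(state)   # {'temperature_F': 20.0} - accurate data, misrepresented
```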

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-09-22T16:10:49.025Z · LW(p) · GW(p)

I'm not sure, but that's an interesting and possibly valid point.

comment by Tiiba · 2010-09-17T14:57:14.518Z · LW(p) · GW(p)

Something is making us talk about consciousness. Until we come up with a definition, we really have no business saying that this very real something is a fake version of some undefined ball of vagueness. It's like saying that trees are just big wooden things pretending to be snumbas. It's a lot better to just find out what makes us talk about consciousness and say, "Let's call it consciousness!"

Replies from: MartinB
comment by MartinB · 2010-09-17T15:42:09.621Z · LW(p) · GW(p)

Well yes. A good definition matters.

Option 2 makes the least sense to me. It should be possible to have consciousness with any kind of computational device.

comment by David_Allen · 2010-09-16T22:01:46.836Z · LW(p) · GW(p)

Consciousness is a roughly defined (and leaky) abstraction.

So this leads to the conclusion that, if a Turing machine computes consciousness and summarizes its output in a static representation on a tape, the tape is conscious.

Without context the content of the tape has no meaning. So the consciousness that has been output on the tape is a consciousness only within a context that can use it to generate the consciousness abstraction.

It is the set of "stuff" that produces the consciousness abstraction that can be called conscious. In a Turing machine, this "stuff" would be the tape plus the machine that gives the tape the necessary context.

Replies from: PhilGoetz
comment by PhilGoetz · 2010-09-17T18:13:36.052Z · LW(p) · GW(p)

As Nisan asked above: Is this Turing machine conscious if you don't run it?

Replies from: David_Allen
comment by David_Allen · 2010-09-17T19:31:13.107Z · LW(p) · GW(p)

It seems that consciousness requires some type of thought, and that thought requires the system to self-modify. A static representation of the Turing machine then does not meet this requirement.

So a Turing machine that is not running is not conscious.

Is there another perspective to consider?

comment by Perplexed · 2010-09-17T00:31:37.181Z · LW(p) · GW(p)

If I were to attempt to characterize consciousness in computational terms, I would probably start with a diagram like that for the Mealy machine in this pdf. I would label the top box simply "computation" and the lower box "short term memory". I would speculate that consciousness has something to do with that feedback loop through short term memory. I might even go so far as to claim that the information flowing through short term memory constitutes the "stream of consciousness".

If this approach is taken, there are some consequences. One is a kind of anti-anti-zombie principle. You simply cannot characterize consciousness by looking at the I/O function from inputs X to outputs Z. It is at least conceivable that the same I/O function might be implemented using a completely different feedback trace - one which encodes the information Y differently is one possibility. Another possible change might be to use something other than "short term memory" to buffer the feedback.

A second consequence is that if you want to capture consciousness on tape, you probably need to capture Y rather than Z. A third consequence is that the "seat of consciousness" is either in the computational machinery, in the short term memory, or (my intuition) in both, together with the communication paths that tie them together.
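
A minimal sketch of that loop (my code; the update and output rules are arbitrary placeholders): the "computation" box maps input X and the fed-back trace Y to output Z and a new Y, with Y buffered in short-term memory between steps.

```python
# Mealy-style loop: computation maps (X, Y) to (Z, Y'), and Y' is
# buffered in short-term memory for the next step. The stream of Y
# values is the candidate "stream of consciousness."

def computation(x, y):
    y_next = (y + x) % 100      # placeholder internal update
    z = y_next * 2              # placeholder output rule
    return z, y_next

short_term_memory = 0           # initial feedback trace Y
y_stream = []
for x in [3, 1, 4, 1, 5]:       # inputs X arriving over time
    z, short_term_memory = computation(x, short_term_memory)
    y_stream.append(short_term_memory)

print(y_stream)   # [3, 4, 8, 9, 14]
```

Note that a different computation could produce the same X->Z mapping with a different Y-stream, which is the anti-anti-zombie point: the I/O function alone does not pin down Y.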