Angry Atoms

post by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-03-31T00:28:10.000Z · LW · GW · Legacy · 59 comments

Fundamental physics—quarks 'n stuff—is far removed from the levels we can see, like hands and fingers.  At best, you can know how to replicate the experiments which show that your hand (like everything else) is composed of quarks, and you may know how to derive a few equations for things like atoms and electron clouds and molecules.

At worst, the existence of quarks beneath your hand may just be something you were told.  In which case it's questionable in what sense you can be said to "know" it at all, even if you repeat back the same word "quark" that a physicist would use to convey knowledge to another physicist.

Either way, you can't actually see the identity between levels—no one has a brain large enough to visualize avogadros of quarks and recognize a hand-pattern in them.

But we at least understand what hands do.  Hands push on things, exert forces on them.  When we're told about atoms, we visualize little billiard balls bumping into each other.  This makes it seem obvious that "atoms" can push on things too, by bumping into them.

Now this notion of atoms is not quite correct.  But so far as human imagination goes, it's relatively easy to imagine our hand being made up of a little galaxy of swirling billiard balls, pushing on things when our "fingers" touch them.  Democritus imagined this 2400 years ago, and there was a time, roughly 1803-1922, when Science thought he was right.

But what about, say, anger?

How could little billiard balls be angry?  Tiny frowny faces on the billiard balls?

Put yourself in the shoes of, say, a hunter-gatherer—someone who may not even have a notion of writing, let alone the notion of using base matter to perform computations—someone who has no idea that such a thing as neurons exist.  Then you can imagine the functional gap that your ancestors might have perceived between billiard balls and "Grrr!  Aaarg!"

Forget about subjective experience for the moment, and consider the sheer behavioral gap between anger and billiard balls.  The difference between what little billiard balls do, and what anger makes people do. Anger can make people raise their fists and hit someone—or say snide things behind their backs—or plant scorpions in their tents at night.  Billiard balls just push on things.

Try to put yourself in the shoes of the hunter-gatherer who's never had the "Aha!" of information-processing.  Try to avoid hindsight bias about things like neurons and computers.  Only then will you be able to see the uncrossable explanatory gap:

How can you explain angry behavior in terms of billiard balls?

Well, the obvious materialist conjecture is that the little billiard balls push on your arm and make you hit someone, or push on your tongue so that insults come out.

But how do the little billiard balls know how to do this—or how to guide your tongue and fingers through long-term plots—if they aren't angry themselves?

And besides, if you're not seduced by—gasp!—scientism, you can see from a first-person perspective that this explanation is obviously false.  Atoms can push on your arm, but they can't make you want anything.

Someone may point out that drinking wine can make you angry.  But who says that wine is made exclusively of little billiard balls?  Maybe wine just contains a potency of angerness.

Clearly, reductionism is just a flawed notion.

(The novice goes astray and says "The art failed me"; the master goes astray and says "I failed my art.")

What does it take to cross this gap?  It's not just the idea of "neurons" that "process information"—if you say only this and nothing more, it just inserts a magical, unexplained level-crossing rule into your model, where you go from billiards to thoughts.

But an Artificial Intelligence programmer who knows how to create a chess-playing program out of base matter, has taken a genuine step toward crossing the gap.  If you understand concepts like consequentialism, backward chaining, utility functions, and search trees, you can make merely causal/mechanical systems compute plans.

The trick goes something like this:  For each possible chess move, compute the moves your opponent could make, then your responses to those moves, and so on; evaluate the furthest position you can see using some local algorithm (you might simply count up the material); then trace back using minimax to find the best move on the current board; then make that move.
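
Here is a minimal sketch of that search in Python. Board, legal_moves, apply, and material_count are hypothetical stand-ins, not any real engine's interface; the point is only that every step is merely causal:

```python
# A bare-bones minimax sketch, assuming a hypothetical Board type with
# legal_moves(), apply(move), and material_count().  Not Deep Blue.

def minimax(board, depth, our_turn):
    moves = board.legal_moves()
    if depth == 0 or not moves:
        # Horizon reached: evaluate with a local algorithm
        # (here, simply counting up the material).
        return board.material_count()
    values = [minimax(board.apply(m), depth - 1, not our_turn) for m in moves]
    # We pick the best value for us; the opponent picks the worst for us.
    return max(values) if our_turn else min(values)

def best_move(board, depth=3):
    # Trace back to the move on the current board whose subtree rates highest.
    return max(board.legal_moves(),
               key=lambda m: minimax(board.apply(m), depth - 1, False))
```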

More generally:  If you have chains of causality inside the mind that have a kind of mapping—a mirror, an echo—to what goes on in the environment, then you can run a utility function over the end products of imagination, and find an action that achieves something which the utility function rates highly, and output that action.  It is not necessary for the chains of causality inside the mind, that are similar to the environment, to be made out of billiard balls that have little auras of intentionality.  Deep Blue's transistors do not need little chess pieces carved on them, in order to work.  See also The Simple Truth.
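
As a toy sketch of such a planner (model and utility below are placeholders a caller would supply; nothing in the code is angry, or wants anything):

```python
# A toy consequentialist planner.  model(state, action) is the in-mind
# echo of the environment: it predicts the next state.  utility(state)
# scores imagined end products.  Both are assumed placeholders.

def plan(state, actions, model, utility, horizon=3):
    def imagined_value(s, steps):
        if steps == 0:
            return utility(s)  # run the utility function over the end product
        return max(imagined_value(model(s, a), steps - 1) for a in actions)
    # Output the action whose imagined consequences rate highest.
    return max(actions, key=lambda a: imagined_value(model(state, a), horizon - 1))
```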

All this is still tremendously oversimplified, but it should, at least, reduce the apparent length of the gap.  If you can understand all that, you can see how a planner built out of base matter can be influenced by alcohol to output more angry behaviors.  The billiard balls in the alcohol push on the billiard balls making up the utility function.

But even if you know how to write small AIs, you can't visualize the level-crossing between transistors and chess.  There are too many transistors, and too many moves to check.

Likewise, even if you knew all the facts of neurology, you would not be able to visualize the level-crossing between neurons and anger—let alone the level-crossing between atoms and anger.  Not the way you can visualize a hand consisting of fingers, thumb, and palm.

And suppose a cognitive scientist just flatly tells you "Anger is hormones"?  Even if you repeat back the words, it doesn't mean you've crossed the gap.  You may believe you believe it, but that's not the same as understanding what little billiard balls have to do with wanting to hit someone.

So you come up with interpretations like, "Anger is mere hormones, it's caused by little molecules, so it must not be justified in any moral sense—that's why you should learn to control your anger."

Or, "There isn't really any such thing as anger—it's an illusion, a quotation with no referent, like a mirage of water in the desert, or looking in the garage for a dragon and not finding one."

These are both tough pills to swallow (not that you should swallow them) and so it is a good deal easier to profess them than to believe them.

I think this is what non-reductionists/non-materialists think they are criticizing when they criticize reductive materialism.

But materialism isn't that easy.  It's not as cheap as saying, "Anger is made out of atoms—there, now I'm done."  That wouldn't explain how to get from billiard balls to hitting.  You need the specific insights of computation, consequentialism, and search trees before you can start to close the explanatory gap.

All this was a relatively easy example by modern standards, because I restricted myself to talking about angry behaviors.  Talking about outputs doesn't require you to appreciate how an algorithm feels from inside (cross a first-person/third-person gap) or dissolve a wrong question (untangle places where the interior of your own mind runs skew to reality).

Going from material substances that bend and break, burn and fall, push and shove, to angry behavior, is just a practice problem by the standards of modern philosophy.  But it is an important practice problem.  It can only be fully appreciated, if you realize how hard it would have been to solve before writing was invented.  There was once an explanatory gap here—though it may not seem that way in hindsight, now that it's been bridged for generations.

Explanatory gaps can be crossed, if you accept help from science, and don't trust the view from the interior of your own mind.

59 comments

Comments sorted by oldest first, as this post is from before comment nesting was available (around 2009-02-27).

comment by poke · 2008-03-31T02:10:22.000Z · LW(p) · GW(p)

I don't really get this. Why can't you simply view an animal or person as a physical system? I don't think you need any concept of information processing. If you think of animals and people as akin to mechanical machines, and many people thought of at least animals as machines before the advent of information processing, then you actually have an accurate grasp of what's happening. The animal is turning physical force into chemical and electrical forces and then back into physical force; this is not substantially different from a mechanical machine. If the primitive atomist world view can encompass chemistry (which it did; different elements were taken to be differently shaped atoms) then I think it can encompass behavior.

Replies from: syzygy
comment by syzygy · 2012-03-20T04:12:10.518Z · LW(p) · GW(p)

Behavior is very different from thoughts. It's easier to think of animals as machines because we have never experienced an animal thought. To us, animals just look exactly as you described, like behavior-outputting machines, because we have never experienced the thought processes of animals.

comment by Benquo · 2008-03-31T02:24:03.000Z · LW(p) · GW(p)

@poke

You need the concept of a computing machine -- and perhaps even more than that -- in order to explain how little vibrations in the air can sometimes cause us to fight, sometimes to stop fighting, sometimes to move towards or away from the sound, and sometimes to stop, etc.

comment by Michael_G.R. · 2008-03-31T03:32:24.000Z · LW(p) · GW(p)

It seems like this post isn't as clear as it could be - or at least not as clear as Eliezer's best posts.

Either it needs another draft, or the problem lies with me and I just need to re-read it more carefully...

comment by mtraven · 2008-03-31T03:55:28.000Z · LW(p) · GW(p)

I posted this in the last thread but didn't get much response, so I'll try again:

Here's a question for reductionists: It is a premise of AI that the mind is computational, and that computations are algorithms that are more or less independent of the physical substrate that is computing them. An algorithm to compute prime numbers is the same algorithm whether it runs on an Intel chip or a bunch of appropriately-configured tinkertoys, and a mind is the same whether it runs on neurons or silicon. The question is, just how is this reductionist? It's one thing to say that any implementation of an algorithm (or mind) has some physical basis, which is pretty obviously true and hence not very interesting, but if those implementations have nothing physical in common, then your reduction hasn't actually accomplished very much.

In other words: let's grant that any particular mind, or algorithm, is physically instantiated and does not involve any magic non-physical forces. Nonetheless, it is mysterious how physical systems with nothing physical in common can realize the same algorithm. That suggests that the algorithm itself is not a physical thing, but something else. And those something elses have very little to do with the laws of physics.

Replies from: Friendly-HI
comment by Friendly-HI · 2012-01-27T19:47:07.538Z · LW(p) · GW(p)

This post is very old but I'll try answering it as best I can.

"Nonetheless, it is mysterious how physical systems with nothing physical in common can realize the same algorithm.That suggests that the algorithm itself is not a physical thing, but something else. And those something elses have very little to do with the laws of physics."

An algorithm is basically a step-by-step instruction for a computation. First you do this calculation, then you take the result and make another calculation with it and so on until ideally you get some kind of output (be it behavior or a number on your calculator). Based on that understanding I don't quite see the trouble with performing the same algorithm (sequence of computations) on wildly different physical systems. The algorithm itself is of course always a physical "thing" as well in a sense, since it must be carried out on matter... you can't do computation on "nothing". In computers it's electrons being herded through "gates" on microchips, and in a brain it's electrical impulses in neurons being carried along the dendrites towards the nucleus, which can act as a gate and inhibit or pass on the impulse along its axon towards other dendrites or nuclei. Of course this is massively oversimplified, but I hope it is obvious how similar computations can be carried out by both.
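
To make that concrete, here is a small illustration of my own in Python: one and the same algorithm, XOR, realized once with boolean logic (the transistor-gate picture) and once with a tiny network of threshold units (a cartoon of a neuron). Assume inputs are 0 or 1:

```python
# One algorithm, two "substrates" (an illustrative sketch).

def xor_gates(a, b):
    # Substrate 1: boolean logic, as in transistor gates.
    return (a or b) and not (a and b)

def xor_neurons(a, b):
    # Substrate 2: threshold units.  A "neuron" fires (1) when its
    # summed input reaches a threshold, loosely like a real neuron.
    fire = lambda x, threshold: 1 if x >= threshold else 0
    h1 = fire(a + b, 1)      # fires on "a OR b"
    h2 = fire(a + b, 2)      # fires on "a AND b"
    return fire(h1 - h2, 1)  # fires on "OR but not AND", i.e. XOR

assert all(xor_gates(a, b) == xor_neurons(a, b)
           for a in (0, 1) for b in (0, 1))
```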

Granted, a neuron is unimaginably more complex than a gate on a microchip but if you google for the "Blue Brain project" you can see how computers can be used to simulate (or should I rather say emulate?) neurons and ultimately whole neural nets. This would be an instance where (essentially) the same algorithm is implemented aka computed once by neurons and once by microchips.

That something has very little to do with the laws of physics is a statement that couldn't possibly sound more radical and wrong to the ears of science and rationality. Computation is hardly an area where the laws of physics are insufficient to lend themselves to empiricism. To point out that it is evidently absolutely understood how algorithms can be implemented by physical stuff seems almost ridiculous, given the fact that right now we sit in front of a working computer that was once invented and built by humans.

comment by Yelsgib · 2008-03-31T04:03:44.000Z · LW(p) · GW(p)

"But an Artificial Intelligence programmer who knows how to create a chess-playing program out of base matter, has taken a genuine step toward crossing the gap. If you understand concepts like consequentialism, backward chaining, utility functions, and search trees, you can make merely causal/mechanical systems compute plans."

The space of algorithms to play chess "well" is large. That space is not equivalent to the space of "intelligence."

Your conjecture seems to be that the Problem of Chess requires intelligence.

I also don't see how you can claim that understanding utility functions helps you understand the brain. Do you think that such functions are explicitly represented in the brain? Do you have ANY reason to believe this?

I guess it seems to me that you're claiming that you have reason to believe you understand something about what intelligence is - but then you go on to talk about some crappy models we have for it.

comment by Tom_McCabe2 · 2008-03-31T04:04:45.000Z · LW(p) · GW(p)

"But materialism isn't that easy. It's not as cheap as saying, "Anger is made out of atoms - there, now I'm done.""

If materialism required a detailed understanding of every solved problem in science, none of us could ever be materialists, at least not with human brains. How many times do you have to learn the same lesson ("complex systems can be built from simple parts which don't share the defining properties of the larger system")? Hopefully, after a few iterations, you'll start imagining unseen layers of complexity, rather than phlogiston and elan vital.

comment by athmwiji · 2008-03-31T04:17:49.000Z · LW(p) · GW(p)

You could create an AI that has behavior similar to anger, and displays intention, but having subjective experience is another matter. Explaining subjective experience in terms of quarks is rather like trying to explain quantum mechanics in terms of aerodynamics. You will never get there. Not because subjective experience defies the laws of nature in some mysterious way, but because you would simply be going in the wrong direction. And the other direction is not toward the smaller, but toward a lower level of abstraction.

We have an intuitive sense of how the physical universe works that we have evolved, and this intuition is pretty accurate at the scale we live. Physics has progressed by building from models based on this intuition, and paring aspects down that are simply unnecessary, through Occam's razor, or just wrong, through experimentation.

The problem with this is that starting with the model our evolved intuition has given us skips a layer of abstraction. We have clearly seen how this intuition has failed us at much larger and smaller scales. In both cases we have found that measurements depend on the observer.

The reality that we live in is composed of experience. The yard stick of physics is how accurately a model predicts our experiences. This is where we should start. At least we should incorporate how we get the abstract notions of space-time and matter-energy from experiences, rather than skipping over that abstraction, and seeing matter-energy as something concrete leaving our experiences mysterious.

comment by Tom_McCabe2 · 2008-03-31T04:23:46.000Z · LW(p) · GW(p)

(Sorry about the double comment, but several responses were posted while I was typing.)

"It's one thing to say that any implementation of an algorithm (or mind) has some physical basis, which is pretty obviously true and hence not very interesting, but if those implementations have nothing physical in common, then your reduction hasn't actually accomplished very much."

The reduction of a software system is just as difficult as the reduction of a physical system, and perhaps even more so. I believe there's a theorem which states that the problem of producing a Turing machine which will give output Y for input X is uncomputable in the general case.

"That suggests that the algorithm itself is not a physical thing, but something else."

Algorithms are made from math; math was originally abstracted from physical matter in exactly the way that you describe. You can implement "two" on completely different physical systems- two apples, two computers, two Space Shuttles, and so on.

"Your conjecture seems to be that the Problem of Chess requires intelligence."

It's just an example of how complex behaviors can arise from simple parts that don't exhibit the behaviors themselves. Chess-playing, although not equivalent to general intelligence, does require several complex behaviors which are also used by general intelligence.

"I also don't see how you can claim that understanding utility functions helps you understand the brain. Do you think that such functions are explicitly represented in the brain?"

Utility functions are general enough to apply to any optimization process which can state a clear preference over outcomes.

comment by Tiiba3 · 2008-03-31T04:44:51.000Z · LW(p) · GW(p)

mtraven:

"That suggests that the algorithm itself is not a physical thing, but something else. And those something elses have very little to do with the laws of physics."

An algorithm can exist even without physics. It's math.

comment by Tiiba3 · 2008-03-31T04:47:35.000Z · LW(p) · GW(p)

Please delete my post. I see that Tom said that already.

comment by Unknown · 2008-03-31T05:17:36.000Z · LW(p) · GW(p)

Mtraven: as I understand him, Eliezer is saying that it is logically impossible to have a physical structure identical to a conscious human being without it being, in fact, a conscious human being. He hasn't yet said that it is logically impossible to have something that acts like a conscious human being, but made out of other physical stuff, without consciousness. If he does go on to say this, he will really be going off the deep end in terms of making baseless assertions. In order to know that such a thing is true, he would first have to solve the question which he has admitted is a question, and which he has not solved, nor has anyone else: WHY is a human being conscious?

comment by Latanius2 · 2008-03-31T09:24:41.000Z · LW(p) · GW(p)

Unknown: see Dennett: Kinds of Minds, he has a fairly good theory for what consciousness is. (To put it short: it's the capability to reflect on one's own thoughts, and so use them as tools.)

At the current state of science and AI, this is what sounds like a difficult (and a bit mysterious) question. For the hunter-gatherers, "what makes your hand move" was an equally (or even more) difficult question. (The alternative explanation, "there is a God who began all movement etc." is still popular nowadays...)

Tiiba: an algorithm is a model in our mind to describe the similarities of those physical systems implementing it. Our mathematics is the way we understand the world... I don't think the Martians with four visual cortexes would have the same math, or would be capable of understanding the same algorithms... So algorithms aren't fundamental, either.

Replies from: ramana-kumar
comment by Ramana Kumar (ramana-kumar) · 2009-10-29T07:09:42.013Z · LW(p) · GW(p)

an algorithm is a model in our mind to describe the similarities of those physical systems implementing it

a number is a model like that as well, right? (may be relevant to the comments below)

comment by Sebastian_Hagen2 · 2008-03-31T11:00:46.000Z · LW(p) · GW(p)

"I believe there's a theorem which states that the problem of producing a Turing machine which will give output Y for input X is uncomputable in the general case."

What? That's trivial to do; a very simple general method would be to use a lookup table. Maybe you meant the inverse problem?
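
A sketch of that general method (illustrative Python, not a formal Turing machine):

```python
# The trivial construction: a lookup table mapping each input X to the
# required output Y.

def make_machine(io_pairs):
    table = dict(io_pairs)
    return lambda x: table[x]

machine = make_machine({"X1": "Y1", "X2": "Y2"})
assert machine("X1") == "Y1"
```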

"WHY is a human being conscious?"

I don't understand this question. Please rephrase while rationalist-tabooing the word 'conscious'.

comment by Tiiba3 · 2008-03-31T11:24:38.000Z · LW(p) · GW(p)

Latanius:

"Tiiba: an algorithm is a model in our mind to describe the similarities of those physical systems implementing it. Our mathematics is the way we understand the world... I don't think the Martians with four visual cortexes would have the same math, or would be capable of understanding the same algorithms... So algorithms aren't fundamental, either."

One or more of us is confused. Are you saying that a Martian with four visual cortices would be able to compress any file? Would add two and two and get five?

They can try, sure, but it won't work.

comment by michael_vassar3 · 2008-03-31T12:09:07.000Z · LW(p) · GW(p)

"I think this is what non-reductionists/non-materialists think they are criticizing when they criticize reductive materialism."

To be fair, there are "materialists" who do make claims like these and they are guilty of scientism.

comment by michael_vassar3 · 2008-03-31T12:23:47.000Z · LW(p) · GW(p)

Richard: If you are still reading here, would you generally agree with "athmwiji"? To me, his position seems reasonable while yours (bridging laws etc) does not, but both are non-materialist in some sense and I would like to know how you responded to his post in any event.

Eliezer: I assume that this whole bundle of threads is aimed at responding to claims like athmwiji's, so I won't ask for a response here unless my assumption is incorrect. My guess is that you think that you have a good counter, or at least a method for reaching a good counter, but if not then you really need one.

comment by Unknown · 2008-03-31T12:41:31.000Z · LW(p) · GW(p)

I don't know about Richard, but I agreed with athmwiji, as well as with Richard, and I don't see why their claims should be opposed to one another.

I tried to make more or less the same point as athmwiji when I pointed out that physics is a way of explaining what we see, hear, and feel.

Eliezer may have some response, but I highly doubt it's a good one. I'm prepared to change my mind if necessary, however.

comment by Caledonian2 · 2008-03-31T12:51:34.000Z · LW(p) · GW(p)

An algorithm can exist even without physics. It's math.
No. Math does not exist without physics.

You could create an ai that has behavior similar to anger, and displays intention, but having subjective experience is another matter. Explaining subjective experience in terms of quarks is rather like trying to explain quantum mechanics in terms of aerodynamics.
So you're denying that the 'subjective' is a subset of the 'objective', categorically? Well, that explains why we can't convince you. You cannot be convinced of that which you have already rejected axiomatically.

But if you already accept that a system can be made to act in such a way that it appears to be 'angry', you've already accepted all of the premises needed to understand how our 'subjective experiences' arise from physical processes. It's just that you've rejected the conclusion outright.

comment by Ben_Jones · 2008-03-31T13:53:18.000Z · LW(p) · GW(p)

If I cut a brain in half, the mind will stop working*. But if I cut a radio in half, it will stop playing songs, no magic there. Playing songs could be considered an emergent phenomenon arising from the configuration of the bits of the radio, but let's be frank here, it's just what the radio does.

Large iterations of simple patterns are the basis for most examples of complexity. The mind is an extreme case of this, but that doesn't imbue it with a magical irreducibility, any more than we should think that a whole, working radio is magical. It's all or nothing here folks - either consciousness is an irreducible phenomenon that only works on meat (for some reason we're still waiting to hear) or it's just a very complex arrangement of meat that, for obvious reasons, we've only seen in meat so far.

[Please, no-one quote me a case of some guy having half his brain removed and getting by just fine.]

comment by Richard4 · 2008-03-31T15:01:03.000Z · LW(p) · GW(p)

Michael - unless I've misunderstood, athmwiji's view sounds like good old-fashioned metaphysical idealism. It's an interesting view, and deserves serious attention, but I don't believe it myself because I think there could be a world (e.g. the zombie world) containing only physical stuff, without any need for "ideas" or phenomenal stuff. The idealist thus faces the same challenge as the materialist (just in the opposite direction): show me the contradiction in my description of the zombie world.

P.S. I use 'scientism' very precisely, to refer to those who hold the indefensible assumption that empirical inquiry is the only form of inquiry (and associated verificationist claims, e.g. that only scientific discourse is coherent or meaningful). There was plenty of this sentiment expressed in the previous thread. (A couple of commenters even expressed their inability to distinguish between philosophy and religion, which is of course the primary symptom of scientism.) I suspect that this is one of the most common forms of bias among the scientifically educated but philosophically ignorant population. It would be interesting to see it (seriously) discussed here sometime.

comment by RobinHanson · 2008-03-31T15:14:01.000Z · LW(p) · GW(p)

Richard, if you'll write a post on scientism, with a few examples of how the purported bias misleads, I'll post a response.

comment by athmwiji · 2008-03-31T15:33:12.000Z · LW(p) · GW(p)

"So you're denying that the 'subjective' is a subset of the 'objective', categorically?" I am not exactly sure what you mean here.

My understanding of 'subjective' and 'objective' is as follows. I see an image and simultaneously hear a sound. Immediately I recognize that three experiences are occurring: seeing, hearing, and the integration of the two into a third experience, which is aware of the other two. I also have experiences of recognizing other experiences as being more or less similar to each other.

I would define subjective entities as sets of experiences that are connected through the relationship of one experience being aware of another. And I would define objective entities as sets of experiences that have a consistent structure of similarity.

Generally an experience is in both a subjective set, and an objective set. I would express this by saying Subject experiences Object.

Our experiences are consistent, and physics presents useful models for predicting the objective aspects of that consistency, but it ignores that objective entities come from our experiences, and I think a model which did not ignore this could give us better explanations of how the material consistency of experience connects experience to computational processes.

I would not deny that such an AI has the subjective experience of anger, and I would definitely accept that it is possible to make something that does. But this does not mean that I understand how it has subjective experiences, and I would not describe those subjective experiences as arising from quarks, which are more an artifact of the computational processes of the universe than experience is.

comment by athmwiji · 2008-03-31T16:06:50.000Z · LW(p) · GW(p)

I think the zombie world is a valid thing to consider, but the only way you could say something about the zombie world is to consider what you would see if you were there, and then it would not be a zombie world anymore. Perhaps a more useful zombieish world to consider is one in which there are only zombies except for one epiphenomenal ghost: you.

comment by poke · 2008-03-31T16:37:39.000Z · LW(p) · GW(p)

Benquo - I still fail to see why you need the concept of a computing machine. Not only does a machine seem adequate but, historically, the machine analogy was in fashion before computers came along (it even motivated the development of computing machines). Mechanical devices offer useful analogies of feedback, control and multi-state systems (at least after the industrial revolution) that can found a materialist conception of mind. Computers offer a better analogy in some ways and a misleading one in others (i.e., the strict software/hardware distinction).

I agree with Eliezer that you need to understand quite a few of the details of a reduction for the reduction to go through. Reductionism is, by definition, a very specific thesis. I think it's wrong to argue that reduction has to be true on logical grounds; rather reductionism is the position we find ourselves in. Scientists are not systematicists like philosophers; when Galileo measured the acceleration due to Earth's gravity he gained the ability to predict very many Earth-bound motions but nothing in this simple measurement, or the methods used to obtain it, assured that someday we'd have similar physical laws describing everything from boiling water to thinking.

Where anti-reductionists often go wrong is in arguing that there's nothing in the scientific method that assures reduction. That may be true but it's beside the point; it's the results of science, here in the 21st century, that require reductionism. We have a set of broad and exclusionary existence claims which are, in our local environment at least, settled science. Fuzzy notions of dualistic substances and properties aren't part of that and don't get to play.

I would argue, however, that there is a sense in which reduction eliminates anger (and many other common sense psychology properties). A part of how we experience anger, and similar emotional states and cognitive processes, derives from our ignorance of how they work. Emotions, for example, are usually taken to be monolithic and irreducible and part of the experience of a negative emotion such as anger is our lack of control. The more we understand emotions the more we, in a sense, "master" them. With some emotions ignorance of how the state came about is probably the central part of how we experience and sustain them (i.e., existential despair or anguish). When we understand the processes behind our emotions we will probably become better at controlling negative emotions and sustaining positive emotions (at least one hopes).

The same is true of cognitive processes and even perception. My experience of my own perception has changed from monolithic and infallible to aggregate and fallible on studying how perception works. This line of reasoning is actually used by Paul Churchland in Scientific Realism and the Plasticity of Mind to defend his version of eliminative materialism. Far from being a position of despair as opponents often paint it (we can't explain these things, therefore they don't exist), he argues for replacing common sense understanding with scientific understanding on the grounds that we can enrich our perceptual and conceptual apparatus. (A lot of people take issue with Churchland because of his connectionism but he developed his eliminative materialism separately in this earlier work and I think it's still applicable if you reject connectionism.)

comment by Ben_Jones · 2008-03-31T16:58:38.000Z · LW(p) · GW(p)

I would define subjective entities as sets of experiences that are connected through the relationship of one experience being aware of another.

Sounds like a fair description of consciousness, but certainly nothing like a refutation of, or argument against reductionism.

The zombie world analogy just doesn't speak to me though. If they don't have this third 'subjective experience', then there is something physically, measurably different from our world. Has to be.

athmwiji, I think Caledonian was objecting to the postulation of a purely subjective phenomenon with no objective, measurable consequences. While I'm not going to call anyone a fool any time soon, I think that's fair comment. I mean, we're talking dragons in garages again aren't we?

comment by Unknown · 2008-03-31T17:20:10.000Z · LW(p) · GW(p)

Ben, why do you keep talking about being "measurably" different? Even if the zombie world has to be different physically (on the assumption that there is nothing but physical things), why does the difference have to be measurable? Couldn't the difference be precisely that the world is made of unconscious stuff instead of conscious stuff? This would be a physical difference but not a measurable difference--all the same mathematical properties.

comment by Latanius2 · 2008-03-31T17:43:57.000Z · LW(p) · GW(p)

athmwiji: if I understood correctly, you said that the concept of the physical world arises from our subjective experiences, and even if we explain that consistently, there still remain subjective experiences which we can't. We could for example imagine a simulated world in which everyone has silicon-based brains, including, at first sight, you, but in the real world you're still a human with a traditional flesh-based brain. There would then be no physics which you could use to explain your headache with in-world mechanisms.

But without assuming that you're in such a special position in the world, you just have to explain why the other seemingly conscious beings (including AIs made by us) argue that they must have special subjective experiences which aren't explainable by the objective physics. (In fact, the whole thought process is explainable.) I think it's the same as free will...

Tiiba: no, the Martians wouldn't be able to contradict our math, as it's a model about what's happening around us, of the things we perceive. They wouldn't have different anticipations of real-world facts, but they would have different concepts, as their "hardware" differs from ours, and so do their models. If our world consisted of fluid, seemingly infinitely divisible things, I don't think we would understand prime numbers at all... (As quantum mechanics doesn't seem to be intuitive to us.)

So I can imagine another math in which 2+2=5 is not obviously false, but needs a long proof and complicated equations...

comment by michael_vassar3 · 2008-03-31T18:00:53.000Z · LW(p) · GW(p)

Ben Jones: Subjective phenomena are the objective measurements. Measuring proxies for them may be useful for many reasons, but the phenomena are the basic stuff that we are presented with. An AGI doesn't try to prove that it has an "objective measurable" bitstream of inputs. Rather, it has one and tries to derive the nature of the world and self within which that bitstream probably exists.

Richard: I think that scientism is real, common, and a serious problem, but I don't think that your "precise" use of the term characterizes anything real. Rather, scientism is a lack of philosophical reflection, lack of knowledge that there exist communities of reasonably committed truth seekers other than scientists, and identity, expressed through mimicry of common surface features, with the typical members of said community. Such features include naive atheism, incoherent skepticism, credentialism, denunciation of large numbers of academic communities as existing on the other side of an imaginary line between the 'scientific' and 'non-scientific' and fairly useless and clueless futzing around in laboratories. Aggressive questioning of such people may cause them to make the claims that you attribute to scientism, but bad poll design can elicit all sorts of confabulation. These people no more believe in verificationism than naive theists believe in an omniscient god who "tested" Abraham. Less really. When not hounded by philosophers they don't even believe in their belief in it. It doesn't guide their actions, thoughts, or verbalizations, all of which speak to the belief in mathematical analysis, skilled question and hypothesis formation, skilled observation and tool and experiment design, thought experiments (with Einstein as the exemplar of their use), visualization, and even to some extent rigorous logical argument (though they have no idea how far this can be taken by those truly skilled in it, and thus trust it less than they should while holding it to much lower standards than they should). They aren't helped to escape from their confusion by a philosophy profession that does tolerate much that simply is religion (think Hegel) so long as it is promoted by someone who once did even a bit of genuine good philosophy. Think what the prestige of science would be like if the scientific community a) wasn't involved in producing technology and b) advocated familiarity with Newton's extensive alchemical and theological discourses as on par with his theory of gravity.

It seems to me that Eliezer has just presented a fairly good demonstration of how such a description might very plausibly contain a logical contradiction. His argument is, as yet, far from compelling, but it is a strong enough analogy that it seems to me that rational pseudo-Bayesian truth seekers, as opposed to adherents of traditional rules of debate, must at least accept the possibility as very plausible, especially given the fact that currently prestigious and fairly scientifically competent (but not really at the level of general scientific competence that I have seen to characterize even ordinary science professors at top-50 US universities) philosophers do make arguments assuming confusions exactly analogous to that in Eliezer's example, for instance, by positing a world exactly like ours except that water on this world is not H2O.

P.S. any chance you could convince Chalmers to make an appearance in the comments here?

comment by mtraven · 2008-03-31T18:29:09.000Z · LW(p) · GW(p)

"Algorithms are made from math" -- indeed, mathematical objects of any kind also have the peculiar properties that I noted. A hexagon is a hexagon no matter what it's made of. A hand is a hand not because its composed of flesh, but because it has certain parts in certain relationships, and is itself attached to a brain. Robotic hands are hands. While there is nothing magically non-physical going on with minds or hands, it does not seem to me that a theory of hands or minds can be expressed in terms of physics. This is the sense in which I am an antireductionist. There are certain phenomena (mathematics most clearly) which, while always grounded n some physical form, seem to float free of physics and follow their own rules.

comment by Scott_Scheule · 2008-03-31T18:47:30.000Z · LW(p) · GW(p)

Mtraven,

This is vintage Platonic idealism, no? Not to criticize, just to clarify.

Vassar,

Does Richard have some pull with Chalmers I don't know about?

Incidentally, Richard is presenting here--well--many of Chalmers' arguments in The Conscious Mind. A good book.

comment by Tiiba3 · 2008-03-31T19:15:05.000Z · LW(p) · GW(p)

"So I can imagine another math in which 2+2=5 is not obviously false, but needs a long proof and complicated equations..."

So, from the fact that another mind might take a long time to understand integer operations, you conclude that it has "another math"? And what does that mean for algorithms?

If an intelligence is general, it will be able to, in time, understand any concept that can be understood by any other general or narrow intelligence. And then use it to create an algorithm. Or be conquered.

Replies from: bigjeff5
comment by bigjeff5 · 2011-02-03T01:36:40.890Z · LW(p) · GW(p)

Think of binary arithmetic versus decimal arithmetic versus hexadecimal arithmetic.

Certain things in each of these arithmetics are extremely easy compared to the other two.

For example, in binary, multiplying by 2 is absurdly easy, but multiplying by 10 is much harder. Multiplying by 16 is actually slightly easier than 10, as there are some cool tricks that apply between the two sets.

In decimal, multiplying by 10 is never hard, no matter how big the number. Multiplying by 2 can be hard if the number is big enough, but it's still pretty easy. Multiplying by 16 takes some mental gymnastics right from the get-go (well, for most people anyway).

You see the pattern, so I won't do hex.

Basic floating point arithmetic is quite easy in decimal, but doing this in binary is significantly more difficult and often results in non-terminating representations, akin to one third or pi in decimal. 10.06 might look nice and clean in decimal, but it's a nightmare in binary. The net result in computer science is that you have to be very, very careful with binary rounding errors, since almost every floating point calculation is going to require rounding for most numbers.
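
A quick illustration in Python (the hazard lives in the binary representation itself, not in any particular language):

```python
# 0.1 and 0.2 have no finite binary representation, so the stored
# values are approximations and the error surfaces in arithmetic.
print(0.1 + 0.2)           # 0.30000000000000004
print(0.1 + 0.2 == 0.3)    # False

# The "nice and clean" decimal 10.06, as actually stored in binary:
print(f"{10.06:.17f}")     # something like 10.05999999999999961

# One workaround is exact decimal arithmetic:
from decimal import Decimal
print(Decimal("0.1") + Decimal("0.2"))   # 0.3
```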

And that's just starting with a different number of digits on your hands. Imagine if you looked at the world in a completely different way than we do, what would math look like? The physics wouldn't change, but perhaps calculus would be as easy as addition is to us, while subtraction requires 8 years of schooling to wrap your head around.

What if Martians could follow the movements of electrons, but couldn't tell that their fingers, thumb, and palm were the same thing as their hand? What would their math look like then?

comment by michael_vassar3 · 2008-03-31T19:27:56.000Z · LW(p) · GW(p)

Scott: I know the arguments from The Conscious Mind. Richard studied with Chalmers.

comment by Scott_Scheule · 2008-03-31T19:33:40.000Z · LW(p) · GW(p)

Mtraven,

I was addressing the entire thread, not you specifically.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-03-31T20:04:54.000Z · LW(p) · GW(p)

Latanius: an algorithm is a model in our mind to describe the similarities of those physical systems implementing it

Latanius understands reductionism.

Unknown: He hasn't yet said that it is logically impossible to have something that acts like a conscious human being, but made out of other physical stuff, without consciousness.

If by "acts like" you mean "produces similar output, but with a different algorithm and different internals" then I do think consciousness will probably prove easy to fake, so long as you could study human cognition to find out what it was you were faking. Otherwise a non-conscious system would only spontaneously fake consciousness, without having computed what-to-fake using an algorithm itself conscious, at extremely low probability - like, egg spontaneously unscrambling itself probability, or the probability that a fake calculator gives correct answers to questions of the form "2 + 2 = ?" using quantum randomness rather than performing any systematic arithmetic operation.

athmwiji: Explaining subjective experience in terms of quarks is rather like trying to explain quantum mechanics in terms of aerodynamics. You will never get there. Not because subjective experience defies the laws of nature in some mysterious way, but because you would simply be going in the wrong direction.

Mind Projection Fallacy. If, in your model of the world, you start from elements you identify as 'experience' and infer backward to arrive at knowledge of things like atoms, it doesn't follow that, in physics, atoms are made of experience rather than the other way around. When you hear thunder, you infer lightning, but this does not mean thunder is the cause of lightning, or that thunder is more fundamental than lightning. The order of inference exists in your map, not in the territory.

Vassar: To be fair, there are "materialists" who do make claims like these and they are guilty of scientism.

Agreed.

Richard: P.S. I use 'scientism' very precisely, to refer to those who hold the indefensible assumption that empirical inquiry is the only form of inquiry (and associated verificationist claims, e.g. that only scientific discourse is coherent or meaningful).

If I believe that mathematicians and even philosophers are performing a scientifically useful form of activity, but I hold that this ability will be treated by future Bayesians as a kind of empirical observation of one's own brain processes and legitimate deductions therefrom on impossible possible worlds, thereby yielding testable differences of anticipation, am I a scientismist?

PS: I've read some of Chalmers's journal articles but not The Conscious Mind.

comment by athmwiji · 2008-03-31T21:00:16.000Z · LW(p) · GW(p)
  • Ben Jones. I am not arguing against reductionism. I am arguing in favor of reductionism. My point is that fundamental particles are not the deepest level we can reduce to.

  • Latanius. I did not mean that my experience is in some way special, but rather that if you start with a model that does not involve observers, I do not think you will be able to derive the existence of an observer, even if you can predict their behavior with some accuracy. You might, for example, predict that an AI will act in a manner that we would recognize as angry, but you will have no way to approach the question of whether or not the AI is actually experiencing anger, nor any way to even really understand what it means to act in an angry manner. Furthermore, the idea that the universe amounts to billiard balls bouncing around without any observers is a bias that physics started with and has been trying to rid itself of. It has already partially done this by incorporating observers into physical models, by for instance noting that the mass of an object depends on the inertial frame of reference of the observer, but I think physics has farther to go to this end, and I think the way to go about this is to define basic elements of physical models directly in terms of the subjective experience of observers, rather than skipping that step and jumping right to fundamental particles, which is an abstract concept that comes from our intuition.

comment by athmwiji · 2008-03-31T21:52:14.000Z · LW(p) · GW(p)

"The order of inference exists in your map, not in the territory."

I agree completely. I would however say that "atoms" and "lightning" as concepts must categorically be part of a map, not part of the territory. There is something atom-like about the territory in so far as the consequence of atoms in our maps is consistent with our experiences which come from the territory, but the similarity ends there. I would not be willing to conclude from this that the territory actually implements atoms in the same way that they are implemented in our maps, and I think to do so would be a mind projection fallacy. As such I doubt that atoms as we understand them are actually part of the territory.

comment by poke · 2008-03-31T23:15:59.000Z · LW(p) · GW(p)

As a card-carrying "scientismist" let me explain my position a bit. I take the worthlessness of non-scientific inquiry to be a point of scientific fact. Philosophy is, for me, about as likely as telekinesis. Brains just don't do the sorts of things philosophers want them to. Nothing could. That your imagination can concoct an idea tells us something about your imagination and not the world. The noises that come out of your mouth and the marks you make on paper are just noises and marks. This is the position our account of the physical world puts us in. Computers don't help; information processing doesn't help; no amount of matter put together gets you a closed machine that churns out truths about the world.

Science is unaffected by this. To do science you don't need any cognitive magic and you don't need to perceive the world in a particular way. To do science all you need is for perception and cognition to remain the same (or similar) through time and between scientists. As long as this is true you can make measurements and manipulate mathematical equations (i.e., as long as we all agree on how many seconds the clock ticked, or how many millimeters were measured, or that the litmus turned red, the relationship between the perceived objects and our perception of them does not matter).

The fact that scientists perform thought experiments and have arguments and so forth isn't a problem; these things are part of our uniquely human approach to science (we're also bipedal and have color vision; this isn't strictly relevant to science either but you couldn't explain much of what a human scientist does without it; an alien scientist who spent his sabbatical at Earth U probably wouldn't be able to use any of the apparatus and our mathematical notation would no doubt be a source of endless frustration for him). To be sure, there are clearly behaviors that are necessary for science (dogs don't make good scientists), but there's no reason these need fall under some general category. Alien scientists might have a completely different cognitive make-up than our own.

What thinking and speaking and writing are for, the scientist realizes, are problem solving and communication. We shouldn't confuse the scientist's problem solving and the philosopher's big-R Reasoning; the scientist couldn't care less about normative strictures (correct reasoning) as long as the physical situation is accounted for. The scientist can therefore apply all the faculties of his mind to a problem (including all the supposedly irrational bits). When a scientist makes a statement, it is not a step in a philosophical argument, it's merely a means to communicate. (Likewise, none of the above is a philosophical argument, it's merely a description.)

comment by mtraven · 2008-03-31T23:26:34.000Z · LW(p) · GW(p)

I wouldn't call my view "vintage Platonic idealism", but maybe it is, I'm not a philosopher. I'm not saying that forms are more primitive or more metaphysically basic than matter, just that higher-level concepts are not derivable in any meaningful way from physical ones. Maybe that makes me an emergentist. But this philosophical labeling game is not very productive, I've found.

comment by Caledonian2 · 2008-04-01T00:04:16.000Z · LW(p) · GW(p)

We don't have any conscious insight into how our small intestines function, or what's going on in our cellular machinery. That does not constitute evidence that nothing is happening there. There are far more things that we have no conscious awareness of and yet still occur than processes we're aware of. Almost infinitely more, in fact.

Even the most introductory and rudimentary college psychology course presents plenty of examples of ways in which our perceptions fail us, and our minds are riddled with illogical biases that we can overcome only with significant effort. Why in the world would you trust your intuition when simple, basic logic contradicts it?

People cling to the idea that there's something inherently special about living things, and humans in particular. I suspect this is a side effect of our minds developing two general categorization systems to handle things: one for things that obey simple kinematic principles, like thrown rocks, and one for things that can move unpredictably, like deer and people and dragonflies.

comment by Brian_Macker · 2008-04-01T08:28:29.000Z · LW(p) · GW(p)

mtraven,

I'm trying to understand why you're finding mystery where I see none.

"Nonetheless, it is mysterious how physical systems with nothing physical in common can realize the same algorithm."

Would you feel the same mystery in a playground where there were side by side swings, one made with rope and the other with chain?

Chain is not only made of completely different material, but is also flexible by a completely different mechanism than rope. Yet both are flexible and both can serve the purpose of making a swing.

The flexibility is emergent in both cases but at different levels. The flexibility of the rope is emergent at the molecular level, whereas the chain is flexible at the mechanical level.

"That suggests that the algorithm itself is not a physical thing, but something else."

In the sense that the flexibility is something else. However algorithms (especially running ones) and flexibility do not "exist" unconnected to the physical objects that exhibit them. Just like the other guy pointed out, the number four doesn't exist by itself but can be instantiated in objects. Like a fork having four tines.

Note in the above paragraph I was assuming a very big difference between an algorithm running on a computer, written on a piece of paper, or memorized by a student. Only an actually running algorithm is instantiated in a way important to your example. On paper it's only a representation being used for communication.

When you flipped to speaking of "the algorithm" you were talking about it as an attribute. It's then very easy in English to equivocate between the two meanings of attribute, the conceptual and the reified. Flexibility as a concept is easily confused with flexibility as instantiated in a particular object. The concept resides in your head as a general model, while the actual flexibility of the object is physical. Well actually the concept in your head is physical also, but in a completely different way.

Not sure what you find mysterious in all this. Something does or does not fit the model the concept describes. If it fits then its behavior will be predicted by the model and will match any other object that fits. Flexible things flex. Things running the algorithm for addition do addition.

comment by Brian_Macker · 2008-04-01T09:35:31.000Z · LW(p) · GW(p)

Poke,

You made an important point in that scientists don't prove things in a foundationalist way. They aren't even attempting to do that; they have solved the problem of human fallibility, and the lack of any foundation to knowledge, by just accepting them as givens. Accepted as givens, the issue is then how to deal with those facts. The answer is to come up with methodologies to reduce error.

Some philosophers get this, and some don't. Popper understood. My philosophy teacher didn't. I've noticed a correlation in my experience that the philosophers who don't get it tend to be in the camp of dualists and theologians. They use philosophy to try to discredit science.

I do however think that the philosophers who do "get it" can come up with valuable tools. Tools for recognizing flaws in our deductions and arguments. So I don't think the discipline is completely devoid of value.

comment by Ben_Jones · 2008-04-01T10:11:41.000Z · LW(p) · GW(p)

You might, for example, predict that an AI will act in a manner that we would recognize as angry, but you will have no way to approach the question of whether or not the AI is actually experiencing anger, nor any way to even really understand what it means to act in an angry manner.

Athmwiji, what do you mean by 'actually experiencing anger'? How is it different from what an AI would do when 'angry'? [Please taboo 'subjective phenomenon'!]

comment by Nick_Tarleton · 2008-04-01T16:53:31.000Z · LW(p) · GW(p)

Brian, the question is not why the senses feel the way they do, but why they feel like anything at all.

comment by mtraven · 2008-04-01T17:19:47.000Z · LW(p) · GW(p)

Brian Macker: Mysterious was maybe the wrong word. Let's say rather that physical reduction just doesn't help explain some higher-level phenomenon.

Your swing example is interesting. There are obvious physical similarities between the two systems (rotation, tension, etc) even if the two swings are made of different materials. But consider the task of adding up a column of 4-digit numbers. You do it with pencil and paper, I use a calculator. There is nothing physical in common between these two activities, but surely they have something in common.

"However algorithms (especially running ones) and flexibility do not 'exist' unconnected to the physical objects that exhibit them. Just like the other guy pointed out, the number four doesn't exist by itself but can be instantiated in objects. Like a fork having four tines."

I agree with this.

"The concept resides in your head as a general model, while the actual flexibility of the object is physical."

These concepts that reside in my head are funny things. Presumably they have a physical incarnation in my brain, but they probably have a rather different incarnation in yours. And if we could talk to silicon-based lifeforms from Altair, we would probably find they have a concept of "four", and maybe even one of "flexible", which is similar to ours but has nothing physical in common with ours.

You don't have to consider this mysterious if you don't want to. But it suggests to me that the reductionist way of looking at the world is, if not wrong, not that useful. You could know all about the states of my neurons' calcium channels, and it would not help you understand my argument.

comment by RobinHanson · 2008-04-01T17:32:43.000Z · LW(p) · GW(p)

Brian Macker, your 1100 word comment was way too long and has been unpublished. If you need that many words to make your point, post it elsewhere and just give a link here.

comment by Brian_Macker · 2008-04-01T22:31:49.000Z · LW(p) · GW(p)

Robin,

You make it seem like my point was singular. There were lots of points. I'll carry on the discussion with Scott over at Distributed Republic blog.

You have an unusual comment policy that I wasn't aware of. Deleting comments merely for length is quite unusual when 50 megabytes of storage costs about a penny. I'd have had to repost that same long comment somewhere around 500 times before it would cost a cent.

Now that I have read your policy I will try to color inside the lines. So, no problem, email me the contents of the post and I'll copy it to Distributed Republic. If you've lost it, as is likely, no problem either as I'm a prolific writer.

comment by Brian_Macker · 2008-04-01T23:06:55.000Z · LW(p) · GW(p)

Mtraven,

"There is nothing physical in common with these two activities, but surely they have something in common."

Having something in common is an easy hurdle. Pencil and paper is vastly more prone to error. You have to remember that when you conceptualize the similarities, that doesn't mean the reality matches your conception. You might think the counting of apples maps nicely onto the integers, but it doesn't. Not for very large numbers. A pile of three apples maps nicely to the number three, but a pile of 1x10^34 apples would collapse into a black hole.
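
A back-of-envelope check of that last claim (Python; the apple mass and volume are my own rough assumptions): the pile would start out far larger than its Schwarzschild radius, but roughly 500 solar masses of self-gravitating apples won't just sit there as a pile either. Either way, counting stops mapping onto reality.

```python
import math

# Rough assumptions: an apple masses ~0.1 kg and occupies ~3e-4 m^3.
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
c = 3.0e8       # speed of light, m/s
N = 1e34        # apples in the pile
m_apple, v_apple = 0.1, 3e-4

M = N * m_apple                          # ~1e33 kg, roughly 500 solar masses
r_schwarzschild = 2 * G * M / c**2       # ~1.5e6 m
r_pile = (3 * N * v_apple / (4 * math.pi)) ** (1 / 3)   # ~9e9 m

print(f"Schwarzschild radius: {r_schwarzschild:.1e} m")
print(f"Radius of the pile:   {r_pile:.1e} m")
```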

"You don't have to consider this mysterious if you don't want to. But it suggests to me that the reductionist way of looking at the world is, if not wrong, not that useful." Reductionism properly understood is but one tool in a toolkit, and one that has an extremely successful track record.

My position on this is very close to Dawkins.

"Reductionism is one of those words that makes me want to reach for my revolver. It means nothing. Or rather it means a whole lot of different things, but the only thing anybody knows about it is that it's bad, you're supposed to disapprove of it. (Dawkins)"

Remember we are talking here about your sentence: "Nonetheless, it is mysterious how physical systems with nothing physical in common can realize the same algorithm."

Why classify as "reductionist" my ability to directly understand what you find mysterious? I've got a degree in Computer Science, so I damn well better understand why the same algorithms can run on different physical systems. In fact, part of my job is designing such algorithms so they can run on physically different systems. An IBM mainframe, a Mac, and an Intel box are completely different physical systems even if you don't recognize that fact.

I also fully understand how pen and paper calculations and those done by a calculator or computer map onto each other. Thirty years ago computer time was far more valuable and access to time on computers was much less available. I had to actually write machine code with actual ones and zeros, and then hand simulate the running of those particular bytes on a computer. I did a respectable enough job to find bugs before I got shared time on the computer to actually run it. I understand precisely the mapping and why it works. Hell, I understand the electronics behind it.
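
For flavor, here's a toy version of that hand-simulation exercise (Python, with a made-up two-byte instruction set; the real machine code was nothing like this): the same bytes mean the same computation whether silicon executes them or a person traces them on paper.

```python
# Hypothetical instruction set: 0x00 HALT, 0x01 LOAD immediate, 0x02 ADD immediate.
PROGRAM = bytes([0x01, 0x05,   # LOAD 5  -> acc = 5
                 0x02, 0x03,   # ADD  3  -> acc += 3
                 0x00])        # HALT

def run(program):
    acc, pc = 0, 0
    while True:
        op = program[pc]
        if op == 0x00:         # HALT: stop and report the accumulator
            return acc
        elif op == 0x01:       # LOAD immediate
            acc = program[pc + 1]
        elif op == 0x02:       # ADD immediate
            acc += program[pc + 1]
        pc += 2                # step past opcode and operand

assert run(PROGRAM) == 8       # the same answer a paper trace gives
```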

The mystery evaporates with understanding.

comment by Brian_Macker · 2008-04-01T23:23:01.000Z · LW(p) · GW(p)

"Brian, the question is not why the senses feel the way they do, but why they feel like anything at all."

Do you have any personal experience with beings with consciousnesses that don't feel their own senses? Seems to me you'd need some basis of comparison before assuming that senses shouldn't feel like anything at all.

Your senses don't feel like anything to me. Think that has anything to do with the fact that we don't share a brain?

Besides, you are in part wrong; the question has been precisely why the senses feel the way they do. Why is red "red" and blue "blue"? Unfortunately, Robin removed my discussion of qualia and my indication of why the answer to "why" is more about engineering than philosophy.

Besides, your question is now existential to the point where it can be asked of matter directly. Suppose we discover precisely "why we feel anything at all," and the answer is precisely because of properties of material things. Well, then the question would not be considered closed by a philosopher. He'd just ask why there are things at all.

That's four points.

comment by mtraven · 2008-04-02T00:17:40.000Z · LW(p) · GW(p)

I've got degrees in Math and CS (more or less), and fully understand how algorithms are implemented, so don't give yourself airs. (In fact you might enjoy this). You can't debug a program in terms of the Schrödinger wave equation, which is what Yudkowsky's position amounts to saying. The mystery is not that algorithms can run on different hardware, but that such runnings are instantiations of the same abstract thing without sharing any physical properties. That indicates that the thing being instantiated is not a physical thing. While this does not prove the existence of gods or ghosts, it is still something of a conceptual problem for strong reductionism. Please at least take the trouble to understand what I'm saying before writing long posts explaining why I'm wrong.

comment by Nick_Tarleton · 2008-04-02T01:58:00.000Z · LW(p) · GW(p)

mtraven, Yudkowsky is saying that the Schrödinger equation provides a causally complete account of the program's execution. You can't deny this without positing new physics. You actually could debug the program in terms of the wave function, you'd just have to be superintelligent and insane.

An algorithm can reduce to any of many very different physical representations. How is this any odder than saying 4 quarks and 4 apples are both 4 of something?

comment by Caledonian2 · 2008-04-02T03:40:00.000Z · LW(p) · GW(p)
Why is red "red" and blue "blue"?

Oh, that's easy: they aren't. Because there's no such thing as "red" and "blue".

comment by mtraven · 2008-04-02T04:55:00.000Z · LW(p) · GW(p)

Nick said: Yudkowsky is saying that the Schrödinger equation provides a causally complete account of the program's execution.
The Schrödinger equation, let's agree, provides a mechanistic account of the evolution of the physical system of a computer, or brain, or whatever. But it does just as well for a random number generator, or a pile of randomly-connected transistors, or a pile of sand. Whatever makes the execution a sensible mathematical object is not found in the Schrödinger equation.

An algorithm can reduce to any of many very different physical representations. How is this any odder than saying 4 quarks and 4 apples are both 4 of something?
It isn't. Four-ness is also odd, just not as obviously so. Like algorithms, it too is not to be found in the Schrödinger equation. I'm hardly the first person in the world to point out that the nature of mathematical objects is a difficult philosophical question.

I'm not trying to introduce new physical mechanisms, or even metaphysical mechanisms. Let's grant that the universe, including the minds in it, runs by the standard physical laws. But the fact that mechanical laws produce comprehensible structures, and minds capable of comprehending them, is exceedingly strange. Even if we understood brains down to the neural level, and could build minds out of computers, it would still be strange.

comment by UnclGhost · 2010-12-07T00:41:57.346Z · LW(p) · GW(p)

Missing word in 28th paragraph - "A good (?) easier".

comment by David_Gerard · 2011-01-10T12:10:29.451Z · LW(p) · GW(p)

Something that I have found useful in comprehending the gap between the primitive hunter-gatherer's thinking and reductionist materialist thinking:

You know New Age/alt. med. puffery, and how it annoys you? That's because it has a deep strain of vitalism. If you think of the annoying stupidity as the effects of vitalistic thinking, you'll know what a vitalistic world view looks like. Fundamental everything. Spirits in everything. (And dolphins, for some reason.)

comment by Dmytry · 2012-03-20T04:26:05.746Z · LW(p) · GW(p)

Democritus imagined this 2400 years ago, and there was a time, roughly 1803-1922, when Science thought he was right.

Here's the coolest bit: they actually had very good arguments for atomism, especially regarding mixing things up homogeneously and then separating them back out. And they postulated a very small number of atoms; if we didn't use their word "atom" for the chemical atom, we would probably have used "atoms" to mean quanta, which are the indivisible units of now. The argument for there being some small number of something indivisible is the logic that infers something indivisible from dissolution and re-purification, combined with Occam's razor on the number of indivisible things, combined with belief in reductionism. It wasn't a lucky guess for them any more than it was for us.

We also guessed quanta the same way they guessed atoms from dissolution and purification: we had the photoelectric effect.