Reductionism

post by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-03-16T06:26:38.000Z · LW · GW · Legacy · 161 comments

Almost one year ago, in April 2007, Matthew C submitted the following suggestion for an Overcoming Bias topic:

"How and why the current reigning philosophical hegemon (reductionistic materialism) is obviously correct [...], while the reigning philosophical viewpoints of all past societies and civilizations are obviously suspect—"

I remember this, because I looked at the request and deemed it legitimate, but I knew I couldn't do that topic until I'd started on the Mind Projection Fallacy sequence, which wouldn't be for a while...

But now it's time to begin addressing this question.  And while I haven't yet come to the "materialism" issue, we can now start on "reductionism".

First, let it be said that I do indeed hold that "reductionism", according to the meaning I will give for that word, is obviously correct; and to perdition with any past civilizations that disagreed.

This seems like a strong statement, at least the first part of it.  General Relativity seems well-supported, yet who knows but that some future physicist may overturn it?

On the other hand, we are never going back to Newtonian mechanics.  The ratchet of science turns, but it does not turn in reverse.  There are cases in scientific history where a theory suffered a wound or two, and then bounced back; but when a theory takes as many arrows through the chest as Newtonian mechanics, it stays dead.

"To hell with what past civilizations thought" seems safe enough, when past civilizations believed in something that has been falsified to the trash heap of history.

And reductionism is not so much a positive hypothesis, as the absence of belief—in particular, disbelief in a form of the Mind Projection Fallacy.

I once met a fellow who claimed that he had experience as a Navy gunner, and he said, "When you fire artillery shells, you've got to compute the trajectories using Newtonian mechanics.  If you compute the trajectories using relativity, you'll get the wrong answer."

And I, and another person who was present, said flatly, "No."  I added, "You might not be able to compute the trajectories fast enough to get the answers in time—maybe that's what you mean?  But the relativistic answer will always be more accurate than the Newtonian one."

"No," he said, "I mean that relativity will give you the wrong answer, because things moving at the speed of artillery shells are governed by Newtonian mechanics, not relativity."

"If that were really true," I replied, "you could publish it in a physics journal and collect your Nobel Prize." 

Standard physics uses the same fundamental theory to describe the flight of a Boeing 747 airplane, and collisions in the Relativistic Heavy Ion Collider.  Nuclei and airplanes alike, according to our understanding, are obeying special relativity, quantum mechanics, and chromodynamics.

But we use entirely different models to understand the aerodynamics of a 747 and a collision between gold nuclei in the RHIC.  A computer modeling the aerodynamics of a 747 may not contain a single token, a single bit of RAM, that represents a quark.

So is the 747 made of something other than quarks?  No, you're just modeling it with representational elements that do not have a one-to-one correspondence with the quarks of the 747.  The map is not the territory.

Why not model the 747 with a chromodynamic representation?  Because then it would take a gazillion years to get any answers out of the model.  Also we could not store the model on all the memory on all the computers in the world, as of 2008.

As the saying goes, "The map is not the territory, but you can't fold up the territory and put it in your glove compartment."  Sometimes you need a smaller map to fit in a more cramped glove compartment—but this does not change the territory.  The scale of a map is not a fact about the territory, it's a fact about the map.

If it were possible to build and run a chromodynamic model of the 747, it would yield accurate predictions.  Better predictions than the aerodynamic model, in fact.

To build a fully accurate model of the 747, it is not necessary, in principle, for the model to contain explicit descriptions of things like airflow and lift.  There does not have to be a single token, a single bit of RAM, that corresponds to the position of the wings.  It is possible, in principle, to build an accurate model of the 747 that makes no mention of anything except elementary particle fields and fundamental forces.

"What?" cries the antireductionist.  "Are you telling me the 747 doesn't really have wings?  I can see the wings right there!"

The notion here is a subtle one.  It's not just the notion that an object can have different descriptions at different levels.

It's the notion that "having different descriptions at different levels" is itself something you say that belongs in the realm of Talking About Maps, not the realm of Talking About Territory.

It's not that the airplane itself, the laws of physics themselves, use different descriptions at different levels—as yonder artillery gunner thought.  Rather we, for our convenience, use different simplified models at different levels.

If you looked at the ultimate chromodynamic model, the one that contained only elementary particle fields and fundamental forces, that model would contain all the facts about airflow and lift and wing positions—but these facts would be implicit, rather than explicit.

You, looking at the model, and thinking about the model, would be able to figure out where the wings were.  Having figured it out, there would be an explicit representation in your mind of the wing position—an explicit computational object, there in your neural RAM.  In your mind.
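
A toy illustration, sketched in Python with made-up numbers: the "low-level model" below stores nothing but particle masses and positions. A center of mass is implicit in that data; it becomes an explicit object only when an observer computes it, in the observer's memory rather than in the model.

```python
# A "low-level model": nothing but particles. No token in this list
# represents a center of mass, a wing, or lift.
particles = [
    # (mass_kg, x_m, y_m) - toy numbers standing in for ~1e30 quarks
    (2.0, 0.0, 0.0),
    (1.0, 3.0, 0.0),
    (1.0, 0.0, 4.0),
]

# You, studying the model, derive a high-level fact. The fact was
# implicit in the data all along; the explicit object exists only here:
total_mass = sum(m for m, _, _ in particles)
com_x = sum(m * x for m, x, _ in particles) / total_mass
com_y = sum(m * y for m, _, y in particles) / total_mass
print(com_x, com_y)   # 0.75 1.0
```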

You might, indeed, deduce all sorts of explicit descriptions of the airplane, at various levels, and even explicit rules for how your models at different levels interacted with each other to produce combined predictions—

And the way that algorithm feels from inside, is that the airplane would seem to be made up of many levels at once, interacting with each other.

The way a belief feels from inside, is that you seem to be looking straight at reality.  When it actually seems that you're looking at a belief, as such, you are really experiencing a belief about belief.

So when your mind simultaneously believes explicit descriptions of many different levels, and believes explicit rules for transiting between levels, as part of an efficient combined model, it feels like you are seeing a system that is made of different level descriptions and their rules for interaction.

But this is just the brain trying to efficiently compress an object that it cannot remotely begin to model on a fundamental level.  The airplane is too large.  Even a hydrogen atom would be too large.  Quark-to-quark interactions are insanely intractable.  You can't handle the truth.

But the way physics really works, as far as we can tell, is that there is only the most basic level—the elementary particle fields and fundamental forces.  You can't handle the raw truth, but reality can handle it without the slightest simplification.  (I wish I knew where Reality got its computing power.)

The laws of physics do not contain distinct additional causal entities that correspond to lift or airplane wings, the way that the mind of an engineer contains distinct additional cognitive entities that correspond to lift or airplane wings.

This, as I see it, is the thesis of reductionism.  Reductionism is not a positive belief, but rather, a disbelief that the higher levels of simplified multilevel models are out there in the territory.  Understanding this on a gut level dissolves the question of "How can you say the airplane doesn't really have wings, when I can see the wings right there?"  The critical words are really and see.

161 comments

Comments sorted by oldest first, as this post is from before comment nesting was available (around 2009-02-27).

comment by mitchell_porter2 · 2008-03-16T08:18:09.000Z · LW(p) · GW(p)

This denial that "higher level" entities actually exist causes a problem when we are supposed to identify ourselves with such an entity. Does the mind of a cognitive scientist only exist in the mind of a cognitive scientist?

Replies from: rkyeun, max-hodges
comment by rkyeun · 2011-04-11T00:15:48.050Z · LW(p) · GW(p)

The belief that there is a cognitive mind calling itself a scientist only exists in that scientist's mind. The reality is undecatillion swarms of quarks not having any beliefs, and just BEING the scientist.

Replies from: TheAncientGeek
comment by TheAncientGeek · 2017-11-10T10:31:22.548Z · LW(p) · GW(p)

That observation runs headlong into the problem, rather than solving it.

Replies from: entirelyuseless
comment by entirelyuseless · 2017-11-10T14:05:13.020Z · LW(p) · GW(p)

Exactly. "The reality is undecatillion swarms of quarks not having any beliefs, and just BEING the scientist." Let's reword that. "The reality is undecatillion swarms of quarks not having any beliefs, and just BEING 'undecatillion swarms of quarks' not having any beliefs, with a belief that there is a cognitive mind calling itself a scientist that only exists in the undecatillion swarms of quarks's mind."

There seems to be a logic problem there.

Replies from: rkyeun
comment by rkyeun · 2017-12-26T09:56:46.376Z · LW(p) · GW(p)

Composition fallacy. Try again.

Replies from: entirelyuseless
comment by entirelyuseless · 2018-01-06T01:26:22.426Z · LW(p) · GW(p)

Nope. There is no composition fallacy where there is no composition. I am replying to your position, not to mine.

comment by Max Hodges (max-hodges) · 2020-05-05T18:18:11.317Z · LW(p) · GW(p)

Answering the question of who is experiencing the illusion [of self] or interpreting the story is much more problematic. This is partly a conceptual problem and partly a problem of dualism. It is almost impossible to discuss the self without a referent, in the same way that it is difficult to think about a play without any players. Second, as the philosopher Gilbert Ryle pointed out, in searching for the self, one cannot simultaneously be the hunter and the hunted, and I think that is a dualistic problem if we think we can objectively examine our own minds independently, because our mind and self are both generated by the brain. So while the self illusion suggests an illogical tautology, I think this is only a superficial problem.

-Bruce Hood

comment by Aaron_Boyden · 2008-03-16T08:26:01.000Z · LW(p) · GW(p)

One minor quibble; how do we know there is any most basic level?

Replies from: RafeFurst, Furcas, DanielLC, ronny-fernandez, Basil Marte
comment by RafeFurst · 2010-03-07T16:58:35.596Z · LW(p) · GW(p)

Agreed. Why would we believe a quark is not "emergent"? Could be turtles all the way down....

comment by Furcas · 2010-03-07T17:08:02.741Z · LW(p) · GW(p)

Because a level being more basic means it's made of (or described by, if you're not a patternist) fewer bits of information, and the only way there can be less than 1 bit is if there's nothing at all.

comment by DanielLC · 2012-02-29T05:52:19.859Z · LW(p) · GW(p)

Levels are an attribute of the map. The territory only has one level. Its only level is the most basic one.

Let's consider a fractal. The Mandelbrot set is built up by running more and more iterations; you could think of each additional iteration as a better map. That being said, either a point is in the Mandelbrot set or it is not. The set itself only has one level.
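
A sketch of the escape-time test (the helper name in_mandelbrot is ours): raising max_iter draws a better map, but membership itself never changes.

```python
# Escape-time test: iterate z -> z*z + c and watch for escape.
def in_mandelbrot(c: complex, max_iter: int = 100) -> bool:
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:    # provably escapes: c is not in the set
            return False
    return True           # no escape seen: "in", as far as this map can tell

print(in_mandelbrot(-1.0))   # True: the orbit 0, -1, 0, -1, ... stays bounded
print(in_mandelbrot(1.0))    # False: the orbit 0, 1, 2, 5, ... escapes
```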

Replies from: army1987
comment by A1987dM (army1987) · 2012-02-29T11:06:27.701Z · LW(p) · GW(p)

Interesting analogy!

comment by Ronny Fernandez (ronny-fernandez) · 2012-06-08T23:52:41.650Z · LW(p) · GW(p)

Because things happen: if there were no most basic level, figuring out what happens would be an infinite recursion with no base case. Not even the universe's computation could find the answer.
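
As a toy illustration (reality is not literally a Python program, of course), explanation without a most basic level looks like recursion without a base case:

```python
def what_happens(level):
    # No base case: the call never bottoms out, so no answer is ever
    # produced (in practice Python raises RecursionError).
    return what_happens(level - 1)
```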

comment by Basil Marte · 2019-10-10T12:57:02.280Z · LW(p) · GW(p)

There isn't, and the article is committing a type error. The terrain isn't a map, reality isn't a model/theory.

Unless you are using a model to approximate the behavior of a system that is of exactly the same kind, i.e. using a computational model to approximate another computational thingy, in which case you could indeed have a model that exactly coincides with what it is meant to describe. This may even be useful, e.g. in cryptography. But this is an edge case.

comment by Joshua_Fox · 2008-03-16T08:34:30.000Z · LW(p) · GW(p)

Yet something in the real world makes it tractable to create the "map" -- to find those hidden class variables which enable Naive Bayes.

comment by Unknown · 2008-03-16T09:34:55.000Z · LW(p) · GW(p)

This is an example of Eliezer's extreme overconfidence. As he rightly points out, we cannot in fact construct a quantum mechanical model of a 747. Yet he asserts as absolute fact that such a model would be more accurate than our usual models.

I think it would be too. But I don't assert this as absolute fact, much less the universal claim that reality in no way has different levels in it; especially since, as Mitchell points out, one level of reality seems to be our mental representations, which cannot be said to be mere representations of representations. They are precisely real representations.

Replies from: RafeFurst, waveman
comment by RafeFurst · 2010-03-07T16:28:51.478Z · LW(p) · GW(p)

I agree with your skepticism about a QM model of classical-realm mechanics being ipso facto more accurate. Since we agree that insurmountable algorithmic-complexity problems make this an untestable hypothesis, confidence should start out low. And there's lots of circumstantial evidence that the farther you go down the levels of organization in order to explain the higher level, the less accuracy this yields. It's easier to explain human behavior with pre-supposed cognitive constructs (like pattern recognition, cognitive biases, etc.) than with neurological ones.

The map is not the terrain, but maybe the map for level 1 is the terrain for level 2.

"Mere" is the problem.

comment by waveman · 2011-05-10T00:29:04.198Z · LW(p) · GW(p)

This is an example of Eliezer's extreme overconfidence. As he rightly points out, we cannot in fact construct a quantum mechanical model of a 747. Yet he asserts as absolute fact that such a model would be more accurate than our usual models.

This is the point made in "A Different Universe" by Robert B Laughlin, a Nobel Prize-winning physicist. He is a solid state physicist and argues that

  1. Going from a more "fundamental" to "higher" level requires computations that are in principle intractable. You cannot possibly avoid the use of levels of analysis. It is not just a matter of computational convenience. [I admit that the universe does the calculation but we have no idea how].

    Laughlin won his Nobel for "explaining" the fractional quantum Hall effect before anyone else did. But he casts scorn on such explanations, pointing out that of the 27 solid phases of water, not one was predicted, but all have been "explained" after the fact.

  2. Phenomena at higher levels are often, even usually, insensitive to the nature of the levels below. A good example is the statistical mechanics of gases, which hardly changed when our view of the atoms that make up gases changed from hard Newtonian balls to fuzzy quantum blobs.

  3. There is plenty of evidence that "fundamental" physics is just the statistical mechanics of a lower layer, e.g. all those "virtual particles" - what are they all about? "Empty" space seems to be about as empty as the Super Bowl on the day of the big game. There is no evidence that "fundamental" physics is at all fundamental in fact. We don't even have any indication how many layers there are before we get to the turtle at the bottom, if there is one.

Replies from: badger
comment by badger · 2011-05-10T01:27:46.304Z · LW(p) · GW(p)

computations that are in principle intractable

Doesn't the fact that the universe is carrying out these computations mean it is feasible in principle? Our current ignorance of how this is done is irrelevant. Am I missing something?

statistical mechanics of gases, which hardly changed when our view of the atoms that make up gases changed from hard Newtonian balls to fuzzy quantum blobs

This seems to be a map/territory confusion. A change in our model shouldn't change what we observe. If our high level theories changed dramatically, that would be a bad sign.

Replies from: TheAncientGeek
comment by TheAncientGeek · 2015-01-03T22:38:40.243Z · LW(p) · GW(p)

Doesn't the fact that the universe is carrying out these computations mean it is feasible in principle?

It makes the universe Not a Computer in principle.

comment by Ian_C. · 2008-03-16T09:46:35.000Z · LW(p) · GW(p)

Our brain and senses are made out of fundamental particles too, and the image of a plane with wings is the result of the interaction between the fundamental particles out there with the fundamental particles in us.

So I would say the plane image is an effect, not a primary, but that does not make it any less real than the primary. It is a real thing, just as real, that just happens to be further down the chain of cause and effect.

comment by JulianMorrison · 2008-03-16T11:23:09.000Z · LW(p) · GW(p)

Reductionism does have a caveat, and this is "a fact about maps" and not "a fact about the territory": the real world level can be below the algorithm. Example: a CD. A chromodynamic model would spend immense computing resources simulating the heat and location and momentum and bonds of a slew of atoms (including those in the surrounding atmosphere, or the plasticizer would boil off). In reality there are about four things that matter in a CD: you can pick it up, it fits into a standard box, it fits into a standard reader tray, and when you measure the pattern of pits they encode a particular blob of binary data. From a human utility perspective, the CD is fully replaceable with a chromodynamically dissimilar other CD that happens to have those same characteristics.

Computers are full of examples of this, where the least important level is not the fundamental level. In some cases, each level is not just built upon lower levels, but ought to be fully independent of them. If your lisp doesn't implement the lambda calculus because of a silicon fault, an atomic model would correctly represent this, but it would be representing a mathematically unimportant bug. A correct lisp would be representable on any compute substrate, from a Mac to a cranks-and-gears Babbage engine. A model which took account of the substrate would be missing the point.

Replies from: bigjeff5, laofmoonster
comment by bigjeff5 · 2011-02-01T20:52:27.036Z · LW(p) · GW(p)

I think the point is that the model of four elements we use to describe the CD is also contained within the chromodynamic model - the four elements are a less accurate abstraction of the chromodynamic model, even if we don't recognize it as such when we use the more abstract model.

In the same way, Newtonian Mechanics is a less accurate abstraction of Special Relativity.

Therefore, no matter how precise Newtonian Mechanics is, it does not match up exactly with reality. Because it is an abstraction, it contains inaccuracies. The SR version of the same process will always be more accurate than the NM version, though the SR version is also probably not completely accurate.

A correct lisp would be representable on any compute substrate, from a Mac to a cranks-and-gears Babbage engine.

I don't think that is true. For Lisp to mean anything to any machine, it must first be compiled into the machine language of that particular machine. Because this process is fundamentally different for different types of machines, the way the same Lisp behaves on each machine will be highly dependent on its specific translation into machine language. In other words, the same Lisp code will result in slightly different behavior on a Mac than it would on a Linux machine. The difference may not be enough to take any note of, but it is still there.

This is similar to calculating the trajectory of an artillery shell with Newtonian Mechanics vs Special Relativity. The difference between the two will be so small that it is almost unmeasurable, but there will definitely be a difference between them.

Replies from: None
comment by [deleted] · 2012-01-12T23:00:32.927Z · LW(p) · GW(p)

In other words, the same Lisp code will result in slightly different behavior on a Mac than it would on a Linux machine. The difference may not be enough to take any note of, but it is still there.

I am going to have to disagree here. A given Lisp will require a bounded-tape Turing machine of tape size N, head state count M, and symbol table Q. If the ARM processor running Windows NT can supply that, Lisp is possible. If an x86 running Unix can supply that, Lisp is possible. If Lisp behaves differently from the mathematical ideal on any machine, that means the machine is incapable of supplying said Turing machine.

"If the Lisp is untrue to the Specification, that is a fact about the Implementation, not the Mathematics behind it."

Replies from: DSimon
comment by DSimon · 2012-01-17T04:28:24.079Z · LW(p) · GW(p)

What about the speed of operation? The specification does not set any requirements for this, and so two different Lisp implementations which differ in that property can both be correct yet produce different output.

Replies from: None
comment by [deleted] · 2012-01-18T12:12:11.645Z · LW(p) · GW(p)

Even if it runs at one clock cycle per millennium, it would still theoretically be able to run any given program, and produce exactly the same output. The time function is also external to the Lisp implementation; it is a call to the OS, so output that prints the current time doesn't count.

Replies from: DSimon
comment by DSimon · 2012-01-18T20:21:43.899Z · LW(p) · GW(p)

I think we may have to taboo "output", as the contention seems to be about what is included by that word.

Replies from: None
comment by [deleted] · 2012-01-18T20:41:42.377Z · LW(p) · GW(p)

Given a program P consisting of a linear bit-pattern, that is fed into virtual machine L, and produces a linear bit-pattern B written to a section of non-local memory, O. During the runtime of P on L, the only interaction with non-local memory is writing B to O. No bits are passed from non-local memory to local memory.

For all L: if and only if L is true to the specification, then for any P there is only one possible B.

  • P is the lisp program source code, which does not read from stdin, keyboard drivers, web sockets or any similar source of external information.
  • L is a LISP (virtual) machine.
  • B is some form of data, such as text, binary data, images, etc.
  • O is some destination: stdout, a screen, speakers, etc.
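
A toy stand-in for this claim, sketched in Python rather than Lisp (the names are ours): because the program reads nothing from outside, any faithful implementation, on any substrate, must emit byte-identical output.

```python
import hashlib

def program_p() -> bytes:
    # Pure computation: no clock, no stdin, no network, no randomness.
    total = sum(i * i for i in range(1000))
    return str(total).encode()

b = program_p()                        # the unique possible B
print(b)                               # b'332833500'
print(hashlib.sha256(b).hexdigest())   # identical on any correct machine
```
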
Replies from: DSimon
comment by DSimon · 2012-01-18T23:07:09.667Z · LW(p) · GW(p)

Ah, ok, I find nothing to disagree with there. Looking back up the conversation, I see that I was responding to the word "behavior". Specifically, bigjeff5 said:

In other words, the same Lisp code will result in slightly different behavior on a Mac than it would on a Linux machine.

To which you responded:

If Lisp behaves differently from the mathematical ideal on any machine, that means the machine is incapable of supplying said turing machine.

So it comes down to: does the "behaviour" of a Lisp implementation include anything besides the output? Which effectively comes down to what question we're trying to answer about Lisp, or computation, or etc.

The original question was about whether a Lisp machine needs to include abstractions for the substrate it's running on. The most direct answer is "No, because the specification doesn't mention anything about the substrate." More generally, if a program needs introspection it can do it with quine trickery, or more realistically just use a system call.

Bigjeff5 responded by pointing out that the choice of substrate can determine whether or not the Lisp implementation is useful to anybody. This is of course correct, but this is a separate issue from whether or not the Lisp abstraction needs to include anything about its own substrate; a Lisp can be fast or slow, useful or useless, regardless of whether or not its internal abstractions include a reference to or description of the device it is running on.

comment by laofmoonster · 2014-02-22T06:58:36.403Z · LW(p) · GW(p)

Is it fair to call the CD data a map in this case? (Perhaps that's your point.) The relationship is closer to interface-implementation than map-territory. Reductionism still stands, in that the higher abstraction is a reduction of the lower. (Whereas a map is a compression of the territory, an interface is a construction on top of it). Correct lisp should be implementation-agnostic, but it is not implementation-free.

comment by RobinHanson · 2008-03-16T11:23:24.000Z · LW(p) · GW(p)

This is a situation where a lot of confidence seems appropriate, though of course not infinite confidence. I'd put the chance that Eliezer is wrong here at below one percent.

Replies from: Perplexed, timtyler
comment by Perplexed · 2010-07-30T04:47:33.964Z · LW(p) · GW(p)

I really have no idea what Eliezer being wrong on this would mean. Is the subject matter of this posting the nature of the territory or is it advice on the best way to construct maps?

What conceivable observations might cause you to revise that 1% probability estimate up to, say, 80%?

As I see it, reductionism is not a hypothesis about the world; it is a good heuristic to direct research.

Replies from: ata
comment by ata · 2010-07-30T05:07:35.038Z · LW(p) · GW(p)

I take the main thesis as being summed up by this sentence around the end:

Reductionism is not a positive belief, but rather, a disbelief that the higher levels of simplified multilevel models are out there in the territory.

Specific non-reductionist hypotheses, in the extremely unlikely event that any are supported by evidence, could cast doubt on reductionism. We'd need to find a specific set of circumstances under which reality appears to be computing the same entities at multiple levels simultaneously and applying different laws at each level, or we'd need to find fundamental laws that talk about non-fundamental objects. For example, if the Navy gunner were actually correct that you need to use Newtonian mechanics instead of relativity in order to get the right answer when computing artillery trajectories (given the further unlikely assumption that we couldn't find a simpler explanation for this state of affairs than "physical reductionism as a whole is wrong").

Replies from: Perplexed
comment by Perplexed · 2010-07-30T05:54:14.124Z · LW(p) · GW(p)

Ok, let me try to construct an example of a non-reductionist hypothesis. Eliezer says that it would be a claim that higher levels of simplified multilevel models are out there in the territory. So, as a multi-level model, let us take (low-level) QCD+electroweak, (mid-level): nucleons, mesons, electrons, neutrinos, photons; (high-level): atomic theory with 92 kinds of atoms + photons.

Now as I understand it, reductionism forbids me to believe that photons and electrons - entities which exist in higher level models - are actually out there in the territory. What am I doing wrong here? Could you maybe give me an example of a hypothesis which a reductionist ought to disbelieve?

Replies from: ata
comment by ata · 2010-07-30T06:38:50.882Z · LW(p) · GW(p)

As I understand it, photons and electrons are identified as elementary particles in the Standard Model. Wouldn't that be considered the lowest level?

Replies from: Perplexed
comment by Perplexed · 2010-07-31T02:50:53.231Z · LW(p) · GW(p)

Sure, they exist in both the lowest (so far) level and in the next level up. But Eliezer wants to forbid things at "higher levels of simplified multilevel models" from existing out there in the territory. If that doesn't include electrons in this example, then I don't know what it includes. I don't understand exactly what it is that is forbidden. Is it type errors - confusing map entities with territory entities? Is it failing to yet be convinced by what someone else thinks is the best low-level model? Is it somehow imagining that, say, atoms still exist in the territory while simultaneously imagining that atoms are made of more fundamental things which also exist in the territory? It seems to me that the definition of reductionism that Eliezer has given is completely useless, because no one sane would proclaim themselves a non-reductionist. He is attacking a straw-man position, as far as I can see.

Replies from: taryneast, bigjeff5, Sniffnoy
comment by taryneast · 2010-12-16T07:48:45.168Z · LW(p) · GW(p)

AFAICS, he is not "forbidding" a plane's wing from existing at the level of quark. He's just saying that "plane's wing" is a label that we are giving to "that bunch of quarks arranged just so over there". This as opposed to "that other bunch of quarks arranged just so over there" that we call "a human".

That the arrangement of a set of quarks does not have a fundamental "label" at the most basic level. The classification of the first bunch o' quarks (as separate from the second) is something that we do on a "higher level" than the quarks themselves.

comment by bigjeff5 · 2011-02-01T21:06:37.401Z · LW(p) · GW(p)

But Eliezer wants to forbid things at "higher levels of simplified multilevel models" from existing out there in the territory.

You're confusing the map and the territory.

The territory is only quarks (or whatever quarks may be made of). There is nothing else, it's just a big mass of quarks.

The map is the description of this bunch of quarks is human, while that bunch is an airplane.

There was a time when physicists thought that earth, air, water, and fire were the reality - that they were fundamental. Then they discovered molecules, and they thought those were fundamental. Then they discovered atoms, and thought those were fundamental. Etc. on down until the current (I think, I'm not a physicist) belief that quarks are fundamental.

At no point did reality change. Reality did not change when we discovered rocks were made up of molecules - the map was simply inaccurate. The reality was that rocks were always made up of molecules. The same when we discovered that molecules were made of atoms. It was always true, our map was simply not as accurate as we thought it was.

You could quite accurately say the map is wrong because it does not perfectly reflect reality, but the map is extremely useful, so we should not discard it. We should simply recognize that it is a map, it is not the territory. It's a representation of reality, it is not what is real. We know Newtonian Mechanics is a less accurate map than Special Relativity, but it is more useful than SR in many cases because it doesn't have the detail cluttering up the map that SR has. Yeah, it's less precise, but for calculating the trajectory of an artillery shell it is more than good enough.

The different levels are maps, there is only one territory.

Replies from: DanielLC
comment by DanielLC · 2011-03-29T06:08:58.902Z · LW(p) · GW(p)

The territory is only quarks (or whatever quarks may be made of).

It's also leptons.

comment by Sniffnoy · 2011-02-02T02:01:42.852Z · LW(p) · GW(p)

In short, you seem to be confusing {A} with A.

Replies from: Perplexed
comment by Perplexed · 2011-02-02T02:10:47.600Z · LW(p) · GW(p)

Too short. But intriguing. Please explain.

Replies from: Sniffnoy
comment by Sniffnoy · 2011-02-03T06:24:31.278Z · LW(p) · GW(p)

What I mean is, your objection doesn't hold water because raw objects at lower levels can always be put in a wrapper to be made suitable for use at a higher level. E.g. if we consider an elementary particles level, and a general-particles-which-for-now-we-will-consider-as-sets-of-particles-level (yes, I realize this almost certainly does not actually work in actual physics), then in the higher level we have proton={up_1, up_2, down}, and electron_H={electron_L}. But for most purposes the distinction between electron and {electron} is irrelevant, so we elide it. Your point seems to me analogous to the statement "But 2 can't be the rational number {...,(-4,-2),(2,1),(-2,-1),(4,2),...}, it's the integer {...(1,-1),(2,0),(3,1),...}!"

Replies from: Perplexed
comment by Perplexed · 2011-02-03T12:45:31.737Z · LW(p) · GW(p)

Ah! Good point. And now that it is explained, good analogy.

I still have some reservations about Eliezer's approach to reductionism/anti-holism and his equation of the idea of "emergence" with some kind of mystical mumbo-jumbo. But this is a complicated subject and philosophers of science much more careful than myself have addressed it better than I can.

Thank you, though, for pointing out that my argument in this thread can be refuted so easily simply by taking Eliezer a little less literally. Electrons at one level reduce to electrons at a lower level. But the two uses of the word 'electron' in the above sentence refer to different (though closely related) entities. As closely related as A and {A}. You are right. Cool.

Replies from: timtyler
comment by timtyler · 2011-02-03T22:33:29.992Z · LW(p) · GW(p)

Strong emergence is mystical mumbo-jumbo.

I don't think scientists should waste too much of their terminology on that sort of thing, though.

comment by timtyler · 2010-08-15T19:10:36.937Z · LW(p) · GW(p)

"Reductionism" has come to have two meanings:

"Reductionism can either mean (a) an approach to understanding the nature of complex things by reducing them to the interactions of their parts, or to simpler or more fundamental things or (b) a philosophical position that a complex system is nothing but the sum of its parts, and that an account of it can be reduced to accounts of individual constituents."

This post is about the second meaning. But that meaning is silly, useless, and redundantly duplicates other terms for such nonsense - such as reducibility and irreducibility.

We should kill off that meaning - and reclaim the meaning of the term that is useful and sensible. Posts like this one - which use the second meaning - are part of the problem.

Replies from: simplicio
comment by simplicio · 2010-08-15T19:24:32.330Z · LW(p) · GW(p)

Why is it silly to say that higher level phenomena reduce, in principle, to ontologically fundamental particle fields?

Replies from: timtyler
comment by timtyler · 2010-08-15T19:30:58.906Z · LW(p) · GW(p)

This discussion is about the term "reductionism" - which is obviously some kind of philosophy about "reducing" things - but the cited definitions differ on the details of exactly what the term means.

The first meaning just states the obvious, IMO. Also, other terms have that kind of nonsense covered. There is no need to overload the perfectly useful and good term "reductionism" with something that is only useful for the refutation of nonsense. It just causes the type of mix-up that you see in this thread.

Replies from: simplicio, Perplexed
comment by simplicio · 2010-08-15T19:33:54.082Z · LW(p) · GW(p)

I understand, I just don't get why you object to reductionism as exemplified by the second definition. It seems to me a fairly reasonable philosophical position.

Replies from: timtyler
comment by timtyler · 2010-08-15T19:42:02.477Z · LW(p) · GW(p)

I object to that terminology because it overloads a useful term which is used for something else without having a good excuse for doing so. Call the idea that invisible pixies push atoms around "irreducibility" - or something else - anything!

IMO, "Reductionism" and "Holism" should be reserved for the Hofstadter-favoured sense of those words - or you have a terminological mess:

http://i93.photobucket.com/albums/l76/orestesmantra/MU.jpg

Replies from: simplicio
comment by simplicio · 2010-08-15T20:03:40.962Z · LW(p) · GW(p)

Oh, I see. Thanks for clarifying.

comment by Perplexed · 2010-08-15T19:49:55.744Z · LW(p) · GW(p)

You are confusing me, Tim. Above you seemed to be criticizing the usefulness of the second meaning. Now, you seem to be criticizing the usefulness of the first.

Which do you find useless: the label for a methodology, or the label for a hypothesis about the possibility of hierarchical explanations?

Replies from: timtyler
comment by timtyler · 2010-08-15T20:06:09.865Z · LW(p) · GW(p)

a) - good; b) - not needed. (Ref for a and b: http://en.wikipedia.org/wiki/Reductionism)

Reductionism and Holism should be the names of strategies for analysing complex systems - by reducing them to the interactions of their parts, or by considering them as high-level entities, respectively.

The other terminology - the kind used in this post - is very bad. People should not overload such useful terminology - unless there really is no other way.

Replies from: Perplexed
comment by Perplexed · 2010-08-15T20:24:19.791Z · LW(p) · GW(p)

One windmill I try to avoid attacking is the dictionary. I would suggest you spend a few extra syllables and refer to a. as "methodological reductionism" and b. as "philosophical (or ontological) reductionism". I understand the badness of needless overloading, but I'm not sure I agree that b. is "useless" simply because its validity is obvious to you. Would you also advocate abandoning the term "atheism"?

My problem with philosophical reductionism is I don't know whether it is a claim about the territory or a convention about maps. If it is a claim about the territory, I certainly remain unconvinced, having not yet glimpsed the territory.

Replies from: timtyler, timtyler
comment by timtyler · 2010-08-15T20:30:00.827Z · LW(p) · GW(p)

One can't just let dictionary authors rule language. When they get scientific things wrong, responsible individuals should put up a fight. Look at what is happening to "epigenesis" - for example. Or "emergence".

comment by timtyler · 2010-08-15T20:32:03.249Z · LW(p) · GW(p)

Would you also advocate abandoning the term "atheism"?

That is likely to lead off topic. If the atheists and agnostics could sit down and decide what those terms actually meant, it would certainly help. Meanwhile, call me an adeist.

comment by Ian_C. · 2008-03-16T12:08:03.000Z · LW(p) · GW(p)

When an image you are looking at is altered due to viewing it through a pane of coloured glass, you don't suddenly start calling it "the map" instead of "the territory."

So why is it, when it passes through our eyes and brain it suddenly becomes "the map," when the brain is made of the same fundamental stuff (quarks etc.) as the glass?

Replies from: Perplexed, taryneast
comment by Perplexed · 2010-07-30T04:57:46.518Z · LW(p) · GW(p)

I would say that the stuff making up "the map" is not stuff inside the brain. Instead, it is stuff inside the mind, and the mind is "emergent from" the brain (or, if you prefer, the mind "reduces to" the brain).

The neurons in the brain reduce (through several levels) to brain quarks. The map ideas in the mind also reduce to brain quarks, but they do so in an odd way. I choose to label that kind of oddness "emergence", but the local powers-that-be seem to disapprove of this terminology.

comment by taryneast · 2010-12-16T07:54:51.386Z · LW(p) · GW(p)

The image that you see contains far less information than the original actual stuff that makes up the original "image and coloured glass" objects that exist in front of you. That is why the image in your head is map, not territory.

You also have "territory" that makes up your head... but that doesn't mean that everything represented inside your little piece of territory is also territory.

After all, you can store a map in your glovebox. Does the glovebox turn a map of England into England itself, simply because a glovebox is part of the territory?

comment by Ben_Jones · 2008-03-16T12:15:14.000Z · LW(p) · GW(p)

Our brain and senses are made out of fundamental particles too, and the image of a plane with wings is the result of the interaction between the fundamental particles out there with the fundamental particles in us.

Ian C - are you claiming that there are no maps, just lots of territory, some of which refers to other bits of territory? While probably accurate, this doesn't seem very useful if we're trying to understand minds. I don't think Eliezer ever claims that maps are stored in the glove compartments of cars in the car park, just outside The Territory. I'd enjoy a few posts going deeper into the map/territory analogy though.

Computers are full of examples of this, where the [most] important level is not the fundamental level.

Bzzzzzt! Please taboo the word 'important' and tell us what you mean.

Atomic interactions work just as well in a lump of scrap as in a 747. But a 747 won't work without atomic interactions. This being the case, higher levels can't be more 'important' than more fundamental ones, unless 'important' means 'more intuitively obvious to the human eye'.

As long as no-one makes the ridiculous claim that, say, biology is worthless because atomic theory could, ideally, explain giraffes, then is there really any disagreeing with this post?

comment by Ian_C. · 2008-03-16T12:40:50.000Z · LW(p) · GW(p)

Ben Jones - yes, I'm saying there's just lots of territory. I think it's useful to understanding minds, because (if correct) it means they don't work by making an internal mirror of reality to study, but rather they just "latch on" to actual reality at a certain point. The role of the brain in that case would not be to "hold" the internal mirror copy, but to manipulate reality to make it amenable to latching.

comment by Tim_Tyler · 2008-03-16T12:44:22.000Z · LW(p) · GW(p)

I always found Hofstadter's take on the issue illuminating.

Disappointingly, dictionaries and encyclopaedias today seem to have defined reductionism and holism away from Hofstadter's usage - to the detriment of both of the terms involved.

comment by Ben_Jones · 2008-03-16T12:59:08.000Z · LW(p) · GW(p)

Ian - if minds don't create their own distinct internal maps, but simply 'latch on' to what's actually there, then how do you explain the fact that maps can be wrong? In fact, how do you explain any two people holding two opposed beliefs?

Sensory perception isn't like a photograph - low-resolution but essentially representative. It's like an idiot describing a photograph to someone who's been blind all their life. This is why we get our maps wrong, and that is why it's useful to think in terms of map and territory - so that we can try and draw better ones.

comment by Ian_C. · 2008-03-16T14:30:37.000Z · LW(p) · GW(p)

Ben Jones: "if minds don't create their own distinct internal maps, but simply 'latch on' to what's actually there, then how do explain the fact that maps can be wrong? In fact, how do you explain any two people holding two opposed beliefs?"

Different people have different eyes, nervous systems and brains, so the causal path from the primary object to the part of reality in their brain to which they are latching on can be different.

I agree sensory perception is not like a photograph, but I don't think it's like an idiot trying to explain to us. I don't believe there's the outside world, and then an idiot distortion layer, and then our unfortunate internal model. There's one reality, and one part of it outside our body acts by a chain of cause and effect on another part inside our body, of which we happen to be able to be conscious.

So if the internal object is just as real as the external object, then we're done. We have our contact point with reality, and can begin to study it and figure out the universe, including deducing (maybe one day) the existence of the primary object. But whether it actually resembles the primary object in some way, surely that is not the main issue? From an evolutionary point of view, it doesn't have to be similar, just useful, and from an epistemological point of view it's not important whether it is (at all) similar or not.

Replies from: Perplexed
comment by Perplexed · 2010-07-30T05:29:15.856Z · LW(p) · GW(p)

Different people have different eyes, nervous systems and brains, so the causal path from the primary object to the part of reality in their brain to which they are latching on can be different.

When you first mentioned "latching" my initial reaction was as negative and incredulous as Ben Jones's was. Now I recognize that this idea is Kripke's - he explains intentionality as a chain of causal links between territory and map. I see why Kripke went that way, but the whole enterprise turns my stomach. Where is Descartes when we need him? Intentionality carries no mystery in a model where map is distinct from territory, with no attempt being made to embed map in territory. It only becomes problematic when naive reductionism demands that our models must capture the act of modeling. And then we proceed to tie ourselves completely in knots when we imagine that this bit of self-reference contains the secret of consciousness.

Can't we just pretend that our minds reside outside the physical universe when discussing epistemology? It makes things much simpler. Then we can discuss the reductionist science of cognition by allowing some minds back into the universe to serve as objects of study. :)

comment by Caledonian2 · 2008-03-16T14:36:47.000Z · LW(p) · GW(p)

At present, we cannot generate accurate quantum mechanical descriptions of atoms more complex than hydrogen (and, if we fudge a bit, helium). Any attempt to do so, because of the complexity and intractability of the equations involved, produces results that are less accurate than our empirically-derived understanding.

Even if we ignore the massive computational problems with trying to create a QM model of an airplane, such a model is guaranteed to be less accurate than the existing higher-order models of aerodynamics and material science.

We presume that our models, if we knew how to generate and evaluate them, would accurately describe things on an atomic level, and this is not unreasonable to claim. But Eliezer's claim goes far, far, far beyond what can be justified at present.

comment by Nominull3 · 2008-03-16T15:38:43.000Z · LW(p) · GW(p)

I'm surprised that this point is controversial enough that Eliezer felt the need to make a post about it, and even more surprised that he's catching heat in the comments for it. This "reductionism" is something I believe down to the bone, to the extent that I have trouble conceptualizing the world where it is false.

Replies from: kremlin
comment by kremlin · 2013-02-04T10:09:39.303Z · LW(p) · GW(p)

After talking to some non-reductionists, I've come to this idea about what it would mean for reductionism to be false:

I'm sure you're familiar with Conway's Game of Life? If not, go check it out for a bit. All the rules for the system are on the pixel level -- this is the lowest, fundamental level. Everything that happens in conway's game of life is reducible to the rules regarding individual pixels and their color (white or black), and we know this because we have access to the source code of Conway's Game, and it is in fact true that those are the only rules.

For Conway's Game to be non-reductionistic, what you'd have to find in the source code is a set of rules that override the pixel-level rules in the case of high-level objects in the game. E.g., "When you see this sort of pixel configuration, override the normal rules and instead make the relevant pixels follow this high-level law where necessary."

Something like that.

It's an overriding of low-level laws when they would otherwise have contradicted high-level laws.
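
A minimal sketch of the reductionist version in Python (the standard B3/S23 Life rule over a sparse set of live cells; the helper name step is ours). Every rule mentions only a cell and its eight neighbours; a non-reductionist Life would need an extra clause that checks for high-level patterns and overrides these rules.

```python
from collections import Counter

def step(live: set) -> set:
    # Count, for every board position, how many live neighbours it has.
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Pixel-level law only: birth on 3 neighbours, survival on 2 or 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

blinker = {(0, 0), (1, 0), (2, 0)}
print(step(blinker))   # {(1, -1), (1, 0), (1, 1)} - it oscillates
```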

Replies from: EniScien
comment by EniScien · 2021-11-09T16:02:59.191Z · LW(p) · GW(p)

That helps me understand what a non-reducible alternative world would be like, but I think a computer program is also reducible - not to elementary particles, but to object properties and scripts, and then to bytes and bits.

Replies from: TAG
comment by TAG · 2021-11-09T16:20:03.230Z · LW(p) · GW(p)

The anti-reductionist claim is not that the universe is literally a programme; it is that the universe is analogous to a programme, with rules that latch onto higher-level structures being analogous to non-reductive physical laws.

Replies from: EniScien
comment by EniScien · 2021-11-13T17:21:52.405Z · LW(p) · GW(p)

I am sorry, I did not express myself accurately enough. I just do not know a term that briefly indicates "something similar to a computer program, but not necessarily created by an intelligent developer or even existing in an analogue of a computer; something that could be an alternative kind of physical law", so I simply wrote "computer program", because I don't know what else could be an alternative to elementary particles and a universal physical law.

comment by George_Weinberg2 · 2008-03-16T20:42:33.000Z · LW(p) · GW(p)

The essential idea behind reductionism, that if you have reliable rules for how the pieces behave then in principle you can apply them to determine how the whole behaves, has to be true. To say otherwise is to argue that the airplane can be flying while all its constituent pieces are still on the ground.

But if you can't do a calculation in practice, does it matter whether or not it would give you the right answer if you could?

comment by Pyramid_Head2 · 2008-03-17T00:18:46.000Z · LW(p) · GW(p)

And there goes Caledonian again, completely misrepresenting Eliezer's claims.

His arguments are completely baseless. Of course it would be very, very, very hard to make a QM model of an airplane, and attempting it now would fail miserably - Eliezer wouldn't dispute that.

But to say that a full-fledged QM model would be guaranteed to be less accurate than current models is downright preposterous.

comment by PK · 2008-03-17T01:31:21.000Z · LW(p) · GW(p)

Caledonian's job is to contradict Eliezer.

comment by Nick_Tarleton · 2008-03-17T02:03:12.000Z · LW(p) · GW(p)

I'm surprised that this point is controversial enough that Eliezer felt the need to make a post about it, and even more surprised that he's catching heat in the comments for it. This "reductionism" is something I believe down to the bone, to the extent that I have trouble conceptualizing the world where it is false.

Seconded.

I suppose the next post is on how a non-reductionist universe would overwhelmingly violate Occam's Razor?

Replies from: taryneast
comment by taryneast · 2010-12-16T11:12:55.880Z · LW(p) · GW(p)

Hmmm... from my understanding, Occam's Razor is not actually a Law, just an overwhelmingly useful Heuristic. Thus, I'm not sure that "violating" Occam's Razor means more than just saying that something is "far less likely". I don't believe it can be used to prove that a non-reductionist universe is "not true".

comment by anonymous9 · 2008-03-17T06:20:53.000Z · LW(p) · GW(p)

Caledonian's job is to contradict Eliezer.

Not even that -- it's as if he and other commenters (e.g. Unknown in this case) are simply demanding that Eliezer express his points with less conviction.

If you think Eliezer is wrong, say so and explain why. Merely protesting that he is "confident beyond what is justified", or whatever, amounts to pure noisemaking that is of no use to anyone.

comment by a._y._mous · 2008-03-17T07:49:25.000Z · LW(p) · GW(p)

Slightly off-topic. I am a bit new to all this. I am a bit thick too. So help me out here. Please.

Am I right in understanding that the map/territory analogy implies that the map is always evaluated outside the territory?

I guess, I'm asking the age old Star Trek transporter question. When I am beamed up, which part of which quark forms the boundary between me and Scotty.

comment by Frank_Hirsch · 2008-03-17T09:27:07.000Z · LW(p) · GW(p)

"I wish I knew where Reality got its computing power." Hehe, good question, that one. Incidentally, I'd like to link this rather old thing just in case anyone cares to read more about reality-as-computation.

comment by Ben_Jones · 2008-03-17T10:26:47.000Z · LW(p) · GW(p)

Ian C - well put. My point is that since there is, at least, some distortion between mind and world (hence this very blog), it's useful to think in terms of map and territory. At the simplest level, it stops us confusing the two. If you have a wrong belief, saying 'my mind is part of reality!' doesn't make it any less wrong. Agreed?

I don't believe there's the outside world, and then an idiot distortion layer, and then our unfortunate internal model.

That was exactly the situation I found myself in at about 3am on Sunday morning.

comment by Ian_C. · 2008-03-17T11:02:47.000Z · LW(p) · GW(p)

Ben Jones: "If you have a wrong belief, saying 'my mind is part of reality!' doesn't make it any less wrong. Agreed?"

I agree that there is a difference between the object in the mind and the object in the world, but I wouldn't call it distortion any more than a chair is a distortion of the table next to it. They are both just different parts of reality. But if your mind can only be aware of the chair then you must discover the table by deduction, which is what someone trying to "correct" the chair would do also. So yes, I guess it makes little practical difference.

"That was exactly the situation I found myself in at about 3am on Sunday morning."

And here I was thinking it was only a model, when it was direct observation all along! Who am I to contradict direct observation? I hereby accept your theory and discard my own :-)

comment by Ben_Jones · 2008-03-17T15:03:27.000Z · LW(p) · GW(p)

I agree that there is a difference between the object in the mind and the object in the world, but I wouldn't call it distortion any more than a chair is a distortion of the table next to it.

But the chair isn't seeking to imitate the table. That's one thing that minds do that nothing else does - form abstract representations. It's not magic, but it's a pretty impressive trick for a couple of pounds of quivering territory.

Besides, you've already acknowledged that the mental concept has a causal link with the object itself. Chairs aren't causally linked to tables. Like you say, they're both just different parts of reality. Minds and maps are more subtle.

We may believe that 'what we see is what's actually there', but in truth there are millennia of evolutionary filters and lenses distorting our perception of the territory. And you can't start eliminating the errors from your map until you realise that a) you have a map, b) your map is not the territory, and c) your map doesn't even look much like the territory.

That last paragraph's for the back of the book, Eliezer.

comment by Ian_C. · 2008-03-17T16:16:38.000Z · LW(p) · GW(p)

Ben Jones: "But the chair isn't seeking to imitate the table."

But the mind isn't seeking to imitate reality either. The mind seeks to provide awareness of reality, that is all. In taking the data of the senses and processing it only following the laws of cause and effect, it achieves this goal (because the output of the pipeline remains reality).

The idea that it is trying to imitate (and the associated criticisms like map, territory and distortion) come from looking at the evolved design after the fact and assuming how it is supposed to work without taking a wide enough view of all the ways awareness of reality could be implemented.

comment by Steve · 2008-03-17T18:55:30.000Z · LW(p) · GW(p)

'I wish I knew where Reality got its computing power.'

Assume Reality has gotten computing power and that it makes computations. Computation requires time. An occurrence would require the time of the occurrence itself plus the time necessary for Reality to make the computation for that occurrence. The more complex the occurrence, the more computing power or the longer the computation time, or both. Accounting for that seems a challenge that cannot be overcome.

Alternatively, let's assume Reality did not get computing power and that it does not make computations. Rather, let's assume that there are computational activities within Reality.

Perhaps Reality is certainty, while attempts to comprehend Reality are computational activities that have acquired mapping processes that attempt to map certainty.

Changing the sentence to: 'I wish I knew why I believe there is a where from which Reality got its computing power.' gets me to an answer while the original question precluded me from one.

comment by Matthew_C.2 · 2008-03-17T23:19:16.000Z · LW(p) · GW(p)

Interesting to see all this fervent and unquestioning faith in reductionism here. No surprise.

However, reductionism is incapable of explaining the real world.

Consider protein folding. A good rough approximation model of a protein is a string with a hundred magnets tied to it.

Now throw the string up into the air.

The notion that the string would immediately fold into a precise shape every time you throw it is the same as the notion that a protein would fold into a precise shape, very quickly, every time you make it. And yet that is what proteins do. And we have no reductionistic explanation that fits the facts.

There are billions of near-minimum-energy configurations for each protein sequence, based on the electrochemical forces between the amino acids in the chain and hydrophilic/hydrophobic considerations. And yet only one of them is chosen, often in a few microseconds. This is totally inexplicable on a reductionistic analysis. We CANNOT predict protein conformation from the physical and chemical properties of the amino acid chains.
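
For scale, here is a back-of-the-envelope version of this counting argument (often called Levinthal's paradox). The numbers are the standard illustrative assumptions, not measurements: three backbone states per residue and one conformation sampled per picosecond.

```python
# Levinthal-style estimate: brute-force search of conformation space
# vs. observed folding times. All numbers are illustrative assumptions.
states_per_residue = 3      # assumed backbone states per residue
residues = 100
samples_per_second = 1e12   # assume one conformation tried per picosecond

conformations = states_per_residue ** residues
years = conformations / samples_per_second / (3600 * 24 * 365)

print(f"{conformations:.2e} conformations")    # ~5.15e+47
print(f"{years:.2e} years to enumerate them")  # ~1.63e+28
```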

All of the protein folding software uses the folding behavior of known proteins and of sub-domains of known proteins (such as α-helices and β-sheets) to try to guess protein structures, and even then the equations have many solutions (in a "successful" analysis the actual tertiary structure matches one of the possible structures the software came up with, but not any of the others, and reductionism is at a complete loss as to why). Rupert Sheldrake suggests an answer based on an evolving set of holonic structures, where each more complex level includes the behaviors of its constituent holons yet also exhibits additional properties, basically "chosen" by the universe through a repetition and reinforcement of habits.

Even with supposedly "well known" phenomena like snowflake crystallization, the reductionist explanation simply fails to ring true. Why, in a probabilistic structure like a very well-formed snowflake, is there so much symmetry between arms (and most especially the mirror symmetry on each arm) between areas that are millions or billions of atoms away from each other? The "contact mechanics" explanation simply doesn't wash. Snowflake branches are very obviously probabilistic structures, so the "changing growing conditions of the snowflake" explanation doesn't wash either, since probabilistic structures ought not show such high amounts of symmetry unless some kind of resonance is occurring between the arms and reflections of arms in the snowflake.

Unquestioning reductionism blinds people to some very simple observations about the world. . .

Replies from: Insert_Idionym_Here, APMason, Dojan, Manfred, Tyrrell_McAllister
comment by Insert_Idionym_Here · 2011-11-07T00:00:49.038Z · LW(p) · GW(p)

How is "unquestioning reductionism" possible?

comment by APMason · 2011-11-07T00:37:43.258Z · LW(p) · GW(p)

The notion that the string would immediately fold into a precise shape every time you throw it is the same as the notion that a protein would fold into a precise shape, very quickly, every time you make it. And yet that is what proteins do. And we have no reductionistic explanation that fits the facts.

Doesn't that just demonstrate that the protein-to-magnet-string analogy wasn't a very good one in the first place?

comment by Dojan · 2011-12-02T20:34:32.056Z · LW(p) · GW(p)

Consider protein folding. A good rough approximation model of a protein is a string with a hundred magnets tied to it.

Now throw the string up into the air.

Now watch what happens. (Biased chains, starting @ 4:30.)

It's not a string of magnets, sure, but the same principle applies. The fact that we can't explain how something happens doesn't mean that it doesn't have an explanation.

[Edit: Fixed link]

comment by Manfred · 2012-03-07T19:08:32.097Z · LW(p) · GW(p)

Why, in a probabilistic structure like a very well-formed snowflake, is there so much symmetry between arms (and most especially the mirror symmetry on each arm) between areas that are millions or billions of atoms away from each other?

So this turns out to be a really cool question. Part of what makes snowflakes unique is that each one is grown in a slightly different environment, and over the course of the growth of a snowflake this has a startlingly big impact. There are some cool attempts to model this with nonlinear systems / differential equations, and it does seem to be the case that if you have uniform growth conditions, you can get really different-looking snowflakes that are still symmetrical.

comment by Tyrrell_McAllister · 2012-03-17T22:01:06.117Z · LW(p) · GW(p)

Why, in a probabilistic structure like a very well-formed snowflake, is there so much symmetry between arms (and most especially the mirror symmetry on each arm) between areas that are millions or billions of atoms away from each other? The "contact mechanics" explanation simply doesn't wash. Snowflake branches are very obviously probabilistic structures, so the "changing growing conditions of the snowflake" explanation doesn't wash either, since probabilistic structures ought not show such high amounts of symmetry unless some kind of resonance is occurring between the arms and reflections of arms in the snowflake.

This bulwark of irreducible mysteriousness seems to be falling fast:

[A] team of mathematicians has for the first time succeeded in simulating a panoply of snowflake shapes using basic conservation laws, such as preserving the number of water molecules in the air.

comment by Caledonian2 · 2008-03-17T23:53:06.000Z · LW(p) · GW(p)

But to say that a full-fledged QM model would be guaranteed to be less accurate than current models is downright preposterous.

No, it follows directly from our inability to simulate 'complex' atoms. If we can't represent the basic building blocks of matter correctly, how are we supposed to represent the matter?

A correct model of physics would, given enough computational power, allow us to perfectly simulate everything in reality, on every level of reality. QM is known not to be correct; it is in fact known to be incorrect in the ultimate sense. It is merely the most correct model we possess.

comment by Ben_Jones · 2008-03-18T09:45:53.000Z · LW(p) · GW(p)

"However, reductionism is incapable of explaining the real world."

Is that the argument against Reductionism? That there are things it can't, as yet, explain? That's the same position the Intelligent Design people put forward. Your post is a big fat Semantic Stop Sign.

No, we don't understand protein folding yet. Precedent suggests that one day we probably will, and it probably won't be down to some mystical emergent phenomenon. It'll be complicated, subtle, amazing, and fully explicable within the realm of reductionist science.

comment by Nick_Tarleton · 2008-03-18T12:22:44.000Z · LW(p) · GW(p)

A quick Google search turns up:

But the crystal growth depends strongly on temperature (as is seen in the morphology diagram). Thus the six arms of the snow crystal each change their growth with time. And because all six arms see the same conditions at the same times, they all grow about the same way.... If you think this is hard to swallow, let me assure you that the vast majority of snow crystals are not very symmetrical.
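
A toy simulation makes the quoted explanation concrete. This is only an illustrative sketch (the shared "environment" signal and the noise scale below are made up): six arms that integrate the same history come out nearly identical, while separately grown flakes differ.

```python
import random

def grow_flake(steps=50, seed=None):
    """Toy model: all six arms integrate the same time-varying
    'environment' signal, plus tiny independent noise."""
    rng = random.Random(seed)
    environment = [rng.uniform(0.5, 1.5) for _ in range(steps)]  # shared
    return [sum(e + rng.gauss(0, 0.01) for e in environment)
            for _ in range(6)]

print([round(a, 2) for a in grow_flake(seed=1)])  # six near-identical arms
print([round(a, 2) for a in grow_flake(seed=2)])  # a different flake differs
```
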
comment by Rafe_Furst · 2008-04-22T20:18:21.000Z · LW(p) · GW(p)

It's not that reductionism is wrong, but rather that it's only part of the story. Additional understanding can be gleaned through a bottom-up, emergent explanation which is orthogonal to the top-down reductionist explanation of the same system.

It is important to take seriously the reality of higher level models (maps). Or alternatively to admit that they are just as unreal, but also just as important to understanding, as the lower level models. As Aaron Boyden points out, it is not a foregone conclusion that there is a most basic level.

comment by Caledonian2 · 2008-04-23T19:08:22.000Z · LW(p) · GW(p)

Reductionism IS the bottom-up, emergent explanation. It tries to reduce reality to basic elements that together produce the phenomena of interest - you can't get any more emergent than that.

comment by Rafe_Furst · 2008-04-24T16:22:42.000Z · LW(p) · GW(p)

From the Wikipedia definition for "reductionism":

"Reductionism can either mean (a) an approach to understanding the nature of complex things by reducing them to the interactions of their parts, or to simpler or more fundamental things or (b) a philosophical position that a complex system is nothing but the sum of its parts, and that an account of it can be reduced to accounts of individual constituents."

and

"The limit of reductionism's usefulness stems from emergent properties of complex systems which are more common at certain levels of organization."

comment by Caius · 2008-05-10T23:55:55.000Z · LW(p) · GW(p)

Rafe, do you mean that as a criticism? Because usefulness and reality are very different things. There are two things that can make a reductionist model less useful:

  1. It requires much more computational power. This has been discussed already.
  2. Even modest mistakes at lower levels can have drastic effects at higher levels.

Both, you'll notice, are practical problems pertaining to the model, and don't invalidate the principle.

comment by Valentina_Poletti · 2008-08-28T09:27:13.000Z · LW(p) · GW(p)

So human brains are themselves models of reality.

Do you have a deterministic view of the world, i.e. believe reality is there, independently of our existence or of our interactions with it?

Have you ever wondered what information is, at the physical level... what is it that our brains are actually modelling?

comment by wockyman · 2009-01-02T06:39:53.000Z · LW(p) · GW(p)

Simply because particles are the smallest things does not mean they are the only things. Particles are defined by how they act. How a particle will act can only be determined by taking into account the particles surrounding it. And to fully examine those particles, their surrounding particles must be examined. And so on and so forth...

As you move up in scale, new rules and attributes emerge that do not exist at the smaller scales. You can speculate about whether or not these new things might have been deduced as possibilities from quantum laws. But short of complete omniscience (physically impossible by the uncertainty principle), the subatomic laws will only tell you what can arise, not what does emerge.

So it doesn't really make sense to arbitrarily draw a line at a certain scale of examination and say, "Only these things REALLY exist." Reductionism yields a convenient mental model with practical application... but it is still just a map.

comment by Psy-Kosh · 2009-01-02T08:04:54.000Z · LW(p) · GW(p)

Wockyman: It's not that they're the smallest, as such.

Yes, how a particle acts is affected by those around it. But the idea is that knowing the basic rules, plus which particles are where around it, lets you predict, in principle and given sufficient computational power, how it will act. In other words, the complicated stuff that emerges arises from the more basic stuff.

Think of it this way: You know cellular automata? Especially Conway's Game of Life? Really simple rules: just the grid, cells that can be on or off, and basic rules for when a transition occurs based on a cell's state and its neighbors' states.

Yet complicated behavior arises out of that. One would not, however, say that behavior is beyond the rules, or that reduction to those rules fails. Those complicated behaviors arise out of those simple rules.
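
For anyone who hasn't seen it, the rules really are that small. A minimal sketch of Conway's rules (the glider is a standard pattern, and after four generations it has moved one cell diagonally):

```python
from collections import Counter

def step(live):
    """One generation of Conway's Game of Life.
    `live` is a set of (x, y) coordinates of live cells."""
    # count the live neighbours of every cell adjacent to a live cell
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # birth on exactly 3 live neighbours; survival on 2 or 3
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
world = glider
for _ in range(4):
    world = step(world)
print(world == {(x + 1, y + 1) for (x, y) in glider})  # True
```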

Incidentally, if you looked through Eliezer's QM sequence, the more fundamental reduction isn't so much to particles as to quantum amplitudes over configuration space, with particles corresponding to the possibility of "factoring out" certain sets of dimensions in the configuration space.

(Reductionism does NOT mean "reduction to particles", just "reduction to simple principles that are the basic thing that gives rise to everything else". This is not identical to, but similar to, the way the comparatively simple rules of chess give rise to really complex strategies, and even more so for Go.)

As for it being "just a map"... it is a map, but it's a map about something. The map may not be the territory, but there is a territory, and the fact that the map seems to tell us accurate stuff about the territory is at least a justification for suspecting that the actual underlying reality of the territory may actually resemble what the map claims it's like.

comment by Ramana Kumar (ramana-kumar) · 2009-10-28T11:42:02.297Z · LW(p) · GW(p)

But the way physics really works, as far as we can tell, is that there is only the most basic level - the elementary particle fields and fundamental forces.

To clarify (actually, to push this further): there is only one thing (the universe) - because surely breaking the thing down into parts (such as objects) which in turn lets you notice relations between parts (which in turn lets you see time, for example) -- surely all that is stuff done by modelers of reality and not by reality itself? I'm trying to say that the universe isn't pre-parsed (if that makes any sense...)

Replies from: byrnema
comment by byrnema · 2009-10-28T16:02:38.201Z · LW(p) · GW(p)

As modelers of reality, we parse the world into fundamental particles and forces. You would claim that these distinctions are ultimately inherent features of the model and not necessarily defining reality.

I understand that a person might look at a car and see "mode of transportation" while another way of looking at the car is as a "particular configuration of quarks", in which case the distinction between a car and a tree does seem arbitrarily modeler-dependent.

But I would not go so far as to say that reality itself is featureless. Where would you begin to argue that there are no inherent dichotomies? Even if there is only one type of thing 'x', our reality (which is, above all, dynamic) seems to require a relationship and interaction between 'x' and '~x'. I'd say, logically, reality needs at least two kinds of things.

Replies from: ramana-kumar
comment by Ramana Kumar (ramana-kumar) · 2009-10-28T21:26:52.332Z · LW(p) · GW(p)

Even if there is only one type of thing 'x', our reality (which is, above all, dynamic) seems to require a relationship and interaction between 'x' and '~x'. I'd say, logically, reality needs at least two kinds of things.

Logic can only compel models.

You seem to be saying "Let x denote the universe. ~x is then a valid term. So ~x must denote something that isn't x, thus there are two things!" There are surface problems with this, such as that x may not be of type boolean, and that you're just assuming every term denotes something. But the important problem is simpler: we can use logic to deduce things about our models, but logic doesn't touch reality itself (apart from the part of reality that is us).

What do you mean by "reality is dynamic"? Have you read Timeless Physics?

Replies from: byrnema
comment by byrnema · 2009-10-29T00:33:30.621Z · LW(p) · GW(p)

So I infer from the above that you have no logical arguments to support that reality is "one thing". I would think only an agnostic position on the nature of reality would be consistent with the nihilist stance you are representing.

comment by RafeFurst · 2010-03-07T17:15:34.983Z · LW(p) · GW(p)

Reductionism is great. The main problem is that by itself it tells us nothing new. Science depends on hypothesis generation, and reductionism says nothing about how to do that in a rational way, only how to test hypotheses rationally. For some reason the creative side of science -- and I use the word "creative" in the generative sense -- is never addressed by methodology in the same way falsifiability is:

http://emergentfool.com/2010/02/26/why-falsifiability-is-insufficient-for-scientific-reasoning/

We are at a stage of historical enlightenment where more and better reductionism is producing marginal returns. To be even less wrong, we might spend more time on the hypothesis generation side of the equation.

Replies from: Jack, JGWeissman, Morendil
comment by Jack · 2010-03-07T18:08:56.566Z · LW(p) · GW(p)

Really? I think of reductionism as maybe the greatest, most wildly successful abductive tool in all of history. If we can't explain some behavior or property of some object it tells us one good guess is to look to the composite parts of that thing for the answer. The only other strategy for hypothesis generation I can think of that has been comparably successful is skepticism (about evidence and testimony). "I was hallucinating." and "The guy is lying" have explained a lot of things over the years. Can anyone think of others?

comment by JGWeissman · 2010-03-07T18:32:52.580Z · LW(p) · GW(p)

Science depends on hypothesis generation, and reductionism says nothing about how to do that in a rational way, only how to test hypotheses rationally.

You may be interested in Science Doesn't Trust Your Rationality, in which Eliezer suggests that science is a way of identifying the good theories produced by a community of scientists who on their own have some capacity to produce theories, and that Bayesian rationality is a systematic way of producing good theories.

Oh, and Welcome to Less Wrong! You have identified an important point in your first few comments, and I hope that is a predictor of good things to come.

Replies from: whowhowho
comment by whowhowho · 2013-02-04T15:13:01.139Z · LW(p) · GW(p)

and that Bayesian rationality is a systematic way of producing good theories.

An automated theory generator would be worth a Nobel.

Replies from: TheOtherDave, shminux
comment by TheOtherDave · 2013-02-04T17:38:31.953Z · LW(p) · GW(p)

So, the introduction of "automated" to this discussion feels like a complete non sequitur to me. Can you clarify why you introduced it?

Replies from: whowhowho, private_messaging
comment by whowhowho · 2013-02-04T19:49:51.434Z · LW(p) · GW(p)

If you have a "systematic" way of "producing" something (JGWeissman), surely you can automate it.

Replies from: TheOtherDave, army1987
comment by TheOtherDave · 2013-02-04T20:21:25.661Z · LW(p) · GW(p)

Ah. OK, thanks for clarifying.

comment by A1987dM (army1987) · 2013-02-05T05:03:04.990Z · LW(p) · GW(p)

I could call a procedure "systematic" even if one of the steps used a human's System 1 as an oracle, in which case it'd be hard to automate that as per Moravec's paradox.

Replies from: whowhowho
comment by whowhowho · 2013-02-05T11:07:13.550Z · LW(p) · GW(p)

I would not call such a procedure systematic. Who would? Here's a system for success as an author: first, have a brilliant idea... it reads like a joke, doesn't it?

Replies from: army1987
comment by A1987dM (army1987) · 2013-02-05T12:32:23.102Z · LW(p) · GW(p)

I wasn't thinking of something that extreme; more like the kind of tasks people do on Mechanical Turk.

Replies from: whowhowho
comment by whowhowho · 2013-02-05T12:35:06.375Z · LW(p) · GW(p)

Is there anything non-systematic by that definition? In what way does it promote Bayesianism to call it systematic?

Replies from: TheOtherDave, army1987
comment by TheOtherDave · 2013-02-05T16:08:30.287Z · LW(p) · GW(p)

Well, I have no idea if it "promotes Bayesianism" or not, but when someone talks to me about a systematic approach to doing something in normal conversation, I understand it to be as opposed to a scattershot/intuitive approach.

For example, if I want to test a piece of software, I can make a list of all the integration points and inputs and key use cases and build a matrix of those lists and build test cases for each cell in that matrix, or I can just construct a bunch of test cases as they occur to me. The former approach is more systematic, even if I can't necessarily automate the test cases.
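
That matrix-building step is easy to mechanize, for what it's worth. A toy sketch (the integration points, inputs, and use cases are made up purely for illustration):

```python
from itertools import product

# Hypothetical test dimensions, named only for illustration.
integration_points = ["database", "payment_gateway", "email_service"]
inputs = ["valid", "empty", "malformed"]
use_cases = ["signup", "checkout"]

# One test case per cell of the matrix, so no combination is left
# to whatever happens to occur to the tester.
test_matrix = list(product(integration_points, inputs, use_cases))
print(len(test_matrix))  # 3 * 3 * 2 = 18 cases
```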

I realize that your understanding of "systematic" is different from this... if I've understood you correctly, then if I can't automate the test cases, this approach is not systematic on your account.

Replies from: whowhowho
comment by whowhowho · 2013-02-05T17:46:59.741Z · LW(p) · GW(p)

Can there be a scattershot or intuitive scientific method?

Replies from: TheOtherDave
comment by TheOtherDave · 2013-02-05T19:30:55.253Z · LW(p) · GW(p)

Well, first of all, we should probably clarify that the original claim was that Bayesian rationality was a systematic way of producing good theories, and therefore presumably was meant to contrast with scattershot or intuitive ways of producing good theories, rather than to contrast with a scattershot or intuitive scientific method... just in case any of our readers lost track of the original question.

But to answer your question... I wouldn't think so, in that an important part of what X needs to have before I'm willing to call X a scientific method is a systematic way of validating and replicating results.

That said, I would say it's possible for a scientific method to embed a scattershot or intuitive approach to producing theories. Indeed, the history of the scientific method as applied by humans has done this pretty ubiquitously.

Replies from: whowhowho
comment by whowhowho · 2013-02-05T19:54:06.665Z · LW(p) · GW(p)

Well, first of all, we should probably clarify that the original claim was that Bayesian rationality was a systematic way of producing good theories, and therefore presumably was meant to contrast with scattershot or intuitive ways of producing good theories,

That just makes matters worse. Bayes might systematically allow you to judge the relative goodness of various theories, once they have been produced, but it doesn't help at all in producing them. You can't just crank the handle on Bayes and get relativity.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-02-05T20:53:05.984Z · LW(p) · GW(p)

I'm not sure what you mean by "worse" here.
To my mind, challenging the original claim as false is far superior to failing to engage with it altogether, since it can lead to progress.

In that vein, perhaps it would help if you returned to JGWeissman's original comment and ask them to clarify what makes Bayesian rationality "a systematic way of producing good theories," so you can either learn from or correct them on the question.

comment by A1987dM (army1987) · 2013-02-05T16:39:21.643Z · LW(p) · GW(p)

Is there anything non systematic by that definition?

See TheOtherDave.

In what way does it promote Bayesianism to call it systematic?

See E.T. Jaynes calling certain frequentist techniques “ad-hockeries”. EDIT: BTW, I didn't have Bayesianism in mind when I replied to this ancestor -- I should stop replying to comments without reading their ancestors first.

comment by private_messaging · 2013-02-05T07:39:15.671Z · LW(p) · GW(p)

It feels like you use 'questions' a lot more than usual, and it looks very much like a rhetorical device because you inject counterpoints into your questions. Can you clarify why you do it? (see what I did there?)

Sidenote: Actually, questions are often a sneaky rhetorical device - you can modify the statement in the way of your choosing, and then ask questions about that. You see that in political debates all the time.

Replies from: Vaniver, TheOtherDave
comment by Vaniver · 2013-02-05T14:12:43.030Z · LW(p) · GW(p)

Agreed that questions can be used in underhanded ways, but this example does seem more helpful at focusing the conversation than something like:

Can you clarify why you added "automated" to the discussion?

That could easily go in other directions; this makes clear that the question is "how did we get from A to B?" while sharing control of the topic change / clarification.

comment by TheOtherDave · 2013-02-05T15:37:44.943Z · LW(p) · GW(p)

Can you clarify why you do it?

Sure, I'd be happy to: because I want answers to those questions.

For example, whowhowho's introduction of "automated" did in fact feel like a non sequitur to me, and I wanted to understand better why they'd introduced it, to see whether there was some clever reasoning there I'd failed to follow. Their answer to my question clarified that, and I thanked them for the clarification, and we were done.

(see what I did there?)

You asked a question.
I answered it.
It really isn't that complicated.

That said, I suspect from context that you mean to imply that you did something sneaky and rhetorical just then, just as you seem to believe that I do something sneaky and rhetorical when I ask questions.
If that's true, then no, I guess I don't see what you did there.

questions are often a sneaky rhetorical device

Yes. So are statements.

comment by Morendil · 2010-03-07T18:37:44.508Z · LW(p) · GW(p)

Agreed: we need more posts on abductive reasoning specifically.

comment by imaxwell · 2010-11-07T17:43:48.490Z · LW(p) · GW(p)

Probably no one will ever see this comment, but.

"I wish I knew where reality got its computing power."

If reality had less computing power, what differences would you expect to see? You're part of the computation, after all; if everything stood still for a few million meta-years while reality laboriously computed the next step, there's no reason this should affect what you actually end up experiencing, any more than it should affect whether planets stay in their orbits or not. For all we know, our own computers are much faster (from our perspective) than the machines on which the Dark Lords of the Matrix are simulating us (from their perspective).

Replies from: Perplexed, bruno-mailly
comment by Perplexed · 2010-11-07T18:53:56.399Z · LW(p) · GW(p)

If reality were computed in reverse chronological order, what differences would you expect to see?

Suppose our universe was produced by specifying some particular final state, and then repeatedly computing predecessor states according to some deterministic laws of nature. Would we experience time backward? Or would we still experience it forward (the reverse of the direction of the simulation) because of some time asymmetry in the physical laws or in the entropy of the initial vs. final states?

Everyone always assumes that the simulation will proceed "forward". Is that important? I honestly don't know.

Replies from: imaxwell
comment by imaxwell · 2010-11-08T04:17:14.071Z · LW(p) · GW(p)

You can go one step further. If folks like Barbour are correct that time is not fundamental, but rather something that emerges from causal flow, then it ought to be that our universe can be simulated in a timeless manner as well. So a model of this universe need not actually be "executed" at all---a full specification of the causal structure ought to be enough.

And once you've bought that, why should the medium for that specification matter? A mathematical paper describing the object should be just as legitimate as an "implementation" in magnetic patterns on a platter somewhere.

And if it doesn't matter what the medium is, why should it matter whether there's a medium at all? Theorems don't become true because someone proves them, so why should our universe become real because someone wrote it down?

If I understand Max Tegmark correctly, this is actually the intuition at the core of his mathematical universe hypothesis (Wikipedia, but with some good citations at the bottom), which basically says: "We perceive the universe as existing because we are in it." Dr. Tegmark says that the universe is one of many coherent mathematical structures, and in particular it's one that contains sentient beings, and those sentient beings necessarily perceive themselves and their surroundings as "real". Pretty much the only problem I have with this notion is that I have no idea how to test it. The best I can come up with is that our universe, much like our region of the universe, should turn out to be almost but not quite ideal for the development of nearly-intelligent creatures like us, but I've seen that suggested of models that don't require the MUH as well. Aside from that, I actually find it quite compelling, and I'd be a bit sad to hear that it had been falsified.

Interestingly enough, a version of the MUH showed up in Dennis Paul Himes' [An Atheist Apology](http://www.cookhimes.us/dennis/aaa.htm) (as part of the "contradiction of omnipotent agency" argument), written just a few years after Dr. Tegmark started writing about these ideas. Mr. Himes' essay was very influential on me as a teenager, and yet I never did hear of the "mathematical universe hypothesis" by that name until a few years ago. In past correspondence, he wrote that the argument was original to him as far as he knew, and at least one of his commenters claimed to also have developed it independently, so it may be a more intuitively plausible idea than it seems to be at first glance.

Replies from: Perplexed
comment by Perplexed · 2010-11-08T17:54:02.007Z · LW(p) · GW(p)

at least one of his commenters claimed to also have developed it independently, so it [Tegmark's idea] may be a more intuitively plausible idea than it seems to be at first glance.

I'm pretty sure that the idea has occurred to just about everyone who has wondered whether the meanings of the intransitive verb "to exist" in mathematics and philosophy might have anything in common. Tegmark deserves some credit though for writing it down.

comment by Bruno Mailly (bruno-mailly) · 2018-10-07T08:08:50.444Z · LW(p) · GW(p)

From the inside we can't judge the relative speed or power, but we can judge the efficiency.

And it's abysmal: the jumps from quarks to particles to atoms to molecules to cells to animals to stars to galaxies each throw orders of magnitude around like it's nothing.

What could this possibly tell us?

  • Reality just has that much resource.
  • The result of our reality was not designed.
  • The lords of the matrix are not very bright.
comment by Traddles · 2011-05-03T18:06:22.548Z · LW(p) · GW(p)

Sounds like one of the central tenets of Discordianism. There are no such things as wings, identity, truth, or the concept of equality. These are all abstract concepts that exist only in the mind. "Out there" in "True" reality, there is only chaos (not necessarily of the random kind, just of the meaningless/purposeless kind).

comment by Tuukka_Virtaperko · 2012-01-16T22:31:07.273Z · LW(p) · GW(p)

But this is just the brain trying to efficiently compress an object that it cannot remotely begin to model on a fundamental level. The airplane is too large. Even a hydrogen atom would be too large. Quark-to-quark interactions are insanely intractable. You can't handle the truth.

Can you handle the truth, then? I don't understand the notion of truth you are using. In everyday language, when a person states something as "true", it doesn't usually need to be grounded in logic in order to work for a practical purpose. But you are making extremely abstract statements here. They just don't mean anything unless you define truth and solve the symbol grounding problem. You have criticized philosophy in other threads, yet here you are making dubious arguments. The arguments are dubious because they are not clearly mere rhetoric, and not clearly philosophy. If someone presses you to explain what you mean by them, you could say you're not interested in philosophy, so philosophical counterarguments are irrelevant to you. But you can't be uninterested in philosophy if you make philosophical claims like that and actually consider them important.

I don't like contemporary philosophy either, but I would suppose you are in trouble with these things, and I wonder if you are open to a solution? If not, fine.

But the way physics really works, as far as we can tell, is that there is only the most basic level - the elementary particle fields and fundamental forces. You can't handle the raw truth, but reality can handle it without the slightest simplification. (I wish I knew where Reality got its computing power.)

But you haven't defined reality. As long as you haven't done so, "reality" will be a metaphorical, vague concept, which frequently changes its meaning in use. This means if you state something to be "reality" in one discussion, logical analysis would probably reveal you didn't use it in the same meaning in another discussion.

You can have a deterministic definition of reality, but that will be arbitrary. Then people will start having completely pointless debates with you, and to make matters worse, you will perceive these debates as people trying to unjustify what you are doing. That's a problem caused by you not realizing you didn't have to justify your activities or approach in the first place. You didn't need to make these philosophical claims, and I don't suppose you would have done so had you not felt threatened by something, such as religion or mysticism or people imposing their views on you.

This, as I see it, is the thesis of reductionism. Reductionism is not a positive belief, but rather, a disbelief that the higher levels of simplified multilevel models are out there in the territory.

If you categorize yourself as a reductionist, why don't you go all the way? You can't be both a reductionist and a realist. I.e., you can't believe in reductionism and in the existence of a territory at the same time. You have to drop either one of them. But which one?

Drop the one you need to drop. I'm serious. You don't need this metaphysical nonsense to justify something you are doing. Neither reductionism nor realism is "true" in any meaningful way. You are not doing anything wrong if you are a reductionist for 15 minutes, then switch to realism (i.e. the belief in a "territory") for ten seconds, then switch again into reductionism and then maybe to something else. And that is also the way you really live your life. I mean, think about your mind. I suppose it's somewhat similar to mine. You don't think about that metaphysical nonsense when you're actually doing something practical. So you are not a metaphysician when you're riding a bike and enjoying the wind or something.

It's just a conception you have of yourself, which you have defined as someone who is an advocate of "reductionism and realism". This conception is true only when you indeed are either one of those. It's not true when you're neither of those. But you are operating in your mind. Suppose someone says to you that you're not a "reductionist and a realist" when you are, for example, in intense pain for some reason and are very unlikely to think about philosophy. Well, even in that case you could remind yourself of your own conception of yourself, that is, you are a "reductionist and a realist", and argue that the person who said you are not was wrong. But why would you want to do so? The only reasons I see are some naive or egoistic or defensive reasons, such as:

  • You are afraid the person who said you're not a "reductionist or realist" will try to waste your time by presenting stupid arguments according to which you may or may not or should or should not do something.
  • You believe your image of yourself as a "reductionist and realist" is somehow "true". But you are able to decide at will whether that image is true. It is true when you are thinking in a certain way, and false when you are not thinking that way. So the statement conveys no useful information, except maybe on something you would like to be or something like that. But that is no longer philosophy.
  • You have some sort of a need to never get caught uttering something that's not true. But in philosophy, it's a really bad idea to want to make true statements all the time. Metaphysical theories in and of themselves are neither true nor false. Instead, they are used to define truth and falsehood. They can be contradictory or silly or arbitrary, but they can't be true or false.

If you state that you regard one state of mind or one theory, such as realism or reductionism, as some sort of an ultimate truth, you are simply putting yourself into a prison of words for no reason except that you apparently perceive some sort of safety in that prison or something like that. But it's not safe. It exposes you to philosophical criticism you were previously invulnerable to, because before you went to that prison, you didn't even participate in that game.

If you actually care about philosophy, great. But I haven't yet gotten such an impression. It seems like philosophy is an unpleasant chore to you. You want to use philosophy to obtain justification, a sense of entitlement, or something, and then throw it away because you think you're already finished with it - that you've obtained a framework theory which already suits your needs, and you can now focus on the needs. But you're not a true reductionist in the sense you defined reductionism, unless you also scrap the belief in the territory. I don't care what you choose as long as you're fine with it, but I don't want you to contradict yourself.

There is no way to express the existence of the "territory" as a meaningfully true statement. Or if there is, I haven't heard of it. It is a completely arbitrary declaration you use to create a framework for the rest of the things you do. You can't construct a "metatheory of reality" which is about the territory, which you suppose to exist, and have that same territory prove the metatheory is right. The territory may contain empirical evidence that the metatheory is okay, but no algorithm can use that evidence to produce proof for the metatheory, because:

  • From "territory's" point of view, the metatheory is undefined.
  • But the notion of gathering empirical evidence is meaningless if the metatheory, according to which the "territory" exists, is undefined.

Therefore, you have to define it if you want to use it for something, and just accept the fact that you can't prove it to be somehow true, much less use its alleged truth to prove something else false. You can believe what you want, but you can't make an AI that would use "territory" to construct a metatheory of territory, if it's somehow true to the AI that territory is all there is. The AI can't even construct a metatheory of "map and territory", if it's programmed to hold as somehow true that map and territory are the only things that exist. This entails that the AI cannot conceptualize its own metaphysical beliefs even as well as you can. It could not talk about them at all. To do so, it would have to be able to construct arbitrary metatheories on its own. This can only be done if the AI holds no metaphysical belief as infallible, that is, the AI is a reductionist in your meaning of the word.

I've seen some interest in AI on LW. If you really would like to one day construct a very human-like AI, you will have problems if you cannot program an AI that can conceptualize the structure of its own cognitive processes also in terms that do not include realism. Because humans are not realists all the time. Their mind has a lot of features, and the metaphysical assumption of realism is usually only constructed when it is needed to perform some task. So if you want to have that assumption around all the time, you'll just end up adding unnecessary extra baggage to the AI, which will probably also make the code very difficult to comprehend. You don't want to lug the assumption around all the time just because it's supposed to be true in some way nobody can define.

You could as well have a reductionist theory which only constructs realism (i.e. the declaration that an external world exists) under certain conditions. Now, philosophy doesn't usually include such theories, because the discipline is rather outdated, but there's no inherent reason why it can't be done. Realism is neither true nor false in any meaningful and universal way. You are free to state it to exist if you are going to use that statement for something. But if you just say it, as if it would mean something in and of itself, you are not saying anything meaningful.

I hope you were interested in my rant.

Replies from: thomblake, DSimon, DSimon
comment by thomblake · 2012-01-16T22:40:48.118Z · LW(p) · GW(p)

I don't understand the notion of truth you are using.

A belief is true when it corresponds to reality. Or equivalently, "X" is true iff X.

But you haven't defined reality.

In the map/territory distinction, reality is the territory. Less figuratively, reality is the thing that generates experimental results. From The Simple Truth:

I need different names for the thingies that determine my predictions and the thingy that determines my experimental results. I call the former thingies ‘belief’, and the latter thingy ‘reality’.

comment by DSimon · 2012-01-17T00:40:02.961Z · LW(p) · GW(p)

I don't follow why you claim that reductionism and realism are incompatible. I think this may be because I'm very confused when I try to figure out, from context, what you mean by "realism", and I strongly suspect that that's because you don't have a definition of that word which can be used in tests for updating predictions, which is the sort of thing LWers look for in a useful definition.

Basically, I'm inclined to agree with you when you say:

Realism is neither true nor false in any meaningful and universal way. You are free to state it to exist if you are going to use that statement for something. But if you just say it, as if it would mean something in and of itself, you are not saying anything meaningful.

This is a really good reason in my experience for not getting into long discussions about "But what is reality, really?"

comment by DSimon · 2012-01-17T00:45:23.256Z · LW(p) · GW(p)

Because humans are not realists all the time. Their mind has a lot of features, and the metaphysical assumption of realism is usually only constructed when it is needed to perform some task.

Actually, this may be a good point for me to try to figure out what you mean by "realism", because here you seem to have connected that word to some but not all strategies of problem-solving. Can you give me some specific examples of problems which the mind tends to use realism in solving, and problems where it doesn't?

Replies from: Tuukka_Virtaperko
comment by Tuukka_Virtaperko · 2012-01-17T03:19:22.155Z · LW(p) · GW(p)

I got "reductionism" wrong, actually. I thought the author was using some nonstandard definition of reductionism, which would have been something to the effect of not having unnecessary declarations in a theory. I did not take into account that the author could actually be what he says he is, no bells and whistles, because I didn't take into account that reductionism could be taken seriously here. But that just means I misjudged. Of course I am not necessarily even supposed to be on this site. I am looking for people who might give useful ideas for theoretical work which could be useful for constructing AI, and I'm trying to check whether my approach is deemed intelligible here.

"Realism" is the belief that there is an external world, usually thought to consist of quarks, leptons, forces and such. It is typically thought of as a belief or a doctrine that is somehow true, instead of just an assumption an AI or a human makes because it needs to. Depending on who labels themself as a realist and on what mood is he, this can entail that everybody who is not a realist is considered mistaken.

An example of a problem whose solution does not need to involve realism is: "John is a small kid who seems to emulate his big brother almost all the time. Why is he doing this?" Possible answers would be: "He thinks his brother is cool" or "He wants to annoy his brother" or "He doesn't emulate his brother, they are just very similar". Of course you could just brain-scan John. But if you really knew John, that's not what you would do, unless brain scanners were about as common and inexpensive as laptops, and had much better functionality than they currently do.

In the John problem, there's no need to construct the assumption of a physical world, because the problem would be intelligible even if you met John in a dream. You can't take a physical brain scanner with you into a dream, so you can't brain-scan John. But you can analyze John's behavior with the same criteria by which you would analyze him had you met him while awake.

I'm not trying to impose any views on you, because I'm basically just trying to find out whether someone is interested in this kind of stuff. The point is that I'm trying to construct a framework theory for AI that is not grounded on anything other than sensory (or emotional, etc.) perception - all the abstract parts are defined recursively. Structurally, the theory is intended to resemble a programming language with dynamic typing, as opposed to static typing. The theory would be pretty much both philosophy and AI.

The problem I see now is this. My theory, RP, is founded on the notion that important parts of thinking are based on metaphysical emergence. The main recursion loop of the theory, in its current form, will not create any information if only reduction is allowed. I would allow both, but if the people on LW are reductionists, I would suppose the logical consequence is that they believe my theory cannot work. And that's why I'm a bit troubled by the notion that you might accept reductionism as some sort of an axiom, because you don't want to have a long philosophical conversation and would prefer to settle down with something that currently seems reasonable. So should I expect you to not want to consider other options? It's strange that I should go elsewhere with my project, because that would amount to you rejecting an AI theory on the grounds that it contradicts your philosophical assumptions. Yet my common-sense expectation would be that you'd find AI more important than philosophy.

Replies from: DSimon
comment by DSimon · 2012-01-17T04:21:02.952Z · LW(p) · GW(p)

The point is that I'm trying to construct a framework theory for AI that is not grounded on anything other than sensory (or emotional, etc.) perception - all the abstract parts are defined recursively. Structurally, the theory is intended to resemble a programming language with dynamic typing, as opposed to static typing. [...] The main recursion loop of the theory, in its current form, will not create any information if only reduction is allowed.

You seem to be overthinking this. Reductionism is "merely" a really useful cognition technique, because calculating everything at the finest possible level is hopelessly inefficient. Perhaps a simple practical example is needed:

An AI that can use reductionism can say "Oh, that collection of pixels within my current view is a dog, and this collection is a man, and the other collection is a leash", and go on to match against (and develop on its own) patterns about objects at the coarser-than-pixel size of dogs, men, and leashes. Without reductionism, it would be forced to do the pattern matching for everything, even for complex concepts like "Man walking a dog", directly at the pixel level, which is not impossible but is certainly a lot slower to run and harder to update.

If you've ever refactored a common element out in your code into its own module, or even if you've used a library or high-level language, you are also using reductionism. The non-reductionistic alternative would be something like writing every program from scratch, in machine code.
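
A toy sketch of that difference, with stand-in recognizers (the function names and the set-of-labels "scene" are hypothetical placeholders for real learned models over pixels):

```python
# Hierarchical (reductionist-style) pattern matching, as a toy.
# Each recognizer is a placeholder for real pixel-level matching.

def find_dog(scene):
    return "dog" in scene

def find_man(scene):
    return "man" in scene

def find_leash(scene):
    return "leash" in scene

def man_walking_dog(scene):
    # composed from coarser concepts, not re-derived from raw pixels
    return find_man(scene) and find_dog(scene) and find_leash(scene)

print(man_walking_dog({"man", "dog", "leash", "tree"}))  # True
```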

Replies from: Tuukka_Virtaperko
comment by Tuukka_Virtaperko · 2012-01-17T11:02:51.013Z · LW(p) · GW(p)

Okay. That sounds very good. And it would seem to be in accordance with this statement:

Reductionism is not a positive belief, but rather, a disbelief that the higher levels of simplified multilevel models are out there in the territory.

If reductionism does not entail that I must construct the notion of a territory and include it in my conceptualizations at all times, it's not a problem. I now understand even better why I was confused by this. This kind of reductionism is not reductive physicalism. It's hardly a philosophical statement at all, which is good. I would say that "the notion of higher levels being out there in the territory" is meaningless, but expressing disbelief in that notion is apparently intended to convey approximately the same meaning.

RP doesn't yet actually include reduction. It's about next on the to-do list. Currently it includes an emergence loop that is based on the power set function. The function produces a staggering amount of information in just a few cycles. It seems to me that this is because instead of accounting for the emergence relations the mind actually performs, it accounts for all defined emergence relations the mind could perform. So the theory is clearly still under construction, and it doesn't yet have any kind of an algorithm part. I'm not much of a coder, so I need to work with someone who is. I already know one mathematician who likes to do this stuff with me. He's not interested in the metaphysical part of the theory, and even said he doesn't want to know too much about it. :) I'm not guaranteeing RP can be used for anything at all, but it's interesting.
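
(For scale, a quick demonstration of why iterating the power set blows up: each cycle maps a set of size n to one of size 2**n, so the sizes form a tower of exponents.)

```python
import math

# |P(S)| = 2**|S|, so iterating the power-set operation grows as a
# tower of exponents. Starting from a two-element set:
n = 2
for cycle in range(1, 5):
    n = 2 ** n
    digits = math.floor(math.log10(n)) + 1
    print(f"cycle {cycle}: power set size has {digits} digits")
# cycle 1: 4 (1 digit); cycle 2: 16 (2 digits);
# cycle 3: 65536 (5 digits); cycle 4: a 19729-digit number
```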

comment by Voltairina · 2012-03-04T12:50:37.056Z · LW(p) · GW(p)

One way of tracing the, uhm, data, I guess, might be to say: we see, naively, a chair. And we know that underneath the chair out there is, at the bottom level we're aware of, energy fields and fundamental forces. And those concepts, like the chair, correspond to a physics model, which is in turn a simplification/distillation of vast reams of recorded experimental data into said rules/objects, which are in turn actual results of taking measurements during experiments, which in turn are the results of actual physical/historical events. So the reductionist model - fields and forces - I think is still a map of experimental results, tagged with, like, interpretations that tie them together, I guess.

Replies from: Voltairina
comment by Voltairina · 2012-03-04T12:51:28.026Z · LW(p) · GW(p)

Er, I guess I should say it's strictly /not/ an attempt at a simplified description, but a minimal description which can still account for everything...

comment by Voltairina · 2012-03-04T18:32:40.967Z · LW(p) · GW(p)

Whatever the bottom level of our understanding of the map, even a one-level map is still above the territory, so there are still levels below that which carry back to, presumably, territory. We find some fields-and-forces model that accounts for all the data we're aware of. But it's always going to be possible - less likely the more data we get - that something flies along and causes us to modify it. So, if we wanted to continue the reductionistic approach to the model we're making of our world, stripping away higher-level abstractions, we'd say that it's an in-process unifying simplification of, and minimal inferences from, the results of many experiments, which correspond to measurements of the world at certain levels of sensitivity by different means.

Replies from: Voltairina
comment by Voltairina · 2012-03-04T18:39:15.976Z · LW(p) · GW(p)

Like, I can draw a picture of a face in increasingly finer detail, down to "all the detail I see", but it's still going to contain unifying assumptions - like a vector representation of a face, versus the data, which may be pixellated - made up of specific individual measurement events. Or I can show a chart of where and how all the nerves are excited in my eyes, which is the "raw data" level stuff that I have access to about what's "out there", for which the simplest explanation is most probably a face. Actually it's kind of interesting to think of it that way, because a lot of our raw mental data is "vectored" already. But whenever we do a linear regression of a dataset, that's also a reduction-to-a-vector of something.
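
A minimal sketch of that last point, assuming NumPy: a least-squares fit reduces ten noisy measurements to a two-component vector (slope, intercept).

```python
import numpy as np

# Ten noisy points collapse to two numbers: the regression "reduces"
# the dataset to a short vector.
rng = np.random.default_rng(0)
x = np.arange(10.0)
y = 3.0 * x + 1.0 + rng.normal(0, 0.1, size=10)

slope, intercept = np.polyfit(x, y, deg=1)
print(round(float(slope), 2), round(float(intercept), 2))  # near 3.0 and 1.0
```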

comment by Ronny Fernandez (ronny-fernandez) · 2012-06-08T23:50:05.906Z · LW(p) · GW(p)

This post represents, for me, the typical LW response to something like the Object Oriented Ontologies of Paul Levi Bryant and DeLanda. These ontologies attempt to give things like numbers, computations, atoms, fundamental particles, galaxies, higher-level laws, fundamental laws, concepts, referents of concepts, etc. equal ontological status. Hence, they are strictly against making a distinction between map and territory: there is only territory, and all things that are, are objects.

I'm a confident reductionist, model/reality (Bayesian) type guy. I'm not having major second thoughts about that right now. But engaging in productive debate with object-oriented philosophers might be a good chance for us to check ourselves, i.e., see how confident we really should be in our reductionist ontology. There are leading philosophers, and other scientists, who are opposed to reductionism, and opposed to correlationism. They have blogs, and are often open to debate. There's no point missing out on talking with someone who sees the universe fundamentally differently from you in a way that is technically derivable.

comment by aceofspades · 2012-07-02T04:46:06.831Z · LW(p) · GW(p)

Does the reductionist model give different predictions about the world than the non-reductionist model? If so, are any easily checked?

comment by Kawoomba · 2013-02-05T11:55:31.331Z · LW(p) · GW(p)

Solomonoff Induction, insomuch as it is related to interpretations at all, rejects the many-worlds interpretation, because the valid (non-falsified) code strings are the ones whose output begins with the actual experimental outcome, rather than listing all possible outcomes; i.e., they are very much Copenhagen-like.

Has this point ever been answered? If we are content with the desired output appearing somewhere along the line - as opposed to the start - then the simplest theory of everything would be printing enough digits of pi, and our universe would be described somewhere down the line.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-02-05T19:39:54.935Z · LW(p) · GW(p)

Solomonoff induction is about putting probability distributions on observations - you're looking for the combination of the simplest program that puts the highest probability on observations. Technically, the original SI doesn't talk about causal models you're embedded in, just programs that assign probabilities to experiences.

Generalizing somewhat, for QM as it appears to humans, the generalized-SI-selected hypothesis would be something along the lines of one program that extrapolated the wavefunction, then another program that looked for people inside it and translated the underlying physics into the "observed data" from their perspective, then put probabilities on the sequences of data corresponding to integral squared modulus. Note that you also need an interface from atoms to experiences just to e.g. translate a classical atomic theory of matter into "I saw a blue sky", and an implicit theory of anthropics/sum-probability-measure too if the classical universe is large enough to have more than one copy of you.
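
For reference, the textbook form of the original Solomonoff prior over observation sequences (the generalization described above builds on this; the notation is the standard one, not specific to this comment):

```latex
% Solomonoff's universal prior: the probability of an observation
% sequence x is the summed weight of every program p that makes a
% universal prefix machine U output a sequence beginning with x.
M(x) = \sum_{p \,:\, U(p) = x*} 2^{-|p|}
```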

Replies from: Kawoomba, whowhowho
comment by Kawoomba · 2013-02-05T19:42:35.398Z · LW(p) · GW(p)

Thanks for this. I'll mull it over.

Replies from: private_messaging
comment by whowhowho · 2013-02-05T20:04:12.474Z · LW(p) · GW(p)

It isn't at all clear why all that would add up to something simpler than a single-world theory.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-02-05T20:08:19.445Z · LW(p) · GW(p)

Single-world theories still have to compute the wavefunction, identify observers, and compute the integrated squared modulus. Then they have to pick out a single observer with probability proportional to the integral, peek ahead into the future to determine when a volume of probability amplitude will no longer strongly causally interact with that observer's local blob, and eliminate that blob from the wavefunction. Then translating the reductionist model into experiences requires the same complexity as before.

Basically, it's not simpler for the same reason that in a spatially big universe it wouldn't be 'simpler' to have a computer program that picked out one observer, calculated when any photon or bit of matter was moving away and wasn't going to hit anything that would reflect it back, and then eliminated that matter.

comment by Rixie · 2013-03-29T17:24:37.088Z · LW(p) · GW(p)

This website is doing amazing things to the way I think every day, as well as occasionally making me die of laughter.

Thank you, Eliezer!

Replies from: wedrifid
comment by wedrifid · 2013-09-12T08:53:35.763Z · LW(p) · GW(p)

as well as occasionally making me die of laughter.

But you got better.

comment by RogerS · 2013-04-23T16:24:57.221Z · LW(p) · GW(p)

"having different descriptions at different levels" is itself something you say that belongs in the realm of Talking About Maps, not the realm of Talking About Territory

Why do we distinguish “map” and “territory”? Because they correspond to “beliefs” and “reality”, and we have learnt elsewhere in the Sequences that

my beliefs determine my experimental predictions, but only reality gets to determine my experimental results.

Let’s apply that test. It isn’t only predictions that apply at different levels; so do the results. We can have right or wrong models at quark level, atom level, crystal level, and engineering-component level. At each level, the fact that one model is right and another wrong is a fact about reality: it is Talking about Territory. When we say a 747 wing is really there, we mean that (for example) visualising it as a saucepan will generate expectations that the experimental results will not fulfil, in the way that they will when we visualise it as a wing. Indeed, we can have many different models of the wing, all equally correct - since they all result in predictions that conform to the same observations. Which correct model we choose is in our heads. The fact that it has to be (equivalent to) a model of a wing to be correct is in the Territory. In short, when Talking about Territory we can describe things at as many levels (of aggregation) as yield descriptions that can be tested against observation.

at different levels

What exactly is meant by “levels” here? The Naval Gunner is arguing about levels of approximation. The discussion of Boeing 747 wings is an argument about levels of aggregation. They are not the same thing. Treating the forces on an aircraft wing at the aggregate level means leaving out internal details that do not, in themselves, affect the result. There will certainly be approximations involved in practice, of course, but they don’t stem from the process of aggregation itself, which is essentially a matter of combining all the relevant force equations algebraically and eliminating the internal forces before solving, rather than combining the calculated forces numerically (see the sketch below).
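
Here is a two-particle check of that point, with arbitrary numbers: the internal forces cancel pairwise by Newton's third law, so the aggregate equation of motion is exact, not an approximation.

```python
import numpy as np

m = np.array([2.0, 3.0])                    # two particle masses (arbitrary)
f_ext = np.array([[1.0, 0.0],               # external force on particle 0
                  [0.0, 4.0]])              # external force on particle 1
f_int_on_0 = np.array([0.7, -0.2])          # internal force from 1 on 0
f_int_on_1 = -f_int_on_0                    # third law: equal and opposite

a = np.array([(f_ext[0] + f_int_on_0) / m[0],
              (f_ext[1] + f_int_on_1) / m[1]])

# Total mass times centre-of-mass acceleration...
com_accel = (m[:, None] * a).sum(axis=0) / m.sum()
# ...equals the sum of the external forces alone: the internal terms are gone.
print(m.sum() * com_accel, f_ext.sum(axis=0))   # both print [1. 4.]
```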

...the way physics really works, as far as we can tell, is that there is only the most basic level—the elementary particle fields and fundamental forces

The way that reality works, as far as we can tell, is that there are basic ingredients, with their properties, which in any given system at any given instant exist in a particular configuration. Now reality is not just the ingredients but also the configuration - a wrong model of the configuration will give wrong predictions just as a wrong model of the ingredients will. The possible configurations include known stable structures. These structures are likewise real, because any model of the configuration which cannot be transformed into one that includes the structure in question is in conflict with reality. Physics, as I understand it, comprises (a) laws that are common to different configurations of the ingredients, and (b) laws that are common to different configurations of the known stable structures. Physicalism implies the belief that laws (b) are always consistent with laws (a) when both are sufficiently accurate.

...The laws of physics do not contain distinct additional causal entities that correspond to lift or airplane wings

True, but the key word here is “additional”. Newton’s laws were undoubtedly laws of physics, and in my school physics lessons they were expressed in terms of forces on bodies, rather than on their constituent particles. The laws for forces on constituent particles were then derived from Newton’s laws by a thought experiment in which a body is divided up. In higher education today the reverse process is the norm, but reality is indifferent to which equivalent formulation we use: both give identical predictions. [Original wording edited]

General Relativity contains the additional causal entity known as space-time curvature, which is an aggregate effect of all the massive particles in the universe in their given configuration, and so is not a natural fit for the Procrustean bed of reductionism. [Postscript] Interestingly, I've read that Newton was never happy with his idea of gravitation as a force of attraction between two things, because it implied a property shared between the two things concerned and therefore intrinsic to neither - but he failed to find a better formulation.

The critical words are really and see

Indeed, but when you see a wing it is not just in the mind; it is also evidence of how reality is configured. It is the result of the experiment you perform by looking.

...the laws of physics themselves, use different descriptions at different levels—as yonder artillery gunner thought

What the gunner really thought is pure speculation, of course, but this assumption by EY raises an important point about meta-models.

In thought experiments the outcome is determined by the applicable universal laws - that's meta-model (A). In any real-world case you need a model of the application as well as models of the universal laws - that's meta-model (B). An actual artillery shell will be affected by things like air resistance, so the greater accuracy of Einstein's laws in textbook cases is no guarantee of their giving more accurate results in this case. EY obviously knew this, but his meta-model excluded it from consideration here. Treating the actual application as a case governed only by Newton's or Einstein's laws is itself a case of the "Mind Projection Fallacy" - projecting meta-model (A) onto a real-world application. So it's not a case of the gunner mistaking a model for reality, but of mistaking the criteria for choosing between one imperfect model and another. I imagine gunners are generally practical men, and in the applied sciences it is very common for competing theories to have their own fields of application where they are more accurate than the alternatives - so although he was clearly misinformed, at least his meta-model was the right one. (A rough comparison of the sizes of the two corrections follows below.)
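
For a rough sense of scale (the shell speed is an assumption, and the drag figure a ballpark for long-range artillery):

```python
v = 1000.0                     # assumed shell speed, m/s
c = 3.0e8                      # speed of light, m/s

sr_correction = (v / c) ** 2   # leading-order size of special-relativistic effects
drag_correction = 0.3          # ballpark: drag can cut the vacuum range by tens of percent

print(f"relativity: ~{sr_correction:.0e}")    # ~1e-11
print(f"air drag:   ~{drag_correction:.0e}")  # ~3e-01: about ten orders of magnitude larger
```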

[Postscript] An arguable version of reductionism is the belief that laws about the ingredients of reality are in some sense "more fundamental" than laws about stable structures of those ingredients. This cannot be an empirical truth, since the two sets of laws give the same predictions where they overlap, and so cannot be empirically distinguished. Neither is any logical contradiction implied by its negation. It can only be a metaphysical truth, whatever that is. Doesn't it come down to believing Einstein's essentialist concept of science over Bohr's instrumentalist version - that science doesn't just describe, but also tells us what is really there? So pick Bohr as an opponent if you must, not some anonymous gunner.

comment by A1987dM (army1987) · 2013-09-12T08:44:29.277Z · LW(p) · GW(p)

"No," he said, "I mean that relativity will give you the wrong answer, because things moving at the speed of artillery shells are governed by Newtonian mechanics, not relativity."

[extreme steelman mode on]

By “relativity” he must have meant the ultrarelativistic approximation, of course.

[extreme steelman mode off]

:-)

comment by 3p1cd3m0n · 2015-01-03T22:10:45.598Z · LW(p) · GW(p)

Should one really be so certain that there are no higher-level entities? You said that simulating higher-level entities takes fewer computational resources, so perhaps our universe is a simulation, and the creators, in an effort to save computational resources, made the universe do its computations on higher-level entities whenever no-one is looking at the "base" entities. Far-fetched, maybe, but not completely implausible.

Perhaps if we start observing too many lower-level entities, the world will run out of memory. What would that look like?
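
For what it's worth, the scenario has a direct computational analogue in lazy evaluation - compute cheap high-level state every tick, and pay for the expensive base level only when someone looks. A purely illustrative sketch:

```python
class LazyRegion:
    def __init__(self):
        self._base_level = None                   # unevaluated until observed

    def tick(self):
        return "high-level physics step"          # cheap aggregate update

    def observe_base_level(self):
        if self._base_level is None:              # pay the cost only once,
            self._base_level = self._simulate()   # and only if anyone looks
        return self._base_level

    def _simulate(self):
        return ["quark state"] * 1_000_000        # stand-in for the hard part

region = LazyRegion()
region.tick()                              # stays cheap while nobody looks
print(len(region.observe_base_level()))    # the memory cost appears here
```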

comment by MarsColony_in10years · 2015-03-05T06:53:29.696Z · LW(p) · GW(p)

But this is just the brain trying to efficiently compress an object that it cannot remotely begin to model on a fundamental level. The airplane is too large. Even a hydrogen atom would be too large. Quark-to-quark interactions are insanely intractable. You can't handle the truth.

Less Wrong's "The Futility of Emergence" article argues against using the word "emergence", claiming that it provides no additional information. The argument went that literally everything is an emergent property, since everything can be boiled down to more fundamental components. (I would argue that it is never actually used in this broad sense, but rather to indicate cases where relatively simple circumstances, when iterated enough times, give rise to much more complex interactions in ways which are difficult to fully model.)

Isn't this article using "reductionism" in exactly the same sense as "emergence" in the broad sense? It isn't actually using "reductionism" as a curiosity stopper, though. It isn't saying "airplanes work because of quarks" and leaving it at that, which is what "The Futility of Emergence" was warning against. Still, this highlights an interesting exception to the rationalist rule against statements which are always true. In terms of Bayes' theorem, a statement which excludes nothing proves nothing. But if we aren't trying to provide new insights into specific phenomena, only to make facts about the universe more explicit, then such statements can still serve a purpose. This only holds if the statement is falsifiable, though. Generalizations such as "free will is an emergent property" or "free will can be reduced to more fundamental components" can still be falsified.
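
The Bayesian point can be checked directly (toy priors assumed): if a hypothesis excludes nothing, it assigns the evidence probability 1, and the posterior is just the prior.

```python
priors = {"H1": 0.3, "H2": 0.7}
likelihoods = {"H1": 1.0, "H2": 1.0}   # a statement that excludes nothing

evidence = sum(priors[h] * likelihoods[h] for h in priors)
posteriors = {h: priors[h] * likelihoods[h] / evidence for h in priors}
print(posteriors)   # identical to the priors: zero information gained
```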

I guess the takeaway message from the two articles is not to accept things like "because quarks" as answers, but also to understand that "because quarks" is in fact correct. It's the answer (or one of the answers) that you'd eventually get if you just kept asking "why" until you reached the most fundamental underlying principles. We shouldn't look for our explanations only in terms of fundamentals, or only in broader terms. If we want a full understanding, we need to examine all of the layers of the onion.

comment by Max Hodges (max-hodges) · 2020-05-05T18:10:18.457Z · LW(p) · GW(p)

Minsky, writing in The Society of Mind, might shed some light here (paraphrasing):

How can a box made of six boards hold a mouse, when a mouse could just walk away from any individual board? No individual board has any "containment" or "mouse-tightness" on its own. So is "containment" an emergent property?

Of course, it is the way a box prevents motion in all directions, because each board bars escape in a certain direction. The left side keeps the mouse from going left, the right from going right, the top keeps it from leaping out, and so on. The secret of a box is simply in how the boards are arranged to prevent motion in all directions!

That's what containing means. So it's silly to expect any separate board by itself to contain any containment, even though each contributes to the containing. It is like the cards of a straight flush in poker: only the full hand has any value at all.

"The same applies to words like life and mind. It is foolish to use these words for describing the smallest components of living things because these words were invented to describe how larger assemblies interact. Like boxing-in, words like living and thinking are useful for describing phenomena that result from certain combinations of relationships. The reason box seems nonmysterious is that everyone understands how the boards of a well-made box interact to prevent motion in any direction. In fact, the word life has already lost most of its mystery — at least for modern biologists, because they understand so many of the important interactions among the chemicals in cells. But mind still holds its mystery — because we still know so little about how mental agents interact to accomplish all the things they do."

comment by TAG · 2020-05-05T22:25:58.862Z · LW(p) · GW(p)

This, as I see it, is the thesis of reductionism. Reductionism is not a positive belief, but rather, a disbelief that the higher levels of simplified multilevel models are out there in the territory.

The higher levels could have been, though. The fact that we have high-level abstractions in our heads does not by itself mean that there is nothing corresponding to them in the territory. (To make that argument is a version of the fallacy that, since there is a form of probability in the map, there can be none in the territory.)

comment by MikkW (mikkel-wilson) · 2020-08-10T19:25:48.683Z · LW(p) · GW(p)

Tangential to the main point: one hypothesis for why the artillery gunner thought that "General relativity gives you the wrong answer" is that maybe he had experience with software which could run in either "Newtonian mode" or "GR mode", where the software had to make approximations for the relativistic calculation to be even roughly tractable (approximations which might nonetheless be useful for roughly solving problems where relativistic effects matter, but which would only reduce accuracy in non-relativistic situations).

Now, the "GR mode" (with approximations) would be a different model from real General Relativity, but could have given the gunner the impression that GR gives substantially different (and worse) answers from Newtonian mechanics in non-relativistic situations
