A Priori

post by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2007-10-08T21:02:14.000Z · LW · GW · Legacy · 133 comments

Traditional Rationality is phrased as social rules, with violations interpretable as cheating: if you break the rules and no one else is doing so, you're the first to defect - making you a bad, bad person.  To Bayesians, the brain is an engine of accuracy: if you violate the laws of rationality, the engine doesn't run, and this is equally true whether anyone else breaks the rules or not.

Consider the problem of Occam's Razor, as confronted by Traditional philosophers.  If two hypotheses fit the same observations equally well, why believe the simpler one is more likely to be true?

You could argue that Occam's Razor has worked in the past, and is therefore *likely* to continue to work in the future.  But this, itself, appeals to a prediction from Occam's Razor.  "Occam's Razor works up to October 8th, 2007 and then stops working thereafter" is more complex, but it fits the observed evidence equally well.

You could argue that Occam's Razor is a reasonable distribution on prior probabilities.  But what is a "reasonable" distribution?  Why not label "reasonable" a very complicated prior distribution, which makes Occam's Razor work in all observed tests so far, but generates exceptions in future cases?

Indeed, it seems there is no way to justify Occam's Razor except by appealing to Occam's Razor, making this argument *unlikely* to convince any judge who does not already accept Occam's Razor.  (What's special about the words I italicized?)

If you are a philosopher whose daily work is to write papers, criticize other people's papers, and respond to others' criticisms of your own papers, then you may look at Occam's Razor and shrug.  Here is an end to justifying, arguing and convincing.  You decide to call a truce on writing papers; if your fellow philosophers do not demand justification for your un-arguable beliefs, you will not demand justification for theirs.  And as the symbol of your treaty, your white flag, you use the phrase "a priori truth".

But to a Bayesian, in this era of cognitive science and evolutionary biology and Artificial Intelligence, saying "a priori" doesn't explain why the brain-engine runs.  If the brain has an amazing "a priori truth factory" that works to produce accurate beliefs, it makes you wonder why a thirsty hunter-gatherer can't use the "a priori truth factory" to locate drinkable water.  It makes you wonder why eyes evolved in the first place, if there are ways to produce accurate beliefs without looking at things.

James R. Newman said:  "The fact that one apple added to one apple invariably gives two apples helps in the teaching of arithmetic, but has no bearing on the truth of the proposition that 1 + 1 = 2."  The Internet Encyclopedia of Philosophy defines "a priori" propositions as those knowable independently of experience.  Wikipedia quotes Hume:  Relations of ideas are "discoverable by the mere operation of thought, without dependence on what is anywhere existent in the universe."  You can see that 1 + 1 = 2 just by thinking about it, without looking at apples.

But in this era of neurology, one ought to be aware that thoughts are existent in the universe; they are identical to the operation of brains.  Material brains, real in the universe, composed of quarks in a single unified mathematical physics whose laws draw no border between the inside and outside of your skull.

When you add 1 + 1 and get 2 by thinking, these thoughts are themselves embodied in flashes of neural patterns.  In principle, we could observe, experimentally, the exact same material events as they occurred within someone else's brain.  It would require some advances in computational neurobiology and brain-computer interfacing, but in principle, it could be done.  You could see someone else's engine operating materially, through material chains of cause and effect, to compute by "pure thought" that 1 + 1 = 2.  How is observing this pattern in someone else's brain any different, as a way of knowing, from observing your own brain doing the same thing?  When "pure thought" tells you that 1 + 1 = 2, "independently of any experience or observation", you are, in effect, observing your own brain as evidence.

If this seems counterintuitive, try to see minds/brains as engines - an engine that collides the neural pattern for 1 and the neural pattern for 1 and gets the neural pattern for 2.  If this engine works at all, then it should have the same output if it observes (with eyes and retina) a similar brain-engine carrying out a similar collision, and copies into itself the resulting pattern.  In other words, for every form of a priori knowledge obtained by "pure thought", you are learning exactly the same thing you would learn if you saw an outside brain-engine carrying out the same pure flashes of neural activation.  The engines are equivalent, the bottom-line outputs are equivalent, the belief-entanglements are the same.

There is nothing you can know "a priori", which you could not know with equal validity by observing the chemical release of neurotransmitters within some outside brain.  What do you think you are, dear reader?

This is why you can predict the result of adding 1 apple and 1 apple by imagining it first in your mind, or punch "3 x 4" into a calculator to predict the result of imagining 4 rows with 3 apples per row.  You and the apple exist within a boundary-less unified physical process, and one part may echo another.

Are the sorts of neural flashes that philosophers label "a priori beliefs" arbitrary?  Many AI algorithms function better with "regularization" that biases the solution space toward simpler solutions.  But the regularized algorithms are themselves more complex; they contain an extra line of code (or 1000 extra lines) compared to unregularized algorithms.  The human brain is biased toward simplicity, and we think more efficiently thereby.  If you press the Ignore button at this point, you're left with a complex brain that exists for no reason and works for no reason.  So don't try to tell me that "a priori" beliefs are arbitrary, because they sure aren't generated by rolling random numbers.  (What does the adjective "arbitrary" mean, anyway?)
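
As a concrete illustration of that trade-off, here is a minimal sketch (construction and numbers mine, not from the post) in which the regularized learner is strictly more code than the unregularized one, yet generalizes better because the extra penalty term biases it toward simpler fits. It assumes only numpy.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 12)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.2, x.size)  # noisy observations

def fit_poly(x, y, degree, ridge=0.0):
    """Least-squares polynomial fit; `ridge` is the "extra line of code"
    that penalizes large coefficients (L2 regularization)."""
    X = np.vander(x, degree + 1)
    return np.linalg.solve(X.T @ X + ridge * np.eye(degree + 1), X.T @ y)

x_test = np.linspace(0, 1, 100)
y_true = np.sin(2 * np.pi * x_test)
for ridge in (0.0, 1e-3):
    coeffs = fit_poly(x, y, degree=9, ridge=ridge)
    err = np.mean((np.polyval(coeffs, x_test) - y_true) ** 2)
    print(f"ridge={ridge}: mean squared test error {err:.3f}")

# The unregularized degree-9 fit chases the noise; the single added
# penalty term - itself extra complexity - yields the simpler and more
# accurate curve.
```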

You can't excuse calling a proposition "a priori" by pointing out that other philosophers are having trouble justifying their propositions.  If a philosopher fails to explain something, this fact cannot supply electricity to a refrigerator, nor act as a magical factory for accurate beliefs.  There's no truce, no white flag, until you understand why the engine works.

If you clear your mind of justification, of argument, then it seems obvious why Occam's Razor works in practice: we live in a simple world, a low-entropy universe in which there are short explanations to be found.  "But," you cry, "why is the universe itself orderly?"  This I do not know, but it is what I see as the next mystery to be explained.  This is not the same question as "How do I argue Occam's Razor to a hypothetical debater who has not already accepted it?"

Perhaps you cannot argue anything to a hypothetical debater who has not accepted Occam's Razor, just as you cannot argue anything to a rock.  A mind needs a certain amount of dynamic structure to be an argument-acceptor.  If a mind doesn't implement Modus Ponens, it can accept "A" and "A->B" all day long without ever producing "B".  How do you justify Modus Ponens to a mind that hasn't accepted it?  How do you argue a rock into becoming a mind?
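
A toy rendering of the Modus Ponens point (mine, hedged; the names are arbitrary): a store of sentences is inert, and nothing produces "B" until some engine actually applies the rule to the store.

```python
def forward_chain(facts, rules):
    """Apply modus ponens (from A and A->B, derive B) until nothing new
    can be derived. `rules` is a list of (antecedent, consequent) pairs."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for antecedent, consequent in rules:
            if antecedent in derived and consequent not in derived:
                derived.add(consequent)
                changed = True
    return derived

print(forward_chain({"A"}, [("A", "B"), ("B", "C")]))  # {'A', 'B', 'C'}

# Remove the loop and the "mind" holds A and A->B forever without ever
# producing B: the rock, in this toy, is the version with no dynamics.
```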

Brains evolved from non-brainy matter by natural selection; they were not justified into existence by arguing with an ideal philosophy student of perfect emptiness.  This does not make our judgments meaningless.  A brain-engine can work correctly, producing accurate beliefs, even if it was merely built - by human hands or cumulative stochastic selection pressures - rather than argued into existence.  But to be satisfied by this answer, one must see rationality in terms of engines, rather than arguments.

133 comments

Comments sorted by oldest first, as this post is from before comment nesting was available (around 2009-02-27).

comment by Robin_Hanson2 · 2007-10-08T21:30:52.000Z · LW(p) · GW(p)

My posts for the next two days will be on related topics.

comment by GreedyAlgorithm · 2007-10-08T22:09:47.000Z · LW(p) · GW(p)

Something feels off about this to me. Now I have to figure out if it's because fiction feels stranger than reality or because I am not confronting a weak point in my existing beliefs. How do we tell the difference between the two before figuring out which is happening? Obviously afterward it will be clear, but post-hoc isn't actually helpful. It may be enough that I get to the point where I consider the question.

On further reflection I think it may be that I identify a priori truths with propositions that any conceivable entity would assign a high plausibility value given enough thought. I think I'm saying "in the limit, experience-invariant" rather than "non-experiential". I believe that some things, like 2+2=4, are experience-invariant: in every universe I can imagine, an entity who knows enough about it should conclude that 2+2=4. Perhaps my imagination is deficient, though. :)

comment by TGGP4 · 2007-10-08T22:21:01.000Z · LW(p) · GW(p)

Generalizing from past observations to future expectations is often referred to in philosophy as the "problem of induction". It has the same problem: you have to accept that induction worked in the past in order to expect it to work in the future, and if Bertrand Russell is right to argue that you were created five seconds ago with false memories, you can't know it worked in the past either. Against that kind of skepticism I can only fall back on a David Stove type "common sense" position, but fortunately I am not interested in persuading others but in understanding the world well enough to attain my goals.

comment by TGGP4 · 2007-10-08T22:21:43.000Z · LW(p) · GW(p)

You left the italics tag on.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2007-10-08T22:24:43.000Z · LW(p) · GW(p)

Greedy, all you're doing is specifying properties into the definition of what you mean by "entity" or "knows enough". I can always build a tape recorder that plays back "Two and two make five!" forever.

TGGP, fixed the tag. And remember, it's not about persuading an ideal philosophy student of perfect emptiness, it's about understanding why the engine works.

comment by Nick_Tarleton · 2007-10-08T22:39:59.000Z · LW(p) · GW(p)

I can rigorously model a universe with different contents, and even one with different laws of physics, but I can't think of how I could rigorously model (as opposed to vaguely imagine) one where 2+2=3. It just breaks everything. This suggests there's still some difference in epistemic status between math and everything else. Are "necessary" and "contingent" no more than semantic stopsigns? How about "logical possibility" as distinct from physical possibility?

comment by Gray_Area · 2007-10-08T23:16:37.000Z · LW(p) · GW(p)

I don't really understand what Eliezer is arguing against. Clearly he understands the value of mathematics, and clearly he understands the difference between induction and deduction. He seems to be arguing that deduction is a kind of induction, but that doesn't make much sense to me.

Nick: you can construct a model where there is a notion of 'natural number' and a notion of 'plus' except this plus happens to act 'oddly' when applied to 2 and 2. I don't think this model would be particularly interesting, but it could be made.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2007-10-08T23:36:24.000Z · LW(p) · GW(p)

Nick, I'm honestly not sure if there's a difference between logical possibility and physical possibility - it involves questions I haven't answered yet, though I'm still diligently hitting Explain instead of Worship or Ignore. But I do know that everything we know about logic comes from "observing" neurons firing, and it shouldn't matter if those neurons fire inside or outside our own skulls.

Gray Area, what I'm arguing is that deduction, induction, and direct sensory experiences, should all be considered as equivalent-to-observation.

comment by Nick_Tarleton · 2007-10-08T23:52:50.000Z · LW(p) · GW(p)

Eliezer: Good answer. I take the same view, although I think the "can you model it" question suggests there is a difference. Do you think a rigorous, consistent (or not provably inconsistent) model of arithmetic or physics is possible where 2+2=3? (or the 3rd decimal place of pi is 2, or Fermat's last theorem is false, or ...)

comment by Tom_McCabe2 · 2007-10-09T00:29:06.000Z · LW(p) · GW(p)

It seems like you could justify Occam's Razor by looking at the past history of discarded explanations. An explanation that is ridiculously complex, yet fits all the observations so far, will probably be broken by the next observation; a simple explanation is less likely to fail in the future. A hypothesis that says "Occam's Razor will work until October 8th, 2007" falls into the general category of "hypotheses with seemingly random exceptions", which should have a history of lesser accuracy than hypotheses with justified exceptions or no exceptions. To quote Virtues: "Simplicity is virtuous in belief, design, planning, and justification. When you profess a huge belief with many details, each additional detail is another chance for the belief to be wrong. Each specification adds to your burden; if you can lighten your burden you must do so. There is no straw that lacks the power to break your back. Of artifacts it is said: The most reliable gear is the one that is designed out of the machine. Of plans: A tangled web breaks. A chain of a thousand links will arrive at a correct conclusion if every step is correct, but if one step is wrong it may carry you anywhere."

Replies from: dmitrii-zelenskii
comment by Дмитрий Зеленский (dmitrii-zelenskii) · 2019-08-16T01:12:08.739Z · LW(p) · GW(p)

But both the rule "have no seemingly random exceptions" and the passage in Virtues are special cases of Occam's Razor. So the argument does become circular (or, at best, is pushed one step back to the "low-entropy universe" and becomes circular there).

comment by logicnazi · 2007-10-09T01:29:03.000Z · LW(p) · GW(p)
"But in this era of neurology, one ought to be aware that thoughts are existent in the universe; they are identical to the operation of brains."

Really? I'm aware that physical outputs are totally determined by physical inputs. Neurology can tell us what sorts of physical causes give rise to what sorts of physical effects. We even have reason to believe that thoughts can be inferred from the physical state of the brain in a lawlike fashion, but this surely doesn't let us infer that thoughts are IDENTICAL to the operation of brains. Merely that they always go together in the actual world.

In particular (as Chalmers argues most convincingly) there is nothing contradictory about imagining that we have the same physical state but totally different experiences, i.e., that it might feel like something totally different to be us. Worse, we actually only know about the physical world through our experiences, so you can't simplify the problem purely down to a physical one. It's actually the more complicated one of why the lawlike relationship between experiences and physical facts is set up right for us to have knowledge. It could have been that the physical world was all the same but we just had totally incoherent experiences.

comment by Gray_Area · 2007-10-09T01:45:45.000Z · LW(p) · GW(p)

"I'm aware that physical outputs are totally determined by physical inputs."

Even this is far from a settled matter, since I think this implies both determinism and causal closure.

comment by Nick_Tarleton · 2007-10-09T01:49:59.000Z · LW(p) · GW(p)

logicnazi, if we can talk about our experiences, our experiences have a causal effect on the physical world. Assuming, as you do, causal closure (which is not known, but the most parsimonious hypothesis), this means that the idea of different experiences with the same physical state is indeed incoherent.

comment by Tom_McCabe2 · 2007-10-09T02:02:43.000Z · LW(p) · GW(p)

"We even have reason to believe that thoughts can be infered from the physical state of the brain in a lawlike fashion but this surely doesn't let us infer that thoughts are IDENTICAL to the operation of brains. Merely that they always go together in the actual world."

Look at airplanes: they all have a bunch of common characteristics like an engine, wings, rudders, etc. If you argued that an airplane was not really "identical" to the pile of parts, but that they just "always went together", people would look at you like you had three heads. Yet, when applied to brains, people think this argument makes sense. A brain is made up of the frontal cortex, visual cortex, auditory cortex, amygdala, pituitary gland, cerebellum, etc.; that's just what it is.

comment by Matthew2 · 2007-10-09T03:18:19.000Z · LW(p) · GW(p)

Tom: I agree with your analogy. Yudkowsky said: "Gray Area, what I'm arguing is that deduction, induction, and direct sensory experiences, should all be considered as equivalent-to-observation."

This is only convincing to someone who believes logic is only possible when there is some physical structure that directly corresponds to logical output. Yet even the evidence indicating this is true uses logic.

I recently started (and then backed out of) a debate with a Christian presuppositionalist. I had no idea how to show how logic itself works except by example. Naturally, there is no lesson on how logic works in the Bible. So then, how does logic itself work and depend on physical structure? He would not answer my questions regarding the methods of rationalists. Since we could not agree on what rationality itself is, I did eventually learn that he doesn't believe in evolution or an old earth. He revealed this information only after realizing I no longer could take him seriously. This only occurred after 25 emails between us! I wasted all that time to learn he didn't believe the results of modern science. I'll never make that mistake again. Anyone agree or disagree with the futility of debating someone who believes the universe is around 6,000 years old (and is also above age 25)?

comment by Shakespeare's_Fool · 2007-10-09T03:37:17.000Z · LW(p) · GW(p)

I am not sure if my understanding of Occam’s Razor matches Eliezer Yudkowsky’s.

I understand it more as (to use a mechanical analogy) “don’t add any more parts to a machine than are needed to make it work properly.”

This seems to fit Occam’s Razor if I take it to be a guide, not a prediction or a law. It does not say that the theory with the fewest parts is more likely to be correct. It just reminds us to take out anything that is unnecessary.

If scientists have often found that theories with more parts are less often correct, that may further encourage us to look for and test the simpler theories first. But it does not tell us that they are more likely to be correct only because they are simpler.

As soon as I try an aesthetic analogy “strip the iPod down to its essential features” (and make them easy to use), I run into trouble. There is no agreement on what the essential features are or on what is easiest to use. (1)

Perhaps Occam works best with a certain type of simplicity. F=MA being much simpler than the Mac OS. Even if it did require a different genius to discover it.

John

(1) I realize that in order to make the mechanical analogy work we need to know what the machine is before we apply Occam’s Razor. Once we start improving the product (replacing the stick shift with automatic transmission, adding air conditioning) we are into feature wars. It is not possible to know in advance what customers will find essential.

But even then we would not want unnecessary parts in the transmission or the air conditioner.

Still, taking out all unnecessary parts won’t guarantee that the machinery will work properly any more than removing unnecessary parts of a theory will guarantee the correctness of the theory.

comment by Gray_Area · 2007-10-09T04:37:18.000Z · LW(p) · GW(p)

I think a discussion of what people mean exactly when they invoke Occam's Razor would be great, though it's probably a large enough topic to deserve its own thread.

The notion of hypothesis parsimony is, I think, a very subtle one. For example, Nick Tarleton above claimed that 'causal closure' is 'the most parsimonious hypothesis.' At some other point, Eliezer claimed the multi-world interpretation of quantum mechanics as the most parsimonious. This isn't obvious! How is parsimony measured? Would some version of Chalmers' dualism really be less parsimonious? How will we agree on a procedure to compare 'hypothesis size?' How much should we value 'God' vs 'the anthropic landscape' favored at Stanford?

comment by Constant2 · 2007-10-09T05:53:27.000Z · LW(p) · GW(p)

"Anyone agree or disagree with the futility of debating someone who believes the universe is around 6,000 years old (and is also above age 25)?"

Agree 100%. The Universe is slightly over 10,000 years old. The 6000-ers got their math badly wrong. Crackpots, the lot of them.

comment by Matthew2 · 2007-10-09T06:30:56.000Z · LW(p) · GW(p)

Constant, the obviousness felt by both disagreeing parties almost never changes. How many formal debates actually end with the other person changing their mind? I would take it further and say formal debate is usually worthless too.

In the meantime where are your error bars? I bet somewhere there is a fundy who includes error bars.

Replies from: PetjaY
comment by PetjaY · 2015-05-24T07:18:27.141Z · LW(p) · GW(p)

While people often end debates without admitting defeat, if you discuss with them after a couple of days (or weeks) you can often see that their opinions have changed. This is because people need time to think before changing their mind, which they cannot do that well while debating. In particular, people do not like admitting they're wrong before they're sure they are.

comment by Constant2 · 2007-10-09T09:54:21.000Z · LW(p) · GW(p)

Error bars: give or take about 14 billion years. My calculations are quite precise. I am still working out the ramifications of the universe being 10,000 minus 14 billion years old.

comment by Matthew2 · 2007-10-09T11:01:30.000Z · LW(p) · GW(p)

I knew you would come through, Constant, simply by reading your name.

comment by william2 · 2007-10-09T13:43:12.000Z · LW(p) · GW(p)

"But what is a "reasonable" distribution? Why not label "reasonable" a very complicated prior distribution, which makes Occam's Razor work in all observed tests so far, but generates exceptions in future cases?"

Occam's Razor is only relevant to model selection problems. A complicated prior distribution does not matter. What does matter is how much the prior distribution volume in parameter space decreases as the model becomes more complex (more parameters). Each additional parameter in the model spreads the prior distribution over an increased parameter space.

For a more complex model to have a higher posterior distribution, the evidence (likelihood) must increase the posterior volume more than the additional prior parameter(s) decrease it. Since it is possible to fit the model to noise (uncertainty) in the data, the likelihood for the model with more parameters will be greater than or equal to the likelihood for a model with fewer parameters. When an increase in likelihood is due to fitting data instead of noise, the more complex model becomes more probable. Otherwise the decrease in the prior distribution volume reduces the probability for the model with more parameters.

A reasonable distribution is one that assigns a reasonable prior to all the parameters in the model. After that Bayes Theorem takes care of the rest.
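
A minimal worked version of this argument (construction and numbers mine): compare a zero-parameter model of a coin (it is fair) against a one-parameter model (unknown bias, uniform prior). The evidence integral penalizes the extra parameter automatically, exactly as described above.

```python
from math import factorial

def evidence_fair(heads, tails):
    """Model 0: the coin is fair; no free parameters."""
    return 0.5 ** (heads + tails)

def evidence_biased(heads, tails):
    """Model 1: unknown bias p, uniform prior on [0, 1].
    Evidence = integral of p^h * (1-p)^t dp = h! * t! / (h+t+1)!."""
    return factorial(heads) * factorial(tails) / factorial(heads + tails + 1)

for h, t in [(5, 5), (9, 1)]:
    bf = evidence_fair(h, t) / evidence_biased(h, t)
    print(f"{h} heads, {t} tails: Bayes factor (fair/biased) = {bf:.2f}")

# 5/5 gives ~2.71: the simpler model wins even though the biased model
# fits at least as well at its best parameter value. 9/1 gives ~0.11:
# the data now buy back the prior volume the extra parameter spread out.
```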

comment by michael_vassar3 · 2007-10-09T14:57:32.000Z · LW(p) · GW(p)

Eliezer: It sure does seem to me that when you say that "a mind needs a certain amount of dynamic structure to be an argument acceptor" you are saying that it does in fact know certain things prior to any "learning" taking place, e.g. that there are "priors". I would argue that 2+2=4 is part of this set, but as the punchline argues, we have already established the basics, now we are just haggling.

comment by Stephen_Jordan · 2007-10-09T15:19:38.000Z · LW(p) · GW(p)

William,

By considering models in the first place, one is already using Occam's razor. With no preference for simplicity in the priors at all, one would start with uniform priors for all possible data sequences, not finite-parameter models of data sequences. If you formalize models as being programs for Turing machines which have a separate tape for inputting the program, and your prior is a uniform distribution over possible inputs on that tape, you exactly recover the 2^-k Occam's razor law, where k is the number of program bits that the Turing machine reads during its execution (i.e. the Kolmogorov complexity).

Interestingly, the degree to which one can outperform this distribution is provably bounded. Suppose you are considering some distribution of priors and you want to see how much it outperforms the 2^-k distribution. It can do so if its prior for the true model is higher than 2^-k. So you compute the ratio of this prior to 2^-k for every model. If I remember correctly, there is a theorem which says that this set of ratios will have a finite supremum for any computable prior distribution (which is not obvious since the set of models is infinite). Of course, in some respects, this is an unfair comparison since the 2^-k distribution is itself uncomputable. Hopefully I am not misstating the theorem. I think it is stated and proved in the book by Li and Vitanyi.
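
Here is a toy version of that 2^-k law (the interpreter and its assumptions are mine, and far simpler than a universal Turing machine): treat each bitstring as a "program" whose output is that pattern repeated forever, and weight it by 2^-k where k is its length. The shortest program consistent with the data dominates the posterior.

```python
from itertools import product

def generates(program, data):
    """Does repeating `program` reproduce the observed prefix `data`?"""
    out = (program * (len(data) // len(program) + 1))[:len(data)]
    return out == data

def posterior(data, max_len=8):
    """Unnormalized posterior mass 2^-k over consistent toy programs."""
    return {
        "".join(bits): 2.0 ** -k
        for k in range(1, max_len + 1)
        for bits in product("01", repeat=k)
        if generates("".join(bits), data)
    }

for prog, mass in sorted(posterior("0101010101").items(), key=lambda i: -i[1]):
    print(f"program {prog!r} (k={len(prog)}): prior mass {mass}")

# "01" carries most of the mass; longer consistent programs such as
# "0101" survive but are penalized exponentially in their length.
```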

To see why ignoring Occam's razor is unthinkable (and to be amused), consider the following joke.

An astronaut visits a planet with intelligent aliens. These aliens believe in the reverse induction principle. That is, whatever has happened in the past is unlikely to be what will happen in the future. That the sun has risen every previous day is to them not evidence that it will rise tomorrow, but the contrary. Unsurprisingly, this causes all sorts of problems for the aliens, and they are starving and miserable. The astronaut asks them, "Why are you clinging to this belief, given that it obviously causes you so much suffering?" The aliens respond, "Well...it's never worked well for us in the past!"

comment by Konstrukteur · 2007-10-09T16:27:21.000Z · LW(p) · GW(p)

"You could argue that Occam's Razor is a reasonable distribution on prior probabilities. But what is a 'reasonable' distribution?"

If you make the assumption that what you observe is the result of a computational process, the prior probability of a lossless description/explanation/theory of length l becomes inversely proportional to the size of the space of halting programs of length l. You're free to dismiss the assumption, of course.

"But," you cry, "why is the universe itself orderly?"

One reason among many may be the KAM-Theorem.

comment by Alan_Crowe · 2007-10-09T17:49:11.000Z · LW(p) · GW(p)

Occam's Razor has two aspects. One is model fitting. If the model with more free parameters fits better that could merely be because it has more free parameters. It would take a thorough Bayesian analysis to work out if it was really better. A model that fits just as well but with fewer parameters is obviously better.

Occam's Razor goes blunt when you already know that the situation is complicated and messy. In neurology, in sociology, in economics, you can observe the underlying mechanisms. It is obvious enough that there are not going to be simple laws. If two models fit equally well, you just don't know, even if one is simpler than the other.

The "quant" trying to make money on the financial markets can take a modelling approach and may find the Razor sharp, but the scientist, trying to get to the bottom of things, has little reason to go for an explanation simpler than the known complexity of the underlying mechanisms.

comment by michael_vassar3 · 2007-10-09T18:43:23.000Z · LW(p) · GW(p)

Alan: Does a scientist likewise have no reason to pay attention to any model of the universe but fundamental physics? High level descriptions of the world very frequently can account for most of the variance in high level phenomena without containing the known complexity of the substrate.

comment by Alan_Crowe · 2007-10-09T20:16:41.000Z · LW(p) · GW(p)

Do high level descriptions of the world frequently account for most of the variance in high level phenomena without containing the known complexity of the substrate?

I think you can contrast thermodynamics and sociology by noticing that there is no Princess Diana molecule. All the molecules are on the same footing. None of them get to spoil the statistics by setting a trend and getting in all the newspapers. So perhaps Occam's Razor grabs credit not due to it, as researchers favour simple theories when they have specific reasons to do so.

An example of the mis-use of Occam's Razor arises in discussion of the question of whether minimum wage laws cause unemployment. Many people think they do and it is reasonable to imagine a politician finding an increase in the minimum wage to be politically necessary even as he wonders how to dodge blame for the subsequent rise in unemployment that he believes will follow. He will likely look to timing, seeking to delay the increase until there is a good chance of a tightening labour market raising wages.

How can you do empirical research on the effect of minimum wage laws on employment when practical men are scheming to conceal the very effect that you are looking for? One way is to appeal to Occam's Razor. Let us prefer the simpler hypothesis that increases to the minimum wage are random. That is bogus. We already know of the politicking and scheming that goes on. If our research methods cannot accommodate it, they leave us in the dark and Occam's Razor does not light our way.

comment by Tom_McCabe2 · 2007-10-09T21:35:39.000Z · LW(p) · GW(p)

"I am not sure if my understanding of Occam’s Razor matches Eliezer Yudkowsky’s.

I understand it more as (to use a mechanical analogy) “don’t add any more parts to a machine than are needed to make it work properly.”

Think of Kolmogorov complexity: the most parsimonious hypothesis is the one that can generate the data using the least number of bits when fed into a Turing machine.
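
Kolmogorov complexity is uncomputable, but a general-purpose compressor gives a crude upper bound, enough for a hedged illustration (strings and numbers mine) of why a hypothesis with a dated exception clause loses prior probability:

```python
import zlib

def approx_complexity(hypothesis: str) -> int:
    """Compressed size in bytes: a rough stand-in for program length."""
    return len(zlib.compress(hypothesis.encode("utf-8"), level=9))

simple = "occam's razor works"
gerrymandered = "occam's razor works until 2007-10-08, then fails"

for h in (simple, gerrymandered):
    print(f"{approx_complexity(h):3d} bytes  <-  {h!r}")

# Under a 2^-k prior, every extra byte of the exception clause costs the
# gerrymandered hypothesis another factor of 2^8 in prior odds.
```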

"One way is to appeal to Occam's Razor. Let us prefer the simpler hypothesis that increases to the minimum wage are random. That is bogus."

Why is it bogus? An ideal stock market, operating over a fixed resource base, must necessarily be random (or at least pseudorandom). If it had any patterns distinguishable by investors, people would exploit those patterns to make money, and in the process eliminate them. The same principle could apply here: the minute a politician discovers a pattern in the economy, he begins exploiting it to get votes, and so erases the pattern by selectively hacking off the parts of it the voters consider bad.

comment by Doug_S. · 2007-10-09T22:28:13.000Z · LW(p) · GW(p)

Let's see. What else would I have to believe in order to accept a statement like "~(p&~p) is not a theorem in propositional logic?"

A statement of the form "X is a theorem in this particular formal mathematical system" means that I can use the operations allowed within that system to construct a "proof" of the sentence X. In theory, I can make a machine that takes a "proof" as input and returns "true" if the proof is indeed a correct proof and "false" if there is a step in the proof that is not allowed by the formal system. If the machine works as intended, then if the machine says that a proof is a correct one, it really is a correct proof and that statement really is a theorem in that system.

My brain is like such a machine. I can look at a mathematical proof in a formal system and check if the proof really is a correct proof within that system. In order to believe that 2+2=3, I would have to believe that the "theorem checker" module in my brain - that part responsible for deductive reasoning - is not operating properly. What I perceive as a correct step in a proof is, in actuality, an incorrect step, and my brain is hardwired to make that particular kind of mistake without realizing it. In other words, in order to believe that "2+2=4" is false, I would have to believe that the proof of "2+2=4" in my head does not mean what I think it means.
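
Doug's "theorem checker" can be made concrete with a toy verifier (my construction, hedged): a proof is a list of steps, each either a premise or an application of modus ponens citing two earlier steps, and the machine returns True only if every step is licensed.

```python
def check_proof(premises, steps):
    """Each step is ("premise", formula) or ("mp", i, j), where step i
    proved some formula A and step j proved ("->", A, B); modus ponens
    then licenses B. Formulas are strings or ("->", lhs, rhs) tuples."""
    proven = []
    for step in steps:
        if step[0] == "premise" and step[1] in premises:
            proven.append(step[1])
        elif step[0] == "mp":
            a, imp = proven[step[1]], proven[step[2]]
            if not (isinstance(imp, tuple) and imp[0] == "->" and imp[1] == a):
                return False  # cited lines do not fit the rule
            proven.append(imp[2])
        else:
            return False  # unlicensed step
    return True

premises = ["p", ("->", "p", "q")]
proof = [("premise", "p"), ("premise", ("->", "p", "q")), ("mp", 0, 1)]
print(check_proof(premises, proof))  # True: q follows from the premises
```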

I would have to believe that I am not capable of correct deductive reasoning. A person not capable of correct deductive reasoning is insane.

If "2+2=4" is false, then I am insane.

All of my beliefs have to come with the background assumption that I am not insane. If I am, in fact, insane, then all my internal models of the universe have to be replaced with the black hole of maximum entropy. I can no more trust my probability estimates than I could trust the probability estimates of a rock.

I choose to act as though I am sane because if I am sane, I gain maximum benefits from that belief, and if I am not sane, it doesn't matter anyway. I can be convinced I am mistaken about something, but I cannot be convinced that I am insane.

comment by TGGP4 · 2007-10-09T23:03:38.000Z · LW(p) · GW(p)

"A person not capable of correct deductive reasoning is insane." The people usually deemed insane are those with deviant behavior, or what Caplan calls "the extreme tails of a preference distribution with high variance".

comment by Owain_Evans2 · 2007-10-10T00:58:27.000Z · LW(p) · GW(p)

"And as the symbol of your treaty, your white flag, you use the phrase 'a priori truth'."

I should note that the most famous paper in 20th Century analytic philosophy, Quine's "Two Dogmas of Empiricism", is an attack on the idea of the a priori. The paper was written in 1951 and built on papers written in the previous two decades. A large proportion of contemporary philosophers agree with Quine's basic position. This doesn't stop them from doing theoretical work, just as Eliezer's disavowal of the a priori need not prevent him theorizing about rationality, philosophy of science, or epistemology.

In "Two Dogmas", Quine talks mainly about a certain view of a priori knowledge that was held by logical empiricists such as Carnap, Ayer, etc. This view, that all a priori statements are analytic, already gives a significantly smaller role to a priori justification than did previous philosophers. Roughly, the empiricists didn't think that there were synthetic statements that could be known a priori.

Quine's paper is quite hard to read without some of the philosophical background. A more recent discussion of his view can be found in Harman's essay "Death of Meaning" in his book "Reasoning, Meaning, and Mind". This is available online (if you have a subscription) at Oxford Scholarship.

comment by Richard4 · 2007-10-10T02:32:20.000Z · LW(p) · GW(p)

Eliezer - It's just fundamentally mistaken to conflate reasoning with "observing your own brain as evidence". For one thing, no amount of mere observation will suffice to bring us to a conclusion, as Lewis Carroll's tortoise taught us. Further, it mistakes content and vehicle. When I judge that p, and subsequently infer q, the basis for my inference is simply p - the proposition itself - and not the psychological fact that I judge that p. I could infer some things from the latter fact too, of course, but that's a very different matter. (And in turn distinct from inferring things from the second-order judgment that I judge that p!)

Anyway, here's a simple argument for the inescapability of a priori justification:

For any instance of empirical justification, it seems like we can construct a parallel instance of a priori justification simply through conditionalization. Suppose that empirical evidence E would justify your drawing conclusion C. Then presumably you could justifiably believe the conditional "if E then C" prior to experiencing E. We can repeat this procedure to conditionalize out all empirical grounds for belief, and the result will be a conditional statement that is justifiable a priori -- i.e. not dependent on any particular experiences or empirical evidence at all.

Tom McCabe wrote: "If you argued that an airplane was not really "identical" to the pile of parts, but that they just "always went together", people would look at you like you had three heads. Yet, when applied to brains, people think this argument makes sense."

This is because there is nothing more to our concept of being an airplane than the reduction basis. Any possible world with all the parts arranged in the right way is immediately recognizable, under that description, as a world containing an airplane. Indeed, that's just what it is to be an airplane. Minds are a rather different matter. They are not conceptually reducible to neurons firing. It is conceptually possible for the two to come apart (if we imagine a world with different laws of nature, perhaps), so they are not simply one and the same thing.

(Philosophers have written books on this argument, so I don't pretend that the above is incontrovertible. But it is certainly not so easily dismissed as Tom and others - including my past self - might assume. More detail here.)

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2007-10-10T02:40:13.000Z · LW(p) · GW(p)

"It's just fundamentally mistaken to conflate reasoning with 'observing your own brain as evidence'."

If you view it as an argument, yes. The engines yield the same outputs.

"Minds are a rather different matter. They are not conceptually reducible to neurons firing."

Just because you do not know how the trick works, does not mean the trick is powered by magic pixie dust.

comment by Matthew2 · 2007-10-10T04:00:43.000Z · LW(p) · GW(p)

Eliezer Yudkowsky said: "Just because you do not know how the trick works, does not mean the trick is powered by magic pixie dust."

I agree, yet this won't convince a sophisticated right-wing Christian (or Jew, or Muslim, etc.).

comment by Richard4 · 2007-10-10T04:24:17.000Z · LW(p) · GW(p)

Who said anything about 'magic pixie dust'? I agree that the brain gives rise to (or 'powers') the mind, thanks to the laws of nature that happen to govern our universe. I may even agree with all the causal claims you want to make. But if you're going to start talking about identity, then you need to do some real philosophy.

"If you view it as an argument, yes. The engines yield the same outputs."

What does the latter have to do with rationality?

comment by TGGP4 · 2007-10-10T05:53:52.000Z · LW(p) · GW(p)

Does Eliezer really need to do some "real philosophy"? If he does not, will he miss out on the Singularity? Will AI be insufficiently friendly? I don't see any reason to think so. I say be content in utter philosophical wrongness. Shout to the heavens that our actual world is a zombie world with XYZ rather than H2O flowing in the creeks that tastes grue, all provided it has no impact on your expectations.

comment by James_Blair · 2007-10-10T09:31:41.000Z · LW(p) · GW(p)
"But if you're going to start talking about identity, then you need to do some real philosophy."

What's the difference between the brain giving rise to a mind by the laws of nature and the brain giving rise to a mind without identity by the laws of nature?

comment by Nick_Tarleton · 2007-10-10T13:24:46.000Z · LW(p) · GW(p)

"But if you're going to start talking about identity, then you need to do some real philosophy."

"Identity" is not magic. There is no abiding personal essence, just continuity of memory. A real philosopher said that, by the way.

I do think there are important unanswered questions in the philosophy of mind, but this isn't one of them. (Although one of them is "where is our thinking still contaminated by the idea of magic personal identity?", which I suspect is at the root of several apparent paradoxes.)

comment by Alan_Crowe · 2007-10-10T14:43:44.000Z · LW(p) · GW(p)

Tom, I think we are actually agreeing. I'm arguing that if you already know the situation is complicated you cannot just appeal to Occam's Razor, you need some reason specific to the situation about why the simple hypothesis should win.

You are proposing a reason, specific to economics, about why the complications might be washed away, making it reasonable to prefer the simpler hypothesis. My claim is that those extra reasons are essential. Occam's Razor, on its own, is useless in situations known to be complicated.

comment by Shakespeare's_Fool · 2007-10-10T18:01:43.000Z · LW(p) · GW(p)

Tom McCabe, Thank you for the comment. You have started me thinking about the differences between Occam's Razor and Einstein's "Everything should be made as simple as possible, but not simpler." John

comment by Shakespeare's_Fool · 2007-10-10T18:03:57.000Z · LW(p) · GW(p)

"--" should have been "Shakespeare's Fool" John

comment by Richard4 · 2007-10-10T21:15:19.000Z · LW(p) · GW(p)

TGGP - You seem to have missed the conditional nature of my claim. I'm not forcing philosophy on anyone; just saying if you're going to do it at all, best do it well.

Nick - I never suggested there was an "abiding personal essence". (Contemporary philosophers like Derek Parfit and David Velleman have done a stellar job in revealing the conceptual confusions underlying such an idea.) In any case, it's hardly relevant. The issue here is individuation (how to count the distinct things in the world), not personal identity and persistence through time.

James - if you're asking what difference identity makes, that's a good question. To answer literally, if "two" things are really identical, i.e. really one, then there is no possible world (however distant or "unrealistic") where they come apart -- where there is one without the other. That's arguably the criterion for what it is to be one rather than two. Now you may ask why we should care about this difference. Here's an answer: we care about the fundamental constituents of reality, or "what it takes" to create a world like ours. Imagine a god were to create all the physical stuff of our universe. Does that suffice, or does he have more work to do before he can rest? My earlier arguments suggest that there is more work to do here. He also needs to add some extra, 'bridging' psycho-physical laws of nature, to ensure that brainy matter gives rise to conscious minds. If they were truly one and the same thing, this extra step would not be required. (Cf. airplanes.) That's an interesting result, no?

comment by Nick Hay (nickjhay) · 2007-10-10T21:44:06.000Z · LW(p) · GW(p)

Eliezer: "You could see someone else's engine operating materially, through material chains of cause and effect, to compute by "pure thought" that 1 + 1 = 2. How is observing this pattern in someone else's brain any different, as a way of knowing, from observing your own brain doing the same thing? When "pure thought" tells you that 1 + 1 = 2, "independently of any experience or observation", you are, in effect, observing your own brain as evidence."

Richard: "It's just fundamentally mistaken to conflate reasoning with "observing your own brain as evidence"."

Eliezer: "If you view it as an argument, yes. The engines yield the same outputs."

Richard: "What does the latter have to do with rationality?"

Pure thought is something your brain does. If you consider having successfully determined a conclusion from pure thought evidence that that thought is correct, then you must consider the output of your brain (i.e. its, that is your, internal representation of this conclusion) as valid evidence for the conclusion. Otherwise you have no reason to trust your conclusion is correct, because this conclusion is exactly the output of your brain after reasoning.

If you consider your own brain as evidence, and someone else's brain works in the same way, computing the same answers as yours, observing their brain is the same as observing your brain is the same as observing your own thoughts. You could know abstractly that "Bob, upon contemplating X for 10 minutes, would consider it a priori true iff I would", perhaps from knowledge of how both of your brains compute whether something is a priori true. If you then found out that "Bob thinks X a priori true" you could derive that X was a priori true without having to think about it: you know your output would be the same ("X is a priori true") without having to determine it.

comment by Nick_Tarleton · 2007-10-10T22:13:13.000Z · LW(p) · GW(p)

Richard: oops, I thought you meant personal identity. Ach, homonyms.

Do you think that the human bodies in a physics-only zombie world would behave identically to ours? ( = Do you think physics is causally closed?)

Nick Hay: great explanation.

comment by g · 2007-10-10T22:54:01.000Z · LW(p) · GW(p)

Richard: sure, minds and brains can "come apart" in possible worlds other than ours (or indeed in this one, if and when someone teaches a computer to think), but I have never understood why some people seem to think that this suggests that there's anything weird about the relationship between actual minds and actual brains in the actual world.

Consider those airplanes again, but let's use a more general term like "flying machine" that isn't so tightly tied to the details of their construction. You can imagine (yes?) a world in which a Boeing 747 takes off, then its wings fall away and it just continues flying; or in which there are flying machines having no apparent similarity to the ones we use in the actual world. In other words, you can conceptually separate flying from the actual aerodynamic phenomena that (here, in the actual world) enable it. None the less, flying (here, in the actual world) is a consequence of those aerodynamic phenomena, and if a god made a world physically just like ours then the Boeing 747s in it would be able to fly without any further "bridging" aviatiophysical laws of nature being added.

So why does the (uncontroversial) fact that one can imagine thinking without a brain, and that (in so far as there "are" possible worlds) there are possible worlds in which thinking happens without brains, give the slightest reason to suspect that psychophysical bridging laws are required to enable our brains to support our minds?

comment by Richard4 · 2007-10-11T03:10:20.000Z · LW(p) · GW(p)

g - there's no possible world that's physically identical to ours but where the Boeings don't fly. There is a possible world that's physically identical to ours that lacks consciousness. That's the difference. It shows that physics suffices for flight but not fully-fledged mentality. (N.B. the interesting case here is not minds without brains, but brains without minds.)

Nick Hay - Thanks for bringing this back to the key issue. In fact I do not "consider having successfully determined a conclusion from pure thought evidence that that thought is correct". I take my evidence to be something beyond myself: whatever premises guided my reasoning, not the mere psychological fact of my concluding as I did. (This is why I brought up the content/vehicle distinction in my original comment.) Granted, reasoning presupposes that one's thought processes are reliable, and a subjectively convincing line of thought may be undermined by showing that the thinker was rationally incapacitated at the time (due to a deceptive drug, say). But presuppositions are not premises.

Compare: (1) P, therefore Q. (2) If I were to think about it, I would conclude that Q. Therefore Q.

These are different arguments! If I come to believe Q via #2, my evidence is the (hypothetical) brain process you talk about. But in the first case, my evidence is simply P, and not any fact about me at all.

P.S. Nobody denies that a priori justifiable claims may also be justified empirically, say by the testimony of a reliable thinker, or by observing a reliable brain or other computational engine. But it's a different kind of justification. And of course the mere fact that there is a second argument for a conclusion does nothing to show that the first one was flawed.

comment by Richard4 · 2007-10-11T03:18:20.000Z · LW(p) · GW(p)

Sorry, my second sentence to NH is unclear. The psychological fact could be taken as a kind of indirect evidence, as noted in my postscript. But it is not what I take my evidence to be, when I am reasoning according to a #1-style argument. We could say the evidence of my thought [vehicle] is not the evidence in my thought [content].

comment by Richard4 · 2007-10-11T03:38:32.000Z · LW(p) · GW(p)

Nick T. - yes, I accept the causal closure of the physical. (And thus epiphenomenalism. I discuss the epistemic consequences in my post 'Why do you think you're conscious?')

On the broader issue - to expand on my response to James above - see my post on the explanatory power of dualism.

comment by TGGP4 · 2007-10-11T04:45:25.000Z · LW(p) · GW(p)

Richard, are you saying that if in this world I attempted to move around some material to produce an artificial brain, it would not work unless I also did some psycho-manipulation of some sort? Or is the psycho-stuff bound so tightly with the material that the materially-sufficient is psycho-sufficient?

I neglected to link to this before when I mentioned anticipated experiences, which is one of my favorite posts here. I am so fond of linking to it I assumed I already had.

comment by g · 2007-10-11T08:53:00.000Z · LW(p) · GW(p)

Richard, you have presented absolutely no evidence that there is a possible world physically identical to ours but in which we are not conscious, beyond saying that it's "conceptually possible" for minds and brains to "come apart", if we imagine a world with different laws of nature.

But it's equally conceptually possible for flying machines and aerofoils to come apart, if we imagine a world with different laws of nature, and (it appears) you don't see that as any reason to think that flying machines fly by aerofoils plus some extra bridging aviatiophysical laws.

Incidentally, I think you're misunderstanding what Eliezer is trying to do. He's saying (unless I'm misunderstanding him too) roughly "forget about justification; never mind what inference processes we can find the best arguments for; what matters is what actually works; if accepting the results of reasoning in your own brain works, then accepting the results of reasoning in an equally competent other brain works equally well".

comment by RobinHanson · 2007-10-11T12:48:00.000Z · LW(p) · GW(p)

I agree with Richard that we should respect the fact that philosophers have spilled a lot of ink on the consciousness question; we should read them and respond to their arguments. We should have at least one post devoted to this topic. But after doing so, I'm betting I'll still mainly agree with Eliezer.

Richard, I don't think Eliezer conflated reasoning with observing your own brain - he just suggested that simple Bayesian reasoning based on observing your own brain gets you pretty much all the conclusions you need from most other "reasoning."

comment by Constant2 · 2007-10-11T17:23:00.000Z · LW(p) · GW(p)

Robin and Richard - I think it is possible that Eliezer did not word his statement as cleanly as he might have. However, if his wording conflated categories, I am confident that with some care the exact same point can be re-worded without such conflation. There is something real and significant here that he's pointing out, and it's not going to go away simply because he was (if he was) a bit too loose in his presentation.

comment by Constant2 · 2007-10-11T18:58:00.000Z · LW(p) · GW(p)

I think this contains one of the main points:

"If you clear your mind of justification, of argument, then it seems obvious why Occam's Razor works in practice: we live in a simple world, a low-entropy universe in which there are short explanations to be found. 'But,' you cry, 'why is the universe itself orderly?' This I do not know, but it is what I see as the next mystery to be explained. This is not the same question as 'How do I argue Occam's Razor to a hypothetical debater who has not already accepted it?'"

The philosophers have been busy trying to answer the question which Eliezer has taken pains to distinguish from another question. A lot of the philosophical material on the question of Occam's razor either tries to justify it in a non-question-begging fashion, or else argues that it cannot be so justified. Either way, the focus is on the justification. But there is something else that can be focused on: the question of whether Occam's razor is in fact true, or valid. Of course, in itself it is a method rather than a claim, but we can roughly translate it into the claim that among the possible hypotheses to explain a phenomenon, the true hypothesis tends to be the simplest. This can be asserted as a conjecture about the world we live in. Occam's Razor can be thought of as a general hypothesis about the world, which may or may not be true. And if it is true, then it is true even though we have been unable to justify our acceptance of it in a non-question-begging way. The truth does not actually depend on the possibility of our one day finding a non-circular justification for it.

comment by Constant2 · 2007-10-11T19:15:00.000Z · LW(p) · GW(p)

The evolutionary formation of the mind is, as Eliezer points out, based on the truth, and not on justification. Mutation throws one brain after another at the problems of life, and the brains that generate true beliefs are the ones that tend to survive. No justification is involved at this stage. For example, suppose that Occam's razor is true (to understand this, translate the method called "Occam's razor" into the appropriate assertion about the world so that it can be assigned a truth value). Then brains that apply Occam's razor will tend to survive. Notice what is happening here: the truth of Occam's razor is causing the evolution of brains that apply it. What this means is that the recognition is not mere accident. We apply Occam's razor because it is true. Generally speaking, if the truth of X causes the belief that X, then that belief is not mere arbitrary belief but can arguably be called knowledge. So, if Occam's razor is true and if we evolved to apply it, then our application of Occam's razor constitutes knowledge about the world. It does not, however, constitute knowledge from the more narrow point of view of "justified true belief".

This suggests that we have knowledge about the world which we are unable to justify but which is nevertheless knowledge and not mere arbitrary belief.

comment by Richard2 · 2007-10-11T23:04:00.000Z · LW(p) · GW(p)

Constant - Sure, there's something to be said for epistemic externalism. But I thought Eliezer had higher ambitions than merely distinguishing rationality and reliability? He seems to be attacking the very notion of the a priori, claiming that philosophers lazily treat it as a semantic stopsign or 'truce' (a curious claim, since many philosophers take themselves to be more or less exclusively concerned with the a priori domain, and yet have been known to disagree with one another on occasion), and dismissively joking "it makes you wonder why a thirsty hunter-gatherer can't use the 'a priori truth factory' to locate drinkable water." (The answer isn't that hard to see if one honestly wonders about it for a moment or two.) But maybe you're right, and these cheap shots are just part of the local attire, not intended for cognitive consumption.

g - I already answered this. Change the extra-physical laws of nature as you will, it is not conceptually possible for a world physically identical to ours to lack flying airplanes. What else are we to call the boeing-arranged atoms at 10000ft? The zombie (physically identical but non-conscious) world, by contrast, does seem conceptually possible. So there's no analogy here.

TGGP - Yes, I think that, thanks to the bridging laws, "the materially-sufficient is psycho-sufficient". This dualism is empirically indistinguishable from materialism. Anticipating experience may be a useful constraint for science, but that is not all there is to know. (See also my responses to James above.)

comment by g · 2007-10-12T00:31:00.000Z · LW(p) · GW(p)

Richard, I would like to know what you mean by "conceptually possible" and why you think conceptual possibility has anything to do with actual possibility. I think you mean something like "I can/can't imagine X without any obvious inconsistencies". So, e.g., you can imagine, or think you can imagine, a world physically identical to ours in which people have no experiences; but you can't imagine, or think you can't imagine, a world physically identical to ours in which jumbo jets don't fly.

But whether something is "conceptually possible" in this sort of sense obviously has as much to do with the limits of our understanding as with what's actually possible, no?

1. Consider some notorious open problem in pure mathematics; the Riemann hypothesis, say. I can, in some sense, "imagine" a world in which RH is true and a world in which RH is false; I can tell you about some of the consequences in each case; but, despite that, one of those worlds is logically impossible; we just don't know which. (I'm ignoring, because I'm too lazy to think it through now, the possibility that RH might be undecidable.) So something can be "conceptually possible" despite being logically impossible and hence (if you believe in possible worlds) false in all possible worlds.

2. I cannot, so far as I can tell, imagine what it would be like if the world had two "timelike" dimensions and two "spacelike" ones rather than 1 and 3. (Perhaps if I sat down and concentrated for a while I could; in which case, make it twenty of each, or something.) I can calculate some of the consequences, I suppose, but I can form no coherent mental picture. None the less, it seems clear that such a world is possible in principle. So something can be (for a given person, at least) "conceptually impossible" despite being possible in other senses.

Examples like these make it seem obvious to me that "conceptual possibility" tells us much more about the limits of our imagination and reasoning than it does about the nature of reality.

You can't imagine a world physically like ours in which jumbo jets don't fly; that would be because flying is simple enough that we have a pretty good understanding of how it works, and what mechanisms underlie it. Of course we don't have any similarly good understanding of how minds work. It seems to me that that's the only difference here. Lack of understanding is not evidence of magic.

(Suppose I claim that I can so imagine a world physically identical to ours in which Boeing-arranged atoms at 10k feet aren't flying airplanes; they're, er, zairplanes; they are doing something physically indistinguishable from flying, but of course it isn't really flying. Those who fail to see the difference just lack sufficient subtlety of thought. Ridiculous, no?)

Anyway, let's suppose it's "conceptually possible" that the world should be exactly as it is, physically, but with no consciousness anywhere to be found. So what? All that means is that someone can form some sort of mental picture of what such a world might be like. I don't see how to eliminate the possibility that filling in the details might ultimately lead to a contradiction (as with either RH or not-RH). Or that digging further into the notion of "phenomenal consciousness" being used might reveal that it has no real content and serves only to obfuscate. (I strongly suspect that this is in fact the case. Of course that doesn't mean that those who appeal to such notions have any intention to obfuscate.)

For what it's worth, I'm pretty sure that a zombie world is not conceptually possible to me: I can only "imagine" such a world by deliberately not thinking too hard about the details.

comment by g · 2007-10-12T00:32:00.000Z · LW(p) · GW(p)

Oh, gosh, that was rather long. Sorry.

comment by Constant2 · 2007-10-12T00:45:00.000Z · LW(p) · GW(p)

I liked it. I really don't get Robin's desire for short comments. This is the only blog where I've seen that restriction. Is he worried about the high cost of bandwidth? For text?

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2007-10-12T00:56:00.000Z · LW(p) · GW(p)

I think I can forgive it this once.

Zombies and similar creatures are "conceptually possible" when someone doesn't understand the connection between lower and higher levels of organization, so that the stored propositions about the lower and higher levels of organization are mentally unconnected and can be switched on or off independently. This is a fact about the person's state of mind, not a fact about the phenomenon in question.

Replies from: Juno_Watt
comment by Juno_Watt · 2013-04-30T16:28:31.343Z · LW(p) · GW(p)

I kind of see the point about logical possibility being what you get if you switch off your knowledge of how the world works, and just run off a minimal axiom set. But I don't know what the connection between that particular pair of lower and higher levels of organisation is, i.e. the connection between consciousness and mind. I don't think anyone else does. Zombies are logically conceivable for everybody. But conceivability is not about the world, as you say.

comment by Nick_Tarleton · 2007-10-12T01:43:00.000Z · LW(p) · GW(p)

g, beautifully said.

comment by TGGP2 · 2007-10-12T04:07:00.000Z · LW(p) · GW(p)

Hopefully Anonymous had a post about zombies here, in which I made fun of him.

Anticipating experience may be a useful constraint for science, but that is not all there is to know.
If I were going to dispute this I would have to specify what it means to "know" and get into one of those goofy epistemology discussions I derided here. Philosophy is the required method to argue against philosophy, oh bother. Good thing reality doesn't revolve around dispute.

comment by Richard2 · 2007-10-12T06:52:00.000Z · LW(p) · GW(p)

g - No, by 'conceptually possible' I mean ideally conceptually possible, i.e. a priori coherent, or free of internal contradiction. (Feel free to substitute 'logical possibility' if you are more familiar with that term.) Contingent failures of imagination on our part don't count. So it's open to you to argue that zombies aren't conceptually possible after all, i.e. that further reflection would reveal a hidden contradiction in the concept. But there seems little reason, besides a dogmatic prior commitment to materialism, to think such a thing. Most (but admittedly not all) materialist philosophers grant the logical possibility of zombies, and instead dispute the inference to metaphysical possibility. This seems no less ad hoc. Anyway:

"I would like to know... why you think conceptual possibility has anything to do with actual possibility."

I actually wrote a whole thesis on this very question, so rather than further clogging the comments here, allow me to simply provide the link. If you're interested enough to read all that, and still have any objections to my view afterwards, I'd be very interested to hear them - my comments are open. For this page, though, I think I should bow out, unless Eliezer sees fit to address the concerns I raised about the original topic, and especially his treatment of the a priori.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2007-10-12T07:09:00.000Z · LW(p) · GW(p)

I have future posts planned that will shed light on this topic, but not today.

comment by g · 2007-10-12T09:15:00.000Z · LW(p) · GW(p)

Richard, I'm unconvinced that you have any way of telling whether the existence of zombies is ideally conceptually possible; the fact that you seem to be able to imagine a zombie world certainly isn't good evidence that it's "free of internal contradiction". (Consider, again, the Riemann hypothesis.)

I don't have anything like a proof that the idea of zombies is in fact incoherent. But if you're right that its coherence would entail the existence of these mysterious psychophysical bridging laws and all the rest of your epiphenomenal apparatus, then it seems to me that that's at least as much evidence against as your alleged ability to imagine it is evidence for. I don't think pointing this out constitutes "dogmatic prior commitment to materialism".

Whether "most materialist philosophers" are right in saying that furthermore the ideal conceptual possibility doesn't suffice to demonstrate that conscious minds don't simply supervene physically on brains, depends on the details of what you mean by ideal conceptual possibility. I suspect that the notion isn't in fact clear enough for such questions to have answers.

Anyway, I'll take a look at your thesis, and shall now also desist from clogging up the comments here.

comment by Constant2 · 2007-10-12T10:05:00.000Z · LW(p) · GW(p)

No, by 'conceptually possible' I mean ideally conceptually possible, i.e. a priori coherent, or free of internal contradiction. (Feel free to substitute 'logical possibility' if you are more familiar with that term.)

Before we discovered that water was H2O, our concept of water did not include that it was H2O. Since our concept did not include that, then surely it would not have been incoherent, at the time, to say that water is not H2O (imagine that this occurs during the period after the discovery of H and O and before the discovery of the composition of water - imagine that there was such a period), since there was nothing in our concept of water at the time that logically contradicted that statement. However, today it is incoherent to say that water is not H2O, because our concept of water includes that it is H2O - water is regularly defined as H2O.

Let us think about the period when, because the concept did not include that it was H2O, it was free of internal contradiction to say that water is not H2O, and therefore logically possible, ideally conceptually possible, and a priori coherent, to say that water is not H2O. Given their concept of water and perhaps even given everything they knew at the time about water, it was logically possible - that is, not in logical contradiction to any fact or concept they possessed at the time - that water is not H2O. Looking back, I find myself reluctant to draw any deep lessons from this about a possible dual nature of water in which, say, H2O takes the role of the material substance and water takes the role of the epiphenomenon. I find myself moved not at all to contemplate the possibility of zombie H2O - H2O which is not water. The only lesson I find myself wanting to draw from this is that they did not know then what we know now.

Now we turn to the present moment, when some claim that a zombie world is logically possible. They may be right to claim that their own concept of consciousness does not logically contradict the denial of consciousness to a physical creature. It is not at all obvious that I should draw any deep lesson from this, any more than in the case of water.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2007-10-12T15:53:00.000Z · LW(p) · GW(p)

Well said, Constant.

The logic for why zombies can't exist, very briefly, goes like this:

You see a bright red light.

The mysterious redness-quality of the red light seems inexplicable in merely material terms.

You think, within your stream of consciousness, "The redness of this light seems inexplicable in merely material terms."

You say out loud, "The redness of this light seems inexplicable in merely material terms."

Your lips moved.

Whatever caused your lips to move must lie within the realm of physics because it had a physical effect.

If we sum up all the forces acting on your lips - gravity, electromagnetism, etc. - we will necessarily include the proximal cause of your lips moving. This is because when we sum up the forces acting on your lips, we can tell where your lips will go. In particular, we can tell that they'll move. So if we delete anything that isn't on the force list, your lips go to exactly the same place.

As it so happens, the proximal cause of your lips moving is nervous instructions sent from your motor cortex and cerebellum. It is not possible to imagine a world in which your lips move and all the laws of physics are the same, but there are no nervous impulses from the motor cortex, because the laws of physics include the nervous impulses from the motor cortex, which is why your lips move. Everything that actually causes your lips to move is not epiphenomenal - it has a direct and material effect - meaning that it shows up within low-level physics.

If something makes an atom go from one place to another that isn't within our known physics, then we'll see, when we examine the atom, that the sum of forces says the atom should go one way, but the atom goes another way instead. This would mean that our list of laws of physics, on that low level of organization, was incomplete; we would not be able to account for why an atom goes to one place instead of another without postulating additional physical laws. Not "psychophysical bridging laws", physical laws that directly cause the atom to be in one place rather than another, given its initial conditions.

Exactly the same logic applies as we work our way backward along the causal chain. Whatever caused you to think, "The redness of this light seems inexplicable in merely material terms," must exist within the web of material cause and effect in order to have the final result of your lips moving.

Whatever causes the mysterious redness of red to seem inexplicable in material terms, must exist in the web of material cause and effect, because a thought to this effect appears within your stream of consciousness, and you are capable of speaking your stream of consciousness out loud, which makes your lips move.

This may seem really odd. But just because something seems really odd, doesn't mean it isn't true. Your sense of "really odd" does not give you direct, veridical information about the universe. Even when something seems really, really odd, it's still a fact about you, not a fact about the universe.

Penrose's theory that consciousness involves magical physics is wrong, but coherent. The theory that consciousness involves no physics is not. Admittedly, realizing this requires logic and human beings are not logically omniscient, so in that sense zombie-believers may be "coherent" relative to their own failure to realize that philosophical discussions are themselves material phenomena. Which is of course the point of this post.

The causes of all material effects are also material in the sense that any correct physical prediction will include them by identity, and any predictive method which fails to include them in any way will deliver incorrect predictions. Philosophical discussions and philosophical intuitions are material phenomena and all their causes are within the laws of physics; on pain of our physical predictions being incorrect if the known laws of physics fail to capture-by-identity the causes of all philosophical intuitions.

All this is true a priori.

comment by Richard2 · 2007-10-12T17:18:00.000Z · LW(p) · GW(p)

(Let me just add that the first chapter of my thesis addresses Constant's concerns, and my previously linked post 'why do you think you're conscious?' speaks to Eliezer's worries about epiphenomenalism -- what is sometimes called 'the paradox of phenomenal judgment.' Some general advice: philosophers aren't idiots, so it's rarely warranted to attribute their disagreement to a mere "failure to realize" some obvious fact.)

comment by Constant2 · 2007-10-12T17:40:00.000Z · LW(p) · GW(p)

Richard,

I don't know what you mean about "idiots". My arguments are not intended as insults. In fact I fully expect you to have already dealt with them. However, I have little choice but to answer the particular point you raised at a particular time, because if I try to jump ahead and anticipate all your answers and then your answers to my answers to your answers, the result will probably be an incredibly confusing monologue that is more likely than not to simply have mis-anticipated your actual responses. Aside from that there is the matter of comment length to consider.

comment by TGGP2 · 2007-10-12T18:36:00.000Z · LW(p) · GW(p)

Richard, I don't actually believe philosophers are idiots because I've seen their standardized test scores. I do think they could more productively use their intellects though. If I were to ignore IQ/general intelligence and simply try to judge whether one philosopher does better philosophizing than another, would I be able to do it without becoming a philosopher myself and judging their arguments? I can determine that rocket physicists are good at what they do because they successfully send rockets into the air; I know brain surgeons are because the brains they operate on end up with the behavior they promise. I can't think of anything I would hire a philosopher for, other than teaching a philosophy course. So is the merit of philosophy an entirely circular thing, or is there a heuristic the non-philosopher layman can use that will let him know he should pay more attention to philosophers than palm-readers?

Replies from: SilasBarta
comment by SilasBarta · 2009-09-24T15:51:53.874Z · LW(p) · GW(p)

Well put, TGGP, well put.

comment by douglas · 2007-10-13T00:51:00.000Z · LW(p) · GW(p)

Occam's razor states: the explanation of any phenomenon should make use of as few assumptions as possible, eliminating those that make no difference in the observable predictions of the explanatory hypothesis.
That does not say the universe is simple, or explainable by material means, or any such thing.
In fact those assumptions violate Occam's razor! Saying the universe is simple or explainable by material means adds unneeded assumptions.
In order to explain the phenomena of physics we need material, material forces, chance, and freedom. The chance comes from the inherent unpredictability of certain phenomena (where will the particle land?) and the freedom comes from the experimenter's ability to choose which aspect (particle or wave) of the phenomena to view. Currently there is no known material, mechanical explanation for chance or freedom. The claim that there will someday be one is an unneeded assumption based on faith.
If philosophy is going to be of value, it should agree with the basic facts of science. The mechanistic, material universe was tossed out almost 100 years ago.
Read Max Born's Nobel prize acceptance speech and update your thinking.

comment by RobinHanson · 2007-10-13T02:28:00.000Z · LW(p) · GW(p)

Richard is quite right to point out that philosophers of mind are well aware of the counter arguments that Constant and Eliezer offer. And he is right to insist this is a subtle question to which a few quick comments do not do justice. There are however many philosophers who agree with Constant and Eliezer. See for example the October Philosophical Quarterly article on Anti-Zombies.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2007-10-13T05:09:00.000Z · LW(p) · GW(p)

Not to be too uncharitable, but I'd say the arguments of us material monists are simple, and it's only the flaws in the complex dualist arguments that are subtle.

PS: I've got some back copies of the Journal of Consciousness Studies on my bookshelf, so don't necessarily assume that I'm unaware of the big philosophical mess here.

comment by RobinHanson · 2007-10-13T12:58:00.000Z · LW(p) · GW(p)

TGGP, your question illustrates nicely my explanation for why more history than futurism.

This book review claims that the majority position in philosophy rejects the dualism Constant and Eliezer object to - this is most certainly not a dispute between philosophers and scientists.

comment by michael_webster2 · 2007-10-18T20:50:00.000Z · LW(p) · GW(p)

Sorry, have you argued someplace else for either reduction or eliminative materialism?

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2007-10-18T21:05:00.000Z · LW(p) · GW(p)

I have a series of posts planned on that in due time.

comment by Tom_Breton · 2007-10-19T03:16:00.000Z · LW(p) · GW(p)

In a comment on "How to convince me that 2+2=3", I pointed out that the study of necessary truths is not the same as the possession of necessary truths (credit to David Deutsch for that important insight). Unfortunately, the discussion here seems to have gotten hung up on a philosophical formulation that blurs that important distinction, a priori. The paragraph Eliezer quotes illustrates the problem:

The Internet Encyclopedia of Philosophy defines "a priori" propositions as those knowable independently of experience. Wikipedia quotes Hume: Relations of ideas are "discoverable by the mere operation of thought, without dependence on what is anywhere existent in the universe." You can see that 1 + 1 = 2 just by thinking about it, without looking at apples.

All of these definitions seem to assume there is no distinction between the existence of necessary truths and knowing necessary truths (more correctly, justifiably assigning extremely high probability to them). But there are necessary truths that are not knowable by any means we have or expect to have - e.g., the digits of Gregory Chaitin's Omega constant, beyond the first few. Omega is the probability that a random Turing machine will halt. Whatever value it has, it necessarily has.

(One might say more charitably that these definitions are only categorizing knowledge and say nothing about non-knowledge. If so, they mislead, and also make a subtler mistake. Necessary truths are not a special type of knowledge; they are a topic of knowledge.)

One can understand why the mistake is made. Epistemology, the branch of philosophy about how we know what we know, is naturally looking for a way to assign untouchable status to what seems its most certain knowledge.

comment by Steve_Sailer · 2007-11-02T08:24:00.000Z · LW(p) · GW(p)

We use Occam's Razor because it has tended to work better than Occam's Butterknife.

What's so complicated about that?

comment by Cloud · 2007-12-04T05:56:00.000Z · LW(p) · GW(p)

I read no comments

"You could argue that Occam's Razor has worked in the past, and is therefore likely to continue to work in the future. But this, itself, appeals to a prediction from Occam's Razor."

It seems to me like it is more of an appeal to induction. (Granted the problem Hume raised about induction, but also granted Hume's [and my own] defection to practicality.)

comment by crasshopper · 2008-03-18T15:50:00.000Z · LW(p) · GW(p)

To distinguish the word "arbitrary" from "random", I think of an arbitrator—i.e., an outside judge chooses something. (Maybe this results in a uniform prior for me, if'n I don't know what she'll do. Or maybe I'm a mathematician and I choose to be ready for any choice that arbitrator might make.)

When I'm teaching linear algebra and explain arbitrary parameters to my students, I use exactly this metaphor. How many times does someone else have to come in and arbitrate the value of other variables, before you can tell the questioner what the answer is?

comment by idlewire · 2009-07-17T15:45:23.300Z · LW(p) · GW(p)

Could you not argue Occam's Razor from the conjunction fallacy? The more components that are required to be true, the less likely they are all simultaneously true. Propositions with fewer components are therefore more likely, or does that not follow?

Replies from: Regex, Richard_Kennaway
comment by Regex · 2015-09-23T07:30:18.331Z · LW(p) · GW(p)

I was wondering this myself. I roughly knew of Solomonoff Induction as related... but apparently that is equivalent! The next thing my memory turned up was "Minimum Description Length" principle, which as it turns out... is also a version of Occam's Razor. Funny how that works.

If we look at the original question again... "If two hypotheses fit the same observations equally well, why believe the simpler one is more likely to be true?" If I understand the conjunction fallacy correctly, it is strictly true that adding more propositions cannot increase the probability. That is to say, P(A & B) <= P(B)... and P(A & B) <= P(A).

So the argument could be made that B might have probability one and therefore would be an equally probable hypothesis with its addition. So if you start with A, and B has probability less than one, it will strictly lower the probability to include it. Thus as far as I can tell, Occam's Razor holds except where additional propositions have probability one.

...But if they have probability one, wouldn't they have to be axiomatically identical to just having proposition A? Or would it perhaps have to be probability one given A? I honestly don't know enough here, but I think the basic idea stands?
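
A minimal sketch of the conjunction rule Regex is using, on an invented joint distribution (the numbers are arbitrary; only the inequality matters):

```python
# Toy numbers for the conjunction rule: P(A & B) <= min(P(A), P(B)).
# The joint distribution below is made up purely for illustration.
joint = {
    (True, True): 0.30,   # A and B
    (True, False): 0.20,  # A, not B
    (False, True): 0.25,  # B, not A
    (False, False): 0.25, # neither
}

p_a = sum(p for (a, _), p in joint.items() if a)   # P(A)     = 0.50
p_b = sum(p for (_, b), p in joint.items() if b)   # P(B)     = 0.55
p_ab = joint[(True, True)]                         # P(A & B) = 0.30

assert p_ab <= p_a and p_ab <= p_b
# Equality with P(A) would require P(B | A) = 1 -- the "probability one
# given A" case discussed above.
```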

Replies from: gjm
comment by gjm · 2015-09-23T16:14:58.986Z · LW(p) · GW(p)

As Richard Kennaway has said, this only deals with cases where one hypothesis is a conjunction including another (e.g., "There is a god" and "There is a god called Bill"), but most cases in which we actually want to apply OR aren't like that; they're more like "geocentric astronomy with circular orbits plus epicycles" and "heliocentric astronomy with elliptical orbits".

Replies from: Regex
comment by Regex · 2015-09-23T22:54:37.445Z · LW(p) · GW(p)

Ah. Yeah that does clear things up a bit. What would a solution look like, then? To show the complexity of an idea impacts its probability... but unless you use the historic argument of 'it's looked that way in the past for stuff like this' I don't see any way of even approaching that.

What if we imagine the space of hypotheses? A simpler hypothesis would be a larger circle because there may be more specific rules that act in accordance with it. 'The strength of a hypothesis is not what it can explain, but what it fails to account for', so a complicated prediction should occupy a very tiny region and therefore have a tiny probability.

Or... is that just another version of Solomonoff Induction, and so the same thing?

Replies from: hairyfigment
comment by hairyfigment · 2015-09-24T02:35:50.313Z · LW(p) · GW(p)

Near as I can tell, you're describing the same conjunction rule from your previous comment!

This conjunction rule says that a claim like 'The laws of physics always hold,' has less probability than, 'The laws of physics hold up until September 25, 2015 (whether or not they continue to hold after).'

Solomonoff Induction is an attempt to find a rule that says, 'OK, but the first claim accounts for nearly all of the probability assigned to the second claim.'
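
A toy sketch of the rule being gestured at, assuming invented description lengths in bits (real Solomonoff induction weighs programs on a universal machine; these numbers are stand-ins):

```python
# Toy simplicity prior: weight each hypothesis by 2 ** -(description length).
lengths = {
    "laws always hold": 16,
    "laws hold until 2015-09-25, then anything": 40,
}

weights = {h: 2.0 ** -bits for h, bits in lengths.items()}
total = sum(weights.values())
for h, w in weights.items():
    print(h, w / total)
# With these made-up lengths the simple hypothesis gets all but ~6e-8 of the
# mass: nearly all the probability that the laws hold up to 2015 comes from
# the hypothesis that they always hold.
```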

Replies from: Regex
comment by Regex · 2015-09-24T05:08:04.901Z · LW(p) · GW(p)

Hrm, yeah. I think I need more tools and experience to be able to think about this properly.

comment by Richard_Kennaway · 2015-09-23T10:05:51.217Z · LW(p) · GW(p)

Propositions with more parts are not necessarily merely the conjunction of those parts. "A or B" and "A and B" may both be the same amount of complexity, by whatever measure, more than A - yet "A or B" is at least as probable as A, while "A and B" is at most as probable.

comment by DanielLC · 2009-12-28T19:18:47.542Z · LW(p) · GW(p)

2+2=4 is part of the definition of +. The question isn't why we think 2+2=4. The question is why we're so obsessed with addition. 2 << 2 = 8 (2 shifted left by two bits), but you don't hear people talking about how 2 and 2 makes eight.

You simply can't do anything without something being a priori. Is the universe orderly? Maybe it looks orderly by coincidence. The probability that it looks this way given that it's random is simple enough, but we also need to know the probability that it looks this way and it's not orderly. We need some a priori probability that it isn't orderly, or we simply can't work it out. Occam's Razor isn't just there to tell you that A is more likely than B. It's there to tell you how likely A and B are, which you'll need if you want to know how likely they are given C.
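
Spelling out the Bayesian bookkeeping behind that point (assuming "orderly" and "random" exhaust the possibilities): P(orderly | looks orderly) = P(looks orderly | orderly) * P(orderly) / [P(looks orderly | orderly) * P(orderly) + P(looks orderly | random) * P(random)]. The likelihoods alone cannot settle the question; some prior P(orderly) has to come from somewhere, and Occam's Razor is one candidate supplier.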

comment by Carinthium · 2010-11-23T13:53:56.547Z · LW(p) · GW(p)

A statement can be true a priori in the sense that no sensory evidence is needed to infer it- the principle of non-contradiction, for example.

comment by shokwave · 2010-11-23T14:14:57.496Z · LW(p) · GW(p)

A priori, translated very roughly but with respect to the spirit of the phrase, means "before experience". It is used with a posteriori which means "after experience". Something known a priori is equivalent to your prior probability; something known a posteriori is equivalent to the posterior probability. That is, when you are concerned with an event, before any experience of the event, your knowledge is a priori.

This is, of course, a slippery slope: that prior is simply the posterior of something else.

Some philosophers have tried to use a radical a priori to avoid this slope: an ideal a priori before any experience at all, a la Descartes' "cogito, ergo sum". This is the equivalent of the universal prior problem: what is your first prior? Unsurprisingly, their answers aren't convincing.

However, a priori / a posteriori is still a useful distinction. The first uses some reasoning as the prior for the probability calculation; the second uses statistics as the prior. Something like existential risk requires a priori reasoning (in this limited sense) about the risks, since we can't have an a posteriori experience about existential catastrophes.
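
A minimal sketch of that chaining, assuming a binary hypothesis and invented likelihoods - each posterior becomes the prior for the next update:

```python
# Each update's posterior becomes the next update's prior.
def update(prior, like_true, like_false):
    """One Bayes update of a binary hypothesis H on a single observation."""
    num = prior * like_true
    return num / (num + (1 - prior) * like_false)

belief = 0.5  # the "first prior" -- the part no regress of updates can supply
for _ in range(3):  # three observations, each twice as likely if H is true
    belief = update(belief, like_true=0.8, like_false=0.4)
    print(round(belief, 4))  # 0.6667, 0.8, 0.8889
```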

comment by Hul-Gil · 2011-05-30T04:30:24.968Z · LW(p) · GW(p)

(More necromancy!)

I thought Occam's Razor was justified by the fact that every new proposition involved necessarily increases the number of ways in which the entire explanation could fail. Then you require evidence for yet another belief, and since you cannot be 100% accurate in any of your propositions, your accuracy continually decreases as well.

comment by Yosarian2 · 2012-12-31T18:02:54.701Z · LW(p) · GW(p)

A Priori has always just seemed to me like another way to describe what we call "assumption" in classical logic. You can't deduce anything in classical logic without starting from certain assumptions and seeing what you can deduce from them, and one of the strengths of classical logic is that it forces you to actually list your assumptions up front, so someone else can say "I agree with your reasoning, but I think your assumption "B" is invalid".

Trying to take assumptions apart, see if they are valid, see if they can either be proven inductively from evidence or deductively from other assumptions, and trying to figure out where that specific assumption comes from, is a very valid thing to do (hitting the "explain" button), but on some level, I think you are always going to have to have some assumptions in order to use any logical system (either classic logic, or Bayesian reasoning).

comment by brainoil · 2013-04-30T09:54:06.740Z · LW(p) · GW(p)

How is observing this pattern in someone else's brain any different, as a way of knowing, from observing your own brain doing the same thing? When "pure thought" tells you that 1 + 1 = 2, "independently of any experience or observation", you are, in effect, observing your own brain as evidence.

No no no. The difference between a priori and a posteriori is where the justification lies. You may be counting on your fingers when you count 1 + 1. It may be that you won't be able to figure out the answer if someone cut off your fingers. In fact, it may be that you won't be able to understand what 1 means if you didn't have your fingers. But the justification for 1 + 1 being 2 is not in your fingers.

So it may be that you are able to observe how your brain operates when you're counting 1 + 1. But even if your brain operated in a different way, 1 + 1 is still 2. If B is taller than A, and C is taller than B, C is taller than A. It may be that you're not able to understand this without three pencils. But C being taller than A is a priori knowledge.

Replies from: Juno_Watt
comment by Juno_Watt · 2013-06-16T18:51:46.347Z · LW(p) · GW(p)

If you define evidence as a system getting information from outside, then observing your own brain is not evidence. Inferential a priori truth is what you can (but don't have to) get in a closed system. A posteriori truth is what can only be obtained in an open system, one with sensors. And non-inferential, innate knowledge remains a problem.

comment by Juno_Watt · 2013-06-16T18:48:45.112Z · LW(p) · GW(p)

If you are a philosopher whose daily work is to write papers, criticize other people's papers, and respond to others' criticisms of your own papers, then you may look at Occam's Razor and shrug. Here is an end to justifying, arguing and convincing. You decide to call a truce on writing papers; if your fellow philosophers do not demand justification for your un-arguable beliefs, you will not demand justification for theirs. And as the symbol of your treaty, your white flag, you use the phrase "a priori truth".

Or the word "intuition".

But to a Bayesian, in this era of cognitive science and evolutionary biology and Artificial Intelligence, saying "a priori" doesn't explain why the brain-engine runs. If the brain has an amazing "a priori truth factory" that works to produce accurate beliefs, it makes you wonder why a thirsty hunter-gatherer can't use the "a priori truth factory" to locate drinkable water. It makes you wonder why eyes evolved in the first place, if there are ways to produce accurate beliefs without looking at things.

The claim that there is some non-inferential a priori truth, or accurate intuition, is not equivalent to the claim that a priori truth is available about everything. Moreover, non-inferential, soundness-style a priori truth has an evolutionary justification: we might believe X, despite not having seen it with our own eyes, because only those of our ancestors who believed X survived. Innate knowledge, the naturalistic a priori, must be sharply distinguished from the mystical a priori (and both must be distinguished from inference-from-premises).

"James R. Newman said: 'The fact that one apple added to one apple invariably gives two apples helps in the teaching of arithmetic, but has no bearing on the truth of the proposition that 1 + 1 = 2.' The Internet Encyclopedia of Philosophy defines 'a priori' propositions as those knowable independently of experience. Wikipedia quotes Hume: Relations of ideas are 'discoverable by the mere operation of thought, without dependence on what is anywhere existent in the universe.' You can see that 1 + 1 = 2 just by thinking about it, without looking at apples."

And that is quite uncontentious, providing that it applies to truth as validity (correct inference from possibly arbitrary premises), and not as soundness, or the non-inferential a priori (for instance, the question of whether one's chosen premises are really true).

"You could see someone else's engine operating materially, through material chains of cause and effect, to compute by "pure thought" that 1 + 1 = 2. How is observing this pattern in someone else's brain any different, as a way of knowing, from observing your own brain doing the same thing? When "pure thought" tells you that 1 + 1 = 2, "independently of any experience or observation", you are, in effect, observing your own brain as evidence."

And when your Pure Thought tells you that the principle of non-contradiction is true (something you need in order to infer 1+1=2), you may be benefiting from your ancestors' hard-won experience. The problem is that the a priori needs to be defined in terms of isolated systems, and no system is ultimately isolated.

" this engine works at all, then it should have the same output if it observes (with eyes and retina) a similar brain-engine carrying out a similar collision, and copies into itself the resulting pattern. In other words, for every form of a priori knowledge obtained by "pure thought", you are learning exactly the same thing you would learn if you saw an outside brain-engine carrying out the same pure flashes of neural activation. The engines are equivalent, the bottom-line outputs are equivalent, the belief-entanglements are the same."

If something can only be learnt through empiricism, then offloading to another Engine that doesn't have the appropriate sensors doesn't help. On the other hand, the claim that any a priori inference can be offloaded to another Engine, an isolated external processor, does not disprove the existence of the a priori. The a posteriori is that which cannot be learnt by an isolated (no sensors) system; the inferential a priori is that which can - but it doesn't matter which Engine is doing the processing.

"There is nothing you can know "a priori", which you could not know with equal validity by observing the chemical release of neurotransmitters within some outside brain. What do you think you are, dear reader?"

So long as you are talking about inference from premises. But observing their brain is not going to tell me that their premises are true.

"Are the sort of neural flashes that philosophers label "a priori beliefs", arbitrary? "

Those flashes are non-inferential, soundness-style a priori intuitions, and are not addressed by the foregoing.

You can't excuse calling a proposition "a priori" by pointing out that other philosophers are having trouble justifying their propositions. If a philosopher fails to explain something, this fact cannot supply electricity to a refrigerator, nor act as a magical factory for accurate beliefs. There's no truce, no white flag, until you understand why the engine works.

If the engine does nothing but infer conclusions from premises, however computationally or materialistically, you still don't know some important things: whether the premises are true, and the conclusions sound.

If you clear your mind of justification, of argument, then it seems obvious why Occam's Razor works in practice: we live in a simple world, a low-entropy universe in which there are short explanations to be found.

We do, according to our explanations...which were selected for simplicity in the first place. You don't have an insight into the universe separate from explanations.

Perhaps you cannot argue anything to a hypothetical debater who has not accepted Occam's Razor, just as you cannot argue anything to a rock. A mind needs a certain amount of dynamic structure to be an argument-acceptor. If a mind doesn't implement Modus Ponens, it can accept "A" and "A->B" all day long without ever producing "B". How do you justify Modus Ponens to a mind that hasn't accepted it? How do you argue a rock into becoming a mind?

Then a priori truths of a non-inferential kind are preconditions of rationality. Which has nothing to do with materialism or computationalism.

comment by [deleted] · 2015-02-14T18:32:37.300Z · LW(p) · GW(p)

But of course there's such a thing as an a priori statement! Running a computation forwards without any uncertainty in it yields a result: this is "a priori" in the sense that, since it operates only on abstract mental data with no reference to empirical reality, it requires no experience to "get right" (rather, experience is required to locate a useful computation, out of all possible computations). 2+2 really does equal 4, every time, all the time, because any computation isomorphic to 2+2 must always yield an answer isomorphic to 4.

comment by buybuydandavis · 2016-01-14T12:01:27.369Z · LW(p) · GW(p)

When you add 1 + 1 and get 2 by thinking, these thoughts are themselves embodied in flashes of neural patterns. In principle, we could observe, experientially, the exact same material events as they occurred within someone else's brain.

"the exact same material events"?

He has made a prediction of observable events that I predict is very very very wrong.

Your brain is not my brain. Not the exact same size, shape, or connectivity. Not performing the exact same ocean of processing at any one moment when one might also be adding 1+1 to equal 2. For that matter, your brain 5 minutes ago adding 1+1 to equal 2 will not exhibit "the exact same material events" as your brain doing it now, or ten minutes from now.

This is one of many times where I wish LessWrong had seen more influence from Korzybski.

comment by bjbernis · 2016-04-29T02:07:36.601Z · LW(p) · GW(p)

I must respectfully disagree with your interpretation of Kant's use of the term "a priori knowledge." Kant says "While all knowledge begins with experience, it does not all arise out of experience." Hence, Kant never himself says that we can have knowledge without ever having had experience; that is to say, the shorthand explanation of Kant's philosophy, namely that "a priori truths are truths that can be attained without experience," does a poor job of representing the nuance of his epistemological system. Again, he says "While all knowledge begins with experience, it does not all arise out of experience," meaning that you have to exist in order to attain a priori knowledge (a hearkening back to Descartes' "I think therefore I am"); however, a priori truths are not going to be found within the physical laws of the world. Rather, a priori truths are found in the very nature of our mental reasoning process, as well as in the very nature of language itself. This is so regardless of the very real fact that language, alongside the accompanying mental capacity required to wield it, is in large part the result of the very laws of nature Kant has deemed irrelevant. This is because Kant is not concerning himself (in this part of the argument) with the distinction between phenomena (our experience) and noumena (the thing in itself, i.e., the external world outside of our perspective). Rather, Kant states that regardless of the nature of the reality outside of human experience, the nature of our experience itself (regardless of what shaped that experience) can be studied independently of its causes, and that doing so illuminates many concepts that were until then unknown, e.g., the existence of a priori truths themselves; the categories; and the transcendental deduction.

I hope this insight from a Kantian scholar sheds some light on your very unique and interesting argument.

comment by Andrew Jacob Sauer (andrew-jacob-sauer) · 2020-03-25T05:36:18.250Z · LW(p) · GW(p)
When "pure thought" tells you that 1 + 1 = 2, "independently of any experience or observation", you are, in effect, observing your own brain as evidence.

I mean, yeah? You can still do that in your armchair, without looking at anything outside of yourself. Mathematical facts are indeed "discoverable by the mere operation of thought, without dependence on what is anywhere existent in the universe," if you modify the statement a little to say "anywhere else existent" in order to acknowledge that the operation of thought indeed exists in the universe. Do mathematical facts exist independently of the universe? Maybe, maybe not, it probably depends what you mean by "exist" and it doesn't really matter to anyone since either way, you can't discover any mathematical facts without using your brain, which is in the universe. So there's no observable difference between whether Platonic math exists or not.


"free will" is a useful concept which should be kept, even though it has been used to refer to nonsensical things. Just because one can't will what he wills, doesn't mean we shouldn't be able to talk about willing what you do. Similarly, just because you can't get knowledge without thinking, doesn't mean we shouldn't be able to use "a priori knowledge" to talk about getting knowledge without looking.

comment by Idan Arye · 2020-09-17T14:13:29.916Z · LW(p) · GW(p)

Modus Ponens can be justified by truth tables. Over the four rows (A,B) = TT, TF, FT, FF, A has the truth table TTFF and A -> B has the truth table TFTT. By combining these to A & (A -> B) we get the truth table TFFF, which is only true in the row where both A and B are true.
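
That check can be carried out mechanically; a minimal sketch:

```python
# Exhaustive truth-table check of Modus Ponens: the only row where both
# A and (A -> B) are true is the row where B is true as well.
def implies(a, b):
    return (not a) or b  # material conditional

for a in (True, False):
    for b in (True, False):
        both = a and implies(a, b)
        print(a, b, implies(a, b), both)
        if both:
            assert b  # from A and A -> B, conclude B
```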

Of course, one can always reject the notion of truth tables and then we are back to square one...

------

As for Occam's Razor - I used to think of it in terms of avoiding overfitting. More complex explanations have more degrees of freedom which makes it easier for them to explain all the datapoints by "twisting" instead of by uncovering some underlying rule.
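
A minimal numpy sketch of that "twisting", with invented data (the degrees and sample size are arbitrary):

```python
import numpy as np

# With as many parameters as datapoints, a polynomial "explains" everything
# by twisting through the noise, and extrapolates badly.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 8)
y = 2.0 * x + rng.normal(scale=0.1, size=x.size)  # underlying rule: y = 2x

simple = np.polyfit(x, y, deg=1)  # 2 degrees of freedom
twisty = np.polyfit(x, y, deg=7)  # 8 degrees of freedom: fits every point

print(np.polyval(simple, 1.25))  # near 2.5, what the underlying rule gives
print(np.polyval(twisty, 1.25))  # typically far off: it learned the noise
```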

Now that I've been exposed to Bayesianism though, and consider beliefs to be defined by the predictions you can get from them, I see Occam's Razor as a matter of pragmatism:

  1. If two explanations yield the exact same predictions, then they are different representations of the same belief. We should choose the simpler one not because it is more true or more accurate - they are identically true and identically accurate - but because it is easier to work with.
  2. If the two explanations yield some different predictions, then we don't even need Occam's Razor - we need to test these predictions and see which explanation is more accurate.
  3. If for whatever reason we can't test these predictions (too expensive? unethical? cannot be done with current technology?) but we still need to pick one, picking the simpler one is still a good rule of thumb because it is bounded - we can always make the explanation more complex, but there is a limit to how much simpler we can make it before it no longer explains the observations. So we have a stopping condition - if the rule were to pick the more complex explanation, we would be stuck forever in a race to make the explanation more and more complicated.
Replies from: TAG
comment by TAG · 2020-09-17T17:24:21.609Z · LW(p) · GW(p)

If two explanations yield the exact same predictions, then they are different representations of the same belief

Not at all. A basically false explanation, such as a geocentric model of the solar system, can predict as accurately as a basically true model, so long as you are allowed to add endless numbers of epicycles. That's one of the basic motivations for using Occam's Razor. If predictive power and ontological content were identical, there would be no need for it.

Replies from: Idan Arye
comment by Idan Arye · 2020-09-17T20:32:03.576Z · LW(p) · GW(p)

You'll need more than just epicycles to make the geocentric model yield accurate predictions. For example, what will happen if we launch a rocket straight up, and observe the Earth from that rocket?

According to the geocentric model, the Earth does not spin - it is the Sun that revolves around it. So if we launch a rocket straight up, it should not observe the Earth rotating. With our modern model, or even with the heliocentric model, we would predict that the rocket sees the Earth rotating, because the ratio between the perpendicular velocity the rocket started with and its distance from the Earth gets lower and lower as the rocket gets farther away.

So that's one different prediction.

Now, say that we modify the geocentric model so that the Earth is still in the center, but also rotates. What is the angular velocity of its rotation? If we calculate it based on the observations from our rocket, we will come to the conclusion that the Sun's rotational velocity is extremely low. So low, in fact, that it should not be able to maintain its centrifugal force - and should have been pulled into the Earth a long time ago.

So you'd have to change the rules of gravity too. And then the rules of relativity. And the description will be infinite - because you'll need to match not only the known epicycles, not only the existing scenarios, but any possible setting and formation that can come to mind.

And because it is infinite, it can never actually be used to predict things before they are observed - because calculating these predictions will take infinite time. We can only ever use a finite sub-representation of it, which will not yield accurate predictions for all cases.

But still - if we could, it would not be different from a correct model, just like the sum of an infinite Taylor series is the same as the analytic function it is derived from. Even if the Taylor series no longer represents the intuition behind that function.

Replies from: Idan Arye, TAG
comment by Idan Arye · 2020-09-17T23:30:50.217Z · LW(p) · GW(p)

Actually... if you squint a bit there is a compact way to represent the fitted geocentric model:

  • The Earth is at the center.
  • There is a mysterious force, originating from the earth, that pushes all objects away. Its strength is what you would expect from the Earth's centrifugal force according to the modern model.
  • All the objects in the universe, other than the Earth, are accelerated in the direction opposite to, and with the same magnitude as, the Earth's acceleration in the modern model.

With relativity in mind these rules may not be enough, but let's ignore that for the sake of the argument.

At this point, I'll ask the neogeocenterists (pun intended), wouldn't it be simpler and easier to just use the modern model for calculating my predictions?

"But then you'll get wrong results!", they'll say.

How so? The centrifugal force from assuming the Earth rotates mimics your mysterious force that pushes all things away, and the acceleration of the Earth mimics the acceleration your model adds to all other celestial bodies. The predictions for the relative position and velocity of each pair of objects should then be identical in both models.

"Yea, sure, but you'd still get wrong results - the Earth will not be in the cener."

So... what? What difference does being in the center make? If it makes a difference, we should test for that difference and support or disprove your model!

"No, this is not a difference you can test for, but it makes us special!"

Special... how?

"There are countless planets in the universe, and infinite positions to put the center. What is the probability that we are the ones in the center? That we are the only planet that doesn't move? That these mysterious unexplainable forces make sure we are kept in the center of the universe?"

Pretty damn high, I'd say, considering how you picked the origin to be our position, you decided to use our velocity for calculating the relative velocities of all other objects as if they were absolute velocities, and you are the ones who added these mysterious forces instead of picking a model that does not require them. Sure, I can't scientifically prove that your model is wrong, but you can't prove that all the other models that don't put the Earth in the center are wrong - and therefore you cannot claim that the Earth is special for being in the center of the universe.

--------------------------------------

By defining beliefs as the predictions you can get from them, I don't need Occam's Razor to be true - it is enough that it is useful. The neogeocentric model is not different from the model I use - not in any meaningful way, for if it were different in any meaningful way, that would be a difference in predictions that we could test. So I don't need to argue that simpler = truer - I just let them have their complicated representation of the belief, and instead draw the line at trying to get any meaningful insight from it that cannot be obtained from the more compact representation that I use.

comment by TAG · 2020-09-18T17:13:09.941Z · LW(p) · GW(p)

You’ll need more than just epicycles to make the geocentric model yield accurate predictions

It takes more than literal epicycles, but there are any number of ways of complicating a theory to meet the facts.

But still - if we could, it would not be different from a correct model

Of course it is different. Heliocentrism says something different about reality than geocentrism.

Replies from: Idan Arye
comment by Idan Arye · 2020-09-19T10:08:17.952Z · LW(p) · GW(p)

Of course it is different. Heliocentrism says something different about reality than geocentrism.

Different... how? In what meaningful ways is it different?

Replies from: Teerth Aloke, TAG
comment by Teerth Aloke · 2020-09-19T10:28:45.921Z · LW(p) · GW(p)

The debate is apparently about the meaning of 'different'. Someone might define 'different' as 'predicting different observations' and another as 'different ontological content'.

Suppose there is a box in front of you which contains either a $20 or a $100 note. However, you have very strong reasons to believe that the content of the box shall be unknown to you, forever. Is the question "Is there a $20 or a $100 note in the box?" meaningful? Is the belief in the presence of a $20 note different from the belief in the presence of a $100 note? That is essentially similar to the problem of identical models.

Replies from: Idan Arye
comment by Idan Arye · 2020-09-19T12:11:25.860Z · LW(p) · GW(p)

If the content of the box is unknown forever, that means that it doesn't matter what's inside it because we can't get it out.

Replies from: TAG
comment by TAG · 2020-09-19T19:57:46.003Z · LW(p) · GW(p)

Whether something is empirically unknowable forever is itself unknowable... it's an acute form of the problem of induction.

it doesn’t matter what's inside it

But that isn't quite the same as saying that statements about what's inside are meaningless. A statement can be meaningful without mattering. And you have to be able to interpret the meaning, in the ordinary sense, in order to be able to notice that it doesn't matter.

Replies from: Idan Arye
comment by Idan Arye · 2020-09-19T22:17:07.411Z · LW(p) · GW(p)

If a universe where the statement is true is indistinguishable from a universe where the statement is false, then the statement is meaningless. And if the set of universes where statement A is true is identical to the set of universes where statement B is true, then statement A and statement B have the same meaning whether or not you can "algebraically" convert one to the other.

Replies from: TAG
comment by TAG · 2020-09-20T11:57:00.143Z · LW(p) · GW(p)

And if the set of universes where statement A is true is identical to the set of universes where statement B is true

They're not, because A and B assert different things.

Replies from: Idan Arye
comment by Idan Arye · 2020-09-20T12:25:11.585Z · LW(p) · GW(p)

If A and B assert different things, we can test for these differences. Maybe not with current technology, but in principle. They yield different predictions and are therefore different beliefs.

Replies from: TAG
comment by TAG · 2020-09-20T14:37:26.319Z · LW(p) · GW(p)

If A and B assert different things, we can test for these differences.

You keep assuming verificationism in order to prove verificationism.

They assert different things because they mean different things, because the dictionary meanings are different.

In the thought experiment we are considering, the contents of the box can never be tested. Nonetheless $20 and $100 mean different things.

Replies from: Idan Arye, Idan Arye
comment by Idan Arye · 2020-09-20T15:38:11.487Z · LW(p) · GW(p)

In the thought experiment we are considering, the contents of the box can never be tested. Nonetheless $20 and $100 mean different things.

I'm not sure you realize how strong a statement "the contents of the box can never be tested" is. It means even if we crack open the box we won't be able to read the writing on the bill. It means that even if we somehow tracked all the $20 and all the $100 bills that were ever printed, their current location, and whether or not they were destroyed, we won't be able to find one which is missing and deduce that it is inside the box. It means that even if we had a powerful atom-level scanner that can accurately map all the atoms in a given volume and put the box inside it, it won't be able to detect if the atoms are arranged like a $20 bill or like a $100 bill. It means that even if a superintelligent AI capable of time reversal calculations tried to simulate a time reversal it wouldn't be able to determine the bill's value.

It means that the amount printed on that bill has no effect on the universe, and was never affected by the universe.

Can you think of a scenario where that happens, but the value of the dollar bill is still meaningful? Because I can easily describe a scenario where it isn't:

Dollar bills were originally "promises" for gold. They were signed by the Treasurer and the secretary of the Treasury because the Treasury is the one responsible for fulfilling that promise. Even after the gold standard was abandoned, the principle that the Treasury is the one casting the value into the dollar bills remains. This is why the bills are still signed by the Treasury's representatives.

So, the scenario I have in mind is that the bill inside the box is a special bill - instead of a fixed amount, it says the Treasurer will decide if it is worth 20 or 100 dollars. The bill is still signed by the Treasurer and the secretary of the Treasury, and thus has the same authority as regular bills. And, in order to fulfill the condition that the value of the bill is never known, the Treasurer is committed to never deciding the worth of that bill.

Is it still meaningful to ask, in this scenario, if the bill is worth $20 or $100?

Replies from: TAG
comment by TAG · 2020-09-24T22:27:59.361Z · LW(p) · GW(p)

I can understand that your revised scenario is unverifiable by understanding the words you wrote, i.e. by grasping their meaning. As usual, the claim that some things are unverifiable is parasitic on the existence of a kind of meaning that has nothing to do with verifiability.

comment by Idan Arye · 2020-09-20T16:55:21.669Z · LW(p) · GW(p)

They assert different things because they mean different things, because the dictionary meanings are different.

The Quotation is not the Referent [? · GW]. Just because the text describing them is different doesn't mean the assertions themselves are different.

Eliezer identified evolution with the blind idiot god Azathoth [LW · GW]. Does this make evolution a religious Lovecraftian concept?

Scott Alexander identified the Canaanite god Moloch with the principle that forces you to sacrifice your values for the competition. Does this make that principle an actual god? Should we pray to it?

I'd argue not. Even though Eliezer and Scott brought the gods in for the theatrical and rhetorical impact, evolution is the same old evolution and competition is the same old competition. Describing the idea differently does not automatically make it a different idea - just like describing $f(x)=(x+1)^2$ as $g(x)=x^2+2x+1$ does not make it a different function.

In the case of mathematical functions we have a simple equivalence law: $f \equiv g \iff \forall x: f(x) = g(x)$. I'd argue we can have a similar equivalence law for beliefs - $A \equiv B \iff \forall X: P(X|A) = P(X|B)$ - where A and B are beliefs and X is an observation.

This condition is obviously necessary because if we declared $A \equiv B$ even though $P(X|A) > P(X|B)$ for some observation $X$, and we find that $X$ holds, that would support A and therefore also B (because they are equivalent) - which means an observation that does not match the belief's predictions supports it.

Is it sufficient? My argument for its sufficiency is not as analytical as the one for its necessity, so this may be the weak point of my claim, but here it goes: if $A \not\equiv B$ even though they give the same predictions, then something other than the state and laws of the universe is deciding whether a belief is true or false (actually - how accurate it is). This undermines the core idea of both science and Bayesianism that beliefs should be judged by empirical evidence. Now, maybe this concept is wrong - but if it is, Occam's Razor itself becomes meaningless, because if the explanation does not need to match the evidence, then the simplest explanation can always be "Magic!".
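
As a toy sketch of this equivalence law (the belief names and the three-scenario space below are purely hypothetical illustrations, not anyone's actual probability assignments):

```python
# Each "belief" is represented by the predictions it assigns to observations:
# A ≡ B  iff  P(X|A) = P(X|B) for every observation X in the scenario space.
evolution   = {"fossil_record": 0.95, "shared_dna": 0.99, "static_species": 0.01}
azathoth    = {"fossil_record": 0.95, "shared_dna": 0.99, "static_species": 0.01}
creationism = {"fossil_record": 0.40, "shared_dna": 0.50, "static_species": 0.90}

def equivalent(a: dict, b: dict) -> bool:
    """Two beliefs are equivalent iff they assign the same probability
    to every observation in the (finite, toy) scenario space."""
    return a.keys() == b.keys() and all(abs(a[x] - b[x]) < 1e-9 for x in a)

print(equivalent(evolution, azathoth))     # True  - same predictions, same belief
print(equivalent(evolution, creationism))  # False - testably different
```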

Replies from: TAG
comment by TAG · 2020-09-24T22:09:30.975Z · LW(p) · GW(p)

The Quotation is not the Referent. Just because the text describing them is different doesn’t mean the assertions themselves are different.

..because exact synonymy is possible. Exact synonymy is also rare, and it gets less probable the longer the text is.

You need to be clear whether you are claiming that two theories are the same because their empirical content is the same, or because their semantic content is the same.

just like describing $f(x)=(x+1)^2$ as $g(x)=x^2+2x+1$ does not make it a different function.

Those are different...computationally. They would take a different amount of time to execute.
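
A quick sketch of that computational difference (the exact timings will vary by machine; the point is only that the two definitions denote distinct computations):

```python
import timeit

def f(x):
    return (x + 1) ** 2           # one addition, one exponentiation

def g(x):
    return x * x + 2 * x + 1      # two multiplications, two additions

# Extensionally identical - same output for every input:
assert all(f(x) == g(x) for x in range(-1000, 1000))

# ...yet they are different procedures, and may run in different times:
print(timeit.timeit("f(12345)", globals=globals(), number=1_000_000))
print(timeit.timeit("g(12345)", globals=globals(), number=1_000_000))
```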

Pure maths is exceptional in its lack of semantics.

F = ma

and

P = IV

..are identical mathematically, but have different semantics in physics.

If $A \not\equiv B$, even though they give the same predictions, then something other than the state and laws of the universe is deciding whether a belief is true or false (actually - how accurate it is)

If two theories are identical empirically and ontologically, then some mysterious third thing would be needed to explain any difference. But that is not what we are talking about. What we are discussing is your claim that empirical difference is the only possible difference - equivalently, that the empirical content of a theory is all its content.

Then the answer to "what further difference could there be" is "what the theories say about reality".

comment by TAG · 2020-09-19T15:55:15.550Z · LW(p) · GW(p)

Semantically and ontologically. The dictionary meanings of the words heliocentric and geocentric are opposites, so they assert different things about the territory.

Note that this is the default hypothesis. Whatever I just called "dictionary meaning" is what is usually called "meaning" simpliciter.

Attempts to resist this conclusion are based on putting forward non-standard definitions of "meaning", which need to be argued for, not just assumed.

Replies from: Idan Arye
comment by Idan Arye · 2020-09-19T22:40:16.696Z · LW(p) · GW(p)

But it is not the dictionary definition of the geocentric model we are talking about - we have twisted it to have the exact same predictions as the modern astronomical model. So it no longer asserts the same things about the territory as the original geocentric model - its assertions are now identical to the modern model's. So why should it still hold the same meaning as the original geocentric model?

Replies from: TAG
comment by TAG · 2020-09-20T11:48:51.748Z · LW(p) · GW(p)

Dictionaries don't define complex scientific theories.

Our complicated, bad, wrong, neo-geocentric theory is still a geocentric theory.

Therefore it makes different assertions about the territory than heliocentrism.

Replies from: Idan Arye
comment by Idan Arye · 2020-09-20T12:28:12.165Z · LW(p) · GW(p)

So if I copied the encyclopedia definition of the heliocentric model, and changed the title to "geocentric" model, it would be a "bad, wrong, neo-geocentric theory [that] is still a geocentric theory"?

Replies from: TAG
comment by TAG · 2020-09-20T14:27:08.249Z · LW(p) · GW(p)

It would be a theory that didn't work, because you only changed one thing.

Replies from: Idan Arye
comment by Idan Arye · 2020-09-20T14:59:23.914Z · LW(p) · GW(p)

I'm not sure I follow - what do you mean by "didn't work"? Shouldn't it work the same as the heliocentric theory, seeing how every detail in its description is identical to the heliocentric model?

Replies from: Idan Arye
comment by Idan Arye · 2020-09-23T12:44:32.248Z · LW(p) · GW(p)

OK, I continued reading, and in Decoherence is Simple [? · GW] Eliezer makes a good case for Occam's Razor as more than just a useful tool.

In my own words (:= how I understand it): more complicated explanations have a higher burden of proof and therefore require more bits of evidence. If they give the same predictions as the simpler explanations, then each bit of evidence counts for both the complicated and the simple beliefs - but the simpler beliefs had higher prior probabilities, so after adding the evidence their posterior probabilities should remain higher.

So, if a simple belief A started with -10 decibels [LW · GW] and a complicated belief B started with -20 decibels, and we get 15 decibels of evidence supporting both, the posterior credibilities of the beliefs are 5 and -5 - so we should favor A. Even if we get another 10 decibels of evidence and the credibility of B becomes 5, the credibility of A is now 15, so we should still favor it. The only way we can come to favor B is if we get enough evidence that supports B but not A.
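
A minimal sketch of that bookkeeping, assuming the usual convention that credibility in decibels is 10 * log10(odds):

```python
def db_to_prob(db: float) -> float:
    """Convert log-odds measured in decibels to a probability."""
    odds = 10 ** (db / 10)
    return odds / (1 + odds)

prior_A, prior_B = -10.0, -20.0   # simple belief A, complicated belief B
shared_evidence = 15.0            # decibels of evidence supporting both equally

post_A = prior_A + shared_evidence    # 5 dB  -> p ≈ 0.76
post_B = prior_B + shared_evidence    # -5 dB -> p ≈ 0.24
print(db_to_prob(post_A), db_to_prob(post_B))

# Another 10 dB of shared evidence raises both but never closes the gap:
print(db_to_prob(post_A + 10), db_to_prob(post_B + 10))   # 15 dB vs 5 dB
```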

Of course - this doesn't mean that A is true and B is false, only that we assign a higher probability to A.

So, if we go back to astronomy - our neo-geocentric model has a higher burden of proof than the modern model, because it contains additional mysterious forces. We prove gravity and relativity and work out how centrifugal forces work, and that's (more or less) enough for the modern model. The exact same evidence also supports the neo-geocentric model - but it is not enough for it, because we also need evidence for the new forces we came up with.

Do note, though, that the claim that "there is no mysterious force" is simpler than "there is a mysterious force" is taken for granted here...

Replies from: TAG
comment by TAG · 2020-09-24T21:49:10.319Z · LW(p) · GW(p)

I’m not sure I follow—what do you mean by “didn’t work”? Shouldn’t it work the same as the heliocentric theory, seeing how every detail in its description is identical to the heliocentric model?

If you take a heliocentric theory, and substitute "geocentric" for "heliocentric", you get a theory that doesn't work in the sense of making correct predictions. You know this, because in previous comments you have already recognised the need for almost everything else to be changed in a geocentric theory in order to make it empirically equivalent to a heliocentric theory.

In my own words (:= how I understand it) more complicated explanations have a higher burden of proof and therefore require more bits of evidence. If they give the same predictions as the simpler explanations, then each bit of evidence counts for both the complicated and the simple beliefs—but the simpler beliefs had higher prior probabilities, so after adding the evidence their posterior probabilities should keep being higher.

What does "true" mean when you use it?

A geocentric theory can match any observation, provided you complicate it endlessly.

This discussion is about your claim that two theories are the same iff their empirical predictions are the same. But if that is the case, why does complexity matter?

EY is a realist and a correspondence theorist. He thinks that "true" means "corresponds to reality", and he thinks that complexity matters, because, all other things being equal, a more complex theory is less likely to correspond than a simpler one. So his support of Occam's Razor, his belief in correspondence-truth, and his realism all hang together.

But you are arguing against realism, in that you are arguing that theories have no content beyond their empirical content, i.e. their predictive power. You are denying that they have any semantic (non-empirical) content, and holding, as an implication of that, that they "mean" or "say" nothing about the territory. So why would you care that one theory is more complex than another, so long as its predictions are accurate?

Replies from: Idan Arye
comment by Idan Arye · 2020-09-25T14:49:47.173Z · LW(p) · GW(p)

If you take a heliocentric theory, and substitute "geocentric" for "heliocentric", you get a theory that doesn't work in the sense of making correct predictions. You know this, because in previous comments you have already recognised the need for almost everything else to be changed in a geocentric theory in order to make it empirically equivalent to a heliocentric theory.

I only change the title, I don't change anything the theory says. So its predictions are still the same as the heliocentric model's.

But you are arguing against realism, in that you are arguing that theories have no content beyond their empirical content, ie their predictive power. You are denying that they are have any semantic (non empirical content), and, as an implication of that, that they "mean" or "say" nothing about the territory. So why would you care that one theory in more complex than another, so long as its predictions are accurate?

The semantics are still very important as a compact representation of predictions. The predictions are infinite - the belief will have to give a prediction for every possible scenario, and scenario space is infinite. Even if the belief is only relevant for a finite subset of scenarios, it'd still have to say "I don't care about this scenario" an infinite number of times.

Actually, it would make more sense to talk about belief systems rather than individual beliefs, where the belief system is simply the probability function P. But we can still talk about single beliefs if we remember that they need to be connected to a belief system in order to give predictions, and that when we compare two competing beliefs we are actually comparing two belief systems where the only difference is that one has belief A and the other has belief B.

Human minds, being finite, cannot contain infinite representations - we need finite representations for our beliefs. And that's where the semantics come in - they are compact rules that can be used to generate a prediction for any given scenario. They are also important because the number of predictions we can test is finite. So even if we could comprehend the infinite prediction field over scenario space, we wouldn't be able to confirm a belief based on a finite number of experiments.

Also, with that kind of representation, we can't even come up with the full representation of the belief. Consider a limited scenario space with just three scenarios X, Y and Z. We know what happened in X and Y, and write a belief based on that. But what would that belief say about Z? If the belief is represented as just its predictions, without connections between distinct predictions, how can we fill in the predictions table?
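
A toy sketch of this gap - a bare prediction table is silent on an unseen scenario, while a semantic rule, being a finite piece of text, answers for any scenario (the scenario names here are hypothetical):

```python
# Prediction-table representation: covers only the scenarios we observed.
table = {"X": 0.9, "Y": 0.8}       # no entry for Z - the table is silent

# Semantic (rule-based) representation: finite description, unbounded coverage.
def rule(scenario: str) -> float:
    """A stand-in 'law': assign high probability to falling, low otherwise."""
    return 0.9 if scenario.endswith("falls") else 0.1

print(table.get("Z"))                              # None - the table can't fill in Z
print(rule("apple_falls"), rule("apple_floats"))   # 0.9 0.1 - the rule always answers
```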

The semantics help us with that because they have fewer degrees of freedom. With $N$ degrees of freedom we can match any $N$ observations, so we need more than $N$ observations to even start counting them as evidence. I'm not sure how to come up with a formula for the number of degrees of freedom a semantic representation of a belief has - it depends not only on the numerical constants but also on the semantics itself - but some properties of it are obvious:

  1. The prediction table representation has infinite degrees of freedom, since it can give a prediction for each scenario independently of the predictions given to the other scenarios.
  2. If a semantic representation is strictly simpler than another - that is, you can go from the simple one to the complex one just by adding rules - then the simpler one has fewer degrees of freedom than the complicated one. This is because the complicated one has all the degrees of freedom the simpler one had, plus more from the new rules (just adding a rule adds some degrees of freedom, even if the rule itself does not contain anything that can be tweaked).

So the simplicity of the semantic representation is meaningful because it means fewer degrees of freedom and thus requires less evidence, but it does not make the belief "truer" - only the infinite prediction table determines how true the belief is.
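
A small sketch of the degrees-of-freedom point, using polynomial curve-fitting as a purely illustrative stand-in for a semantic representation:

```python
import numpy as np

rng = np.random.default_rng(0)
xs = np.arange(5.0)
ys = rng.normal(size=5)                 # five arbitrary "observations"

# A degree-4 polynomial has 5 coefficients, i.e. 5 degrees of freedom,
# so it can match *any* five observations exactly - the fit is no evidence:
coeffs = np.polyfit(xs, ys, deg=4)
print(np.allclose(np.polyval(coeffs, xs), ys))   # True, for any ys whatsoever

# A 1-degree-of-freedom model (a constant) cannot, so agreement with
# the data would actually count as evidence for it:
print(np.allclose(np.full(5, ys.mean()), ys))    # False (almost surely)
```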

Replies from: TAG
comment by TAG · 2020-09-26T12:45:12.174Z · LW(p) · GW(p)

I only change the title, I don’t change anything

Maybe you do, but it's my thought experiment!

The semantics are still very important as a compact representation of predictions.

That isn't what you need to show. You need to show that the semantics have no ontological implications, that they say nothing about the territory.

Replies from: Idan Arye
comment by Idan Arye · 2020-09-26T19:12:39.960Z · LW(p) · GW(p)

That isn't what you need to show. You need to show that the semantics have no ontological implications, that they say nothing about the territory.

Actually, what I need to show is that the semantics say nothing extra about the territory that is meaningful. My argument is that the predictions are the canonical representation of the belief, so it's fine if the semantics say things about the territory that the predictions can't say, as long as everything they say that does not affect the predictions is meaningless. At least, meaningless in the territory.

The semantics of gravity theory say that the force that pulls objects together over long range based on their mass is called "gravity". If you call that force "travigy" instead, it will cause no difference in the predictions. This is because the name of the force is a property of the map, not the territory - if it were meaningful in the territory, it would have had an impact on the predictions.

And I claim that the "center of the universe" is similar - it has no meaning in the territory. The universe has no "center" - you can think of "center of mass" or "center of bounding volume" of a group of objects, but there is no single point you can naturally call "the center". There can be good or bad choices for the center, but not right or wrong choices - the center is a property of the map, not the territory.

If it had any effect at all on the territory, it should have somehow affected the predictions.

Replies from: TAG
comment by TAG · 2020-09-28T16:41:51.556Z · LW(p) · GW(p)

My argument is that the predictions are the canonical representation of the belief, so it's fine if the semantics say things about the territory that the predictions can't say, as long as everything they say that does not affect the predictions is meaningless.

  1. How can you say something, but say something meaningless?

  2. What does not saying anything (meaningful) about the territory buy you? What's the advantage?

Realists are realists because they place a terminal value in knowing what the territory is above and beyond making predictions. They can say what the advantage is ... to them. If you don't personally value knowing what the territory is, that need not apply to others.

The semantics of gravity theory says that the force that pulls objects together over long range based on their mass is called “gravity”. If you call that force “travigy” instead, it will cause no difference in the predictions

Travigy means nothing, or it means gravity. Either way, it doesn't affect my argument.

You don't seem to understand what semantics is. It's not just a matter of spelling changes or textual changes. A semantic change doesn't mean that two strings fail strcmp(); it means that terms have been substituted with other meaningful terms that mean something different.

And I claim that the “center of the universe” is similar—it has no meaning in the territory

"There is a centre of the universe" is considered false in modern cosmology. So there is no real thing corresponding to the meaning of string "centre of the universe". Which is to say that the string "centre of the universe" has a meaning , unlike the string "flibble na dar wobble".

If it had any effect at all on the territory, it should have somehow affected the predictions.

The territory can be different ways that produce the same predictions.

comment by TAG · 2020-09-17T17:50:12.789Z · LW(p) · GW(p)

Indeed, it seems there is no way to justify Occam’s Razor except by appealing to Occam’s Razor, making this argument unlikely to convince any judge who does not already accept Occam’s Razor.

That's very much not proven. There are multiple arguments for Occam's Razor (see the Wikipedia page), most or all of which aren't circular.