Syntacticism

post by ec429 · 2011-09-23T06:49:40.818Z · LW · GW · Legacy · 62 comments

I've mentioned in comments a couple of times that I don't consider formal systems to talk about themselves, and that consequently Gödelian problems are irrelevant.  So what am I actually on about?

It's generally accepted in mathematical logic that a formal system which embodies Peano Arithmetic (PA) is able to talk about itself, by means of Gödel numberings; statements and proofs within the system can be represented as positive integers, at which point "X is a valid proof in the system" becomes equivalent to an arithmetical statement about #X, the Gödel number representing X.  This is then diagonalised to produce the Gödel sentence (roughly, g = "There is no proof X such that the last line of X is g"), and incompleteness follows.  We can also do things like defining □ ("box") as the function taking a statement S to "There is a proof X in PA whose last line is S" (intuitively, □S says "S is provable in PA").  This then also lets us define the Löb sentence, and many other interesting things.
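As a concrete (and purely illustrative) sketch of the numbering step, here is a toy Gödel numbering in Python. Gödel's original scheme used prime-power encodings; any injective map from symbol strings to integers works equally well for the point being made, and the alphabet below is an invented example, not PA's official signature:

```python
# Toy Gödel numbering: encode a string of symbols as a single integer.
# Digits run 1..len(SYMBOLS) in base len(SYMBOLS)+1, so no string maps to 0
# and the encoding is injective.

SYMBOLS = "0S+*=()¬∧∨→∀xy"  # a small, made-up alphabet for arithmetic formulas

def godel_number(formula: str) -> int:
    """Encode a formula as an integer; distinct formulas get distinct numbers."""
    n = 0
    base = len(SYMBOLS) + 1
    for ch in formula:
        n = n * base + SYMBOLS.index(ch) + 1
    return n

def decode(n: int) -> str:
    """Recover the formula from its Gödel number."""
    base = len(SYMBOLS) + 1
    out = []
    while n:
        n, d = divmod(n, base)
        out.append(SYMBOLS[d - 1])
    return "".join(reversed(out))

g = godel_number("S0+S0=SS0")   # "1+1=2" in successor notation
assert decode(g) == "S0+S0=SS0"
```

Once statements are integers, "X is a valid proof of S" becomes an arithmetical claim about #X and #S, which is the step the post goes on to question.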

But how do we know that □S ⇔ there is a proof of S in PA?  Only by applying some meta-theory.  And how do we know that statements reached in the meta-theory of the form "thus-and-such is true of PA" are true of PA?  Only by applying a meta-meta-theory.  There is no a-priori justification for the claim that "A formal system is in principle capable of talking about other formal systems", which claim is used by the proof that PA can talk about itself.  (If I remember correctly, to prove that □ does what we think it does, we have to appeal to second-order arithmetic; and how do we know second-order arithmetic applies to PA?  Either by invoking third-order arithmetic to analyse second-order arithmetic, or by recourse to an informal system.)

Note also that the above is not a strange loop through the meta-level; we justify our claims about arithmeticₙ by appeal to arithmeticₙ₊₁, which is a separate thing; we never find ourselves back at arithmeticₙ.

Thus the claim that formal systems can talk about themselves involves ill-founded recursion, what is sometimes called a "skyhook".  While it may be a theorem of second-order arithmetic that "the strengthened finite Ramsey theorem is unprovable in PA", one cannot conclude from second-order arithmetic alone that the "PA" in that statement refers to PA.  It is however provable in third-order arithmetic that "what second-order arithmetic calls 'PA' is PA", but that hasn't gained us much - it only tells us that second- and third-order arithmetic call the same thing "PA"; it doesn't tell us whether this "PA" is PA.  Induct on the hierarchy of higher-order arithmetics to reach the obvious conclusion.  (Though note that none of this prevents the Paris-Harrington Theorem from being a theorem of n-th order arithmetic ∀n≥2.)

What, then, is the motivation for the above?  Well, it is a basic principle of my philosophy that the only objects that are real (in a Platonic sense) are formal systems (or rather, syntaxes).  That is to say, my ontology is the class of formal systems.  (This is not incompatible with the apparent reality of a physical universe; if this isn't obvious, I'll explain why in another post.)  But if we allow these systems to have semantics, that is, we claim that there is such a thing as a "true statement", we start to have problems with completeness and consistency (namely, that we can't achieve the one and we can't prove the other, assuming PA).  Tarski's undefinability theorem protects us from having to deal with systems which talk about truth in themselves (because they are necessarily inconsistent, assuming some basic properties), but if systems can talk about each other, and if systems can talk about provability within themselves (that is, if analogues to the □ function can be constructed), then nasty Gödelian things end up happening (most of which are, to a Platonist mathematician, deeply unsatisfying).

So instead we restrict the ontology to syntactic systems devoid of any semantics; the statement ""Foo" is true" is meaningless.  There is a fact-of-the-matter as to whether a given statement can be reached in a given formal system, but that fact-of-the-matter cannot be meaningfully talked about in any formal system.  This is a remarkably bare ontology (some consider it excessively so), but is at no risk from contradiction, inconsistency or paradox.  For, what is "P∧¬P" but another, syntactic, sentence?  Of course, applying a system which proves "P∧¬P" to the 'real world' is likely to be problematic, but the paradox or the inconsistency lies in the application of the system, and does not inhere in the system itself.
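The "fact-of-the-matter as to whether a given statement can be reached" can be made concrete with a toy example. The sketch below uses Hofstadter's well-known MIU system (my choice of example, not anything from the post itself): derivability is decided by pure symbol manipulation, with no notion of truth anywhere in sight.

```python
# A toy formal system in the purely syntactic spirit described above:
# strings over {M, I, U}, one axiom ("MI"), and four rewrite rules
# (Hofstadter's MIU system).  "Is this string derivable?" is a question
# about symbol manipulation alone.

from collections import deque

def miu_successors(s: str):
    if s.endswith("I"):
        yield s + "U"                      # rule 1: xI  -> xIU
    if s.startswith("M"):
        yield s + s[1:]                    # rule 2: Mx  -> Mxx
    for i in range(len(s) - 2):
        if s[i:i+3] == "III":
            yield s[:i] + "U" + s[i+3:]    # rule 3: III -> U
    for i in range(len(s) - 1):
        if s[i:i+2] == "UU":
            yield s[:i] + s[i+2:]          # rule 4: UU  -> (deleted)

def derivable(target: str, max_len: int = 10) -> bool:
    """Breadth-first search from the axiom 'MI', bounded by string length."""
    seen, frontier = {"MI"}, deque(["MI"])
    while frontier:
        s = frontier.popleft()
        if s == target:
            return True
        for t in miu_successors(s):
            if len(t) <= max_len and t not in seen:
                seen.add(t)
                frontier.append(t)
    return False

assert derivable("MIU")     # MI -> MIU by rule 1
assert not derivable("MU")  # famously underivable (I-count invariant mod 3)
```

Nothing in the search engine above mentions what "MU" means; the underivability of "MU" is a fact about reachability, which is all the bare ontology needs.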

EDIT: I am actually aiming to get somewhere with this, it's not just for its own sake (although the ontological and epistemological status of mathematics is worth caring about for its own sake).  In particular I want to set up a framework that lets me talk about Eliezer's "infinite set atheism", because I think he's asking a wrong question.

Followed up by: The Apparent Reality of Physics

Comments sorted by top scores.

comment by prase · 2011-09-23T11:45:01.107Z · LW(p) · GW(p)

Say you prove □S (where □S is defined by means of Gödel numbering). Do you believe then that there is a proof of S inside the system? Conversely, if you prove ¬□S, do you actually believe that there is no proof of S? Or do you think that having □S gives us no information about a proof of S, because it is based on a meta-theory which isn't justified? I don't think one can evade these questions by throwing away meaning. If there is a proof of S, there is a chance that someone finds it. If meta-theory justifications are worthless, one would expect someone eventually to prove ¬□S and S for some particular S. I don't think it is going to happen, and would find it extremely surprising and interesting if it did. What about you?

Replies from: ec429
comment by ec429 · 2011-09-23T17:36:43.695Z · LW(p) · GW(p)

Well, actually doing mathematics is an 'application' of the Platonic ideal mathematics; the perfect draughtsman's copy does involve semantics, simply because that's how the brain works. So I would indeed be surprised to see ¬□S and S proved for some PA-statement S, but only because in my experience thus far it has been predictively accurate to apply second-order arithmetic to PA. Nonetheless, if it did happen, it wouldn't refute PA, nor would it refute second-order arithmetic (neither has inherent 'truth-value', so the concept of 'refuting' them is unsinnig, i.e. nonsensical). What it would refute is the informal application of second-order arithmetic to PA. Though note that it wouldn't refute the third-order arithmetic which proves the statement "second-order arithmetic applies to PA", because "PA" is not PA.

It is, in my opinion, important to distinguish between the formal system X (which is timeless, unchanging, and Platonically real), and our experiences of applying X. Such experiences, which may lead us to make predictions, are not informative about X; rather, they are informative about the informal process of applying X. My experiences with arithmetic lead me to predict, informally, that applying second-order arithmetic to PA won't lead to contradiction. That says nothing about either system of arithmetic itself.

Replies from: prase
comment by prase · 2011-09-23T18:09:13.036Z · LW(p) · GW(p)

It is, in my opinion, important to distinguish between the formal system X (which is timeless, unchanging, and Platonically real), and our experiences of applying X.

Why is it important? I actually have difficulty seeing the boundary. What other information do we have about a formal system except experiences of its application?

Replies from: ec429
comment by ec429 · 2011-09-23T18:23:32.005Z · LW(p) · GW(p)

Map-territory. You're not distinguishing formal system X from our knowledge of formal system X.

Replies from: prase
comment by prase · 2011-09-24T21:46:10.486Z · LW(p) · GW(p)

This reply seems a bit too brief. When speaking about e.g. physics, it's clear what the map is (our models and theories) and what the territory is (nature around us). There are two principal ways of obtaining knowledge: exploring the territory by conducting experiments and exploring the map by calculating implications of our theories. In reality experimental results are viewed indirectly through the prism of the theory, but the distinction between experimentally established facts and theoretically derived propositions is still pretty clear.

I don't see a similar distinction for formal systems. All we know about a formal system we know because we apply its rules and derive theorems. We can in fact analyse the system on the meta level, but you seem to be objecting to that. So, what practical importance lies in the map-territory distinction here? What actual mistakes could I commit because I forget to make the distinction?

comment by MrMind · 2011-09-23T09:13:11.870Z · LW(p) · GW(p)

Four observations:

  • you're basically restating formalism
  • the set of all formal systems is an ontology: you can have formal systems that talk about formal systems, and you run into the same problems you set out to avoid
  • you do not run into completeness issues, but if you want your system to have any utility you have to assert the consistency of the systems you're using
  • the set of all formal systems? How do you define that, or check that it's consistent, without resorting to a higher-order theory?
Replies from: ec429
comment by ec429 · 2011-09-23T17:20:47.981Z · LW(p) · GW(p)

you're basically restating formalism

Not quite: what I have here is game formalism with the additional claim that the games are 'real' in a Platonic sense (and nothing else is).

you can have formal systems that talk about formal systems

But you cannot prove, formally, that formal system A talks about formal system B, without appealing to another formal system C (and then how do you know that C talks about A?). I already made this point in the OP.

if you want your system to have any utility you have to assert the consistency of the systems you're using

Yes, but you only have to make that assertion as a condition of the application; in order to claim that two pebbles plus two pebbles makes four pebbles I have to assert that PA applies to pebbles, which assertion involves asserting that PA is consistent - but if it turns out that PA is inconsistent, that only refutes the assertion that PA applies to pebbles; it does not 'refute' PA, since PA makes no claims of consistency (indeed, PA does not have a notion of "truth", so even Q=PA+"Q is consistent" makes no claims of consistency). Thus, while paradoxes may conceivably destroy the utility of mathematics, they still could not destroy mathematics itself.

the set of all formal system?

Good point, that should have been 'class'. Fixed.

Replies from: MrMind
comment by MrMind · 2011-09-26T09:38:24.126Z · LW(p) · GW(p)

But you cannot prove, formally, that formal system A talks about formal system B, without appealing to another formal system C (and then how do you know that C talks about A?). I already made this point in the OP.

If in formal system C you can prove that formal system A proves statements about formal system B, and since

Not quite: what I have here is game formalism with the additional claim that the games are 'real' in a Platonic sense (and nothing else is).

then it means that system C is a real meta-theory for system A and B, and since B is real, then A is making 'true' statements.

Thus, while paradoxes may conceivably destroy the utility of mathematics, they still could not destroy mathematics itself.

It's the same thing. What distinguishes mathematics from fantasy narrative is the strict adherence to a set of rules: if some system is inconsistent, then it's equivalent to having no rules (that is, all inconsistent systems have the same proving power).

Good point, that should have been 'class'. Fixed.

The distinction between set and class is only meaningful in a formal system... and I don't think you want a theory able to talk about the quantity of the totality of the real formal systems...

comment by [deleted] · 2011-09-23T18:50:54.773Z · LW(p) · GW(p)

I agree with the main tenets of your view - rejecting the Platonic realm of mathematical truths, instead founding mathematics rigorously and finitely on formal systems - but you do not hold fast to it; for example, the notion of a formal system "talking about itself" appears deeply semantic rather than syntactic.

I will speak informally about rigorous mathematics.

to prove that □ does what we think it does, we have to appeal to second-order arithmetic

There are two ways I can communicate to you what PA is:

  • Direct: Write out the grammar in full and spend a few days with you pushing symbols around, showing you which to accept and reject. Excluding human error, any theorem you prove with this system will also be a (PA-)theorem to me.
  • Implemented: Write out the axiomatization in an existing (stronger) formal system (such as ZFC) which we already both understand and have the same conception of.

It is not possible to consider Gödel's theorem with only a direct understanding of PA; you need an implemented one. This isn't a problem, though: supposing you had a direct understanding of PA and any understanding of ZFC, it's very easy to give you an implemented understanding of PA, because you can easily check that my implementation corresponds to your direct understanding one axiom at a time.

I don't consider formal systems to talk about themselves

Inside our implemented PA we construct a predicate and then prove in ZFC that it satisfies a particular property defined in terms of PA. Specifically we construct a PA formula Bew which in ZFC is proved to satisfy a specification stating that PA |- P is equivalent to PA |- Bew("P"). (I am just making up symbols here; Gödel uses a different notation. If it's not clear I can write it out much more formally, but I'm sure this makes sense.) This dissolves the question of whether PA "talks about itself" or not without infinite regress, because one must have a direct understanding of some foundation to ever get started doing rigorous mathematics.
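To spell out the shape of what such a predicate asserts: provability, at the meta-level, is just the existence of a finite proof whose last line is the statement. The Python sketch below is a hypothetical illustration only - the names `provable` and `is_valid_proof` and the degenerate toy system are invented here, and this is brute-force enumeration rather than Gödel's arithmetization:

```python
# Meta-level sketch: "S is provable" means "some finite symbol string is a
# valid proof ending in S".  Parameterised on a proof checker; an unbounded
# version of this search would only semi-decide provability.

from itertools import product

def provable(statement, alphabet, is_valid_proof, max_length):
    """Search all symbol tuples up to max_length for a valid proof of
    `statement` (a proof's last entry is the statement it proves)."""
    for n in range(1, max_length + 1):
        for proof in product(alphabet, repeat=n):
            if is_valid_proof(proof) and proof[-1] == statement:
                return True
    return False

# A degenerate toy "system": proofs are nondecreasing tuples of digits,
# and its theorems are whatever such a tuple can end in.
toy_check = lambda p: all(a <= b for a, b in zip(p, p[1:]))
assert provable(3, alphabet=range(5), is_valid_proof=toy_check, max_length=2)
```

The post's question is then whether any formal proof that Bew tracks this search is available without appealing to a further system that does the tracking.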

it is a basic principle of my philosophy that [..] formal systems [are real]

This doesn't make sense to me. I have no idea what real means here. PA was created by Peano, of course if he communicated this theory to someone else (by either of the means defined earlier) or someone independently came up with an isomorphic system they will prove the same theorems. This surprising coincidence can be very misleading (it is the cause of platonism after all!) so it is important that one realize why two people share the same theorems: because they start from the same axioms and apply the same deduction rules! Not because there is some magical spirit realm of mathematical truths which they divine from.

Replies from: ec429
comment by ec429 · 2011-09-23T19:12:36.204Z · LW(p) · GW(p)

I agree with the main tenets of your view - rejecting the platonic realm of mathematical truths, instead founding mathematics rigorously and finitely on formal systems - but you do not hold fast to it

This is because you have misconstrued my view. My philosophy of syntacticism is not formalism, nor is it a straight rejection of Platonism. Instead it is a Platonic formalism, in which the Platonic ideals are of the form "thus-and-such is derivable from these axioms with these deduction rules", except that any attempt to talk directly and formally about those Platonic ideals fails (because that involves application of a formal system to a formal system).

It is not possible to consider Godel's theorem with only a direct understanding of PA, you need an implemented one.

The point of my post is that 'implemented PA' in fact implements "PA", and there can be no rigorous proof that the referent of "PA" is PA (because such a rigorous proof in fact ends with ""PA" is PA", and how do we know that this new system's "PA" is PA?)

Inside our implemented PA we construct a predicate and then prove in ZFC that it satisfies a particular property defined in terms of PA.

But how do we prove that (ZFC proves statement about PA) ⇔ (statement about PA)? Only by appealing to a higher-order theory. After all, I can define a theory T by the axioms "1. PA is consistent. 2. PA proves PA is consistent.", where (if necessary) PA is defined by listing PA's axioms, and yet if I were to conclude from this that PA proves PA's consistency without thereby becoming inconsistent, you would scream blue Gödelian murder, because you have a proof in ZFC that if PA proves PA is consistent then PA is not consistent. But I have a proof in T that PA is consistent and proves PA consistent (a trivial one, in fact). So how can you justify privileging ZFC's referent of "PA" over T's referent of "PA"? Only by appeal to some meta-theory - or by informal justification.

I have no idea what real means here.

It is a fact that agents implementing the same deduction rules and starting from the same axioms tend to converge on the same set of theorems. Therefore there is some sense in which the theorems are inherent in the (axioms + deduction rules): there is a truth about what those (axioms + deduction rules) lead to, and that truth exists outside of any implementation. But that truth is not like a rock, it is not a physical object - so it must be a metaphysical one. Thus, Platonism.

But then why privilege any particular set of rules as having Platonic existence? Thus, all possible formal systems have Platonic existence.

The "magical spirit realm" does not say that "2+2=4". But it does say that "If you apply the rules of PA to evaluate the expression 2+2, a failure to converge on 4 implies a failure to follow the rules of PA" (which failure might, for instance, be a cosmic ray hitting a neuron).

Do I make sense now?

Replies from: David_Allen, None
comment by David_Allen · 2011-09-23T20:13:21.792Z · LW(p) · GW(p)

Therefore there is some sense in which the theorems are inherent in the (axioms + deduction rules): there is a truth about what those (axioms + deduction rules) lead to, and that truth exists outside of any implementation.

You are experiencing a mind projection fallacy.

The theorems don't exist unless an implementation produces them and once produced they only exist within a context that can represent them.

In the same way, the truth you refer to is generated by and exists within your mind. It has no existence outside of that implementation.

Replies from: ec429
comment by ec429 · 2011-09-23T20:22:39.588Z · LW(p) · GW(p)

If that is so, then how come others tend to reach the same truth? In the same way that there is something outside me that produces my experimental results (The Simple Truth), so there is something outside me that causes it to be the case that, when I (or any other cognitive agent) implements this particular algorithm, this particular result results.

It is not a mind-projection fallacy, any more than "the sheep control my pebbles" is a mind-projection fallacy. It's just that it's operating one meta-level higher.

Replies from: Zetetic, David_Allen
comment by Zetetic · 2011-09-24T06:43:44.783Z · LW(p) · GW(p)

If that is so, then how come others tend to reach the same truth? In the same way that there is something outside me that produces my experimental results (The Simple Truth), so there is something outside me that causes it to be the case that, when I (or any other cognitive agent) implements this particular algorithm, this particular result results.

People have very similar brains and I'd bet that all of the ideas of people that are cognitively available to you shared a similar cultural experience (at least in terms of what intellectual capital was/is available to them).

Viewing mathematics as something that is at least partially a reflection of the way that humans tend to compress information, it seems like you could argue that there is an awful lot of stuff to unpack when you say "2+2 = 4 is true outside of implementation" as well as the term "cognitive agent".

What is clear to me is that when we set up a physical system (such as a Von Neumann machine, or a human who has been 'set up' by being educated and then asked a certain question) in a certain way, some part of the future state of that system is (say with 99.999% likelihood) recognizable to us as output (perhaps certain patterns of light resonate with us as "the correct answer", perhaps some phonemes register in our cochlea and we store them in our working memory and compare them with the 'expected' phonemes). There appears to be an underlying regularity, but it isn't clear to me what the true reduction looks like! Is the computation the 'bottom level'? Do we aim to rephrase mathematics in terms of some algorithms that are capable of producing it? Are we then to take computation as "more fundamental" than physics?

Does this make sense?

Replies from: ec429
comment by ec429 · 2011-09-24T07:12:55.343Z · LW(p) · GW(p)

What is clear to me is that when we set up a physical system (such as a Von Neumann machine, or a human who has been 'set up' by being educated and then asked a certain question) in a certain way, some part of the future state of that system is (say with 99.999% likelihood) recognizable to us as output (perhaps certain patterns of light resonate with us as "the correct answer")

But note that there are also patterns of light which we would interpret as "the wrong answer". If arithmetic is implementation-dependent, isn't it a bit odd that whenever we build a calculator that outputs "5" for 2+2, it turns out to have something we would consider to be a wiring fault (so that it is not implementing arithmetic)? Can you point to a machine (or an idealised abstract algorithm, for that matter) which a reasonable human would agree implements arithmetic, but which disagrees with us on whether 2+2 equals 4? Because, if arithmetic is implementation-dependent, you should be able to do so.

Are we then to take computation as "more fundamental" than physics?

Yes! (So long as we define computation as "abstract manipulation-rules on syntactic tokens", and don't make any condition about the computation's having been implemented on any substrate.)

Replies from: Zetetic
comment by Zetetic · 2011-09-24T15:52:47.826Z · LW(p) · GW(p)

But note that there are also patterns of light which we would interpret as "the wrong answer".

I did note that, maybe not explicitly but it isn't really something that anyone would expect another person not to consider.

isn't it a bit odd that whenever we build a calculator that outputs "5" for 2+2, it turns out to have something we would consider to be a wiring fault (so that it is not implementing arithmetic)?

It doesn't seem odd at all: we have an expectation of the calculator, and if it fails to fulfill that expectation then we start to doubt that it is, in fact, what we thought it was (a working calculator). This refocuses the issue on us and the mechanics of how we compress information; we expected information 'X' at time t, but instead received 'Y', and decide that something is wrong with our model (and then aim to fix it by figuring out whether it is indeed a wiring problem or a bit-flip or a bug in the programming of the calculator or some electromagnetic interference).

Can you point to a machine (or an idealised abstract algorithm, for that matter) which a reasonable human would agree implements arithmetic, but which disagrees with us on whether 2+2 equals 4?

No. But why is this? Because if (a) [a reasonable human would agree implements arithmetic] and (b) [which disagrees with us on whether 2+2 equals 4] both hold, then (c) [the human decides ve was mistaken and needs to fix the machine]. If the human can alter the machine so as to make it agree with 2+2 = 4, then and only then will the human feel justified in asserting that it implements arithmetic.

The implementation is decidedly correct only if it demonstrates itself to be correct. Only if it fulfills our expectations of it. With a calculator, we are looking for something that allows us to extend our ability to infer things about the world. If I know that a car has a mass of 1000 kilograms and a speed of 200 kilometers per hour, then I can determine whether it will be able to topple a wall, given that I have some number that encodes the amount of force it can withstand. I compute the output and compare it to the data for the wall.

Because, if arithmetic is implementation-dependent, you should be able to do so.

I tend to think it depends on a human-like brain that has been trained to interpret '2', '+' and '4' in a certain way, so I don't readily agree with your claim here.

Yes! (So long as we define computation as "abstract manipulation-rules on syntactic tokens", and don't make any condition about the computation's having been implemented on any substrate.)

I'll look over it, but given what you say here I'm not confident that it won't be an attempt at a resurrection of Platonism.

Replies from: ec429
comment by ec429 · 2011-09-24T19:25:32.404Z · LW(p) · GW(p)

It doesn't seem odd at all, we have an expectation of the calculator, and if it fails to fulfill that expectation then we start to doubt that it is, in fact, what we thought it was (a working calculator).

Except that if you examine the workings of a calculator that does agree with us, you're much much less likely to find a wiring fault (that is, that it's implementing a different algorithm).

if (a) [a reasonable human would agree implements arithmetic] and (b) [which disagrees with us on whether 2+2 equals 4] both hold, then (c) [The human decides she ve was mistaken and needs to fix the machine]. If the human can alter the machine so as to make it agree with 2+2 = 4, then and only then will the human feel justified in asserting that it implements arithmetic.

If the only value for which the machine disagrees with us is 2+2, and the human adds a trap to detect the case "Has been asked 2+2", which overrides the usual algorithm and just outputs 4... would the human then claim they'd "made it implement arithmetic"? I don't think so.

I'll try a different tack: an implementation of arithmetic can be created which is general and compact (in a Solomonoff sense) - we are able to make calculators rather than Artificial Arithmeticians. Clearly not all concepts can be compressed in this manner, by a counting argument. So there is a fact-of-the-matter that "these {foo} are the concepts which can be compressed by thus-and-such algorithm" (For instance, arithmetic on integers up to N can be formalised in O(log N) bits, which grows strictly slower than O(N); thus integer arithmetic is compressed by positional numeral systems). That fact-of-the-matter would still be true if there were no humans around to implement arithmetic, and it would still be true in Ancient Rome where they haven't heard of positional numeral systems (though their system still beats the Artificial Arithmetician).
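The compression point can be made concrete. A positional-notation adder is a small, general program, while an "Artificial Arithmetician" memorises every answer; both agree on their shared domain, but only the table grows with the domain. A Python sketch (my own illustration of the contrast, not anything from the thread):

```python
# A compact, general adder: schoolbook addition on decimal numerals.
# Its size is fixed, however large the numbers get; a lookup table for
# sums up to N needs O(N^2) entries.

def ripple_add(x: str, y: str) -> str:
    """Add two decimal numerals digit by digit with a carry."""
    x, y = x.zfill(len(y)), y.zfill(len(x))   # pad to equal length
    carry, digits = 0, []
    for a, b in zip(reversed(x), reversed(y)):
        carry, d = divmod(int(a) + int(b) + carry, 10)
        digits.append(str(d))
    if carry:
        digits.append(str(carry))
    return "".join(reversed(digits)).lstrip("0") or "0"

N = 100
table = {(i, j): i + j for i in range(N) for j in range(N)}  # the "Arithmetician"
assert ripple_add("58", "67") == str(table[(58, 67)]) == "125"
```

The fact that the compact program exists at all is the implementation-independent fact being pointed at: arithmetic is compressible, and that would remain so with no humans around to notice.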

I'll look over it, but given what you say here I'm not confident that it won't be an attempt at a resurrection of Platonism.

What's wrong with resurrecting (or rather, reformulating) Platonism? Although, it's more a Platonic Formalism than straight Platonism.

Replies from: Zetetic
comment by Zetetic · 2011-09-24T21:34:17.448Z · LW(p) · GW(p)

If the only value for which the machine disagrees with us is 2+2, and the human adds a trap to detect the case "Has been asked 2+2", which overrides the usual algorithm and just outputs 4... would the human then claim they'd "made it implement arithmetic"? I don't think so.

Well, this seems a bit unclear. We are operating under the assumption that the set-up looks very similar to a correct set-up, close enough to fool a reasonable expert. So while the previous fault would cause some consternation and force the expert to lower his priors for "this is a working calculator", it doesn't follow that he wouldn't make the appropriate adjustment and then (upon seeing nothing else wrong with it) decide that it is likely to work correctly from then on.

That fact-of-the-matter would still be true if there were no humans around to implement arithmetic, and it would still be true in Ancient Rome where they haven't heard of positional numeral systems (though their system still beats the Artificial Arithmetician).

Yes, it would be true, but what exactly is it that 'is true'? The human brain is a tangle of probabilistic algorithms playing various functional roles. It is "intuitively obvious" that there should be a Solomonoff-irreducible (up to some constant) program that can be implemented, given sufficient background knowledge of all of the components involved: Boolean circuits implemented on some substrate in such-and-such a way that they "compute" "arithmetic operations on integers" (really the substrate is doing some fancy electrical acrobatics, to be interpreted later into a form we can perceive as an output, such as a sequence of pixels on some manner of screen arranged to resemble the numerical output we want). That things arranged in such a way lead to such an outcome is a physical fact about the universe.

It is not obvious that we should then reverse the matter and claim that we ought to project a computational Platonism on to reality any more than logical positivist philosophers should have felt justified in doing that with mathematics and predicate logic a hundred years ago.

It is clear to me that we can perceive 'computational' patterns in top level phenomena such as the output of calculators or mental computations and that we can and have devised a framework for organizing the functional role of these processes (in terms of algorithmic information theory/computational complexity/computability theory) in a way that allows us to reason generally about them. It is not clear to me that we are further justified in taking the epistemological step that you seem to want to take.

I'm inclined to think that there is a fundamental problem with how you are approaching epistemology, and you should strongly consider looking into Bayesian epistemology (or statistical inference generally). I am also inclined to suggest that you look into the work of C.S. Peirce, and E.T. Jaynes' book (as was mentioned previously and is a bit of a favorite around here; it really is quite profound). You might also consider Judea Pearl's book "Causality"; I think some of the material is quite relevant and it seems likely to me that you would be very interested in it.

ETA: To Clarify, I'm not attacking the computable universe hypothesis; I think it is likely right (though I think that the term 'computable' in the broad sense in which it is often used needs some unpacking).

comment by David_Allen · 2011-09-24T00:56:19.788Z · LW(p) · GW(p)

I am arguing against your concept "that truth exists outside of any implementation".

My claim is that "truth" can only be determined and represented within some kind of truth evaluating physical context; there is nothing about the resulting physical state that implies or requires non-physical truth.

As stated here

Our minds are not transparent windows unto veridical reality; when you look at a rock, you experience not the rock itself, but your mind's representation of the rock, reconstructed from photons bouncing off its surface.

To your question:

If that is so, then how come others tend to reach the same truth?

These others are producing physical artifacts such as writing or speech, which through some chain of physical interactions eventually trigger state changes in your brain. At a higher meta-level, you are taking multiple forms of observations, transforming them within your brain/mind and then comparing them... eventually concluding that "others tend to reach the same truth". Another mind with its own unique perspective may come to a different conclusion, such as "Fred is wearing a funny hat."

Your conclusion on truth is a physical state in your mind, generated by physical processes. The existence of a metaphysical truth is not required for you to come to that conclusion.

Replies from: ec429
comment by ec429 · 2011-09-24T02:10:20.153Z · LW(p) · GW(p)

Your conclusion on truth is a physical state in your mind, generated by physical processes. The existence of a metaphysical truth is not required for you to come to that conclusion.

I think a meta- has gone missing here: I can't be certain that others tend to reach the same truth (rather than funny hats), and I can't be certain that 2+2=4. I can't even be certain that there is a fact-of-the-matter about whether 2+2=4. But it seems damned likely, given Occamian priors, that there is a fact-of-the-matter about whether 2+2=4 (and, inasmuch as a reflective mind can have evidence for anything, which has to be justified through a strange loop on the bedrock, I have strong evidence that 2+2 does indeed equal 4).

That "truth" in the map doesn't imply truth in the territory, I accept. That there is no truth in the territory, I vehemently reject. If two minds implement the same computation, and reach different answers, then I simply do not believe that they were really implementing the same computation. If you compute 2+2 but get struck by a cosmic ray that flips a bit and makes you conclude "5!", then you actually implemented the computation "2+2 with such-and-such a cosmic ray bitflip".

I am not able to comprehend the workings of a mind which believes arithmetic truth to be a property only of minds, any more than I am able to comprehend a mind which believes sheep to be a property only of buckets. Your conclusion on sheep is a physical state in your mind, generated by physical processes. But the sheep still exist outside of your mind.

Replies from: David_Allen
comment by David_Allen · 2011-09-24T05:31:44.083Z · LW(p) · GW(p)

Your conclusion on sheep is a physical state in your mind, generated by physical processes. But the sheep still exist outside of your mind.

Restating my claim in terms of sheep: The identification of a sheep is a state change within a context of evaluation that implements sheep recognition. So a sheep exists in that context.

Physical reality however does not recognize sheep; it recognizes and responds to physical reality stuff. Sheep don't exist within physical reality.

"Sheep" is at a different meta-level than the chain of physical inference that led to that classification.

That "truth" in the map doesn't imply truth in the territory, I accept. That there is no truth in the territory, I vehemently reject.

"Truth" is at a different meta-level than the chain of physical inference that led to that classification. There is no requirement that "truth" is in the set of stuff that has meaning within the territory.

When you look at the statement 2+2=4 you think some form of "hey, that's true". When I look at the statement, I also think some form of "hey, that's true". We can then talk and both come to our own unique conclusion that the other person agrees with us. This process does not require a metaphysical arithmetic; it only requires a common context.

For example we both have a proximal existence within the physical universe, we have a communication channel, we both understand English, and we both understand basic arithmetic. These types of common contexts allow us to make some very practical and reasonable assumptions about what the other person means.

Common contexts allow us to agree on the consequences of arithmetic.

The short summary is that meaning/existence is formed by contexts of evaluation, and common contexts allow us to communicate. These processes explain your observations and operate entirely within the physical universe. The concept of metaphysical existence is not needed.

Replies from: ec429
comment by ec429 · 2011-09-24T07:05:39.872Z · LW(p) · GW(p)

When you look at the statement 2+2=4 you think some form of "hey, that's true". When I look at the statement, I also think some form of "hey, that's true". We can then talk and both come to our own unique conclusion that the other person agrees with us.

I think your argument involves reflection somewhere. The desk calculator agrees that 2+2=4, and it's not reflective. Putting two pebbles next to two pebbles also agrees.

Look at the discussion under this comment; I maintain that cognitive agents converge, even if their only common context is modus ponens - and that this implies there is something to be converged upon. At the least, it is 'true' that that-which-cognitive-agents-converge-on takes the value that it does (rather than any other value, like "1=0").

These processes explain your observations and operate entirely within the physical universe. The concept of metaphysical existence is not needed.

Mathematical realism also explains my observations and operates entirely within the mathematical universe; the concept of physical existence is not needed. The 'physical existence hypothesis' has the burdensome detail that extant physical reality follows mathematical laws; I do not see a corresponding burdensome detail on the 'mathematical realism hypothesis'. Thus by Occam, I conclude mathematical realism and no physical existence.

I am not sure I have answered your objections because I am not sure I understand them; if I do not, then I plead merely that it's 8AM, I've been up all night, and I need some sleep :(

Replies from: David_Allen
comment by David_Allen · 2011-09-25T17:34:06.048Z · LW(p) · GW(p)

I think your argument involves reflection somewhere. The desk calculator agrees that 2+2=4, and it's not reflective. Putting two pebbles next to two pebbles also agrees.

Agreement with statements such as 2+2=4 is not a function that desk calculators perform. It is not the function performed when you place two pebbles next to two pebbles.

Agreement is an evaluation performed by your mind from its unique position in the universe.

... this implies there is something to be converged upon.

The conclusion that convergence has occurred must be made from a context of evaluation. You make observations and derive a conclusion of convergence from them. Convergence is a state of your map, not a state of the territory.

Mathematical realism also explains my observations and operates entirely within the mathematical universe; ...

Mathematical realism appears to confuse the map for the territory -- as does scientific realism, as does physical realism.

When I refer to physical reality or existence I am only referring to a convenient level of abstraction. Space, time, electrons, arithmetic, these all are interpretations formed from different contexts of evaluation. We form networks of maps to describe our universe, but these maps are not the territory.

Gottlob Frege coined the term context principle in his Foundations of Arithmetic, 1884 (translated). He stated it as "We must never try to define the meaning of a word in isolation, but only as it is used in the context of a proposition."

I am saying that we must never try to identify meaning or existence in isolation, but only as they are formed by a context of evaluation.

When you state:

Putting two pebbles next to two pebbles also agrees.

I look for the context of evaluation that produces this result -- and I recognize that the pebbles and agreement are states formed within your mind as you interact with the universe. To believe that these states exist in the universe you are interacting with is a mind projection fallacy.

comment by [deleted] · 2011-09-23T23:13:05.924Z · LW(p) · GW(p)

It is a fact that agents implementing the same deduction rules and starting from the same axioms tend to converge on the same set of theorems.

Not so! From most interesting formal systems there are exponentially many possible deductions that can be made. All known agents can only make a bounded number of possible deductions. A widely believed conjecture implies that longer-lived agents could only make a polynomial number of possible deductions.

Replies from: ec429
comment by ec429 · 2011-09-23T23:25:48.166Z · LW(p) · GW(p)

Um, that's not quite what I meant... if I prove a theorem you're unlikely to prove its negation, and if I tell you I have a proof of a theorem you're likely to be able to find one. Moreover, in the latter case, if you're not able to find a proof, then either I am mistaken in my belief that I have a proof, or you haven't looked hard enough (and in principle could find a proof if you did). That doesn't mean that every agent with "I think therefore I am" will deduce the existence of rice pudding and income tax, it only means that if one agent can deduce it, the others in principle can do the same. For any formal system T, the set {X : T proves X} is recursively enumerable, no? (assuming 'proves' is defined by 'a proof of finite length exists', as happens in Gödel-numbering / PA "box")
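As a toy illustration of that recursive enumerability - the axiom "I" and the two rewrite rules below are invented for the sketch, standing in for a real system's axioms and inference rules - a breadth-first enumeration suffices: if a string is provable at all, it appears after finitely many steps, which is all that recursive enumerability requires.

```python
from collections import deque

def enumerate_theorems(axioms, rules, limit):
    """Breadth-first enumeration of theorems in a toy string-rewriting
    system.  `rules` are functions mapping a theorem to new theorems.
    Any provable string is reached after finitely many steps."""
    seen = set(axioms)
    queue = deque(axioms)
    out = []
    while queue and len(out) < limit:
        t = queue.popleft()
        out.append(t)
        for rule in rules:
            for s in rule(t):
                if s not in seen:
                    seen.add(s)
                    queue.append(s)
    return out

# Toy system: axiom "I"; rule 1 appends "U"; rule 2 doubles the string.
rules = [lambda t: [t + "U"], lambda t: [t + t]]
theorems = enumerate_theorems(["I"], rules, 10)
```

The set of theorems is infinite, but each individual theorem has a finite position in the enumeration - which is exactly the "recursively enumerable, not feasibly enumerable" distinction drawn in the reply below.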

Replies from: None
comment by [deleted] · 2011-09-23T23:30:21.977Z · LW(p) · GW(p)

Um, that's not quite what I meant... if I prove a theorem you're unlikely to prove its negation,

As you know, I am outside of mainstream opinion on this. But this:

and if I tell you I have a proof of a theorem you're likely to be able to find one.

is flatly wrong if P ≠ NP, and not controversially so. If I reach a point in a large search space there is no guarantee that you can reach the same point in a feasible amount of time.

For any formal system T, the set {X : T proves X} is recursively enumerable, no? (assuming 'proves' is defined by 'a proof of finite length exists', as happens in Gödel-numbering / PA "box")

Recursively enumerable sure. Not feasibly enumerable.

Replies from: ec429
comment by ec429 · 2011-09-24T00:26:35.817Z · LW(p) · GW(p)

But why should feasibility matter? Sure, the more steps it takes to prove a proposition, the less likely you are to be able to find a proof. But saying that things are true only by virtue of their proof being feasible... is disturbing, to say the least. If we build a faster computer, do some propositions suddenly become true, because we now have the computing power to prove them?

Me saying I have a proof of a theorem should cause you to update P(you can find a proof) upwards. (If it doesn't, I'd be very surprised.) Consequently, there is something common.

Similarly, no matter how low your prior probability for "PA is consistent", so long as that probability is not 0, learning that I have proved a theorem should cause you to decrease your estimate of the probability that you will prove its negation.

Replies from: None, None
comment by [deleted] · 2011-09-24T02:42:41.785Z · LW(p) · GW(p)

But why should feasibility matter? Sure, the more steps it takes to prove a proposition, the less likely you are to be able to find a proof

Incidentally but importantly, lengthiness is not expected to be the only obstacle to finding a proof. Cryptography depends on this.

As to why feasibility matters: it's because we have limited resources. You are trying to reason about reality from the point of view of a hypothetical entity that has infinite resources. If you wish to convince people to be less skeptical of infinity (your stated intention), you will have to take feasibility into account or else make a circular argument.

But saying that things are true only by virtue of their proof being feasible... is disturbing, to say the least. If we build a faster computer, do some propositions suddenly become true, because we now have the computing power to prove them?

I am certainly not saying that feasible proofs cause things to be true. Our previous slow computer and our new fast computer cause exactly the same number of important things to be true: none at all. That is the formalist position, anyway.

Similarly, no matter how low your prior probability for "PA is consistent", so long as that probability is not 0, learning that I have proved a theorem should cause you to decrease your estimate of the probability that you will prove its negation.

Not so. If I have P(PA will be shown inconsistent in fewer than m minutes) = p, then I also have P(I will prove the negation of your theorem in fewer than m+1 minutes) = p. Your ability to prove things doesn't enter into it.

Replies from: ec429
comment by ec429 · 2011-09-24T03:57:54.865Z · LW(p) · GW(p)

lengthiness is not expected to be the only obstacle to finding a proof

True; stick a ceteris paribus in there somewhere.

You are trying to reason about reality from the point of view of a hypothetical entity that has infinite resources.

Not so; I am reasoning about reality in terms of what it is theoretically possible we might conclude with finite resources. It is just that enumerating the collection of things it is theoretically possible we might conclude with finite resources requires infinite resources (and may not be possible even then). Fortunately I do not require an enumeration of this collection.

I am certainly not saying that feasible proofs cause things to be true. Our previous slow computer and our new fast computer cause exactly the same number of important things to be true: none at all. That is the formalist position, anyway.

So either things that are unfeasible to prove can nonetheless be true, or nothing is true. So why does feasibility matter again?

P(I will prove the negation of your theorem in fewer than m+1 minutes) = p

No, it is > p. P(I will prove 1=0 in fewer than m+1 minutes) = p + epsilon. P(I will prove 1+1=2 in fewer than m+1 minutes) = nearly 1. This is because you don't know whether my proof was correct.

comment by [deleted] · 2011-09-24T01:04:48.961Z · LW(p) · GW(p)

Me saying I have a proof of a theorem should cause you to update P(you can find a proof) upwards.

A positive but minuscule amount. This is how cryptography works! In less than a minute (aided by my very old laptop), I gave a proof of the following theorem: the second digit of each of the prime factors of n is 6, where

n = 44289087508518650246893852937476857335929624072788480361

It would take you much longer to find a proof (even though the proof is very short!).
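The asymmetry can be sketched with a much smaller made-up semiprime (NOT the 56-digit n above, whose factors are not reproduced here): verifying a claimed factorization is a single multiplication, while finding it from scratch means searching.

```python
import math

# Illustration only: two well-known primes straddling one million.
p, q = 999983, 1000003
n = p * q

# Verifying the claimed factorization: one multiplication.
assert p * q == n

# Finding a factor with no hint: trial division searches up to sqrt(n),
# about 10^6 candidate divisors for this n.
def smallest_factor(n):
    for d in range(2, math.isqrt(n) + 1):
        if n % d == 0:
            return d
    return n  # n itself is prime

assert smallest_factor(n) == min(p, q)
```

At 56 digits the search side balloons to roughly 10^28 candidates (for naive trial division) while the verification side stays a single multiplication - which is the gap cryptography relies on.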

(If it doesn't, I'd be very surprised.)

Update!

About feasibility, I might say more later.

Replies from: ec429
comment by ec429 · 2011-09-24T01:55:24.935Z · LW(p) · GW(p)

A positive but minuscule amount.

Right - but if there were no 'fact-of-the-matter' as to whether a proof exists, why should it be non-zero at all?

Replies from: None
comment by [deleted] · 2011-09-24T01:57:38.542Z · LW(p) · GW(p)

But that isn't what either of us said. You mentioned P(you can find a proof). I am telling you (telling you, modulo standard open problems) that this can be very small even after another agent has found a proof. This is a standard family of topics in computer science.

Replies from: ec429
comment by ec429 · 2011-09-24T02:31:31.480Z · LW(p) · GW(p)

I am aware it can be very small. The only sense in which I claimed otherwise was by a poor choice of wording. The use I made of the claim that "Agents implementing the same deduction rules and starting from the same axioms tend to converge on the same set of theorems" was to argue for the proposition that there is a fact-of-the-matter about which theorems are provable in a given system. You accept that my finding a proof causes you to update P(you can find a proof) upwards by a strictly positive amount - from which I infer that you accept that there is a fact-of-the-matter as to whether a proof exists. In which case, you are not arguing with my conclusion, merely with a step I used in deriving it - a step I have since replaced. Does that not screen off my conclusion from that step? So why are you still arguing with me?

Replies from: None
comment by [deleted] · 2011-09-24T02:53:49.784Z · LW(p) · GW(p)

I am still arguing with you because I think your misstep poisons more than you have yet realized, not to get on your nerves.

You accept that my finding a proof causes you to update P(you can find a proof) upwards by a strictly positive amount - from which I infer that you accept that there is a fact-of-the-matter as to whether a proof exists.

No. "I can find a proof in time t" is a definite event whose probability maybe can be measured (with difficulty!). "A proof exists" is a much murkier statement and it is much more difficult to discuss its probability. (For instance it is not possible to have a consistent probability distribution over assertions like this without assigning P(proof exists) = 0 or P(proof exists) = 1. Such a consistent prior is an oracle!)

Replies from: ec429, wedrifid
comment by ec429 · 2011-09-24T04:07:50.687Z · LW(p) · GW(p)

I am still arguing with you because I think your misstep poisons more than you have yet realized, not to get on your nerves.

I wasn't suggesting you were trying to get on my nerves. I just think we're talking past each other.

"A proof exists" is a much murkier statement and it is much more difficult to discuss its probability.

As a first approximation, what's wrong with "\lim_{t -> \infty} P(I can find a proof in time t)"?

Also, I don't see why the prior has to be oracular; what's wrong with, say, P(the 3^^^3th decimal digit of pi is even)=½? But then if the digit is X, then surely a proof exists that it is X (because, in principle, the digit can be computed in finitely many steps); it must be some X in [[:digit:]], so if it is even a proof exists that it is even; otherwise (sharp swerve) one does not, and P=½. Not sure about that sharp swerve; if I condition all my probabilities on |arithmetic is consistent) then it's ok. But then, assuming I actually need to do so, the probabilities would be different if conditioned on |arithmetic is inconsistent), and thus by finding a proof, you find evidence for or against the assertion that arithmetic is consistent. But things you can find evidence on, exist! (They are the sheep that determine your pebbles.) So where did I go wrong? (Did I slip a meta-level somewhere? It's possible; I was juggling them a bit.)

Replies from: None
comment by [deleted] · 2011-09-24T04:27:51.299Z · LW(p) · GW(p)

As a first approximation, what's wrong with "\lim_{t -> \infty} P(I can find a proof in time t)"?

I slipped. It's P(I will find a proof in time t) that is asking for the probability of a definite event. It's not that evaluating this number at large t is so problematic, it's that it doesn't capture what people usually mean by "provable in principle."

Also, I don't see why the prior has to be oracular; what's wrong with, say, P(the 3^^^3th decimal digit of pi is even)=½?

If A is logically implied by B, then P(A) >= P(B), or else you are committing a version of the conjunction fallacy. One can certainly compute the digits of pi, so that since (as non-intuitionists insist anyway) either the $n$th digit is even, or it is odd, we must have P($n$th digit is even) > P(axioms) or P($n$th digit is odd) > P(axioms). P becomes an oracle by testing, for each assertion x, whether P(x) > P(axioms). There might be ways out of this but they require you to think about feasibility.

Replies from: ec429, ec429
comment by ec429 · 2011-09-24T04:51:38.827Z · LW(p) · GW(p)

It's P(I will find a proof in time t) that is asking for the probability of a definite event. It's not that evaluating this number at large t is so problematic, it's that it doesn't capture what people usually mean by "provable in principle."

Suppose that a proof is a finite sequence of symbols from a finite alphabet (which supposition seems reasonable, at least to me). Suppose that you can determine whether a given sequence constitutes a proof, in finite time (not necessarily bounded). Then construct an ordering on sequences (this can be done: the set of sequences is the union over n in countable N of the finitely many sequences of length n, and is thus itself countable), and apply the determination procedure to each one in turn. Then, if a proof exists, you will find it in finite time by this method; thus P(you will find a proof by time t) tends to 1 as t->infty if a proof exists, and is constant 0 for all t if no proof exists.

There's an obvious problem; we can't determine with P=1 that a given sequence constitutes a proof (or does not do so). But suppose that becoming one bit more sure, when not certain, of the proof-status of a given sequence can always be done in finite time. Then dovetail: learn one bit about sequence 1; then one more bit each about sequences 1 and 2; then sequences 1, 2, and 3; and so on. Then for any sequence, any desired level of certainty is obtained in finite time.

If something is provable in principle, then (with a certain, admittedly contrived and inefficient search algorithm) the proof can be found in finite time with probability 1. No?
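The search procedure described above can be sketched directly. The alphabet and the 'proof checker' below are stand-ins invented for the sketch (a real checker would verify proofs in the formal system); the structure is what matters: enumerate sequences shortest-first, test each one.

```python
from itertools import count, islice, product

ALPHABET = "ab"  # toy finite alphabet; any finite alphabet works

def all_strings():
    """Yield every finite string over ALPHABET, shortest first."""
    for n in count(0):
        for tup in product(ALPHABET, repeat=n):
            yield "".join(tup)

def search(is_proof, max_candidates):
    """Test candidates in order.  With the bound removed, this halts
    exactly when some string satisfies is_proof - i.e. the search
    succeeds in finite time iff a 'proof' exists at all."""
    for s in islice(all_strings(), max_candidates):
        if is_proof(s):
            return s
    return None

# Stand-in checker (pure assumption, for illustration): call a string
# a 'proof' iff it contains "aba".
found = search(lambda s: "aba" in s, 1000)
```

The bound `max_candidates` is only there to keep the sketch terminating; the in-principle claim is about the unbounded version.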

Replies from: None
comment by [deleted] · 2011-09-24T05:41:37.479Z · LW(p) · GW(p)

If something is provable in principle, then (with a certain, admittedly contrived and inefficient search algorithm) the proof can be found in finite time with probability 1. No?

Finite amount of time, yes. Feasible amount of time, no, unless P = NP. When I said that you were considering agents with unlimited resources, this is what I meant--agents for whom "in principle" is not different from "in practice." There are no such agents under the sun.

Replies from: ec429
comment by ec429 · 2011-09-24T06:51:21.185Z · LW(p) · GW(p)

But you don't have to have unlimited resources, you just have to have X large but finite amount of resources, and you don't know how big X is.

Of course, in order to prove that your resources are sufficient to find the proof, without simply going ahead and trying to find the proof, you would need those resources to be unlimited - because you don't know how big X is. But you still know it's finite. "Feasibly computable" is not the same thing as "computable". "In principle" is, in principle, well defined. "In practice" is not well-defined, because as soon as you have X resources, it becomes possible "in practice" for you to find the proof.

I say again that I do not need to postulate infinities in order to postulate an agent which can find a given proof. For any provable theorem, a sufficiently (finitely) powerful agent can find it (by the above diagonal algorithm); equivalently, an agent of fixed power can find it given sufficient (finite) time. So, while such might be "unfeasible" (whatever that might mean), I can still use it as a step in a justification for the existence of infinities.

comment by ec429 · 2011-09-24T04:38:54.042Z · LW(p) · GW(p)

One can certainly compute the digits of pi, so that since (as non-intuitionists insist anyway) either the $n$th digit is even, or it is odd, we must have P($n$th digit is even) > P(axioms) or P($n$th digit is odd) > P(axioms).

I don't think that's valid - even if I know (P=1) that there is a fact-of-the-matter about whether the nth digit is even, if I don't have any information causally determined by whether the nth digit is even then I assign P(even) = P(odd) = ½. If I instead only believe with P=P(axioms) that a fact-of-the-matter exists, then I assign P(even) = P(odd) = ½ * P(axioms). Axioms ⇏ even. Axioms ⇒ (even or odd). P(axioms) = P(even or odd) = P(even)+P(odd) = (½ + ½) * P(axioms) = P(axioms), no problem. "A fact-of-the-matter exists for statement A" is (A or ¬A), and assuming that our axioms include Excluded Middle, P(A or ¬A) >= P(axioms).

Summary: P is about my knowledge; existence of a fact-of-the-matter is about, well, the fact-of-the-matter. As far as I can tell, you're confusing map and territory.

comment by wedrifid · 2011-09-24T14:28:45.062Z · LW(p) · GW(p)

I am still arguing with you because I think your misstep poisons more than you have yet realized, not to get on your nerves.

Suggestions of dire wrongness? Bah. Now I'm all tempted to read through the context. But so far I've failed to see the point!

Wait. "A proof exists" is murky? What sort of bizarre twisted mathematical reasoning is behind that? (Not that I'm saying mathematics doesn't say equally bizarre and twisted things!)

In the set of all mathematical things you can say either there is one that constitutes a proof or there isn't. That doesn't mean it is possible to establish whether such a proof exists by using all the computing power in the universe but it still exists. Not a big deal.

For instance it is not possible to have a consistent probability distribution over assertions like this without assigning P(proof exists) = 0 or P(proof exists) = 1.

Counter assertion: Yes there is. It's just mundane logical uncertainty. But even if you decide you're not allowed to assign probabilities to logical unknowns (making the whole exercise tautologically pointless) that doesn't make "a proof exists" murky. If anything it makes it arbitrarily more clear cut. It means you're allowed to say 'a proof exists' or the reverse claim but have forbidden yourself the chance to assign probabilities to the possibility. And not allowing yourself to think about it doesn't do anything to change whether the thing exists or not.

Replies from: None
comment by [deleted] · 2011-09-25T02:47:32.709Z · LW(p) · GW(p)

Wait. "A proof exists" is murky?

"A proof exists, and here it is" is not at all murky. "A proof exists, I'm close to one and will show it to you soon" is not at all murky. Either statement could be true or false, and testably so. In the first case, it is just a matter of checking. In the second case, it is just a matter of waiting.

"A proof exists" full stop is in much worse shape. It has no testable consequences.

In the set of all mathematical things you can say either there is one that constitutes a proof or there isn't. That doesn't mean it is possible to establish whether such a proof exists by using all the computing power in the universe but it still exists. Not a big deal.

Certainly if "the set of all mathematical things" (even "the set of all mathematical proofs") has some Platonic existence, then EC429 is in much better shape. But this is exactly the kind of thing that any "infinite set atheist" would be unwilling to take for granted.

It's just mundane logical uncertainty.

Just! Mundane!

Replies from: wedrifid
comment by wedrifid · 2011-09-25T04:37:02.289Z · LW(p) · GW(p)

"A proof exists" full stop is in much worse shape. It has no testable consequences.

It has rather more testable consequences than an item that has moved out of my future light cone. Such things don't stop existing or not existing because I personally am not able to test them. Because the universe isn't about me any more than mathematical propositions are.

It's just mundane logical uncertainty.

Just! Mundane!

Yes. If you are unable to assign probabilities to logical uncertainties then I unequivocally reject your preferred approach to mathematics. It's rather close to useless in a whole lot of situations that matter. Mine is better. To illustrate that I don't even need to construct a scenario involving Omega, bets and saving the lives of cute little puppies. It is probably the sort of thing that could make it possible to construct an AI more easily or more effectively than yours. (I'm not certain about that - but that works almost as well as an illustration.)

Replies from: None
comment by [deleted] · 2011-09-25T05:01:18.699Z · LW(p) · GW(p)

It has rather more testable consequences than an item that has moved out of my future light cone.

Rather less, i.e. none. You at least have memories of something that's moved out of your future light cone.

If you are unable to assign probabilities to logical uncertainties then I unequivocally reject your preferred approach to mathematics. It's rather close to useless in a whole lot of situations that matter.

What did I say that made you think I don't regard logical uncertainty as an important problem? But you're bluffing if you're claiming to have a solution to this problem. Anyway any serious approach to logical uncertainty has to grapple with the fact that we have limited resources for computation (e.g. for computing probabilities!).

Replies from: wedrifid
comment by wedrifid · 2011-09-25T05:48:30.779Z · LW(p) · GW(p)

Rather less, i.e. none. You at least have memories of something that's moved out of your future light cone.

No I don't. I remember hardly anything about stuff that has passed out of my future light cone. Especially about small dark objects. And since actual information is lost when stuff goes out of my future light cone some of those objects no longer have enough evidence about them in the entire light cone to be able to reconstruct what they are. Although I suppose there is a probability distribution over possible combinations of object-going-out-of-the-light-cone that could be constructed based on remaining evidence.

Things existing has nothing to do with me being able to detect them or to remember them or to in principle be able to deduce their existence based on evidence.

As for "a proof exists", that is something that can sometimes be proven to exist or not exist. I'm told that sometimes you can even prove that a proof exists without having a proof. Which seems to rely on a bizarre definition of proof but hey, mathematicians are into that sort of thing.

My position: Any way of defining terms such that assigning probabilities to "a proof exists" is forbidden is a broken way to define such terms. It neither matches the intuitive way we use the language nor constitutes a useful way to carve up reality.

What did I say that made you think I don't regard logical uncertainty as an important problem? But you're bluffing if you're claiming to have a solution to this problem.

I'm not bluffing anything. Nor am I writing a treatise on logical uncertainty. But I'll tell you this: if I'm locked in a room, someone reliable generates a random 40-digit number from a uniform distribution right in front of me, I'm not given a computer, and I'm asked to bet at even odds that it is not a prime number, I am going to take that bet. Because it probably isn't a prime number. That's right, probably not.

It's not because I'm clever and prestigious and am claiming to have deep wisdom about logical uncertainty. It's because I am not stupid and I assign probabilities to stuff so I can win. If being impressive and insightful requires me to only assign 0% and 100% to things like that then I don't want to be impressive and insightful. Because that is literally for losers.
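For what it's worth, the edge on that bet can be estimated from the prime number theorem - a back-of-envelope sketch:

```python
import math

# Prime number theorem: the density of primes near N is about 1/ln(N),
# so a uniformly random 40-digit number is prime with probability on
# the order of 1/ln(10^40), roughly 1.1%.
p_prime = 1 / math.log(10 ** 40)
print(f"P(prime) ~ {p_prime:.4f}")  # roughly 0.0109
```

So the even-odds bet against primality wins about 99% of the time (a touch less, since 40-digit numbers start at 10^39, where the density is slightly higher).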

Replies from: None
comment by [deleted] · 2011-09-25T06:20:03.787Z · LW(p) · GW(p)

There's very little in that I disagree with. In particular I think "winning" is a fine summary of what probability is for. But there is nothing for you to win when assigning a probability to a claim that has no testable consequences--winning is a testable consequence.

Replies from: wedrifid
comment by wedrifid · 2011-09-25T06:55:59.334Z · LW(p) · GW(p)

There's very little in that I disagree with.

Just the most fundamental part:

My position: Any way of defining terms such that assigning probabilities to "a proof exists" is forbidden is a broken way to define such terms. It neither matches the intuitive way we use the language nor constitutes a useful way to carve up reality.

Replies from: None
comment by [deleted] · 2011-09-25T16:14:16.193Z · LW(p) · GW(p)

Serious question: what's special about the phrase "a proof exists" that requires you to assign a probability to it? Presumably there are some grammatically correct assertions that you don't feel should have numbers attached to them.

Replies from: wedrifid
comment by wedrifid · 2011-09-25T16:24:49.408Z · LW(p) · GW(p)

I don't consider "can have probabilities assigned" to be an exception that requires a special case. It gets treated the same as any other logical uncertainty. You can either handle logical uncertainty or you can't.

Replies from: None
comment by [deleted] · 2011-09-25T16:27:45.325Z · LW(p) · GW(p)

I don't understand. You're saying you do have a prior probability for every grammatically correct assertion?

Replies from: wedrifid
comment by wedrifid · 2011-09-25T17:25:07.344Z · LW(p) · GW(p)

You're saying you do have a prior probability for every grammatically correct assertion?

No, that isn't something I've said.

Replies from: None
comment by [deleted] · 2011-09-25T17:43:54.096Z · LW(p) · GW(p)

Then what is it about the assertion "a proof exists" that calls out to have a number attached to it? Why is it similar to "a written proof will exist by tomorrow noon" and not similar to "the exquisite corpse will drink the new wine"?

Replies from: wedrifid
comment by wedrifid · 2011-09-25T22:02:48.973Z · LW(p) · GW(p)

In case it wasn't clear, I rejected this line of reasoning. "A proof exists" is not a special case that needs justification.

Please refer to my first few comments on the subject. They constitute everything I wish to say on the (rather unimportant) subject of whether "a proof exists" is permitted. I don't expect this conversation to produce any new insights.

comment by [deleted] · 2011-09-23T12:04:34.034Z · LW(p) · GW(p)

The only objects that are real (in a Platonic sense) are formal systems (or rather, syntaxes). That is to say, my ontology is the set of formal systems. (This is not incompatible with the apparent reality of a physical universe).

You're talking about reality. Why have you started with the idea that formal systems are real, and only parenthetically mentioned that this is consistent with the reality of the physical universe? You should be going in the other direction. The physical universe is real, and the remaining question is what role do formal systems play in it.

The minimalist answer is that they are tools created by humans that sometimes enable them to make descriptions and predictions about the physical universe. You seem to be so far from this minimalist answer that we can hardly call you a non-Platonist!

Replies from: ec429
comment by ec429 · 2011-09-23T17:24:29.763Z · LW(p) · GW(p)

I guess I expected a short inferential distance. I really did think it was obvious why the apparent reality of a physical universe was consistent with physical nonrealism and formal realism. I'll make another post about this soon.

EDIT: The Apparent Reality of Physics

Replies from: None
comment by [deleted] · 2011-09-23T23:07:16.369Z · LW(p) · GW(p)

From neither of your posts can I glean what could be meant by apparent reality.

Replies from: ec429
comment by ec429 · 2011-09-23T23:29:30.319Z · LW(p) · GW(p)

To a conscious agent within a mathematical system which models rocks and trees and people and the Moon, it looks very much as though rocks and trees and people and the Moon exist; physics appears to be real. But that appearance is in no way contingent upon anything we could reasonably define as the 'existence' of rocks and trees and people and the Moon.

Is that gleanable, or am I still talking gibberish? (I never rule out the latter, particularly on the subject of philosophy.)

Replies from: None
comment by [deleted] · 2011-09-23T23:48:55.722Z · LW(p) · GW(p)

I think that was lack of charity on my part, beg your pardon.

comment by shokwave · 2011-09-23T08:28:35.490Z · LW(p) · GW(p)

But how do we know that □S ⇔ there is a proof of S in PA? Only by applying some meta-theory. And how do we know that statements reached in the meta-theory of the form "thus-and-such is true of PA" are true of PA? Only by applying a meta-meta-theory. There is no a-priori justification for the claim that "A formal system is in principle capable of talking about other formal systems", which claim is used by the proof that PA can talk about itself.

I am not sure how relevant it is, but this put me in mind of What the Tortoise Said to Achilles.

comment by Jack · 2011-09-23T18:14:26.587Z · LW(p) · GW(p)

Upvoted for being interesting and being part of a conversation that doesn't get had enough here (whatever else the post might be).

comment by Mitchell_Porter · 2011-09-23T08:44:12.040Z · LW(p) · GW(p)

So instead we restrict the ontology to syntactic systems devoid of any semantics; the statement ""Foo" is true" is meaningless.

If reality is "devoid of semantics", then all statements are meaningless.

Replies from: ec429
comment by ec429 · 2011-09-23T17:26:48.341Z · LW(p) · GW(p)

Statements have no meaning inherent in themselves. That doesn't stop us ascribing semantics to things, we just have to stop believing that "P ∧ (P ⇒ Q) ⊢ Q" has a little XML tag saying "true" stuck to it out there in Plato-space.
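One way to see the point concretely: a derivability check can be implemented as pure symbol manipulation, never consulting any notion of truth. Here's a toy sketch (the names `derivable` and `modus_ponens` are my own, and the system has just one inference rule, so this is an illustration, not a real proof checker):

```python
# A derivation as pure symbol manipulation: "provable" just means
# "reachable by the rewrite rule"; truth is never consulted.

def modus_ponens(derived):
    """One closure step: from A and ('->', A, B), add B."""
    new = set(derived)
    for f in derived:
        if isinstance(f, tuple) and f[0] == '->' and f[1] in derived:
            new.add(f[2])
    return frozenset(new)

def derivable(axioms, goal, max_steps=100):
    """Purely syntactic check: is `goal` reachable from `axioms`?"""
    state = frozenset(axioms)
    for _ in range(max_steps):
        nxt = modus_ponens(state)
        if nxt == state:
            break
        state = nxt
    return goal in state

axioms = {'P', ('->', 'P', 'Q')}
print(derivable(axioms, 'Q'))  # True: Q is generated by the rule
print(derivable(axioms, 'R'))  # False: R is never generated
```

Nothing here knows what 'P' or 'Q' "mean"; the semantics, if any, is ascribed from outside.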

comment by DuncanS · 2011-09-23T23:23:26.809Z · LW(p) · GW(p)

I wonder if you've considered the halting problem here. This tends to get away from questions of real/not-real and semantic level distinctions, whilst still demonstrating the nature of incompleteness quite nicely.
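The diagonal construction behind the halting problem can even be sketched as running code (the names `make_diagonal` and `always_no` are my own; `always_no` stands in for a candidate halting oracle, and the point is that any total oracle must be wrong about its own diagonal program):

```python
# Sketch of the halting-problem diagonal argument: given any
# purported halting oracle, build the program that does the
# opposite of whatever the oracle predicts about it.

def make_diagonal(halts):
    """Build the program that contradicts `halts` on itself."""
    def diag():
        if halts(diag):
            while True:      # oracle said "halts", so loop forever
                pass
        return 'halted'      # oracle said "loops", so halt at once
    return diag

# Try the oracle that always answers "does not halt":
always_no = lambda prog: False
diag = make_diagonal(always_no)
result = diag()              # ...but diag halts immediately,
print(always_no(diag), result)  # so the oracle was wrong about it
```

Symmetrically, an oracle that answered "halts" would send `diag` into an infinite loop, so it would be wrong too; no total oracle survives its own diagonal.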