# [LINK] Steven Landsburg "Accounting for Numbers" - response to EY's "Logical Pinpointing"

post by David_Gerard · 2012-11-14T12:55:16.910Z · score: 11 (15 votes) · LW · GW · Legacy · 47 comments

"I started to post a comment, but it got long enough that I’ve turned my comment into a blog post."

So the study of second-order consequences is not logic at all; to tease out all the second-order consequences of your second-order axioms, you need to confront not just the forms of sentences but their meanings. In other words, you have to understand meanings before you can carry out the operation of inference. But Yudkowsky is trying to derive meaning from the operation of inference, which won’t work because in second-order logic, meaning comes first.

... it’s important to recognize that Yudkowsky has “solved” the problem of accounting for numbers only by reducing it to the problem of accounting for sets — except that he hasn’t even done that, because his reduction relies on pretending that second order logic is logic.

## 47 comments

Comments sorted by top scores.

It seems to me that a) Landsburg's critique is right, b) the problem is still mysterious, c) making it less mysterious would require a new mathematical result, d) that future result is more likely to be negative, like the incompleteness theorem or undefinability of truth.

By far the largest criticism here is the requirements of set theory:

Yudkowsky is relying on a theorem when he says that second-order Peano arithmetic has a unique model. That theorem requires a substantial dose of set theory. So in order to avoid taking numbers as primitive objects, he’s effectively resorted to taking sets as primitive objects. But why is it any more satisfying to take set theory as “given” than to take numbers as “given”? Indeed, the formal study of numbers precedes the formal study of sets by millennia, which suggests that numbers are a more natural starting point than sets are. Whether or not you buy that argument, it’s important to recognize that Yudkowsky has “solved” the problem of accounting for numbers only by reducing it to the problem of accounting for sets — except that he hasn’t even done that, because his reduction relies on pretending that second order logic is logic.

This seems most damning in that it makes most of the philosophical issues about the nature of logic irrelevant.

I think this critique undervalues the insight second order arithmetic gives for understanding the natural numbers.

If what you wanted from logic was a complete, self-contained account of mathematical reasoning, then second order arithmetic is a punt: it purports to describe a set of rules for describing the natural numbers, and then in the middle says "Just fill in whatever you want here, and then go on". Landsburg worries that, as a consequence, we haven't gotten anywhere (or, worse, have ended up behind where we started, since now we need an account of "properties" in order to have an account of natural numbers).

But, thanks to Gödel, we know that a self-contained account of reasoning is impossible. Landsburg's answer is to give up on reasoning: the natural numbers are just there, and we have an intuitive grasp of them. This is, obviously, even more of a punt, and the second order arithmetic approach actually tells us something useful about the concept of the natural numbers.

Suppose someone produces a proof of a theorem about the natural numbers---let's say, for specificity, they prove the twin primes conjecture (that there are infinitely many twin primes). This proof proceeds in the following fashion: by extensive use of the most elaborate abstract math you can think of---large cardinals and sheaves and anabelian geometry and whatnot---the person proves the existence of a set of natural numbers P such that 1) 0 is in P, 2) if x is in P then x+1 is in P, and 3) if x is in P then there are twin primes greater than x.

Second order arithmetic says that this property P must extend all the way through the natural numbers, and therefore must itself be the natural numbers, and so we could conclude the twin primes conjecture.

The rest of this proof is really complicated, and uses things you might not think are valid, so you might not buy it. But second order arithmetic demands that *if* you believe the rest of the proof, you should also agree that the twin primes conjecture holds. This is actually worth something: it says that if you and I agree about all the fancy abstract stuff, we also have to agree about the natural numbers. And that's the most we can ask for.
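For reference, the final step of this sketch is a single application of the second-order induction axiom (standard formulation, with P the set constructed above):

```latex
\forall P\,\Bigl[\bigl(P(0)\;\land\;\forall x\,\bigl(P(x)\rightarrow P(x+1)\bigr)\bigr)\;\rightarrow\;\forall x\,P(x)\Bigr]
```

Instantiating P at the set built by the abstract machinery, conditions 1) and 2) discharge the antecedent, so every natural number is in P; condition 3) then yields twin primes above every bound.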

To phrase this differently, second order arithmetic gives us a universal way of picking the natural numbers out in any situation. Different models of set theory may end up disagreeing about what's true in the natural numbers, but second order arithmetic is a consistent way of identifying the notion in that model of set theory which lines up with our intuitions for what the natural numbers are. That's a very different account than the one offered by first order logic, but it's a valuable one in its own right.

One more phrasing, just for thoroughness. Landsburg passes off an account of the natural numbers to Platonism: we just intuit the natural numbers. But suppose you and I both purport to have good intuitions of the natural numbers, but we seem to disagree about some property. So we sit down to hash this out; it might turn out that we have some disagreement about what constitutes valid mathematical reasoning, in which case we may be stuck. But if not---if we agree on the reasoning---then eventually the disagreement will boil down to something like this:

You: "And since 0 is turquoise and, whenever x is turquoise, x+1 is also turquoise, all the natural numbers are turquoise."

Me: "Wait, those aren't all the natural numbers. Those are just the small natural numbers."

You: "What? What are the small natural numbers?"

Me: "You know, the initial segment of the natural numbers closed under the property of being turquoise."

You: "There's an initial segment of the natural numbers which includes 0, is closed under adding 1, but isn't every natural number?"

Me: "Sure. The small natural numbers. Haven't you heard of them?"

You: "Wait, are there any initial segments of the *small* natural numbers which include 0 and are closed under adding 1?"

Me: "Sure. The tiny natural numbers, for starters."

You: "Look, you're using the word 'natural numbers' wrong. It means..." and then you explain the second order induction axiom. So apparently whatever I thought was the natural numbers was some other bizarre object; what the second order induction axiom does for a Platonist is pin down which Platonic object we're actually talking about.

I disagree with your example:

The rest of this proof is really complicated, and uses things you might not think are valid, so you might not buy it. But second order arithmetic demands that if you believe the rest of the proof, you should also agree that the twin primes conjecture holds.

But set theory says the same thing. And set theory, unlike second-order arithmetic, is probably strong enough to formalize the large and complicated proof in the first place. Even if there are elements in the proof that go beyond ZFC (large cardinals etc.), mathematicians are likely to view them as additional assumptions on top of what they see as set theory.

Consider a non-logician mathematician to whom the induction principle is not primarily a formal statement to be analyzed, but just, well, induction, a basic working tool. Given a large proof as you describe, ending in an application of induction. What would be the benefit, to the mathematician, of viewing that application as happening in second-order logic, as opposed to first-order set theory? Why would they want to use second-order *anything*?

To phrase this differently, second order arithmetic gives us a universal way of picking the natural numbers out in any situation. Different models of set theory may end up disagreeing about what's true in the natural numbers, but second order arithmetic is a consistent way of identifying the notion in that model of set theory which lines up with our intuitions for what the natural numbers are.

I don't see how that works, either.

Let G be the arithmetical statement expressing the consistency of ZFC. There are models of set theory in which G is true, and models in which G is false. Are you saying that second-order arithmetic gives us a better way, a less ambiguous way, to study the truth of G? How would that work in practice?

The way I see it, different models of set theory agree on what natural numbers are, but disagree on what subsets of natural numbers exist. *This ambiguity is not resolved by second-order arithmetic; rather, it's swept under the carpet.*
The "unique" model "pinpointed" by it is utterly at the mercy of the same ambiguity of what the set of subsets of N is, and the ambiguity reasserts itself the moment you start studying the semantics of second-order arithmetic which you will do through model theory, expressed within set theory. So what is it that you have gained?

So apparently whatever I thought was the natural numbers was some other bizzarre object; what the second order induction axiom does for a Platonist is pin down which Platonic object we're actually talking about.

To a Platonist, what you used was not the second order induction axiom; it was just the familiar principle of induction.

I think you're reading too much into what I'm saying. I'm not suggesting that second order arithmetic is useful as a mathematical framework to talk about reasoning, in the way that first-order logic can. I'm saying that second order arithmetic is a useful way to talk about what makes the natural numbers special.

I'm also not suggesting that second order arithmetic has anything deep to add relative to a naïve (but sufficiently abstract) understanding of induction, but given that many people don't have a sufficiently abstract understanding of induction, I think it usefully illustrates just how broadly induction has to be understood. (To be clear, I'm also not suggesting that second order arithmetic is the one true way to think about the natural numbers, only that it's a useful perspective.)

Consider a non-logician mathematician to whom the induction principle is not primarily a formal statement to be analyzed, but just, well, induction, a basic working tool. Given a large proof as you describe, ending in an application of induction. What would be the benefit, to the mathematician, of viewing that application as happening in second-order logic, as opposed to first-order set theory? Why would they want to use second-order anything?

I'm not claiming there is any such benefit. This is the wrong question to ask; neither Eliezer's original post nor my comment were written for an audience of mathematicians concerned with the optimal way to do mathematics in practice.

Let G be the arithmetical statement expressing the consistency of ZFC. There are models of set theory in which G is true, and models in which G is false. Are you saying that second-order arithmetic gives us a better way, a less ambiguous way, to study the truth of G?

No, I am not saying that.

The way I see it, different models of set theory agree on what natural numbers are, but disagree on what subsets of natural numbers exist.

If there are models where G is true and models where G is false then different models of set theory don't agree on what natural numbers are.

This ambiguity is not resolved by second-order arithmetic; rather, it's swept under the carpet. The "unique" model "pinpointed" by it is utterly at the mercy of the same ambiguity of what the set of subsets of N is, and the ambiguity reasserts itself the moment you start studying the semantics of second-order arithmetic which you will do through model theory, expressed within set theory. So what is it that you have gained?

No, the ambiguity is not resolved by second-order arithmetic. The ambiguity is not resolved by *anything*, so that doesn't seem like a very useful objection.

Also, note that I didn't refer to either a unique model or to pinpointing. (I'm aware that Eliezer did, but you're replying to me.) What I said was: "[S]econd order arithmetic gives us a universal way of picking the natural numbers out in any situation. Different models of set theory may end up disagreeing about what's true in the natural numbers, but second order arithmetic is a consistent way of identifying the notion in that model of set theory which lines up with our intuitions for what the natural numbers are"

...some other bizarre object...

I object to the notion that the hypernaturals are bizarre. Down with ω-consistency!

/geek

Heh - I'm amazed at how many things in this post I alternately strongly agree or strongly disagree with.

It’s important to distinguish between the numeral “2”, which is a formal symbol designed to be manipulated according to formal rules, and the noun “two”, which appears to name something

OK... I honestly can't comprehend how someone could simultaneously believe that "2" is just a symbol and that "two" is a blob of pure meaning. It suggests the inferential distances are actually pretty great here despite a lot of surface similarities between what Landsburg is saying and what I would say.

Instead, the point of the Peano axioms was to model what mathematicians do when they’re talking about numbers

I'm happy with this (although the next sentence suggests I may have been misinterpreting it slightly). Another way I would put it is that the Peano axioms are about making things precise and getting everyone to agree to the same rules so that they argue fairly.

Like all good models, the Peano axioms are a simplification that captures important aspects of reality without attempting to reproduce reality in detail.

I'd like an example here.

I’d probably start with something like Bertrand Russell’s account of numbers: We say that two sets of objects are “equinumerous” if they can be placed in one-one correspondence with each other

This is defining numbers in terms of sets, which he explicitly criticizes Yudkowsky for later. I'll take the charitable interpretation though, which would be "Y thinks he can somehow avoid defining numbers in terms of sets... which it turns out he can't... so if you're going to do that anyway you may as well take the more straightforward Russell approach"
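As an aside, Russell's "equinumerous" relation is concrete enough to sketch for finite collections (a toy illustration; the function name is mine):

```python
def equinumerous(a, b):
    """Russell's 'equinumerous': a one-one correspondence exists.
    For finite collections we can decide this without ever counting:
    repeatedly pair off one element from each side; the collections
    are equinumerous exactly when both run out at the same moment."""
    a, b = list(a), list(b)
    while a and b:
        a.pop(), b.pop()  # pair one element of a with one of b
    return not a and not b  # both exhausted together?
```

On Russell's account, "two" is then the class of all collections equinumerous with, say, `['x', 'y']`.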

the whole point of logic is that it is a mechanical system for deriving inferences from assumptions, based on the forms of sentences without any reference to their meanings

I really like this definition of logic! It doesn't seem to be completely standard though, and Yudkowsky is using it in the more general sense of valid reasoning. So this point is basically about the semantics of the word "logic".

But Yudkowsky is trying to derive meaning from the operation of inference, which won’t work because in second-order logic, meaning comes first.

I think this is a slight strawmanning of Yudkowsky. Here Yudkowsky is trying to define the meaning of one particular system - the natural numbers - not define *the entire concept of meaning*.

he’s effectively resorted to taking sets as primitive objects

So when studying logic, model theory etc. we have to make a distinction between the system of reasoning that we are studying and "meta-reasoning". Meta-reasoning can be done in natural language or in formal mathematics - generally I prefer when it's a bit more mathy because of the thing of precision/agreeing to play by the rules that I mentioned earlier. I don't see math and natural language as operating at different levels of abstraction though - clearly Landsburg disagrees (with the whole 2/two thing) but it's hard for me to understand why.

Anyway, if you're doing model theory then your meta-reasoning involves sets. If you use natural language instead though, you're probably still using sets at least by implication.

full-fledged Platonic acknowledgement

I think this is what using natural language as your meta-reasoning feels like from the inside. Landsburg would say that the natural numbers exist and that "two" refers to a particular one; in model theory this would be "there exists a model N with such-and-such properties, and we have a mapping from symbols "0", "1", "2" etc. to elements of N".

It seems enormously more plausible to me that numbers are “just out there” than that physical objects are “just out there”

So he's saying that the existence of numbers implies the existence of the physical universe, and therefore that the existence of numbers is more likely?? Or is there some quality of "just out there"ness that's distinct from existence?

mathematicians from Pythagoras through Dedekind had absolutely no problem talking about numbers in the absence of the Peano axioms.

So, they were lucky. It could have been that that-which-Pythagoras-calls-number was not that-which-Fibonacci-calls-numbers.

Are there *absolutely no* examples of cases where mathematicians disagreed about some theorem about some not-yet-axiomatized subject, and then it turns out the disagreement was because they were actually talking about *different* things?

(I know of no such examples, but I would be surprised if none exist)

Are there absolutely no examples of cases where mathematicians disagreed about some theorem about some not-yet-axiomatized subject, and then it turns out the disagreement was because they were actually talking about different things?

There is such an example -- rather more complicated than you're describing, but the same sort of thing: Euler's theorem about polyhedra, before geometry was formalised. This is the theorem that F-E+V = 2, where F, E, and V are the numbers of faces, edges, and vertices of a polyhedron. What is a polyhedron?

Lakatos's book "Proofs and Refutations" consists of a history of this problem in which various "odd" polyhedra were invented and the definition of a polyhedron correspondingly refined, until reaching the present understanding of the theorem.
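For concreteness, here is the theorem alongside one of Lakatos's "odd" cases (counts are standard; the picture-frame figures assume its ring-shaped faces are split into quadrilaterals):

```python
def euler_characteristic(v, e, f):
    """V - E + F for a polyhedron given by its counts of
    vertices, edges, and faces."""
    return v - e + f

# Convex polyhedra satisfy Euler's formula V - E + F = 2:
assert euler_characteristic(4, 6, 4) == 2    # tetrahedron
assert euler_characteristic(8, 12, 6) == 2   # cube
# A "picture frame" (a block with a square tunnel through it)
# is a perfectly good solid, yet:
assert euler_characteristic(16, 32, 16) == 0
```

Whether the picture frame counts as a "polyhedron" at all is exactly the definitional question Lakatos traces.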

Upvoted for "Proofs and Refutations" reference

I don't know if this is true, but I once had a lecturer tell me that there used to be considerable debate over the question of whether a derivative of a differentiable function was necessarily continuous, which ultimately boiled down to the two sides having different definitions of continuity, but not realising it since neither had ever fully set down their axioms and definitions.

I've heard that before also but haven't seen a source for it. But keep in mind that that's a question about functions on the real numbers, not about what is being called "numbers" here which seems to be a substitute for the integers or the natural numbers.

I thought he was asking if it had ever happened in any not-yet-axiomatised subject, presumably looking for examples other than arithmetic.

Yes. I think the mathematicians were lucky that it didn't happen on the sort of integers they were discussing (there was, after all, great discussion about irrational numbers, zero, and later imaginary numbers, and even Archimedes' attempt to describe big integers was probably not without controversy).

Hmm, in that case, it might be relevant to point out examples that don't quite fit Plasmon's situation but are almost the same: There are a variety of examples where, due to a lack of rigorous axiomatization, statements were believed to be true that just turned out to be false. One classical example is that of a function continuous on an interval yet nowhere differentiable. Everyone took for granted that a continuous function could only fail differentiability at isolated points, until Weierstrass showed otherwise.

For another one, Russell's paradox seems like it was a consequence of naively assuming our intuitions about what counts as a 'set' would necessarily be correct, or even internally consistent.

Plasmon:

*So, they were lucky. It could have been that that-which-Pythagoras-calls-number was not that-which-Fibonacci-calls-numbers.*

Why do you imagine that the introduction of an axiomatic system would address this problem?

To quote, just put a greater-than sign `>` at the beginning of the first line.

Why do you imagine that the introduction of an axiomatic system would address this problem?

Because then the problem is not "Does this non-axiomatized stuff obey that theorem ?" but "Does that theorem follow from these axioms ?". One is a pure logic problem, and proofs may be checked by automated proof-checkers. The other directly or indirectly relies on the mathematician's intuition of the non-axiomatized subject in question, and can't be checked by automated proof-checkers.
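The "pure logic problem" half of that contrast can be made vivid with a toy checker (purely illustrative, not a real FOL checker; all names here are made up): each line of a proof must be an axiom, an earlier line, or follow from earlier lines by modus ponens.

```python
def check_proof(axioms, proof):
    """Mechanically verify a proof: every step must already be
    derived (an axiom or earlier step) or follow by modus ponens
    from a derived implication ('->', premise, conclusion) whose
    premise is also derived. No appeal to meaning anywhere."""
    derived = list(axioms)
    for step in proof:
        ok = step in derived or any(
            imp == ('->', prem, step)
            for imp in derived
            for prem in derived
        )
        if not ok:
            return False  # this step does not follow mechanically
        derived.append(step)
    return True

axioms = ['p', ('->', 'p', 'q'), ('->', 'q', 'r')]
assert check_proof(axioms, ['q', 'r'])   # q by modus ponens, then r
assert not check_proof(axioms, ['r'])    # r needs q derived first
```

The checker never interprets 'p' or 'q'; it only matches shapes of sentences, which is the sense of "mechanical" at issue.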

Because then the problem is not "Does this non-axiomatized stuff obey that theorem ?" but "Does that theorem follow from these axioms ?". One is a pure logic problem, and proofs may be checked by automated proof-checkers. The other directly or indirectly relies on the mathematician's intuition of the non-axiomatized subject in question, and can't be checked by automated proof-checkers.

Except insofar as the mathematicians, unknown to each other, have different ideas of what constitutes a valid rule of inference.

A logical system is a mathematical object. If the problem is that different mathematicians might think they're talking about the same object when they're really talking about different ones, then I don't see why logical systems, out of all mathematical objects, should be particularly exempt from this problem.

Also, of course, to put this in the context of the original post, if we're talking about second order logic, then of course there can be no automated proof-checkers.

Also, of course, to put this in the context of the original post, if we're talking about second order logic, then of course there can be no automated proof-checkers.

You appear to be confused here. The rest of your post is good.

I should have said this more carefully. *If* one allows enough rules of inference so that all the logical consequences of the axioms can be proved, then there are no automated proof checkers. So you can have proof checkers, but only at the cost of restricting the system so that not all logical consequences (i.e. implications that are true in every model) can be proved.

I think there's a definitional issue here. Eugine is using "proof checker" to mean an algorithm that given a sequence of statements in an axiomatic system verifies that the proof is formally valid. You seem to mean by proof checker something like a process that goes through listing all valid statements in the system along with a proof of the statement.

JoshuaZ: No, I mean the former. The problem is that if you have enough rules of inference to allow you to extract all logical consequences of your axioms, then that set of rules of inference is going to be too complicated to explain to any computer. (i.e. the rules of inference are non-recursive.)

Do you mean that there can be no automated proof-checkers which are sound and complete (and terminating)? Surely there can be useful sound checkers (which terminate)? The existence of Coq and other Dependently typed languages suggests that's true.

LessWrong uses Markdown for comment formatting. Click the button labelled "Show help" under the right-hand side of the comment box.

Well, that would've worked fine in standard Markdown.

**[deleted]**· 2012-11-14T19:15:00.143Z · score: 0 (0 votes) · LW(p) · GW(p)

What is "standard Markdown"?

Specified here - I think it is canonically produced by `Markdown.pl`, downloadable there.

Notably, Markdown allows inline html. Most implementations that work with client data, like that of Reddit/Lw, only allow a limited subset of Markdown functionality. (Most also add functionality).

**[deleted]**· 2012-11-14T20:11:44.823Z · score: 1 (1 votes) · LW(p) · GW(p)

Oh, I see -- a specification in the style of "only perl can parse perl."

But then these "most implementations" are not implementations of "standard Markdown," hence my confusion.

Oh, I see -- a specification in the style of "only perl can parse perl."

All universal programming languages (assembler, C, CLIP, Lisp, Cobol, Python, Java) can parse perl as well.

All universal programming languages (assembler, C, CLIP, Lisp, Cobol, Python, Java) can parse perl as well.

Only if they implement Perl, perfectly mimicking the functionality of `perl` (the only spec for Perl). Amongst other difficulties, Perl has available the full power of Perl at the preprocessing stage.

But then these "most implementations" are not implementations of "standard Markdown,"

Note the qualifier - "most implementations that work with client data". Markdown is also used extensively to generate static content that is not user-generated.

**[deleted]**· 2012-11-15T19:17:25.392Z · score: 0 (0 votes) · LW(p) · GW(p)

There are still multiple implementations used in generating static content, no two of which really do the same thing, e.g., pandoc, multimarkdown, etc. These are all still arguably "markdown" (at least, they call themselves that) but don't conform to standard markdown as you understand it.

**[deleted]**· 2012-11-19T05:25:33.712Z · score: 1 (1 votes) · LW(p) · GW(p)

Consider also the parallel 'postulate', reluctantly introduced as an axiom in Euclid. People tried to prove it as a theorem for two thousand years, until it was realized that its negation defined entirely different kinds of geometry.

A not-quite-but-close fit: the distinction between prime and irreducible elements of a number field became necessary because unique factorization into primes failed in simple number fields.

**[deleted]**· 2012-11-18T08:08:09.748Z · score: 0 (0 votes) · LW(p) · GW(p)

Various theorems, lemmas, and other principles equivalent to the Axiom of Choice (e.g. Zorn's lemma) were argued over until it was established (by Kurt Gödel and Paul Cohen) that the AoC is entirely independent of the ZF axioms, i.e. if ZF is consistent, then ZFC and ZF+¬C are both consistent systems. I think this is the canonical example.

"The Axiom of Choice is obviously true, the well-ordering principle obviously false, and who can tell about Zorn's lemma?" — Jerry Bona

**[deleted]**· 2012-11-16T15:58:15.132Z · score: 1 (1 votes) · LW(p) · GW(p)

From Landsburg's "Accounting for Numbers," point 9:

numbers are indeed just “out there”

and that they are directly accessible to our intuitions.

[Emphasis mine.]

I'm conflicted on whether to think the world is actually physical in a special way, or simply mathematical in such a way that all mathematical structures exist and are equally real. My greatest sticking point to accepting the latter position is the emphasized part of the quote.

Okay, if I grant that numbers are "out there," we do seem to interact with them via a cognitive algorithm we call "intuition." But how the heck does that work!? Nothing I know about cognitive science suggests that intuitions are somehow specially equipped to interact with numbers, or are fundamentally different than the brain's other cognitive algorithms.

The crux of the argument for a mathematical universe - so far as I understand it - is parsimony. But in order for parsimony to be the relevant criterion, both hypotheses need to have the same experimental predictions. In other words, if the mathematical universe hypothesis is correct, we would expect it to be consistent with our non-mathematical universe understanding of cognitive science. But with respect to intuitions, this doesn't seem to be the case.

Without having a satisfying account of how intuitions work in a mathematical universe, the MUH isn't consistent with my current understanding of reality. Is there such an explanation of intuitions? If not, maybe a more fruitful question to ask might be:

What kind of cognitive algorithms generate the feeling that numbers are "out there"?

What kind of cognitive algorithms generate the feeling that numbers are "out there"?

Perhaps the same sorts that generate feelings like 'lines and edges are out there' or that 'we say distinct words' (rather than continuously slurred together sounds) or which lead to http://en.wikipedia.org/wiki/Subitizing

Is there really that much difference between Second Order Logic, and First Order Logic quantifying over sets, or over classes?

Put it another way, Eliezer is using SOL to pinpoint the concept of a (natural) "number". But to do that thoroughly he also needs to pinpoint the concept of a "property" or a "set" of numbers with that property, so he needs some axioms of set theory. And those set theory axioms had better be in FOL, because if they are also in SOL he will further need to pinpoint properties of sets (or proper classes of sets), so he needs some axioms of class theory. Somewhere it all bottoms out in FOL, doesn't it?
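One way to see where the bottoming-out happens: in first-order PA, induction is an axiom *schema* - one axiom per formula of the language - while second-order PA packs it into a single axiom quantifying over all properties:

```latex
% First order: one instance for each formula \varphi(x) of the language
\bigl(\varphi(0) \land \forall x\,(\varphi(x) \rightarrow \varphi(x+1))\bigr) \rightarrow \forall x\,\varphi(x)

% Second order: a single axiom, quantifying over every property P
\forall P\,\bigl[\bigl(P(0) \land \forall x\,(P(x) \rightarrow P(x+1))\bigr) \rightarrow \forall x\,P(x)\bigr]
```

The schema only covers the countably many definable properties, which is why nonstandard models slip through; the single second-order axiom covers all of them - but saying what "all properties" means is where the set theory comes in.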

Why isn't it necessary to pinpoint FOL?

Well, the topic is "logical pinpointing", and the attempt to logically pinpoint logic itself sounds rather circular...

However, if we really want to, we can describe a computer program which systematically checks any given input string to decide whether it constitutes a valid string of deductions in FOL. Then "first order logic" is anything to which the program gives the answer "Valid". That's possible for FOL, but not for SOL. If you want to further pinpoint what a "program" is, the best way to do that is to find a computer and run the d**n thing!

I agree with pretty much all of Landsburg's criticisms. We've raised similar points before.