Edward Nelson claims proof of inconsistency in Peano Arithmetic

post by JoshuaZ · 2011-09-27T12:46:13.730Z · LW · GW · Legacy · 116 comments


We've discussed Edward Nelson's beliefs and work before. Now he claims to have a proof of a contradiction in Peano Arithmetic, which, if correct, would not be specific to PA but would carry over to much weaker systems. I'm skeptical of the proof but haven't had the time to look at it in detail. His approach is to construct a system Q_0^* which looks almost, but not quite, like a fragment of PA, and then to show that PA both proves this system's consistency and proves its inconsistency. There seem to be two possible weak points in the approach.

First, he may be mis-applying the Hilbert-Ackermann theorem: when it applies is highly technical and can be subtle. I don't know enough to comment on that in detail. Second, in trying to use finitary methods to show there's a contradiction in Q_0^*, he may have proven something closer to Q_0^* being omega-inconsistent. Right now, I'm extremely skeptical of this result.

If anyone is going to find an actual contradiction in PA or ZFC, it would probably be Nelson. There is some clearly interesting material here, such as using a formalization of the surprise examination/unexpected hanging paradox to get a new proof of Gödel's Second Incompleteness Theorem. The exact conditions under which this version of Gödel's theorem applies may differ from the conditions under which the standard theorem can be proven.

116 comments

Comments sorted by top scores.

comment by [deleted] · 2011-10-01T14:57:51.322Z · LW(p) · GW(p)

Nelson has withdrawn his claim. Link.

Replies from: gwern
comment by gwern · 2011-10-01T17:52:12.122Z · LW(p) · GW(p)

Is that a withdrawal of the entire approach or just one part of it?

Replies from: breckes
comment by breckes · 2011-10-01T22:10:48.425Z · LW(p) · GW(p)

On the FOM list, he writes:

Terrence Tao, at http://golem.ph.utexas.edu/category/2011/09/ and independently Daniel Tausk (private communication) have found an irreparable error in my outline. (...)

(...) I withdraw my claim.

The consistency of P remains an open problem.

Replies from: gwern, None
comment by gwern · 2011-10-01T22:37:52.942Z · LW(p) · GW(p)

Thanks, that's very useful for http://predictionbook.com/predictions/3508

comment by [deleted] · 2011-10-01T23:08:55.792Z · LW(p) · GW(p)

I am interested in how this event has influenced people's views of the consistency of arithmetic. For those who had a viewpoint on it prior to this, what would you have estimated before and after, and with what confidence? I will not state my own view for fear of damaging the results.

Replies from: JoshuaZ
comment by JoshuaZ · 2011-10-02T02:05:10.342Z · LW(p) · GW(p)

It seems important in this context to distinguish between the consistency of different systems.

Overall, my consistency estimate for PA has, if anything, gone up: if Edward Nelson tries really hard and fails to find a contradiction, that's more evidence that there isn't one. How much should it go up by? I'm not sure.

On the other hand, this may very well make my estimate for the consistency of ZFC go down. That's because although in this specific case Nelson's approach didn't work, it opens up new avenues for demonstrating inconsistency, and part of me could see this sort of thing being more successfully applied to ZFC (although that also seems unlikely). (Also note that there have been attempts by a much larger set of mathematicians in the last few years to find an inconsistency in ZF, and that has essentially failed.)

One thing that this has also done is made me very aware, in a visceral fashion, that number theory and logic are not the same thing, even when one is talking about subsystems of PA. I already knew that, but maybe I had not fully emotionally processed it, as I saw when I read Nelson's outline and the subsequent discussions and had a lot of trouble following the details. The main impact of this is to suggest that when making logic-related claims (especially consistency and independence/undecidability issues) I should probably not rate my own expertise as highly as I do. As a working mathematician, I almost certainly have more relevant expertise than a random individual, but I've probably been overestimating how much my expertise matters. In both the case of ZF/ZFC and the case of PA, reducing my confidence means increasing the chance that they are inconsistent. But I'm not at all sure by how much this should matter. So maybe this should leave everything alone?

So overall I'd say around .995 consistency for PA and .99 consistency for ZF. Not much change from the old values.

comment by XiXiDu · 2011-09-27T14:53:10.372Z · LW(p) · GW(p)

More here, including a comment by Terence Tao.

Replies from: None
comment by [deleted] · 2011-09-27T17:12:13.157Z · LW(p) · GW(p)

Specifically, Tao's comment:

I have read through the outline. Even though it is too sketchy to count as a full proof, I think I can reconstruct enough of the argument to figure out where the error in reasoning is going to be. Basically, in order for Chaitin's theorem (10) to hold, the Kolmogorov complexity of the consistent theory T has to be less than l. But when one arithmetises (10) at a given rank and level on page 5, the complexity of the associated theory will depend on the complexity of that rank and level; because there are going to be more than 2^l ranks and levels involved in the iterative argument, at some point the complexity must exceed l, at which point Chaitin's theorem cannot be arithmetised for this value of l.

(One can try to outrun this issue by arithmetising using the full strength of Q_0^*, rather than a restricted version of this language in which the rank and level are bounded; but then one would need the consistency of Q_0^* to be provable inside Q_0^*, which is not possible by the second incompleteness theorem.)

I suppose it is possible that this obstruction could be evaded by a suitably clever trick, but personally I think that the FTL neutrino confirmation will arrive first.

He's such a glorious mathematician. <3
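The counting step in the quoted argument can be made concrete with a toy pigeonhole check (a minimal sketch; the choice l = 10 is arbitrary):

    # There are only 2^(l+1) - 1 binary descriptions of length <= l, so once
    # more than that many distinct theories (one per rank/level) are in play,
    # some theory must have Kolmogorov complexity exceeding l.
    l = 10
    num_descriptions = sum(2 ** k for k in range(l + 1))  # lengths 0..l
    num_theories = 2 ** (l + 1)
    print(num_descriptions)                 # 2047
    print(num_theories > num_descriptions)  # True: pigeonhole forces complexity > l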

Replies from: breckes
comment by breckes · 2011-09-27T20:48:14.484Z · LW(p) · GW(p)

He gave a more detailed comment on the n-Category Café.

Replies from: None
comment by [deleted] · 2011-09-27T23:11:44.583Z · LW(p) · GW(p)

[...] a theory slightly weaker than PA that he calls Q_0^*. Of course, we know that this theory cannot prove its own consistency (if it is consistent), by the second incompleteness theorem

This confuses me. Second incompleteness applies to extensions of PA. I'll bet Terry Tao knows this too. So what does this remark mean?

comment by XiXiDu · 2011-09-27T13:58:51.249Z · LW(p) · GW(p)

FTL neutrinos and now a proof of inconsistency in Peano Arithmetic? What next?

Replies from: Thomas
comment by Thomas · 2011-09-27T17:13:04.959Z · LW(p) · GW(p)

I devised a little proof of the inconsistency of Newtonian mechanics, years ago.

http://critticall.com/alog/Antinomy_inside_mechanics.pdf

Can you spot the error?
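For readers who want to poke at it numerically, here is a minimal sandbox (an assumption on my part: the configuration matches the sum quoted downthread, mass (1/2)^n at position (1/10)^n, with G = 1; exact rationals avoid rounding noise):

    from fractions import Fraction as F

    N = 10                                 # finite truncation of the system
    m = [F(1, 2 ** n) for n in range(N)]   # masses (1/2)^n
    x = [F(1, 10 ** n) for n in range(N)]  # positions (1/10)^n

    def net_force(i):
        # Signed Newtonian gravitational force on particle i (G = 1);
        # positive means rightward (toward larger x).
        total = F(0)
        for j in range(N):
            if j != i:
                d = x[j] - x[i]
                total += (1 if d > 0 else -1) * m[i] * m[j] / d ** 2
        return total

    forces = [net_force(i) for i in range(N)]
    print(sum(forces))   # total force on the truncated system
    print(forces[-1])    # force on the particle nearest the origin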

Replies from: Richard_Kennaway, Alejandro1, lessdazed, ArisKatsaris, rwallace, Oscar_Cunningham, None, AlexMennen, Vaniver
comment by Richard_Kennaway · 2011-09-27T19:38:26.459Z · LW(p) · GW(p)

Vg vf pbeerpg gung ng g=0, rnpu cnegvpyr unf n yrsgjneqf irybpvgl. Ohg vg vf abg pbeerpg gung gur prager bs tenivgl ng g=0 unf n yrsgjneqf irybpvgl. Guvf vf orpnhfr gur flfgrz orunirf qvfpbagvahbhfyl ng g=0, naq fb gur irybpvgl bs gur prager bs tenivgl pnaabg or qrgrezvarq ol gnxvat n jrvtugrq nirentr bs gur vavgvny irybpvgvrf bs gur pbzcbaragf.

Va zber qrgnvy:

Yrg P(g) or gur cbfvgvba bs gur prager bs tenivgl ng gvzr g.

Gur irybpvgl bs gur prager bs tenivgl ng g=0 vf gur yvzvg nf g -> 0 bs (P(g)-P(0))/g.

Ng nal gvzr g, P(g) vf gur fhz bire a bs Z(a)C(a,g), jurer Z(a) vf gur znff bs gur agu cbvag naq C(a,g) vgf cbfvgvba.

Gur nethzrag eryvrf ba pbzchgvat qP/qg nf gur fhz bire a bs Z(a)qC(a,g)/qg. Guvf fjncf gur beqre bs gjb yvzvgvat cebprffrf, fhzzngvba naq qvssreragvngvba, juvpu vf inyvq sbe jryy-orunirq fhzf, ohg abg sbe guvf bar.

For intuition, imagine that there were just the first three particles in the sequence, and that the number 10 is replaced by something much larger. The two leftmost particles will rapidly oscillate about one another, while drifting slowly towards the rightmost. Now imagine the first four particles are present. The two leftmost will oscillate madly about each other, while that system as a whole oscillates rapidly about the third from the left (I'm conveniently ignoring what happens at the instants when particles meet and pass through each other), and the ensemble of the three leftmost drifts towards the rightmost. Repeat.

Replies from: AlexMennen
comment by AlexMennen · 2011-09-27T20:32:13.036Z · LW(p) · GW(p)

It's acceleration that matters, not velocity (the initial velocity of all points is zero, or at least that's how I thought of it). However, your argument does generalize nicely to acceleration, and could possibly be the correct resolution.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2011-09-27T21:47:03.508Z · LW(p) · GW(p)

Yes, stupid mistake of mine. Replace velocity by acceleration and position by velocity and the same argument works. The fallacy is in the vagrepunatr bs yvzvgf.

comment by Alejandro1 · 2011-09-29T16:37:30.451Z · LW(p) · GW(p)

Similarly, for those who enjoyed discussing this problem: Did you know that Newtonian mechanics is indeterministic?

Replies from: Sniffnoy
comment by Sniffnoy · 2011-10-01T07:04:41.166Z · LW(p) · GW(p)

These equations don't make sense dimensionally. Are there supposed to be constants of proportionality that aren't being mentioned? Are they using the convention c=1? Well, I doubt it's relevant (scaling things shouldn't change the result), but...

Edit: Also, perhaps I just don't know enough differential equations, but it's not obvious to me that a curve such as he describes exists. I expect it does; it's easy enough to write down a differential equation for the height, which will give you a curve that makes sense when r > 0, but it's not obvious to me that everything still works when we allow r = 0.

Replies from: Alejandro1
comment by Alejandro1 · 2011-10-01T15:00:40.156Z · LW(p) · GW(p)

These equations don't make sense dimensionally. Are there supposed to be constants of proportionality that aren't being mentioned?

That is my guess. The simplest way IMO would be to multiply the g in eq. 1 by a constant c with units of distance^(-1/2). The differential equation becomes r'' = g c r^(1/2), which works dimensionally. The nontrivial solution (eq. 4) is correct with an added (c g)^2 in front.
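A quick symbolic check of that prefactor (a sketch; it assumes eq. 4 has the Norton form r(t) = t^4/144 before the rescaling):

    import sympy as sp

    t, g, c = sp.symbols('t g c', positive=True)
    r = (c * g) ** 2 * t ** 4 / 144                    # eq. 4 with the (c g)^2 prefactor
    residual = sp.diff(r, t, 2) - g * c * sp.sqrt(r)   # r'' - g*c*r^(1/2)
    print(sp.simplify(residual))                       # 0: the ODE is satisfied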

it's not obvious to me that a curve such as he describes exists.

I'm not sure what you mean here. What could be wrong in principle with a curve h = c r^(3/2) describing the shape of a dome, even at r = 0?

Replies from: Sniffnoy
comment by Sniffnoy · 2011-10-01T20:29:45.004Z · LW(p) · GW(p)

Notice "r" here does not mean the usual radial distance but rather the radial distance along the curve itself. I don't see any obvious barrier to such a thing and I expect it exists, but that it actually does isn't obvious to me; the resulting differential equation seems to have a problem at h=0, and not knowing much about differential equations I have no idea if it's removable or not (since getting an explicit solution is not so easy).

Replies from: Alejandro1
comment by Alejandro1 · 2011-10-02T01:52:21.847Z · LW(p) · GW(p)

You piqued my curiosity, so I sat down to play a little with the equation. If R is the usual radial coordinate, I got:

dh/dR = c y^(1/3) / [(1 - c^2 y^(2/3))^(1/2)]

with y = 3h/(2c), using the definition of c in my previous comment. (I got this by switching from dh/dr to dh/dR with the relation between sin and tan, and replacing r by r(h). Feel free to check my math, I might have made a mistake.) This was easily integrated by Mathematica, giving a result that is too long to write here, but it has no particular problem at h = 0, other than dR/dh being infinite there. That is expected; it just means dh/dR = 0, so the peak of the dome is a smooth maximum.
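Taking the expression above as given rather than rederiving it, a quick numerical check (c = 1 for illustration) confirms the smooth-maximum behavior:

    # dh/dR should shrink toward 0 as h -> 0+, i.e. dR/dh blows up at the apex.
    c = 1.0
    for h in [1e-2, 1e-4, 1e-6, 1e-8]:
        y = 3 * h / (2 * c)
        dh_dR = c * y ** (1 / 3) / (1 - c ** 2 * y ** (2 / 3)) ** 0.5
        print(h, dh_dR)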

comment by lessdazed · 2011-09-27T20:51:43.659Z · LW(p) · GW(p)

Vf vg gung gur vavgvny irybpvgl bs gur znggre vf mreb, gur vavgvny naq riraghny nppryrengvba bs gur pragre bs tenivgl vf mreb, ohg gur pragre bs tenivgl vgfrys ortvaf jvgu n yrsgjneq irybpvgl?

comment by ArisKatsaris · 2011-09-27T19:16:49.604Z · LW(p) · GW(p)

This is a good one, but I won't try doing the math to figure out the specific error with it (other than the obvious problem of (rot13) gur arrq gb pnyphyngr gur rssrpg bs nyy gur znffrf, abg whfg gur vzzrqvngryl ba gur evtug naq gur vzzrqvngryl ba gur yrsg.)

Here's what my intuition is telling me: Gung jung jvyy npghnyyl unccra vf gung gur znffrf jvyy pbairetr fbzrjurer va gur pragre, jvgu fbzr bs gur znffrf nsgre gur zvqqyr cbvag zbivat evtugjneqf vafgrnq. V vzntvar gung V'q unir gb hfr vagrtengvba bire gur ahzore bs vasvavgr znffrf ba gur yrsg, gb frr gung gur zngu sbepr gur jubyr guvat gb onynape.

comment by rwallace · 2011-09-27T19:07:13.857Z · LW(p) · GW(p)

An elegant puzzle.

Gur fbyhgvba vf rnfvrfg gb frr vs jr pbafvqre n svavgr ahzore bs vgrengvbaf bs gur cebprqher bs fcyvggvat bss n fznyyre znff. Va rnpu vgrengvba, gur yrsgzbfg znff onynaprf gur obbxf ol haqretbvat irel encvq nppryrengvba gb gur evtug. Va gur yvzvg nf gur ahzore bs vgrengvbaf tbrf gb vasvavgl, jr unir na vasvavgrfvzny znff onynapvat gur obbxf jvgu vasvavgryl ynetr nppryrengvba.

Replies from: None
comment by [deleted] · 2011-09-28T00:44:20.698Z · LW(p) · GW(p)

I think the solution I came up with is, in spirit, the same as this one.

Pbafvqre gur nzbhag bs nppryrengvba rnpu cnegvpyr haqretbrf. Gur snegure lbh tb gb gur yrsg, gur terngre gur nppryrengvbaf naq gur fznyyre gur qvfgnaprf, naq, guhf, gur yrff gvzr vg gnxrf orsber n pbyyvfvba unccraf. (V pbhyq or zvfgnxra gurer.) Sbe rirel cbfvgvir nzbhag bs gvzr, pbyyvfvbaf unccra orsber gung nzbhag bs gvzr. Gurersber, gur orunivbe bs gur flfgrz nsgre nal nzbhag bs gvzr vf haqrsvarq.

Replies from: Tyrrell_McAllister, rwallace
comment by Tyrrell_McAllister · 2011-09-28T15:33:47.961Z · LW(p) · GW(p)

Jung vs jr fhccbfr gung gur obqvrf pna cnff guebhtu rnpu bgure? Ubj qbrf gur flfgrz orunir nf gvzr cebterffrf?

Replies from: pengvado
comment by pengvado · 2011-09-28T20:49:37.819Z · LW(p) · GW(p)

Vg'f rnfl rabhtu gb rkgraq arjgbavna zrpunavpf gb unaqyr gjb-cnegvpyr pbyyvfvbaf. Whfg cvpx bar bs "rynfgvp pbyyvfvba" be "cnff guebhtu rnpu bgure", naq nccyl gur pbafreingvba ynjf naq flzzrgevrf. Ohg V'z abg njner bs nal cebcbfnyf gb unaqyr guerr-be-zber-cnegvpyr pbyyvfvbaf: va gung pnfr, pbafreingvba bs raretl naq zbzraghz vfa'g rabhtu gb qrgrezvar gur bhgchg fgngr, lbh'q arrq fbzr npghny qlanzvpf gung ner qrsvarq ba gur vafgnag bs pbyyvfvba.

Replies from: Douglas_Knight
comment by Douglas_Knight · 2011-09-29T03:17:20.317Z · LW(p) · GW(p)

Do you have a reference for how to extend Newtonian mechanics to collisions or passing-through of point particles subject to gravity? How about for people complaining about three-body collisions?

Bringing in energy and momentum doesn't sound helpful to me because they are both infinite at the point of collision.

My understanding is that both of the choices have a unique analytic extension to the complex plane away from the point of collision. Most limiting approaches will agree with this one. In particular, Richard Kennaway's approach (to non-collision) is to perturb the particles into another dimension, so that the particles don't collide. The limit of small perturbation is the passing-through model.

For elastic collisions, I would take the limit of small-radius collisions. I think this is fine for two-body collisions. In dimension one, I can see a couple of ways to do three-body collisions, including this one. The other, once you can do two-body collisions, is to perturb one of the bodies to get a bunch of two-body collisions; in the limit of small perturbation, you get a three-body collision. But if you extend this to higher dimension, it results in the third body passing through the collision (which is a bad sign for my claim that most limiting approaches agree). When I started writing, I thought the small-radius approach had the same problem, but I'm not sure anymore.

comment by rwallace · 2011-09-28T04:27:58.144Z · LW(p) · GW(p)

Yes, I think so too.

comment by Oscar_Cunningham · 2011-09-27T18:31:22.984Z · LW(p) · GW(p)

Gur prager bs znff vf ng gur bevtva naq fgnlf gurer?

comment by [deleted] · 2011-09-27T18:29:37.174Z · LW(p) · GW(p)

Clever! It took me a couple of minutes to get it: Gur yrsgzbfg cbvag (gur 1 xt znff) jvyy zbir gb gur evtug, xrrcvat gur pragre bs tenivgl jurer vg vf. Fb vg'f abg gehr gung "jr pna cebir guvf sbe rirel cbvag".

Replies from: ArisKatsaris
comment by ArisKatsaris · 2011-09-27T19:11:25.443Z · LW(p) · GW(p)

You are misreading which one's the rightmost mass. The rightmost mass is the one with 1kg.

Replies from: None
comment by [deleted] · 2011-09-27T19:18:10.777Z · LW(p) · GW(p)

Whoops, you're right. I assumed +1/10 meant +1/10th from +1. I had the whole thing backwards.

comment by AlexMennen · 2011-09-27T19:11:30.500Z · LW(p) · GW(p)

Gurer vf ab reebe. Gur cevapvcyr gung gur pragre bs znff pnaabg nppryrengr pbzrf sebz gur snpg gung, sbe nal cbvag znff, gur sbepr vg rkregf ba bgure znffrf vf rknpgyl bccbfvgr gb gur sbepr bgure znffrf rkreg ba vg. Guvf qbrf abg nccyl gb vasvavgr pbyyrpgvbaf bs znffrf sbe gur fbzr ernfba gung na vasvavgr frevrf bs barf qbrf abg nqq gb mreb, rira gubhtu vg vf gur qvssrerapr orgjrra gjb vqragvpny frevrf, rnpu gur fhz bs nyy cbfvgvir vagrtref.

Replies from: Douglas_Knight
comment by Douglas_Knight · 2011-09-27T22:01:11.701Z · LW(p) · GW(p)

Vs lbh guvax gurer vf n fcrpvsvp qviretrag frevrf, anzr vg.

Replies from: AlexMennen
comment by AlexMennen · 2011-09-28T00:49:12.517Z · LW(p) · GW(p)

Gur fhz bs gur gbgny sbeprf ba nyy cnegvpyrf.

sum(n=0 to inf)
{
    [sum(m=0 to n-1) G*(1/2)^n*(1/2)^m/((1/10)^m-(1/10)^n)^2]
    - [sum(m=n+1 to inf) G*(1/2)^n*(1/2)^m/((1/10)^n-(1/10)^m)^2]
}

vf yrff guna 0 sbe rirel grez va gur bhgre fhz, rira gubhtu lbh pna pnapry bhg nyy gur grezf ol ernpuvat vagb gur vaare fhzf.
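The outer terms can be checked numerically (a rough sketch with G = 1; the inner sum is truncated, which is harmless since its terms decay geometrically):

    def outer_term(n, cutoff=60):
        # Rightward pull on particle n from the particles to its right (m < n)...
        right = sum((0.5 ** n) * (0.5 ** m) / ((0.1 ** m) - (0.1 ** n)) ** 2
                    for m in range(n))
        # ...minus the leftward pull from the particles to its left (m > n).
        left = sum((0.5 ** n) * (0.5 ** m) / ((0.1 ** n) - (0.1 ** m)) ** 2
                   for m in range(n + 1, cutoff))
        return right - left

    for n in range(6):
        print(n, outer_term(n))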

Replies from: wedrifid, Douglas_Knight
comment by wedrifid · 2011-09-28T02:26:26.946Z · LW(p) · GW(p)

To put code blocks in markdown, put four spaces before the lines you want to be code. I assume that is what you intended?

Replies from: AlexMennen
comment by AlexMennen · 2011-09-28T02:45:17.739Z · LW(p) · GW(p)

Edited, although I'm not sure it's all that much more readable now.

Replies from: wedrifid
comment by wedrifid · 2011-09-28T02:52:22.027Z · LW(p) · GW(p)

I was thinking of including a line break or two. ;)

Replies from: AlexMennen
comment by AlexMennen · 2011-09-28T02:58:41.076Z · LW(p) · GW(p)

Good idea.

comment by Douglas_Knight · 2011-09-28T18:43:43.232Z · LW(p) · GW(p)

bx, gung vf n ceboyrzngvp erneenatrzrag. Ohg juvpu fvqr vf pbeerpg? Lbh znqr n pynvz nobhg guvf va lbhe bevtvany pbzzrag, ohg V guvax lbh tbg vg onpxjneqf.

Replies from: AlexMennen, AlexMennen
comment by AlexMennen · 2011-09-29T02:26:02.495Z · LW(p) · GW(p)

My previous attempt at a reply didn't make much sense. And this is deep enough into the thread that I won't bother with rot13.

Let's try again: The description given by Thomas accurately models how each individual particle accelerates (towards the origin). It thus also accurately models how the center of mass accelerates in the non-limiting case.

Replies from: Douglas_Knight
comment by Douglas_Knight · 2011-09-29T02:54:04.180Z · LW(p) · GW(p)

I think Richard Kennaway's answer is correct and it seems to me to be opposite to your original answer. The non-limiting case is not interesting. The question is whether there is a coherent extension of Newtonian mechanics to infinitely many particles. It does cover this case and the center of mass still works.

Replies from: AlexMennen
comment by AlexMennen · 2011-09-29T03:18:49.779Z · LW(p) · GW(p)

The question is whether there is a coherent extension of Newtonian mechanics to infinitely many particles.

I think this paradox shows that the answer is no.

It does cover this case and the center of mass still works.

I can't figure out what this sentence was intended to mean.

Replies from: Douglas_Knight
comment by AlexMennen · 2011-09-28T23:30:24.379Z · LW(p) · GW(p)

V guvax gung va gur aba-yvzvgvat pnfr, gur neenatrzrag fubjvat rirelguvat qevsgvat gbjneqf gur bevtva vf pbeerpg naq gur neenatrzrag fubjvat gur pragre bs znff fgnlvat fgvyy vf va reebe. Bs pbhefr, va gur yvzvg nf gur ahzore bs cbvag znffrf vapyhqrq tbrf gb vasvavgl, vg'f gur bgure jnl nebhaq.

comment by Vaniver · 2011-09-27T18:25:55.704Z · LW(p) · GW(p)

Well, for starters we live in an integer universe, so there is a leftmost point.

Even beyond that, though, the third law suggests that for every force pulling mass leftward, you have a force pulling mass rightward- and so the center of mass should not move as the collection collapses.

[edit] In case basic physics is not obvious to you, an explanation of what the second line means can be found here.

[EDIT] What I am trying to say is that the problem is that an infinity is not taken as a limit, and thus the approach to the problem is confused.

Replies from: Vaniver
comment by Vaniver · 2011-09-27T20:03:25.236Z · LW(p) · GW(p)

I am curious why this received downvotes. Is it because people expected me to rot13 my answer?

Replies from: benelliott, ArisKatsaris
comment by benelliott · 2011-09-27T20:17:06.177Z · LW(p) · GW(p)

I didn't downvote, but if I had it would have been because you seem to be missing the point a bit.

It may well be the case that we live in a discrete universe without infinities (I assume that's what you meant by integer), but this is not the case for Newtonian mechanics, which asserts that both space and time remain continuous at every scale. Thomas never claimed to have a paradox in actual physics, only in Newtonian mechanics, so what you are saying is irrelevant to his claim; you might as well bring up GR or QM as a solution.

Appealing to the third law also doesn't help: his whole point is that if you do the calculations one way you get one answer, and if you do them another way you get another answer; hence the use of the word paradox.

Replies from: Vaniver
comment by Vaniver · 2011-09-27T21:46:40.347Z · LW(p) · GW(p)

Appealing to the third law also doesn't help: his whole point is that if you do the calculations one way you get one answer, and if you do them another way you get another answer; hence the use of the word paradox.

His point is that if you only do some of the calculations, you can come up with the wrong answer. I fail to see how that implies a paradox rather than just sloppiness.

Replies from: JoshuaZ
comment by JoshuaZ · 2011-09-27T22:31:54.917Z · LW(p) · GW(p)

If I can do a calculation and stop, and then do another calculation and get a contradiction, that's still a contradiction. It doesn't matter that I can do other calculations that would lead to a non-contradiction.

Replies from: Vaniver
comment by Vaniver · 2011-09-27T23:07:24.450Z · LW(p) · GW(p)

If he put a single equal sign in his 'proof', I would be more charitable. As it is, it's not clear to me that he actually did any calculations or showed any contradictions.

Replies from: benelliott
comment by benelliott · 2011-09-27T23:32:26.660Z · LW(p) · GW(p)

I'm not sure he did, but I have done the calculations and it seems to check out (although I may have made a mistake). The only laws I used were F=ma and F=Gm_1m_2/(r^2), from which the third law should emerge as an immediate consequence rather than needing to be added in on top.

Replies from: Vaniver
comment by Vaniver · 2011-09-28T00:07:03.573Z · LW(p) · GW(p)

The reason they "check out" is that you calculate the force caused by N+1 particles on N particles. Because your calculation has an external particle, the CoM has an acceleration. This is entirely an artifact of how the limit is taken, and is thus a sign of sloppiness and incompleteness.

If you did the calculations for the system of N particles, then took the limit as N approached infinity, you would get no CoM acceleration. This really has nothing to do with Newtonian physics.

Replies from: benelliott
comment by benelliott · 2011-09-28T00:12:03.884Z · LW(p) · GW(p)

you calculate the force caused by N+1 particles on N particles

I don't think I do this.

If you did the calculations for the system of N particles, then took the limit as N approached infinity

Obviously the problem is with an infinity not taken as a limit. If you had said that, instead of saying other irrelevant things, then I doubt anyone would have objected.

Replies from: Vaniver
comment by Vaniver · 2011-09-28T03:53:49.442Z · LW(p) · GW(p)

I don't think I do this.

Does your leftmost particle have a rightward acceleration which makes the weighted average of acceleration (i.e. CoM acceleration) 0?

If you had said that, instead of saying other irrelevant things, then I doubt anyone would have objected.

I have edited the ancestral post to say that. Hopefully, the superior articulation will cause its karma to rise into the positives.

Replies from: benelliott
comment by benelliott · 2011-09-28T08:40:55.082Z · LW(p) · GW(p)

Since I wasn't using a limit I didn't have a leftmost particle.

Replies from: Vaniver
comment by Vaniver · 2011-09-28T14:05:54.841Z · LW(p) · GW(p)

Then you are calculating the force caused by N+1 particles on N particles. For every particle i, you look at the i-1 particles to the right, add up their gravitational force, and see that it is dwarfed by the force from the particle to the left, particle number i+1.

If you have a finite number of particles, the mistake vanishes. If you have an infinite number of particles but you add particles all at once instead of half of i and half of i+1 at once, the mistake vanishes.

Replies from: benelliott
comment by benelliott · 2011-09-28T15:44:54.324Z · LW(p) · GW(p)

I calculated using all the particles to the left rather than just one, and so every pair got taken into account once for each member of that pair.

Replies from: Vaniver
comment by Vaniver · 2011-09-28T15:47:44.959Z · LW(p) · GW(p)

Then you were calculating the gravitational effect of an infinite number of external particles on a finite number of particles, which makes things worse.

[Edit]: Note that each pair has to be taken into account twice. The N+1 vs. N error results from having an extra force floating around; the N+Infinity vs. N error results from having an infinite number of extra forces floating around. If you add forces as linked pairs, the error disappears.

Replies from: benelliott
comment by benelliott · 2011-09-28T16:05:47.134Z · LW(p) · GW(p)

Note that each pair has to be taken into account twice.

Once for each member of the pair = twice

Then you were calculating the gravitational effect of an infinite number of external particles on a finite number of particles, which makes things worse.

No, I calculated it for every particle, and therefore I calculated it on an infinite number of particles. Obviously my calculations only considered one particle at any time, but the same could be said of calculations in problems involving finitely many particles.

Replies from: Vaniver
comment by Vaniver · 2011-09-28T16:26:02.170Z · LW(p) · GW(p)

Suppose you have a system of N+1 particles.

You calculate the gravitational force on the first particle. There are N forces on it; you now have N unpaired forces.

Now you calculate the gravitational force on the second particle. There are N forces on it; you now have 2N-2 unpaired forces (you added N more forces, but one of the new ones and one of the old ones are paired).

Then you calculate the gravitational force on the third particle. There are N forces on it; you now have 3N-6 unpaired forces (you added N more forces, and 3 choose 2 are paired).

The number of unpaired forces will grow until it peaks and then shrinks down to zero (as the number of forces that pair up will eventually overcome the added forces).

What happens if you take this approach with an infinite number of particles? Well, at each moment you add an infinite number of new forces, and only a finite number pair up. Thus, as soon as you start the calculations you have an infinite number of unpaired forces and that number only grows.

The right way to do this is the buddy system: never let a force enter the calculations without its reverse force. To do this, you add particles to your consideration one by one. The number of considered force pairs is N choose 2, and you never have unpaired forces. You let the number of considered particles grow to infinity, and it still works.

Replies from: benelliott
comment by benelliott · 2011-09-28T18:47:54.708Z · LW(p) · GW(p)

What happens if you take this approach with an infinite number of particles? Well, at each moment you add an infinite number of new forces, and only a finite number pair up. Thus, as soon as you start the calculations you have an infinite number of unpaired forces and that number only grows.

It certainly doesn't grow; it becomes infinity and stays there.

Okay, this is getting into set theory, but it's actually the case that adding infinitely many things and then taking finitely many away can eventually leave you with zero.

Suppose at step n you add the following numbers: 2n-1, 2(2n-1), 4(2n-1), ..., (2^k)(2n-1) for all k. Then you take away the number n.

This procedure adds infinitely many at every step, and only takes one away, so by your argument it should only ever grow and end up as infinity. However, it's pretty clear that every number gets added once and then taken away once, so we end up with nothing.
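The bookkeeping can be checked mechanically (a sketch: every k factors uniquely as 2^j * (2i - 1), so k is added at step i and removed at step k, with i <= k always):

    def add_step(k):
        # Step at which k is added: strip factors of 2 to get the odd part 2i-1.
        while k % 2 == 0:
            k //= 2
        return (k + 1) // 2

    # Every k is added at some step <= k and removed exactly at step k, so in
    # the limit nothing survives, despite infinitely many additions per step.
    assert all(add_step(k) <= k for k in range(1, 10001))
    print("checked k = 1..10000")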

I think it's pretty clear that a similar thing is going on here.

Again, I agree that the limits are being done wrong; I just disagree with your description of the N vs N+1 thing.

Replies from: Vaniver, JoshuaZ
comment by Vaniver · 2011-09-28T19:08:10.184Z · LW(p) · GW(p)

It certainly doesn't grow; it becomes infinity and stays there.

Okay, this is getting into set theory, but it's actually the case that adding infinitely many things and then taking finitely many away can eventually leave you with zero.

It's not clear to me what you mean by "actually the case." I agree that it could be the convention among set theorists that things work that way, and that there may even be benefits from taking that approach. In my experience, it only serves to foster confusions about infinity.

Consider a simple version of your scenario: start off with a ball labeled 1 in an urn of infinite capacity. Add two balls labeled with the two integers immediately above the largest label in the urn, then remove the ball with the smallest label from the urn. Repeat those two steps an infinite number of times. It is clear that every time you perform the two steps, the number of balls in the urn increases by one, but also that for any individual ball, at some point it will get removed from the urn. I disagree with the conclusion that the set of balls in the urn is empty after an infinite number of steps; I would rather call it ill-defined, as it should have both no members and an infinite size. (If you don't paint numbers on the balls, and just add two balls then remove one, it's obvious the urn contains an infinite number of balls after an infinite number of steps. Why would painting numbers on balls make them disappear from the urn? Because you run out of numbers?)

I just disagree with your description of the N vs N+1 thing.

How would you describe it?

Replies from: benelliott, lessdazed
comment by benelliott · 2011-09-28T20:16:52.652Z · LW(p) · GW(p)

How would you describe it?

I would call it the mistake of taking a theorem about Newtonian physics that was proven by assuming finitely many objects, and generalising it to infinitely many (which would be allowed by limits, but this isn't using limits).

(If you don't paint numbers on the balls, and just add two balls then remove one, it's obvious the urn contains an infinite number of balls after an infinite number of steps. Why would painting numbers on balls make them disappear from the urn? Because you run out of numbers?)

I would say that if you don't paint numbers on the balls you could end up with 0, or infinity, or something else, depending on how you do it. If you do it randomly you do have a probability 1 of ending up with infinity, but if you're careful you can make it less.

Replies from: Vaniver
comment by Vaniver · 2011-09-28T21:10:22.573Z · LW(p) · GW(p)

I would call it the mistake of taking a theorem about Newtonian physics that was proven by assuming finitely many objects, and generalising it to infinitely many (which would be allowed by limits, but this isn't using limits).

I'm confused. Are you saying that the CoM would accelerate in this situation, and thus the proof that CoMs only accelerate under external forces does not apply to this situation, or are you saying that this counterexample is misconstructed, and the CoM would not accelerate?

Replies from: benelliott
comment by benelliott · 2011-09-28T21:30:40.037Z · LW(p) · GW(p)

I thought I was saying the former, although I'm no longer actually sure.

comment by lessdazed · 2011-09-28T19:30:21.908Z · LW(p) · GW(p)

Perhaps someone should start a discussion thread in which any mention of infinity or its synonyms is forbidden.

Replies from: benelliott
comment by benelliott · 2011-09-28T20:17:31.819Z · LW(p) · GW(p)

Why would you want to?

comment by JoshuaZ · 2011-09-28T19:00:31.114Z · LW(p) · GW(p)

Suppose at step n you add the following numbers: 2n-1, 2(2n-1), 4(2n-1), ..., (2^k)(2n-1) for all k. Then you take away the number n.

This procedure adds infinitely many at every step, and only takes one away, so by your argument it should only ever grow and end up as infinity. However, it's pretty clear that every number gets added once and then taken away once, so we end up with nothing.

Minor note: this is the Ross-Littlewood paradox.

Replies from: benelliott
comment by benelliott · 2011-09-28T20:12:36.201Z · LW(p) · GW(p)

Technically speaking it's a slight modification, since usually the paradox involves adding a finite number then taking away 1, rather than infinitely many.

comment by ArisKatsaris · 2011-09-27T22:15:01.290Z · LW(p) · GW(p)

I didn't downvote you; however, both your objections were rather beside the point. One doesn't deal with the issue of Newtonian mechanics, and whether it was indeed as obviously flawed from the beginning -- it feels a bit like explaining the paradox of Achilles and the tortoise by saying we're living in an integer universe. That may be true, but it feels a rather weak dodging-the-issue explanation.

As for the second answer, it doesn't really state where the calculations in the presentation of the paradox are wrong or are missing something. It just says that they are, so it's a non-solution.

Replies from: Vaniver
comment by Vaniver · 2011-09-27T23:06:07.905Z · LW(p) · GW(p)

As for the second answer, it doesn't really state where the calculations in the presentation of the paradox are wrong or are missing something. It just says that they are, so it's a non-solution.

Maybe this is because I was a physics student, but to me the missing pieces implied by the second answer are so obvious as to make it not worth my time to type them and yours to read them. Apparently I was mistaken, so here they are.

By the principle of superposition, we can break down an N-body problem into (N choose 2) 2-body problems. Consider the mass M and the mass m, a distance R away from each other (with m lighter and at the leftward position x). Their CoM is at x + RM/(M+m). The force F_Mm on m, leading to the acceleration a_m, is GMm/R^2, giving a_m = GM/R^2. The force F_mM on M, leading to the acceleration a_M, is -GMm/R^2, giving a_M = -Gm/R^2 (these are negative because M is being pulled leftward). To determine the acceleration of the center of mass, we calculate m*a_m + M*a_M = GMm/R^2 - GMm/R^2 = 0. The CoM of that pair will not move due to forces exerted by that pair. This is independent of M, m, and R. When we add a third mass, that consists of adding two new pair systems, each of which has a CoM acceleration of 0. This can be continued up to arbitrarily high N.
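The pair calculation can be checked symbolically (a minimal sketch using sympy; the symbols mirror the prose above):

    import sympy as sp

    G, M, m, R = sp.symbols('G M m R', positive=True)
    a_m = G * M / R ** 2    # acceleration of the lighter mass m (rightward)
    a_M = -G * m / R ** 2   # acceleration of the heavier mass M (leftward)
    # Numerator of the pair's CoM acceleration:
    print(sp.simplify(m * a_m + M * a_M))   # 0, independently of M, m, and R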

That may be true, but it feels a rather weak dodging-the-issue explanation.

Whenever there's a paradox that mentions infinity, delete the infinity and see if the paradox still exists. Odds are very high it won't. A lot of diseased mathematical thinking is the result of not being clear with how you take limits, and so when I find a "paradox" that disappears when you get rid of infinity, that's enough for me to drop the problem.

Replies from: JoshuaZ
comment by JoshuaZ · 2011-09-28T02:55:37.600Z · LW(p) · GW(p)

Whenever there's a paradox that mentions infinity, delete the infinity and see if the paradox still exists. Odds are very high it won't. A lot of diseased mathematical thinking is the result of not being clear with how you take limits, and so when I find a "paradox" that disappears when you get rid of infinity, that's enough for me to drop the problem.

This seems like an unproductive attitude. If everyone took this attitude, they would never have hammered out the problems in calculus in the 19th century. And physicists would probably never have discovered renormalization in the 20th century.

A better approach is to try to define rigorously what is happening with the infinities. When you try that, either it becomes impossible (that is, there's something that seems intuitively definable that isn't definable), or one approach turns out to be correct, or you discover a hidden ambiguity in the problem. In any of those cases one learns a lot more than simply saying that there's an infinity so one can ignore the problem.

Replies from: Vaniver
comment by Vaniver · 2011-09-28T03:57:26.145Z · LW(p) · GW(p)

This seems like an unproductive attitude. If everyone took this attitude, they would never have hammered out the problems in calculus in the 19th century. And physicists would probably never have discovered renormalization in the 20th century.

I will make sure to not spread that opinion in the event that I travel back in time.

A better approach is to try to define rigorously what is happening with the infinities. When you try that, either it becomes impossible (that is, there's something that seems intuitively definable that isn't definable), or one approach turns out to be correct, or you discover a hidden ambiguity in the problem. In any of those cases one learns a lot more than simply saying that there's an infinity so one can ignore the problem.

I agree with you that this is a better approach. However, the problem in question is "find the error", not "how conservation of momentum works", and so as soon as you realize "hm, they're not treating this infinity as a limit", then the error is found, the problem is solved, and your curiosity should have annihilated itself.

comment by [deleted] · 2011-09-29T20:12:44.318Z · LW(p) · GW(p)

So by now Nelson's outline has been challenged by the formidable Terry Tao, and Nelson (himself formidable!) has responded to this challenge and isn't budging. Link.

The FTL thread has attracted many confident predictions about the ultimate outcome. But this one hasn't. Is this because people find the subject less interesting? Or because they are less confident?

For what it's worth, here's the timeline of my thoughts/beliefs, in silly internal-monologue form. Maybe the numbers shouldn't be taken too seriously, and I'm not trying to bait anyone into betting 200 grand against my 10 dollars, but I'd be interested to hear what people have been thinking as (or if) they've been following this.

A few days ago: "Contrary to almost everybody, I don't think there is any reason to believe PA is consistent. Someone, e.g. Nelson or the much younger Voevodsky, might show it is inconsistent in my lifetime." P approx 5%.

Nelson's announcement: "Holy shit he's done it." P approx 40%

Tao's challenge: "Oh, I guess not, poor guy how embarrassing." P back down to 5%

Nelson neither ignores nor caves to Tao: "Beats me, maybe he's done it after all." P now maybe at 10%.

Replies from: JoshuaZ, benelliott
comment by JoshuaZ · 2011-09-30T12:39:09.561Z · LW(p) · GW(p)

I've put up a relevant prediction on Prediction Book. The main reason I've given only a 75% chance is that it may take a lot of time to study the matter in detail.

If you are putting 10% on the chance that Nelson's proof will turn out to be correct, then 10% seems way too high, for essentially the reasons that benelliott touched on below. My own estimate for this being correct is around 0.5%. (That's sort of a meta-estimate which tries to take into account my own patterns of over- and underconfidence for different types of claims.) I'd be willing to bet $100 to $1, which would be well within both of our stated estimates.

Replies from: None, benelliott
comment by [deleted] · 2011-09-30T23:34:28.747Z · LW(p) · GW(p)

I've put up a relevant prediction on Prediction Book. The main reason I've given only a 75% chance is that it may take a lot of time to study the matter in detail.

Interesting. Has your prior for "a contradiction will be found in the next hundred years" moved since Nelson's announcement?

My 5/40/5/10 numbers are (were) my estimation of the chance that a contradiction will be found in my lifetime. I'm far less certain that Nelson will have it sewed up in a few months, as he's claiming, but it's still quite likely he's made a big breakthrough.

Replies from: JoshuaZ
comment by JoshuaZ · 2011-10-01T22:58:59.251Z · LW(p) · GW(p)

Oh. I see. I thought your numbers were for Nelson's proof working. Hmm, given that, they seem more reasonable but still too high. I'd need to think more carefully about how likely I'd estimate things.

comment by benelliott · 2011-09-30T21:27:01.414Z · LW(p) · GW(p)

I'd be willing to bet $100 to $1, which would be well within both of our stated estimates.

Depending on the exact conditions, I might be prepared to take that.

comment by benelliott · 2011-09-30T12:09:49.595Z · LW(p) · GW(p)

The issue is mainly that I don't think I'm qualified to determine the outcome of the debate: Nelson and Tao are both vastly better at maths than me, so I don't have much to go on. I suspect others are in a similar predicament.

I've been trying to understand their conversation, and if I understand correctly Tao is right and Nelson has made a subtle and quite understandable error. But I also estimate that P(my understanding is correct) is not very large, certainly < 0.5, so this doesn't help much, and even if I am right there could easily be something I'm missing.

I would also assign this theorem quite a low prior. Compare it to P != NP: when a claimed proof of that comes out, mathematicians are usually highly sceptical (and usually right), even if the author is a serious mathematician like Deolalikar rather than a crank. Another example that springs to mind is Cauchy's failed proof of Fermat's last theorem (even though that was eventually proven, I still think a low prior was justified in both cases).

This, if correct, would be a vastly bigger result than either of those. I don't think it would be an exaggeration to call it the single most important theorem in the history of mathematics, so I think it deserves a much lower prior than them. Even more so since, in the case of P != NP, most mathematicians at least think it's true, even if they are sceptical of most proofs; in this case most mathematicians would probably be happy to bet against it (that counts for something, even if you disagree with them).

I don't have much money to lose at this point in my life, but I'd be happy to bet $50 to $1 that this is wrong.

Replies from: None, None
comment by [deleted] · 2011-09-30T23:37:04.817Z · LW(p) · GW(p)

I think there's a salient difference between this and P = NP or other famous open problems. P = NP is something that thousands of people are working on and have worked on over decades, while "PA is inconsistent" is a much lonelier affair. A standard reply is that every time a mathematician proves an interesting theorem without encountering a contradiction in PA, he has given evidence for the consistency of PA. For various reasons I don't see it that way.

Same question as for JoshuaZ: has your prior for "a contradiction in PA will be found within a hundred years" moved since Nelson's announcement?

Replies from: benelliott
comment by benelliott · 2011-10-01T00:18:09.587Z · LW(p) · GW(p)

has your prior for "a contradiction in PA will be found within a hundred years" moved since Nelson's announcement?

Yes, obviously P(respectable mathematician claims a contradiction | a contradiction exists) > P(respectable mathematician claims a contradiction | no contradiction exists), so it has definitely moved my estimate.

Like yours, it also moved back down when Tao responded, back up a bit when Nelson responded to him, and back down a bit more when Tao responded to him and I finally managed a coherent guess at what they were talking about.

I think there's a salient difference between this and P = NP or other famous open problems. P = NP is something that thousands of people are working on and have worked on over decades, while "PA is inconsistent" is a much lonelier affair.

I'm not sure this is an important difference. I think scepticism about P != NP proofs might well be just as valid even if far fewer people were working on it. If anything it would be more valid: lots of failed proofs give you lots of chances to learn from the mistakes of others, as well as letting you avoid routes which others in the field have proven not to work. Furthermore, the fact that huge numbers of mathematicians work on P vs NP but have never claimed a proof suggests a selection effect in favour of those who do claim proofs, which is absent in the case of inconsistency.

Furthermore, not wanting to be unfair to Nelson, but the fact that he's working alone on a task most mathematicians consider a waste of time may suggest a substantial ideological axe to grind (what I have heard of him supports this theory), and sadly it is easier to come up with a fallacious proof for something when you want it to be true.

I'm not sure if this line of debate is a productive one; the issue will be resolved one way or the other by actual mathematicians doing actual maths, not by you and me debating about priors (to put it another way, whatever the answer ends up being, this conversation will have been wasted time in retrospect).

Replies from: None
comment by [deleted] · 2011-10-01T00:30:07.879Z · LW(p) · GW(p)

Yes, obviously P(respectable mathematician claims a contradiction | a contradiction exists) > P(respectable mathematician claims a contradiction | no contradiction exists), so it has definitely moved my estimate.

Can you roughly quantify it? Are we talking from million-to-one to million-to-one-point-five, or from million-to-one to hundred-to-one?

I'm not sure if this line of debate is a productive one; the issue will be resolved one way or the other by actual mathematicians doing actual maths, not by you and me debating about priors (to put it another way, whatever the answer ends up being, this conversation will have been wasted time in retrospect).

Sorry if I gave you a bad impression: I am not trying to start a debate in any adversarial sense. I am just curious.

Furthermore, not wanting to be unfair to Nelson, but the fact that he's working alone on a task most mathematicians consider a waste of time may suggest a substantial ideological axe to grind (what I have heard of him supports this theory), and sadly it is easier to come up with a fallacious proof for something when you want it to be true.

Of that there's no doubt, but it speaks well of Nelson that he's apparently resisted the temptation toward self-deceit for decades, openly working on this problem the whole time.

Replies from: benelliott
comment by benelliott · 2011-10-01T06:37:14.388Z · LW(p) · GW(p)

Can you roughly quantify it? Are we talking from million-to-one to million-to-one-point-five, or from million-to-one to hundred-to-one?

The announcement came as a surprise, so the update wasn't negligible. I probably wouldn't have gone as low as million-to-one before, but I might have been prepared to estimate a 99.9% chance that arithmetic is consistent. However, I'm not quite sure how much of this change is a Bayesian update and how much is the fact that I got a shock and thought about the issue a lot more carefully.

comment by [deleted] · 2011-09-30T20:00:48.560Z · LW(p) · GW(p)

Cauchy? Don't you mean Lamé?

Anyway, if you lose the bet then 1 = 0, so you don't need to transfer any money.

Replies from: benelliott
comment by benelliott · 2011-09-30T21:11:39.589Z · LW(p) · GW(p)

Cauchy? Don't you mean Lamé?

From what I gathered, both went down a similar route at the same time, although it is possible that I am mistaken in this. I named Cauchy as he seems to be the better-known of the two, and therefore the one with the highest absolutely-positively-not-just-some-crank factor.

Anyway, if you lose the bet then 1 = 0, so you don't need to transfer any money.

If I lose the bet (not that anyone has yet agreed to accept the bet), then payment will be based on restricted number theory without the axiom of induction.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2011-09-30T20:21:01.749Z · LW(p) · GW(p)

And here I thought there wasn't anything besides c I'd bet on at 99-to-1 odds.

Two! Two things in the universe I'd bet on at 99-to-1 odds. Though I'm not actually going to do it for more than say $2k if anyone wants it, since I don't bet more than I have, period.

Replies from: benelliott
comment by benelliott · 2011-09-30T21:07:56.853Z · LW(p) · GW(p)

Would you bet at 99-to-1 odds that I will not win the lottery tomorrow?

comment by XiXiDu · 2011-09-27T15:16:14.207Z · LW(p) · GW(p)

Assume for a second that FTL communication is possible and that PA is inconsistent. How could this possibly influence a proof of AI friendliness that was invented before those discoveries were made, and how can one make an AI provably friendly given other possible fundamental errors in our understanding of physics and mathematics?

Replies from: Jack, JoshuaZ
comment by Jack · 2011-09-27T19:11:51.932Z · LW(p) · GW(p)

We'd have to know what a proof of friendliness looks like to answer this.

comment by JoshuaZ · 2011-09-27T17:49:31.976Z · LW(p) · GW(p)

So this is an interesting set of questions. I don't think FTL issues should matter in this context. Inconsistency in PA is more of an issue. I'm not terribly worried about an actual inconsistency in PA. So much goes wrong if that's the case that AI issues might actually be small (no really. It could be that bad.). That said, however, this sort of worry makes a lot more sense for proofs in ZF or ZFC. I'd be a lot more worried about a contradiction showing up in ZF than in PA. So one might want to make sure that any such proofs use as small a segment of ZF as possible.

Replies from: None
comment by [deleted] · 2011-09-28T04:16:46.306Z · LW(p) · GW(p)

I'm not terribly worried about an actual inconsistency in PA. So much goes wrong if that's the case that AI issues might actually be small (no really. It could be that bad.)

That is an understandable reaction. But since encountering Nelson's ideas a while ago, I've found myself thinking less and less that it would be that bad at all. There is no very strong evidence that completed infinities need to be grappled with at a physical or real-world level. The main "finitary" thing you lose without PA is superexponentiation. But that shows up surprisingly rarely, in the universe and in math. (Or maybe not so surprisingly!)

Interestingly, superexponentiation shows up very often on Less Wrong. Maybe that's a good answer to XiXiDu's question. If Nelson is right, then people should stop using 3^^^3 in their thought experiments.

Replies from: ArisKatsaris, JoshuaZ
comment by ArisKatsaris · 2011-09-28T12:34:22.600Z · LW(p) · GW(p)

If Nelson is right, then people should stop using 3^^^3 in their thought experiments.

...I don't understand how this can be. 3^^3 is just 3^(3^3) = 3^27 = 7625597484987. And 3^^^3 is just (3^(3^(3^(... 7625597484987 times ...)))).

Superexponentiation is just made of exponentiation many times. And exponentiation is made of multiplication, and multiplication is made of addition.
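That recursion, directly as code (a minimal sketch; note that the next arrow level up is well-defined by the same recursion but hopelessly beyond computing):

    def hyper(a, arrows, b):
        # Knuth up-arrow: one arrow is exponentiation, and each extra arrow
        # iterates the level below, b times.
        if arrows == 1:
            return a ** b
        result = 1
        for _ in range(b):
            result = hyper(a, arrows - 1, result)
        return result

    print(hyper(3, 2, 3))   # 3^^3 = 3^(3^3) = 3^27 = 7625597484987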

How can superexponentiation be made invalid without making invalid even normal addition?

Replies from: None, JoshuaZ
comment by [deleted] · 2011-09-28T12:43:25.137Z · LW(p) · GW(p)

I can say more if you want, but maybe see here and here for an explanation.

comment by JoshuaZ · 2011-09-28T13:42:21.208Z · LW(p) · GW(p)

Any finite calculation of superexponentiation will be valid. But as I understand it, you can't, in Nelson's formulation, prove in general that superexponentiation is well-defined.

comment by JoshuaZ · 2011-09-28T12:34:16.328Z · LW(p) · GW(p)

My understanding is that Nelson doesn't necessarily believe in exponentiation either, although his work here doesn't have the direct potential to wreck that.

The real issue isn't PA as much as the fact that there are a very large number of theorems that we use regularly that live in axiomatic systems that model PA in a nearly trivial fashion. For example, ZFC minus replacement, choice and foundation. Even much weaker systems, where one replaces the power set axiom with some weaker axiom, allow you to construct PA. For example, replace the power set axiom with the claim that for any two sets A and B their Cartesian product always exists; even this is much stronger than you need. Similarly, you can massively reduce the allowed types of input sets and predicates in the axiom schema of specification and still have a lot more than you need. You can, for example, restrict the allowed predicates to at most three quantifiers and still be ok (this may be restrictable to even just two quantifiers, but I haven't worked out the details).

And what Nelson's proof (if valid) breaks is weaker than PA, which means that, if anything, the situation is worse.

It may be, for example, that any axiomatic system which lets us talk about both the real numbers and the integers allows Nelson's problem. Say, for example, one has an axiomatic system which includes Nelson's extreme finitary version of the natural numbers, the very minimal amount of set theory necessary to talk about sets, second-order reals, and an axiom embedding your "natural numbers" into the reals. Can you do this without getting all sorts of extra stuff thrown in? I don't know, and this looks like an interesting question from a foundational perspective. It may depend a lot on which little bits of set theory you allow. But I'm going to guess that the answer is no. I'll need to think about this a bit more to make the claim more precise.

Replies from: None
comment by [deleted] · 2011-09-29T02:25:10.660Z · LW(p) · GW(p)

My understanding is that Nelson doesn't necessarily believe in exponentiation either, although his work here doesn't have the direct potential to wreck that.

I know this isn't the main point of your comment but I got carried away looking this stuff up.

In his 1986 book, he defines a predicate exp(x,k,f) which basically means "f is a function defined for i < k and f(i) = x^i". And he defines another predicate epsilon(k) which is "for all x there exists f such that exp(x,k,f)". I think you can give a proof of epsilon(k) in his regime in O(log log k) steps. In standard arithmetic that takes O(1) steps using induction, but O(log log k) is pretty cheap too if you're not going to iterate exponentiation. So that's a sense in which Nelson does believe in exponentiation.

On the other hand I think it's accurate to say that Nelson doesn't believe that exponentiation is total, i.e. that for all n, epsilon(n). Here's an extract from Nelson's "Predicative Arithmetic" (Princeton U Press 1986). Chapter heading: "Is exponentiation total?"

Why are mathematicians so convinced that exponentiation is total (everywhere defined)? Because they believe in the existence of abstract objects called numbers. What is a number? Originally, sequences of tally marks were used to count things. Then positional notation -- the most powerful achievement of mathematics -- was invented. Decimals (i.e. numbers written in positional notation) are simply canonical forms for variable-free terms of arithmetic. It has been universally assumed, on the basis of scant evidence, that decimals are the same kind of thing as sequences of tally marks, only expressed in a more practical and efficient notation. This assumption is based on the semantic view of mathematics, in which mathematical expressions, such as decimals and sequences of tally marks, are regarded as denoting abstract objects. But to one who takes a formalist view of mathematics, the subject matter of mathematics is the expressions themselves together with the rules for manipulating them -- nothing more. From this point of view, the invention of positional notation was the creation of a new kind of number.

How is it then that we can continue to think of the numbers as being 0, S0, SS0, SSS0, ...? The relativization scheme of Chapter 5 [heading: "Induction by relativization"] explains this to some extent. But now let us adjoin exponentiation to the symbols of arithmetic. Have we again created a new kind of number? Yes. Let b be a variable-free term of arithmetic. To say that the expression 2^b is just another way of expressing the variable-free term of arithmetic, SS0 SS0 ... SS0 with b occurrences of SS0, is to assume that b denotes something that is also denoted by a sequence of tally marks. (The notorious three dots are a direct carry-over from tally marks.) The situation is worse for expressions of the form 2^2^b -- then we need to assume that 2^b itself denotes something that is also denoted by a sequence of tally marks.

Mathematicians have always operated on the unchallenged assumption that it is possible in principle to express 2^b as a numeral by performing the recursively indicated computations. To say that it is possible in principle is to say that the recursion will terminate in a certain number of steps -- and this use of the word "number" can only refer to the primitive notion; the steps are things that are counted by a sequence of tally marks. In what number of steps will the recursion terminate? Why, in somewhat more than 2^b steps. The circularity in the argument is glaringly obvious.

Replies from: benelliott
comment by benelliott · 2011-09-30T15:32:11.824Z · LW(p) · GW(p)

In what number of steps will the recursion terminate? Why, in somewhat more than 2^b steps. The circularity in the argument is glaringly obvious.

This argument applies just as well to addition and multiplication.

Replies from: None, None
comment by [deleted] · 2011-09-30T20:04:31.033Z · LW(p) · GW(p)

See Nelson's paper, where he talks about the qualitative difference between types of recursion.

comment by [deleted] · 2011-10-01T00:15:52.437Z · LW(p) · GW(p)

I think I can explain that. Nelson doesn't believe in any kind of Platonic number, but he does believe in formal number systems and that they can sometimes accurately describe the world. We have essentially two number systems: the tally-mark system (e.g. SSSSS0 and so on), and the positional system (e.g. 2011 and so on). Both of these are successful at modeling aspects of the world.

Most people regard these two number systems as equivalent in some sense. Specifically, there is a procedure for rewriting a positional number as a tally number, and most people believe that this procedure terminates "at least in principle." (For instance, one can formally prove this in PA using the induction axiom.) But Nelson denies it. He believes that the positional number system and the tally number system are different, and that the positional number system is strictly stronger. I think your remark --

This argument applies just as well to addition and multiplication.

-- applies to the tally system, but not to the positional system. But Nelson does not believe that the positional system is inconsistent. So he is happy to adjoin axioms to the tally system that allow for working with addition and multiplication.
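A toy illustration of that asymmetry (my own framing, not Nelson's): rewriting a positional numeral as tally marks produces output exponentially longer than the input, so the claimed equivalence of the two systems smuggles in an enormous computation.

    def tally_from_positional(numeral: str) -> str:
        # The input is len(numeral) symbols long; the output is int(numeral)
        # symbols long -- exponential in the length of the input.
        return "|" * int(numeral)

    print(len(tally_from_positional("9")))    # 9 marks for a 1-symbol input
    print(len(tally_from_positional("999")))  # 999 marks for a 3-symbol input
    # A 70-digit numeral would need more marks than there are atoms in the
    # observable universe, yet it fits on one line in positional notation.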

Replies from: benelliott
comment by benelliott · 2011-10-01T00:28:46.651Z · LW(p) · GW(p)

My knowledge of complexity theory is very limited, but I thought exponentiation could be done in polynomial time. Is that incorrect, or am I misunderstanding Nelson's argument? (Please don't tell me he also doubts complexity theory; we must be allowed some fun :P)

EDIT: Yes, I was wrong; the claim of polynomial time in the length of b may well be too strong, but it does seem to use fewer than 2^b steps. I think I can use the positional system to compute 2^b in O(b^2) steps, which seems to lead us straight from accepting multiplication to accepting exponentiation.
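A rough sketch of that estimate (my own code, just to make the counting explicit): each doubling of a numeral with at most b digits costs O(b) single-digit operations, and reaching 2^b takes b doublings, so the whole computation stays within O(b^2) digit operations -- as long as everything is kept in positional notation.

    def power_of_two_cost(b: int):
        # Compute 2^b by repeated doubling in decimal positional notation,
        # counting single-digit operations; the count grows roughly like b^2.
        digits = [1]   # little-endian decimal digits, starting from 1
        ops = 0
        for _ in range(b):                 # b doublings
            carry = 0
            for i in range(len(digits)):   # O(digit count) work per doubling
                carry, digits[i] = divmod(2 * digits[i] + carry, 10)
                ops += 1
            if carry:
                digits.append(carry)
        return int("".join(map(str, reversed(digits)))), ops

    value, ops = power_of_two_cost(64)
    assert value == 2**64
    print(ops)  # a few hundred digit operations, nowhere near 2^64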

Replies from: None
comment by [deleted] · 2011-10-01T01:14:25.278Z · LW(p) · GW(p)

I think so, in positional notation. That's what I was alluding to with my O(log log k) comment. But there is no polynomial-time algorithm to reduce 2^n to a tally number (obviously, since the output alone is exponentially long).
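For reference, the standard square-and-multiply algorithm (nothing Nelson-specific) is what makes exponentiation cheap in positional notation: O(log k) multiplications, though the bit-level cost still grows with the size of the result.

    def fast_pow(x: int, k: int) -> int:
        # Square-and-multiply: computes x^k with O(log k) multiplications.
        result = 1
        while k:
            if k & 1:       # odd exponent: fold in one factor of x
                result *= x
            x *= x          # square the base
            k >>= 1         # halve the exponent
        return result

    assert fast_pow(3, 20) == 3**20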

I once read something by Nelson about the polynomial hierarchy in complexity theory, but I can't find it now.

comment by [deleted] · 2011-09-27T14:16:10.837Z · LW(p) · GW(p)

This aspect is very interesting:

Qea. If this were normal science, the proof that P is inconsistent could be written up rather quickly. But since this work calls for a paradigm shift in mathematics, it is essential that all details be developed fully. At present, I have written just over 100 pages beginning this. The current version is posted as a work in progress at http://www.math.princeton.edu/~nelson/books.html, and the book will be updated from time to time. The proofs are automatically checked by a program I devised called qea (for quod est absurdum, since all the proofs are indirect). Most proof checkers require one to trust that the program is correct, something that is notoriously difficult to verify. But qea, from a very concise input, prints out full proofs that a mathematician can quickly check simply by inspection. To date there are 733 axioms, definitions, and theorems, and qea checked the work in 93 seconds of user time, writing to files 23 megabytes of full proofs that are available from hyperlinks in the book.

Replies from: None, vi21maobk9vp, benelliott
comment by [deleted] · 2011-09-28T00:47:06.143Z · LW(p) · GW(p)

It seems easier to me to simply trust Coq than to read through 23 megabytes of proofs by eye. But I'm not entirely certain what the purpose of qea is.

Replies from: vi21maobk9vp, None
comment by vi21maobk9vp · 2011-09-28T09:21:39.412Z · LW(p) · GW(p)

It looks like Qea does verify the proof and generate human-readable text. In that case, my trust in it or Coq would be based on their relative size, adjusted for the amount of testing these tools have received. Coq uses a non-trivially unique proof system (the calculus of (co)inductive constructions, if I remember correctly), which counts against it. Did anyone look at what proof system Qea uses?

comment by [deleted] · 2011-09-28T02:20:26.130Z · LW(p) · GW(p)

Yeah, I've heard about this "notorious difficulty" in verifying proof checkers before, and I don't understand what it could mean. Humans are way more unverifiable.

I bet the real purpose of qea is that it allowed its author to avoid learning to use something else. But I find it interesting that he recognizes the importance of computer verification for something like this, and maybe it indicates that this has been in the works for a while. It doesn't have to be half-baked to be wrong, though.

Replies from: benelliott
comment by benelliott · 2011-09-28T08:39:14.928Z · LW(p) · GW(p)

Yeah, I've heard about this "notorious difficulty" in verifying proof checkers before, and I don't understand what it could mean. Humans are way more unverifiable.

I think the idea is that, for a point this controversial, he is well aware that mathematicians may actually object to his trustworthiness (they've objected to less questionable things in the past) and will want to verify the proof for themselves. I think he may well be right about this. However, I don't see why he can't give a full explanation for humans as well (his current paper isn't one), since this would probably be finished sooner and would probably save a lot of his own time if there is a mistake.

comment by vi21maobk9vp · 2011-09-27T20:05:56.435Z · LW(p) · GW(p)

Actually, I would be more interested in the size of the checking core of Qea. Quickly inspecting 23 megabytes of proofs sounds like a bad idea. A converter to the TSTP first-order proof format (for example) would let me use a few of the well-tested, bug-for-bug-independent checkers for that format that are already out there; I would trust those more than both human inspection and any single-implementation prover.

comment by benelliott · 2011-09-27T14:53:40.176Z · LW(p) · GW(p)

To date there are 733 axioms

How do you get to 733 axioms? Maybe I'm being stupid, but doesn't PA run on just 5?

Replies from: Sniffnoy, None, ArisKatsaris
comment by Sniffnoy · 2011-09-28T02:42:40.385Z · LW(p) · GW(p)

Strictly speaking, PA uses infinitely many axioms -- the induction axiom is actually an axiom schema, one axiom for each predicate you can plug into it. If you actually had it as one axiom quantifying over predicates, that would be second-order.
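For reference, the schema contributes one axiom per first-order formula \varphi in the language of arithmetic (standard statement):

    \bigl( \varphi(0) \wedge \forall n \,( \varphi(n) \rightarrow \varphi(Sn) ) \bigr) \rightarrow \forall n \,\varphi(n)

Since there are infinitely many formulas, there are infinitely many instances.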

Replies from: None
comment by [deleted] · 2011-09-28T03:03:07.385Z · LW(p) · GW(p)

I think that Nelson denies that there is a completed infinity of predicates that you can plug into the schema.

Replies from: Sniffnoy
comment by Sniffnoy · 2011-09-28T03:57:44.311Z · LW(p) · GW(p)

Well, you'd certainly only need finitely many to prove inconsistency.

comment by [deleted] · 2011-09-27T15:00:50.580Z · LW(p) · GW(p)

I think 733 is counting the axioms, definitions, and theorems combined.

Replies from: benelliott
comment by benelliott · 2011-09-27T15:07:30.480Z · LW(p) · GW(p)

That would explain it.

comment by ArisKatsaris · 2011-09-27T14:59:29.168Z · LW(p) · GW(p)

It said "733 axioms, definitions, and theorems"

I'm guessing 733 is the sum of the axioms, definitions, and theorems, not just the axioms alone.

comment by false_vacuum · 2011-10-07T18:38:03.752Z · LW(p) · GW(p)

Here's a summary and discussion of the affair on a philosophy of mathematics blog, with a historical comparison to the Gödel results and their reception (as well as comments from several luminaries, and David Chalmers). The blog's authors seem to take the position that the reasons for consensus in the mathematical community are mysterious. (It is admitted that "arguably, it cannot be fully explained as a merely socially imposed kind of consensus, due to homogeneous 'indoctrination' by means of mathematical education.") This is a subject that needs to be discussed more on LW, in my opinion.