The Crackpot Offer

post by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2007-09-08T14:32:49.000Z · score: 52 (53 votes) · LW · GW · Legacy · 72 comments

When I was very young—I think thirteen or maybe fourteen—I thought I had found a disproof of Cantor’s Diagonal Argument, a famous theorem which demonstrates that the real numbers outnumber the rational numbers. Ah, the dreams of fame and glory that danced in my head!

My idea was that since each whole number can be decomposed into a bag of powers of 2, it was possible to map the whole numbers onto the set of subsets of whole numbers simply by writing out the binary expansion. The number 13, for example, 1101, would map onto {0, 2, 3}. It took a whole week before it occurred to me that perhaps I should apply Cantor’s Diagonal Argument to my clever construction, and of course it found a counterexample—the binary number (. . . 1111), which does not correspond to any finite whole number.
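The mapping, and the diagonal counterexample that sinks it, can be sketched in a few lines of Python (the function names are mine, not anything from the original argument):

```python
def to_subset(n):
    """Map a whole number to the set of bit positions where its binary expansion has a 1."""
    return {i for i in range(n.bit_length()) if (n >> i) & 1}

# 13 is 1101 in binary, so it maps onto {0, 2, 3}.
assert to_subset(13) == {0, 2, 3}

# Cantor's diagonal set for this enumeration: D = {n : n not in to_subset(n)}.
# D differs from to_subset(n) at position n for every n, so no whole number
# can map onto it. In fact every n lands in D (the highest 1-bit of n always
# sits below position n), so D is the infinite all-ones pattern "...1111":
# the subset exists, but no finite whole number has that binary expansion.
D_prefix = {n for n in range(16) if n not in to_subset(n)}
assert D_prefix == set(range(16))
```

The mapping does work one way: every whole number names a distinct *finite* subset. It is the infinite subsets, which the diagonal construction produces, that escape it.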

So I found this counterexample, and saw that my attempted disproof was false, along with my dreams of fame and glory.

I was initially a bit disappointed.

The thought went through my mind: “I’ll get that theorem eventually! Someday I’ll disprove Cantor’s Diagonal Argument, even though my first try failed!” I resented the theorem for being obstinately true, for depriving me of my fame and fortune, and I began to look for other disproofs.

And then I realized something. I realized that I had made a mistake, and that, now that I’d spotted my mistake, there was absolutely no reason to suspect the strength of Cantor’s Diagonal Argument any more than other major theorems of mathematics.

I saw then very clearly that I was being offered the opportunity to become a math crank, and to spend the rest of my life writing angry letters in green ink to math professors. (I’d read a book once about math cranks.)

I did not wish this to be my future, so I gave a small laugh, and let it go. I waved Cantor’s Diagonal Argument on with all good wishes, and I did not question it again.

And I don’t remember, now, if I thought this at the time, or if I thought it afterward . . . but what a terribly unfair test to visit upon a child of thirteen. That I had to be that rational, already, at that age, or fail.

The smarter you are, the younger you may be, the first time you have what looks to you like a really revolutionary idea. I was lucky in that I saw the mistake myself; that it did not take another mathematician to point it out to me, and perhaps give me an outside source to blame. I was lucky in that the disproof was simple enough for me to understand. Maybe I would have recovered eventually, otherwise. I’ve recovered from much worse, as an adult. But if I had gone wrong that early, would I ever have developed that skill?

I wonder how many people writing angry letters in green ink were thirteen when they made that first fatal misstep. I wonder how many were promising minds before then.

I made a mistake. That was all. I was not really right, deep down; I did not win a moral victory; I was not displaying ambition or skepticism or any other wondrous virtue; it was not a reasonable error; I was not half right or even the tiniest fraction right. I thought a thought I would never have thought if I had been wiser, and that was all there ever was to it.

If I had been unable to admit this to myself, if I had reinterpreted my mistake as virtuous, if I had insisted on being at least a little right for the sake of pride, then I would not have let go. I would have gone on looking for a flaw in the Diagonal Argument. And, sooner or later, I might have found one.

Until you admit you were wrong, you cannot get on with your life; your self-image will still be bound to the old mistake.

Whenever you are tempted to hold on to a thought you would never have thought if you had been wiser, you are being offered the opportunity to become a crackpot—even if you never write any angry letters in green ink. If no one bothers to argue with you, or if you never tell anyone your idea, you may still be a crackpot. It’s the clinging that defines it.

It’s not true. It’s not true deep down. It’s not half-true or even a little true. It’s nothing but a thought you should never have thought. Not every cloud has a silver lining. Human beings make mistakes, and not all of them are disguised successes. Human beings make mistakes; it happens, that’s all. Say “oops,” and get on with your life.

72 comments

Comments sorted by oldest first, as this post is from before comment nesting was available (around 2009-02-27).

comment by Tom_McCabe · 2007-09-08T15:13:52.000Z · score: 37 (36 votes) · LW · GW

"So I found this counterexample, and saw that my attempted disproof was false, along with my dreams of fame and glory."

I know how that feels. When I was 14 or so, I took a course on cryptography, and the textbook proclaimed that modular inverses were the basis of public-key algorithms like RSA. I felt that modular inverses were crackable, and I plodded along on the problem for a few weeks, until I finally discovered a polynomial-time algorithm for computing modular inverses. It turned out that I had reinvented Euclid's algorithm, and the textbook authors were idiots.
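For the curious, the rediscovered algorithm was presumably the extended Euclidean algorithm, which computes modular inverses in time polynomial in the bit-length of the inputs. A textbook Python sketch (not Tom's actual code, of course):

```python
def mod_inverse(a, m):
    """Inverse of a modulo m via the extended Euclidean algorithm.

    Maintains the invariant old_s * a = old_r (mod m); when the remainder
    old_r reaches gcd(a, m) = 1, old_s is the inverse.
    """
    old_r, r = a, m
    old_s, s = 1, 0
    while r != 0:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_s, s = s, old_s - q * s
    if old_r != 1:
        raise ValueError("a and m are not coprime; no inverse exists")
    return old_s % m

# 7 * 23 = 161 = 4 * 40 + 1, so 23 is the inverse of 7 mod 40.
assert mod_inverse(7, 40) == 23
```

That modular inverses are easy is exactly why RSA's security rests elsewhere, on the difficulty of factoring the modulus.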

comment by Baughn · 2012-02-04T17:11:46.881Z · score: 5 (4 votes) · LW · GW

Well, that's a pretty impressive "error" though. :-)

comment by Phil · 2007-09-08T15:57:45.000Z · score: 8 (7 votes) · LW · GW

Not to draw attention away from your main argument, but how does 1101 map onto {0, 2, 3}? It's probably obvious, but I don't see it.

comment by CCC · 2012-10-03T09:35:30.930Z · score: 8 (7 votes) · LW · GW

It's the positions of the ones, starting from position zero on the far right. Similarly, 19 (10011) would map to {0, 1, 4}.

comment by Sebastian_Hagen2 · 2007-09-08T16:17:43.000Z · score: 2 (4 votes) · LW · GW

Phil: Build the set from the used exponents of the powers of two. For instance, 1101[2] = 2**0 + 2**2 + 2**3

comment by Stuart_Armstrong · 2007-09-08T17:48:29.000Z · score: 6 (5 votes) · LW · GW

So I found this counterexample, and saw that my attempted disproof was false, along with my dreams of fame and glory.

Feels familiar - when I was younger, I proved the Poincaré conjecture, and Fermat's last theorem (twice). I generally managed to slay my proofs by myself, though I felt no regret at being wrong, just frustration and anger at myself.

Even now, as a mathematical researcher, it's very hard to give up a nice result that can't be proved. But I manage. And I do feel that there is a silver lining: greater, more confident accuracy.

comment by Robin_Hanson2 · 2007-09-08T17:50:52.000Z · score: 11 (10 votes) · LW · GW

After the fact you could see you made a mistake. But the key question is: what were the clearest signals at the time, the sort of signals you had a chance to notice and recognize? What is the warning to others? Presumably it is not to give up after your first failure.

comment by Stuart_Armstrong · 2007-09-08T17:54:54.000Z · score: 12 (11 votes) · LW · GW

But the key question is: what were the clearest signals at the time, the sort of signals you had a chance to notice and recognize?

In my case, it was the fact that brilliant mathematicians had tried to prove these results for generations. No matter how brilliant I think myself, it would be unlikely for me to have found a simple proof where everyone else had failed.

comment by Stuart_Armstrong · 2007-09-08T17:59:26.000Z · score: -1 (10 votes) · LW · GW

Minor quibble: since binary 0.1111... is 1, you need a number like 0.1010101... to get an actual counterexample.

comment by alex_zag_al · 2012-09-19T02:25:04.689Z · score: 0 (0 votes) · LW · GW

I think he means that the set of all whole numbers would correspond to an infinite string of ones, which is not equal to any whole number.

comment by alex_zag_al · 2012-09-19T14:29:37.752Z · score: -2 (1 votes) · LW · GW

He's looking for a correspondence between the natural numbers and their subsets because the subsets have a correspondence with the interval of reals [0,1], right? So .1111... = 1 is a counterexample, since it corresponds to the set of all whole numbers. Being equal to 1 doesn't make it representable by a finite subset.

comment by thomblake · 2012-09-19T15:12:36.735Z · score: 7 (6 votes) · LW · GW

You're both wrong, as pointed out later down in the comments. Eliezer wasn't referring to 0.1111...; he was referring to the infinite string ...1111.0. That doesn't represent any finite whole number, but it does represent an infinite set of whole numbers.

And yes, being equal to 1 does make it representable by a finite subset. Notably, {0}.

comment by Daniel_Humphries · 2007-09-08T18:22:16.000Z · score: 13 (12 votes) · LW · GW

It seems like one of the key factors in your story, Eliezer, is that you had read that book on math cranks. You were able to make the leap from your project of disproving Cantor and see its implications for the rest of your life thanks in part to having the example of the math crank in your mind.

Seeking evidence outside the immediate domain of inquiry can be tricky because it might lead one to include evidence that has no bearing on the actual problem, but because human endeavors don't happen in a vacuum, it's a great way of checking yourself for more general errors (like tilting at windmills).

comment by Joshua_Fox · 2007-09-08T19:10:39.000Z · score: 11 (12 votes) · LW · GW

I was not displaying ... any ... virtue

Most math teachers would be delighted if a student was able to understand Cantor's proof, think critically enough to search for a counter-proof, think creatively enough to describe a counter-proof (and based on different mathematical constructs at that), even though the proof was wrong at some critical steps.

This would be quite an achievement even for those who do not go on to the crucial last step of thinking self-critically enough to find the mistake in that "proof."

comment by Sebastian_Hagen2 · 2007-09-08T19:22:01.000Z · score: 6 (5 votes) · LW · GW

Minor quibble: since binary 0.1111... is 1, you need a number like 0.1010101... to get an actual counterexample.

Afaict, the original post doesn't contain any mention of binary fractions. An infinite binary sequence consisting entirely of ones doesn't represent any finite integer.

comment by Tom_Breton · 2007-09-08T19:30:05.000Z · score: 11 (10 votes) · LW · GW

It seems to be a common childhood experience on this list to have tried to disprove famous mathematical theorems.

Me, I tried to disprove the four-color map conjecture when I was 10 or 11. At that point it was a conjecture, not a theorem. I came up with a nice moderate-size map that, after an apparently free initial labelling and a sequence of apparently forced moves, required a fifth color.

Fortunately the first thing that occurred to me was to double-check my result, and of course I found a 4-color coloring.

comment by pnrjulius · 2012-06-30T03:23:29.339Z · score: -1 (2 votes) · LW · GW

I did exactly the same thing.

I also discovered shortly thereafter that I could force an n-coloring if I allowed discontinuous regions, which might seem trivial... except that real nations on real maps are sometimes discontinuous (Alaska, anyone?).

comment by Andrew_Clough · 2007-09-08T22:29:03.000Z · score: 11 (10 votes) · LW · GW

I expect that many people who grew up to be scientists and mathematicians attempted to create famous proofs when they were young, but I also expect that for many engineers such as myself our youthful folly went more along the direction of perpetual motion machines. I'd actually like to see some research on what the correlations really are.

comment by Flynn · 2007-09-09T00:15:42.000Z · score: 6 (5 votes) · LW · GW

LOL. Color me for both, Andrew. Perpetual motion using magnetic levitation in a vacuum at 10. Attempting to come up with a simple proof of Fermat's Theorem at 20 (if there was an easy way to determine n-roots of non-primes, I'd have been SET! :-) )

comment by pnrjulius · 2012-06-30T03:25:02.807Z · score: 3 (2 votes) · LW · GW

Actually, perpetual motion using vacuum energy might really be feasible, since the vacuum energy keeps expanding itself... at present, it looks sort of like a loophole in the laws of nature.

On the other hand, quantum gravity may close this loophole.

comment by DaFranker · 2012-08-06T20:24:10.416Z · score: 7 (6 votes) · LW · GW

Expansion of the original point: Finding various "loopholes" in the "laws of nature" that would allow FTL/perpetual-motion/infinite-scalable-free-energy/[insert-absurdly-surreal-technology-here].

I did that from age 16 (initially as a bored-by-this-math-class-let's-think-about-something-else tactic, gradually becoming more serious) onwards to around 19, when I finally realized that the "loopholes" aren't actually in the laws of nature, just in how shitty our (or in many of those cases, mine specifically) understanding of them is.

If there exists any loophole in the laws of nature such that something impossible becomes possible through this loophole, then the map was upside-down, and it was a feature of the laws of nature all along; the laws of nature had always permitted it, we just didn't know how. The Universe doesn't rape itself.

comment by [deleted] · 2012-09-17T21:04:23.231Z · score: 4 (5 votes) · LW · GW

Building perpetual motion machines based on loopholes in the laws of physics is probably a good way to design experiments: physics says X; reality says Y.

comment by James_Bach · 2007-09-09T01:21:35.000Z · score: 8 (13 votes) · LW · GW

Something seems out of kilter about this, Eliezer.

When I was 13, I thought I had a proof in principle that there must be a minimum possible distance-- because to move is to move a finite distance, but no sum of infinitesimal distances can compose a finite distance. I shared my idea with a professional physicist, who dismissed my idea using an appeal to authority. I don't care how fabulous the authority was, nor how ignorant I may have been, it was a terrible thing for him to do that. It killed my enthusiasm for questioning physics, or math, at the time.

Reasoning, even mathematical reasoning, is not just about right and wrong. It's also about how we model the world and apply our models to it. See Imre Lakatos's wonderful Proofs and Refutations for a look at how proofs are not just proofs, they are assertions about what's worth talking about and what we mean by our words.

And reasoning is also about honing our skills. We must develop the guts to recognize when we are wrong, but also the guts not to worry so much about being wrong that we give up before we learn very much.

I once discovered a way to trisect an angle with a compass and straight edge. This has been proven to be impossible, apparently, but I did it. Later I discovered that I used an operation that wasn't "allowed" (an approximation maneuver), even though I performed the maneuver with only a compass and straight edge. To me, the proof that it can't be done is obviously incorrect, by any practical standard. Show me an angle and I can trisect it to an arbitrarily high degree of accuracy with my mechanical procedure. I challenge the "rules" set out by whomever thinks he's the know-all on what can be done with a compass and straight edge.

I hope other 13 year-olds don't read your essay and decide that the rational attitude is never to try to reinvent or challenge the Ancient Ones.

comment by Tom_McCabe · 2007-09-09T02:00:10.000Z · score: 4 (3 votes) · LW · GW

"I challenge the "rules" set out by whomever thinks he's the know-all on what can be done with a compass and straight edge."

I would be interested to see what you can get out of a compass and straightedge if you change the allowable operations. You could wind up with something much more complex than the things the ancient Greeks studied (think of how much more complex a Riemannian manifold is than a Euclidean n-space, once you remove a few of Euclid's axioms).

comment by [deleted] · 2011-08-24T11:47:41.237Z · score: 19 (18 votes) · LW · GW

I know this is an old comment, but the answer is actually quite nice.

What the compass and straight-edge basically give you is the capacity for solving quadratic equations. There's a field of numbers between the rational and real numbers called the Constructible numbers that completely characterizes what can be done there.

Alternative techniques (e.g., folding) can allow one to solve cubic equations, and so the field of numbers that can be constructed in this way is an extension of the Constructible numbers.

So the full answer to "what you can get if you change the allowable operations" is that construction techniques correspond to field extensions of the rational numbers, and this characterizes their expressive power.
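As a concrete instance of this correspondence (standard textbook material, not from the thread): trisecting the constructible 60° angle would require constructing cos 20°, and the triple-angle identity pins down its minimal polynomial:

```latex
\cos 3\theta = 4\cos^3\theta - 3\cos\theta
\quad\Rightarrow\quad
\tfrac{1}{2} = 4x^3 - 3x \;\; (x = \cos 20^\circ)
\quad\Rightarrow\quad
8x^3 - 6x - 1 = 0.
```

This cubic is irreducible over the rationals, so cos 20° generates a degree-3 field extension. Since 3 is not a power of 2, cos 20° lies outside every tower of quadratic extensions, and so outside the constructible numbers: the trisection is impossible under the classical rules, while approximation maneuvers like James's simply step outside them.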

comment by Liron · 2011-09-18T20:25:57.379Z · score: 6 (5 votes) · LW · GW

You are more than a paper-machine, you are a paper-based math expert.

comment by Douglas_Knight2 · 2007-09-09T04:15:34.000Z · score: 5 (4 votes) · LW · GW

The ancient Greeks themselves played around with the rules. Archimedes used a "marked straightedge" to trisect an angle.

The first hit on google for trisect an angle is about ways to do it, not discussions of impossibility.

comment by Robin_Hanson2 · 2007-09-09T11:20:11.000Z · score: 13 (12 votes) · LW · GW

It seems to me that unless Eliezer was unusual in some other important way not described, he was not at close risk of becoming a math crank.

comment by pnrjulius · 2012-06-30T03:27:35.335Z · score: 9 (8 votes) · LW · GW

But we know that he was unusual: He has a very high IQ. This by itself raises the probability of being a math crank (it also raises the probability of being a mathematician of course).

It's similar to how our LW!Harry Potter has increased chances of being both hero and Dark Lord.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2007-09-09T14:01:18.000Z · score: 4 (5 votes) · LW · GW

I'm also getting that impression, Robin. I'd say, "But there may be a selection effect in the people who comment at Overcoming Bias", but perhaps that would be, well, clinging.

This of course begs the question of where math cranks do come from.

comment by Douglas_Knight2 · 2007-09-09T14:41:27.000Z · score: 9 (8 votes) · LW · GW

While many people have mentioned similar disappointments, no one has echoed "I'll get that theorem eventually...even though my first try failed!" That was what seemed like a really bad sign when I read the essay before the comments. But I think we're really bad at communicating feelings, so I don't know how the feelings relate, how strong they were, and especially, how the commenters see the parallels with their reactions.

comment by Quill_McGee · 2012-10-03T08:11:38.135Z · score: 3 (2 votes) · LW · GW

Does it count if the state of trying lasted for a long (but now ended) time? Because if so, I kept on trying to create a bijection between the reals and the wholes until I was about 13, when I found an actual number that I could actually write down that none of my obvious ideas could reach, and found an equivalent for all the non-obvious ones. (0.21111111..., by the way)

comment by V_V · 2012-10-03T08:45:41.128Z · score: 11 (9 votes) · LW · GW

While many people have mentioned similar disappointments, no one has echoed "I'll get that theorem eventually...even though my first try failed!" That was what seemed like a really bad sign when I read the essay before the comments.

I think it's worse than that. Many people mentioned that they have tried to solve open conjectures, which is something that would require exceptional intelligence, especially without many years of experience. But if you are a smart teenager, thinking that you are exceptionally intelligent falls in the range of normal juvenile hubris.

Yudkowsky didn't try to solve an open conjecture. He tried to disprove a theorem. A theorem that was proved one hundred years ago, and has been known by pretty much everybody who had a math education since then. Thus, Yudkowsky didn't just think he was exceptionally intelligent, he thought that everyone else was basically an idiot.

That's actually a bad symptom of crackpot thought patterns, IMHO.

comment by mrsbayes · 2007-09-09T21:40:11.000Z · score: 3 (2 votes) · LW · GW

This argument that one should admit when one is wrong doesn't generalize beyond the exact reasoning of mathematical proofs and the like. In probabilistic reasoning one can be, indeed usually is, wrong but close. The whole Bayesian worldview is predicated on the assumption that being a little bit wrong, or less wrong than the next guy, means you are probably on a more correct track towards the truth. But it doesn't and can't prove that, given just a few more important bits of information, the guy who's currently "more wrong" is right after all. So just how far from 100% probability must one be before one should admit that one is wrong? At what point does searching for more data relevant to a low-probability hypothesis become crackpottery? Should there not be more than just a single probability figure by which one makes this decision?

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2007-09-10T05:10:01.000Z · score: 5 (4 votes) · LW · GW

Would any regular commenters/readers object if I deleted comments like those from "a woo just like you"? I've always been nervous around censorship, especially where it carries the appearance of conflict of interest, but lack of censorship also carries its penalties. If I don't get any requests not to do so, I'll delete the comment tomorrow.

comment by James_Blair · 2007-09-10T05:52:30.000Z · score: 6 (5 votes) · LW · GW

As I'm not much of a contributor, you can take my suggestion with a grain of salt but: Why not file away all deleted non-spam comments to a place where they can be read, but are out of the way? That way, moderators don't have to worry so much about censoring people and can instead focus on keeping discussions civil/troll-free.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2007-09-10T06:10:17.000Z · score: 4 (3 votes) · LW · GW

I would much prefer that, but I don't think this blog has the technology.

comment by The_Vicar · 2007-09-10T07:01:56.000Z · score: 5 (4 votes) · LW · GW

Do you remember the title of the book? It sounds interesting, speaking as a lapsed mathematician.

comment by Barkley__Rosser · 2007-09-11T17:38:01.000Z · score: 3 (2 votes) · LW · GW

Not sure if this is cranky or not, but when I was youthful I noticed that the Lorentz transformation of space-time due to relativistic effects, the square root of one minus v squared over c squared, implies an imaginary solution for any v greater than c, that is, for traveling faster than the speed of light. Now, most sci fi stories suggest that one would go backwards in time if one exceeded the speed of light, but I deduced that one would go into a second time dimension.

Of course the problem is that as long as Einstein is right, it is simply impossible to exceed the speed of light, thereby making the entire speculation irrelevant.

comment by pnrjulius · 2012-06-30T03:28:58.607Z · score: 3 (2 votes) · LW · GW

Well, some rather serious physicists have considered the idea: tachyons

comment by wizzwizz4 · 2019-08-08T12:52:11.476Z · score: 1 (1 votes) · LW · GW

That's imaginary mass implying superluminal velocity with real energy. Similar, but the other way around.

comment by Gil · 2007-09-12T17:24:31.000Z · score: 9 (10 votes) · LW · GW

I don't like the formulation: "A thought you should never have thought."

I'd prefer, "An idea you should have quickly rejected."

I suspect that many genuine innovations might first appear to be mistakes or unwarranted challenges to the prevailing wisdom. They should be thought. And they should be considered and criticized. But, we should be ready to reject them if they don't survive the criticism.

comment by billswift · 2007-09-13T06:56:07.000Z · score: 2 (1 votes) · LW · GW

Don't know what your blogging software allows, but richarddawkins.net now has a separate thread for off-topic posts; you click on a label at the end of the article to get to the off-topic thread.

comment by markrkrebs · 2010-02-26T09:27:21.819Z · score: 9 (8 votes) · LW · GW

I love this site. Found it when looking at a piece of crackpot science on the internet and, wondering, typed "crackpot" into google. I am trying to argue with someone who's my nemesis in most every way, and I'm trying to do it honestly. I feel his vested interest in the preferred answer vastly biases his judgment & wonder what biases do I have, and how did they get there. You seem to address a key one I liken to tree roots, growing in deep and steadfast wherever you first happen to fall, whether it's good ground or not.

Not unlike that analogy, I landed here first, on your post, and found it very good ground indeed.

comment by RobinZ · 2010-02-26T12:43:19.843Z · score: 19 (18 votes) · LW · GW

Welcome to LessWrong!

If you want another couple threads to start exploring, one very good starting place is What Do We Mean By Rationality? and its links; then there is the massive collection of posts accumulated in the Sequences which you can pick over for interesting nuggets. A lot of posts (and comments!) will have links back to related material, both at the top and throughout the text.

comment by Paul Crowley (ciphergoth) · 2010-02-26T13:09:44.982Z · score: 15 (14 votes) · LW · GW

Just to make it explicit: I really appreciate your "welcome" comments, they're good for the site. Thanks.

comment by RobinZ · 2010-02-26T13:15:47.527Z · score: 6 (5 votes) · LW · GW

You're welcome! I saw some other people doing it, and thought that I should do so as well.

comment by TheOtherDave · 2010-10-26T17:07:13.780Z · score: 8 (7 votes) · LW · GW

Let me echo ciphergoth. The effect is broader than you might think; it was because of one of these sorts of comments that I (years later) found the introduction thread when I did.

Admittedly, most readers probably don't start from the beginning and work their way forward. But some of us do!

comment by Kevin · 2010-02-28T03:23:27.301Z · score: -1 (2 votes) · LW · GW

This is one of my favorite crackpot writings. It does seem plausible that held breath underwater swimming is really good exercise. http://www.winwenger.com/ebooks/guaran.htm

comment by taryneast · 2011-03-21T21:48:24.215Z · score: 4 (3 votes) · LW · GW

Any sort of swimming is good exercise. Hypoxia, however, is bad for you... IMO it's better to do the swimming with the oxygen ;)

comment by Normal_Anomaly · 2011-03-30T14:34:10.899Z · score: 2 (1 votes) · LW · GW

Since everyone is sharing their stories, here's mine. When I was around 10, a family friend introduced me to the four-color map problem. I spent months trying to draw a map that required five colors, and one time I thought I had it. I dreamed of fame and glory for a few hours, then I showed the map to a relative who colored it with four colors. Shortly after, I accepted that I wasn't going to get it and stopped.

comment by benelliott · 2011-08-24T11:40:06.247Z · score: 2 (1 votes) · LW · GW

Shortly after, I demonstrated to my own satisfaction that it was impossible and stopped.

If you really did prove the 4-colour theorem at age 10 then dreams of fame and glory would have been quite justified.

comment by Kingreaper · 2011-08-24T13:51:48.036Z · score: -2 (1 votes) · LW · GW

Demonstrated satisfactorily =/= 100% proven

Most things outside mathematics can never be proven, but many can easily be demonstrated satisfactorily.

comment by benelliott · 2011-08-24T17:13:45.811Z · score: 2 (3 votes) · LW · GW

The 4-colour theorem is not outside mathematics. I think I sort of understand what you mean by 'demonstrated satisfactorily' but in my experience such demonstrations aren't worth much, not only can they be wrong in principle, they are often wrong in practice. Mathematics is nothing if not counter-intuitive.

comment by Normal_Anomaly · 2011-08-26T21:02:42.759Z · score: 2 (1 votes) · LW · GW

I don't mean I proved it. I just meant that I worked at it long enough that I became pretty confident I wouldn't be able to do it. Edited my earlier post to make that clearer.

comment by benelliott · 2011-08-26T21:53:54.814Z · score: 2 (2 votes) · LW · GW

I understand that, I was being intentionally facetious.

comment by handoflixue · 2011-06-11T11:29:18.517Z · score: 5 (4 votes) · LW · GW

what a terribly unfair test to visit upon a child of thirteen. That I had to be that rational, already, at that age, or fail.

I always find it odd that you seem to write as though there is no hope of redemption when one makes a mistake of this magnitude. Certainly, lifetimes can be lost to such mistakes. But then, sometimes, it only takes a week to realise our folly, neh?

comment by ec429 · 2011-09-16T12:26:39.089Z · score: 4 (3 votes) · LW · GW

I fear that I might be currently trapped in this error: I've always resented Gödel's Incompleteness Theorems. When I was about 17 I thought I'd disproved 1IT (turned out I'd just reconstructed the proof of 2IT and missed the detail that Con(T) ≠ Prov_T(Con(T))). It took me about a year after that to realise that, no, I wasn't going to disprove the ITs no matter how much I wanted to, and I accepted that trying to disprove them anyway would be a crackpot thing to do. Since then I've been trying to construct a philosophical framework of mathematics in which the ITs become irrelevant. Have I, in fact, taken the Crackpot Offer?

comment by Vladimir_Nesov · 2011-09-16T13:40:12.115Z · score: 7 (6 votes) · LW · GW

From your description it looks like you might have. You should retract failed conjectures, not rectify them. Another (less efficient) way to recover is to get expertise in the topic strong enough to sever incorrect intuitions (it doesn't always work in itself, human "ability" for rationalization is strong too). I think if you know math (specifically logic, algebra and set theory) less than on graduate level, you should either drop what you're doing, or get to that level.

comment by ec429 · 2011-09-16T14:03:15.489Z · score: 2 (1 votes) · LW · GW

Well, I'm studying for an undergraduate degree in mathematics at a good university; the "trying to construct..." is just one of several things I do in my copious free time. Also, I'm spending a much smaller proportion of my time on this project than I was spending on trying to disprove the ITs. So it looks to me as though I'm actually behaving rationally, but maybe that's just how the algorithm looks from the inside.

I think that by "make the ITs become irrelevant" I mean that I'm trying to find a philosophy in which the things that make me want the ITs to be false are no longer represented, because if I have any assumption that implies "And therefore the ITs are false" then that assumption is wrong. But again, is that just me rationalising?

comment by pnrjulius · 2012-06-30T03:36:32.863Z · score: 3 (2 votes) · LW · GW

I don't think you're just rationalizing. I think this is exactly what the philosophy of mathematics needs in fact.

If we really understand the foundations of mathematics, Godel's theorems should seem to us, if not irrelevant, then perfectly reasonable---perhaps even trivially obvious (or at least trivially obvious in hindsight, which is of course not the same thing), the way that a lot of very well-understood things seem to us.

In my mind I've gotten fairly close to this point, so maybe this will help: By being inside the system, you're always going to get "paradoxes" of self-reference that aren't really catastrophes.

For example, I cannot coherently and honestly assert this statement: "It is raining in Bangladesh but Patrick Julius does not believe that." The statement could in fact be true. It has often been true many times in the past. But I can't assert it, because I am part of it, and part of what it says is that I don't believe it, and hence can't assert it.

Likewise, Godel's theorems are a way of making number theory talk about itself and say things like "Number theory can't prove this statement"; well, of course it can't, because you made the statement about number theory proving things.

comment by ec429 · 2012-08-14T18:39:14.189Z · score: 2 (1 votes) · LW · GW

There is a further subtlety here. As I discussed in "Syntacticism", in Gödel's theorems number theory is in fact talking about "number theory", and we apply a metatheory to prove that "number theory is "number theory"", and think we've proved that number theory is "number theory". The answer I came to was to conclude that number theory isn't talking about anything (ie. ascription of semantics to mathematics does not reflect any underlying reality), it's just a set of symbols and rules for manipulating same, and that those symbols and rules together embody a Platonic object. Others may reach different conclusions.

comment by raptortech97 · 2012-04-20T02:43:03.978Z · score: 2 (1 votes) · LW · GW

I don't remember ever coming up with a false disproof in math, though I did manage to "solve" perpetual motion machines. I did successfully prove a trivial result in solving quadratic equations in modular arithmetic.

comment by evand · 2012-06-04T14:47:55.937Z · score: -1 (4 votes) · LW · GW

Eliezer, did you realize at the time that what you had done was construct the basic outline of the proof that 2^aleph0 = aleph1? There was an interesting gem hiding in your disproof, had you looked. Reversed stupidity is not intelligence, and all that :)

comment by Bundle_Gerbe · 2012-09-21T13:20:27.343Z · score: 6 (5 votes) · LW · GW

No, 2^aleph0 = aleph1 is the continuum hypothesis, which is independent of the standard axioms of math, and can't be proven. I think maybe you mean he was close to showing 2^aleph0 is the cardinality of the reals, but I think he knew this already and was trying to use it as the basis of the proof.

Making mistakes like Eliezer's is a big part of learning math though, if we are looking for a silver lining. When you prove something you know is wrong, usually it's because of some misunderstanding or incomplete understanding, and not because of some trivial error. I think the diagonal argument seems like some stupid syntactical trick the first time you hear it, but the concept is far-reaching. Surely Eliezer came away with a bit better understanding of its implications after he straightened himself out.

comment by cousin_it · 2013-02-15T04:00:30.588Z · score: 0 (0 votes) · LW · GW

With all due respect to Eliezer, there exists an institution that can protect you from the danger described in the post. It's called "math school". Sometime in tenth grade, I came up with a proof of the continuum hypothesis that my teacher couldn't immediately overturn. We had a fun time finding the catch, then moved on to other things.

comment by john_gabriel66 · 2014-01-30T18:40:53.105Z · score: -9 (8 votes) · LW · GW

I think you gave up too soon. Cantor is the father of all mathematical cranks.

A disproof of the Diagonal Argument is found here:

http://www.spacetimeandtheuniverse.com/math/4507-0-999-equal-one-505.html

comment by enigma1 · 2014-08-13T20:45:32.331Z · score: 3 (2 votes) · LW · GW

God, this site is a spiritual mind-meld of crankness. I spent two hours trying to understand this disproof and found that it rested on rejecting that if a == b and c == d, then a - b == c - d.