Numeracy neglect - A personal postmortem
post by vlad.proex · 2020-09-27T15:12:45.307Z · LW · GW · 29 comments
My failed enlightenment
I've been thinking about my intellectual education, and what I wish had gone differently.
I am 26 years old. I’ve been reading books and going to school since I was 8. This puts my career as a learner at about 19 years. Honestly? I feel a bit disappointed. I've had a predominantly "humanistic" education, which is a nice way of saying that my gaps in scientific subjects are embarrassing. Meanwhile, I ended up interacting with people who've invested their formative years in getting a solid foundation in mathy and sciency subjects. Inevitably, I found myself envying their skills and wondering where my study time had gone, and what I had to show for it.
In particular, I have diagnosed myself with a condition I call numeracy neglect. When I reflect on my education, I find that: (1) I was a bright and precocious kid. (2) I was always very curious and had a strong motivation to understand the world. (3) Despite this, and despite all the resources that society invested in me, I managed to go at least 15 years without learning much about mathematics, physics, chemistry, and computer science (to mention just the basics).
This contradiction pains me, but it also makes me curious. How does something like this happen?
Numeracy neglect
I will focus on mathematics, since it's a subject that most people are taught, but it's typically misunderstood and unappreciated.
Reflecting on my experience, I can identify two problems.
1. Aesthetic insensitivity. The inability to experience the beauty of mathematics, and to apply one’s general curiosity to it.
2. Epistemic ignorance. The inability to see or accept the fact that mathematics is the language of science, and if you don't understand mathematics, you won't understand most science. In general, the inability to understand mathematics' relevance and usefulness in life.
Another way to put it is that the first is a failure to grasp the intrinsic value of mathematics [1], while the second is a failure to understand its instrumental value.
How does this apply to my experience?
Aesthetic insensitivity
I was not born with a natural aversion to mathematics. I remember enjoying arithmetic and geometry in elementary school. Today, I feel a deep curiosity about mathematical subjects; I’ve also developed, or perhaps rediscovered, an aesthetic appreciation for mathematical concepts.
Yet something went amiss between the ages of 11 and 23. My grades in mathematics, physics and chemistry were low. I felt little curiosity for these subjects and would only study for the tests. I did no better when I started university. In my first year, I showed little desire to understand statistics, and passed the exam with the minimum grade. It was only later that I (slowly) began to wake up.
Part of it was due to laziness. I was a fast reader and had an excellent memory. This allowed me to excel in most subjects without much work. In contrast, numerate subjects required more dedication and systematic study.
Perhaps it was also a self-esteem issue. Mathematics is hard. Studying it forces me to confront failure on a regular basis. It’s humiliating to constantly fail at the simplest problems. (I recently downloaded the app Brilliant. It makes me feel anything but.) I was typically praised for intelligence rather than effort. Although some studies have challenged the mindset hypothesis, my experience confirms the trope (at least in hindsight). I had a lot of self-esteem invested in my intelligence, and it was much easier to feel brilliant while reciting philosophy than to face my struggles with logarithms.
But there was a deeper issue at work. After all, I wasn’t lazy when it came to subjects that I cared about. And there were things that I valued more than my self-esteem, such as the need for knowledge.
Well, I can’t quite put my finger on it, but I would say that at that time, mathematics was not really real to me.
Many students complain that maths is too abstract, too detached from real life. They cannot find enjoyment in it (which is why I speak of ‘aesthetic insensitivity’). However, I'm not sure that abstraction is the real problem. (In my case, I spent a lot of energy on philosophy, which can be very abstract.) Rather, the problem may be one of failing to see the referents.
When I read philosophy, I felt that the concepts written on the page were referring to something real — something the words stood for. We could call them 'ideas', or 'objects in ideaspace [LW · GW]'. You don't read philosophy to see how the writer combines words on a page. You read it because you are interested in the ideas that the words point to. If you understand the words, you can explore the ideas, play with them, break them apart or combine them. This makes philosophy enjoyable and even beautiful.
In contrast, when I was studying mathematics, I wasn't able to really see the referents. I was blind to the reality of mathematical structures. It seemed like a purely syntactical game: we had numbers and symbols, and were taught how to combine them. Of course, I knew that numbers were 'real' in some sense. And I felt that mathematics was generally discovered rather than invented. But I did not get the glorious feeling that I was soaring in ideaspace, stretching my mind to think the unthinkable, and gazing at the fundamental structure of the multiverse. The signifier was there, but the signified was hidden.
It was programming that opened my eyes. As I started learning Python, I understood the difference between the label and the thing [LW · GW]. When coding, one works on two levels: the namespace, which contains the labels for the objects, and the objects themselves. You manipulate objects through their names, but the two levels must be kept apart. As a beginner, I didn't understand this. If you are new to coding, there might come a moment when you feel the need to access a variable name at the object level. Imagine you are saving the weights of various dogs, so you declare < terrier = 22 >. Then you want to print <"the weight of terrier is 22">, with both the variable's name and its value filled in. You can get 22 by calling the variable <terrier>. But how do you print the name of the variable? Now you start looking for a function that takes the label down to the object level, such that the function <get_varname(terrier)> would return "terrier". This is a bad idea, because you're mixing up labels and objects, references and referents. In your code, the variable <terrier> is merely an address for the object <22>. It has nothing to do with the object <"terrier">, unless you explicitly point it there.
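Here is a minimal sketch of that example in Python (the names and the dictionary at the end are just illustrations, nothing canonical):

```python
# A sketch of the dog-weight example above: names live in the namespace,
# objects live elsewhere, and names point at objects, not the other way around.
terrier = 22  # the name "terrier" now refers to the integer object 22

# Getting the value back is easy: Python looks the name up in the namespace.
print(f"the weight of terrier is {terrier}")  # -> the weight of terrier is 22

# The tempting-but-misguided move is asking the object for its own name.
# The object 22 has no idea it is called "terrier"; several names can refer
# to the same object, and none of them is "the" name.
beagle = terrier
print(beagle is terrier)  # -> True: two labels, one object

# If you want the label itself as data, write it down explicitly as a string,
# e.g. by keeping a mapping from labels to objects:
weights = {"terrier": 22, "beagle": 22}
for name, weight in weights.items():
    print(f"the weight of {name} is {weight}")
```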
At some point, I realized that doing maths is not so different. You are manipulating names that refer to objects. It's true that you cannot touch the objects directly, or see them except through their names. Still, the objects exist; it isn't a purely syntactical game.
There really is a thing like the number 17. Somewhere out there, in ideaspace. You can call it "17" or "seventeen" or "diecisiete" or "xyz123". You might be unaware of its existence, but that won't make it disappear; you may go around saying it isn't prime, but that won't alter its primeness one bit.
When you do maths, you're not just shuffling symbols on the blackboard. You arrange labels meaningfully, and lo and behold — this gives you access to real mathematical objects! You can explore them, play with them, break them apart and combine them. You can discover the number 17! You can prove its primeness! Prove that primes are infinite! And this can be fun.
To take the coding analogy further, you can imagine a Universal Mathematical Compiler that inputs your notation and translates it into real mathematical operations on real mathematical objects (provided your syntax makes sense). If you understand mathematics, it's like having a virtual machine in your brain that simulates the operations and returns an actual output. This output is not something you invented, or could have predicted in advance. You send a query to the universe, and the universe answers. It's like a message coming from the other side. It's your own private window on the inner workings of the multiverse.
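To make the analogy a little more tangible, here is a toy version of such a query in Python, using the sympy library purely as an illustration (the analogy doesn't depend on any particular tool):

```python
# A toy stand-in for the "Universal Mathematical Compiler": type notation,
# get back facts about the objects the notation names.
import sympy

print(sympy.isprime(17))           # True -- 17 is prime, whatever label we give it
print(sympy.prime(1000))           # 7919 -- the 1000th prime, an answer we did not choose
print(sympy.factorint(2**32 + 1))  # {641: 1, 6700417: 1} -- Euler's factorisation of 2^32 + 1

# The outputs are determined by the mathematical objects themselves;
# the notation is only the query we send.
```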
Is this enough to start feeling that mathematics is beautiful, and to develop a passion for it? Perhaps not. But it should at least get one beyond the point where mathematics feels boring and empty.
Epistemic ignorance
Even if you don't like mathematics for its own sake, you should eventually realize that without it you cannot understand science. Donald Knuth said: "Science is what we understand well enough to explain to a computer." Like many aphorisms, this goes too far; Darwin's understanding of evolution was 'scientific' in my book, although his science was not advanced enough that he could have specified a faithful simulation (for instance, he didn't know about DNA). However, having a complete mathematical description of a system and being able to predict its behavior and simulate it on a computer, at least theoretically, is probably as far as scientific understanding can take you.
So why didn't I study more science?
If I could meet my eight-year old self, this is what I'd tell him: "You are curious about the world. To understand the world, you need to understand science. To understand science, you need to understand mathematics. Life is short, and the Art is long. Don't waste your time on dialectical philosophy. Don't get enmeshed in 'critical theory'. Don't think you're smart because you read science news [LW · GW]. Acquire at least a fundamental grasp of mathematics, then get some respected textbooks [LW · GW] and study the fundamental sciences. Make sure you have a basic knowledge of physics, chemistry and biology, as well as the theory of probability, so that you won't be completely ignorant of the nature of the universe, and you'll be less likely to fall prey to supernatural beliefs, psychologisms and mind-projections. Then you can focus on the disciplines that most interest you."
Why did my younger self fail to grasp this? It wasn't a problem of worldview. Early on, I embraced atheism, the scientific worldview, physical reductionism, the whole package. Yet how great was the mismatch between my professed values and my actual choices!
ME: "I believe physics describes the fundamental laws of the universe."
NOBODY: "So... you're studying physics?"
ME: "Gosh, no! I can't even tell you what thermodynamics is."
NOBODY: "Oh. So you don't care about understanding the universe?"
ME: "How dare you! Of course I do! I thirst for knowledge, truth and understanding!"
NOBODY: "So what are you doing to increase your knowledge?"
ME: "I'm studying Kant. Did you know that space and time are a priori forms of experience?"
NOBODY: "We need to have a talk."
It all seems a bit absurd today. I have to make an effort of imagination to understand what was going on in my mind. This part is still not clear to me, but for now I can think of three factors.
The first is the affect heuristic. I didn't choose which subjects to study based on a ranking of usefulness. Nor was I reflecting on the expected ROI of my study time. I was just going for what felt interesting or titillating or particularly mysterious at any point in time. And if that meant choosing Adorno's Negative Dialectics over sitting down and doing physics problems, damn the world. This point ties into aesthetic insensitivity: I felt that mathematics was neither exciting nor beautiful (at least for me), so I didn't take pains to study it. It was much too late when it occurred to me that the way I feel about a subject has no bearing on its importance or usefulness.
The second factor is that I had an implicit faith in conceptual, dialectical 'knowledge'. The kind of knowledge that makes you feel smart when you say that light is made out of waves [LW · GW], even though you have no mathematical understanding of what a wave is. I confused true understanding with being able to recite a great number of facts about different subjects.
The third factor is that I had no practical application for my knowledge. I didn't make predictions. I didn't give myself the chance to be mistaken. I didn't have a mission that forced me to either learn or fail. At the end of the day, all I did with most of my knowledge was think about it verbally and sometimes talk about it with other people.
Use computers!
If you could change just one thing in how education works today, what would it be?
I will throw in my own suggestion, the most direct and effective one I can think of: use computers.
No, I don't mean giving the students free tablets so they can watch YouTube videos. I mean putting the computer at the center of your pedagogical system and teaching mathematics and the other exact sciences through it. Currently, the principal medium for doing maths in schools is pen and paper. What if instead people learned theorems and models by reproducing them in code?
After all, computers are a much more natural medium for doing that. Despite their limits, computers can actually simulate and run formal systems, as opposed to... forlorn students scribbling symbols in their notebooks, trying to make the answer come out right?
As soon as children can reasonably learn to read and write in natural language, they should be taught the rudiments of programming. This would show them, for starters, that logic and mathematics are really real — not mere syntactical games. It would also empower them to grow up as shapers, rather than mere users, of technology.
Later, you could have them simulate the models of physics, chemistry and biology. They could engage in competitive or cooperative games which reward curiosity and stimulate them to think. You could have them design the games themselves, or send them to gather data and test theories. The possibilities are endless. This would not be a replacement for theory and the classical blackboard exercises. But it would provide a practical, engaging field of application which may awaken at least some students from chronic boredom and apathy (though it may create special difficulties for others).
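For a flavour of what "reproducing a model in code" could look like (a minimal sketch, not a curriculum proposal), here is projectile motion simulated with small time steps and checked against the textbook range formula:

```python
# Projectile motion under gravity, integrated with small time steps (Euler method).
g = 9.81             # gravitational acceleration, m/s^2
dt = 0.001           # time step, s
x, y = 0.0, 0.0      # position, m
vx, vy = 10.0, 10.0  # initial velocity, m/s (a 45-degree launch)

while y >= 0.0:
    x += vx * dt
    y += vy * dt
    vy -= g * dt     # gravity only changes the vertical velocity

print(f"simulated range: {x:.2f} m")
print(f"textbook range:  {2 * 10.0 * 10.0 / g:.2f} m")  # R = 2*vx*vy/g on flat ground
```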
Of course, this would require reshaping the whole educational system and turning most teachers into programmers. I didn't say it was easy, or currently feasible. But neither is it beyond the reach of human capacity, I think. Two centuries ago, only 12% of people could read. Today the numbers are basically reversed, with an estimated 14% of the world being illiterate. Yes, some children have serious difficulties with reading, but most can master it to an acceptable degree.
Will something similar happen with programming? I don't know; I can only hope. The spread of literacy was accelerated by cheap newspapers and print books. Personal computers have been around for fifty years, but only in the past two decades have they become cheap enough to enter most households. And cheap smartphones are even younger. At least today most people know how to use a computer, which is a start.
Considering the rate at which educational institutions evolve, it might take a few decades before programming becomes a basic subject in most schools. In my opinion, it would be worth it to expend some effort in accelerating the process; the payoffs may be very large.
[1] This Quora post provides a nice description of the aesthetic value of mathematics (unfortunately I haven't been able to find the author): "The Surreal numbers are useful for broadening our minds, filling us with a sense of awe and marvel at what our own minds are capable of and what things exist in our imagination even if they don't fit in our accidental physical universe."
29 comments
comment by Ben Pace (Benito) · 2020-09-28T07:48:00.199Z · LW(p) · GW(p)
This is an excellent first post on the site. Rationality is about thinking in ways that systematically lead to truth, and to do that requires looking at your mind, noticing its flaws, and changing it. You’re doing rationality here.
Reading your post, it seems you’ve come to better understand the ways in which you were not focusing on understanding reality. The right next step, I believe, is to then follow this route and engage more directly with reality - getting more practical, and not being satisfied with mere ‘dialectical knowledge’, as you say.
If you do go and do that (perhaps write some useful software, or build a useful contraption with a 3D printer), will you please write a post about what you did and what you learned? I’d be thrilled to read it.
↑ comment by vlad.proex · 2020-10-01T16:36:05.684Z · LW(p) · GW(p)
Thank you for the welcome and the feedback! Yes, I am set on deepening my engagement with Reality and tackling more practical tasks. I also want to work on keeping score on my judgments and getting better at detecting & analysing my mistakes. I will definitely write more about it when the moment comes.
comment by Rudi C (rudi-c) · 2020-10-05T21:12:58.261Z · LW(p) · GW(p)
How many people actually know how to use a computer, though? In my experience, most people cannot use computers effectively; i.e., they can't install arbitrary programs (notable examples being OSes and drivers), they don't understand the basic abstractions (e.g., the filesystem) well, they can't use CLI apps (which severely limits what tools they can use, and is not a skill that only programmers need), and so on. They basically want to use the computer as if it were an iPhone.
↑ comment by vlad.proex · 2020-10-06T19:13:06.510Z · LW(p) · GW(p)
The general trend has been to make computers user-friendly, and to hide the complexity from the user. On one hand, this has been helpful for their diffusion, and I'm sure it benefited a lot of people in a lot of ways (besides making a lot of money). If I think of my parents, for instance, I can't believe they would have ever started to use computers had they been more complicated.
On the other hand, this might be the fundamental obstacle in the way of coding literacy. To do stuff in the modern world, you actually have to know how to read and write. To use computers, you don't need to know how to program (at the level that most people use them). If computers keep getting more intuitive, interactive and user friendly, why should people feel the need to understand them?
(One could imagine a future where technology is so attuned to people's intentions, they can just think, gesture or say what they want, and the machine provides; as a result, they lose interest in reading and writing, and a new dark age of illiteracy begins).
comment by Viliam · 2020-09-29T19:28:01.557Z · LW(p) · GW(p)
So, what now?
There are many free textbooks online (1, 2, 3...), so maybe choose math, download something that starts simple, read it, and if it is too complicated put it away and download something else. Download the textbooks to a book reader, so you can read them while you travel, etc. If you get stuck somewhere, search for the answer online; if that fails, ask in the Less Wrong shortform.
You lost some time, but it is not too late to learn. If you start now, after ten years you will be happy that you did.
I used to be good at math in high school, but then I chose computer science at university, and ended up making stupid websites for 20 years. A few months ago, I decided to give it another try, and downloaded a few books. (I think I still have solid high-school knowledge, so I decided to go ahead and chose set theory.) The first time I read a book, I didn't understand most of it. Then I read it again and did some of the exercises, and suddenly it made much more sense. Now I understand even some Wikipedia articles which are definitely not written for beginners. (The intersection between "understands an esoteric topic", "can explain it clearly", and "willing to edit Wikipedia articles" is small, sometimes nonexistent.) I don't have much free time with a job and kids, but I try to regularly find an hour or two. But I am also picky; if a book doesn't work for me, I throw it away and take another. That's the advantage of free downloading. (Ahem.)
↑ comment by clone of saturn · 2020-09-29T23:25:05.686Z · LW(p) · GW(p)
See also: The Best Textbooks on Every Subject [LW · GW]
comment by Данило Глинський (danilo-glinskii) · 2020-10-07T11:36:53.693Z · LW(p) · GW(p)
Good stuff, though I'd like to comment on some of your reflections.
> Part of it was due to laziness. I was a fast reader and had an excellent memory. This allowed me to excel in most subjects without much work. In contrast, numerate subjects required more dedication and systematic study.
It is important that you say "laziness". Usually laziness is about choosing the less energy-demanding activity out of many options. So it looks like "solving problems" was energy-demanding for you, but other activities were not. Whenever you had to solve problems, it felt "tough", and coupled with a lack of reasons, you avoided this activity.
But it is interesting to understand what's happening to the other children, who actually do math. Suddenly you realize that "solving problems" is less energy-demanding for them, which is awkward! How can it be that the same puzzle has different energy demands for different children?
If you say this is due to differences between children, you are correct, but what exactly is different? Interest itself can't change the energy balance. The ability to read may also be the same. As you said, you have a good memory, so that isn't a factor that raises the energy cost of a puzzle either.
Let's look next.
> It was programming that opened my eyes. As I started learning Python, I understood the difference between the label and the thing [LW · GW]. When coding, one works on two levels: the namespace, which contains the labels for the objects, and the objects themselves.
How do you feel this understanding? What exactly makes you able to "see" the namespace and the object space as distinct? Did you have that ability before? What were the subjective "energy levels" before doing programming, and after?
> At some point, I realized that doing maths is not so different. You are manipulating names that refer to objects.
You say "manipulating", but you do this manipulation in brain, right? What allows you to do this "manipulation"? Did it feel energy-demanding before?
> I didn't give myself the chance to be mistaken. I didn't have a mission that forced me to either learn or fail. At the end of the day, all I did with most of my knowledge was think about it verbally and sometimes talk about it with other people.
This is good. Which kind of "manipulation" gives you the chance to be mistaken? You know, "verbal" talk is often generated the way GPT-3 generates text -- each word pattern simply predicts the next word pattern. It just can't feel "wrong"; it is whatever follows. But "manipulation" isn't like pattern-after-pattern, it is something different. What is it?
> Later, you could have them simulate the models of physics, chemistry and biology. They could engage in competitive or cooperative games which reward curiosity and stimulate them to think. You could have them design the games themselves, or send them to gather data and test theories. The possibilities are endless.
And here it is important to show that something is omitted. To be able to simulate physics, you first must have a rule-based model of physics in your head. Exactly this part is tough, not the subsequent simulation. If you don't have a rules mindset in your head, the simulations just won't "click".
Programming as a whole won't "click" if mental rule-based transformations feel energy-demanding to you.
So yeah, it is not enough to know what math is good for. It is not enough to teach children programming for them to want to understand the world. The never-taught thing I'm talking about is a special mindset which reduces the energy demands of most puzzles, so that they no longer feel "tough" but "interesting".
What is this mindset? What does it look like? How do you pass it on to other people?
↑ comment by vlad.proex · 2020-10-08T18:25:27.801Z · LW(p) · GW(p)
Thank you for your questions, they're proving very useful.
> But it is interesting to understand what's happening to the other children, who actually do math. Suddenly you realize that "solving problems" is less energy-demanding for them, which is awkward!
I'm not sure this is the case. We're humans, maths is hard for everyone. I imagine it's more about developing a work ethic early on and being willing to delay gratification and experience unpleasant sensations for the purpose of learning something valuable. Though of course it takes a basic level of intelligence to find motivation in intellectual work. And there needs to be some specific motivation as well, e.g. that math is beautiful, or that math is useful.
As for the other questions... You may be getting closer than me to hitting the target here. I think the comparison between GPT-3 talk, where nothing is wrong, and "manipulation", is central.
But "manipulation" isn't like pattern-after-pattern, it is something different. What is it?
I think the whole thing revolves around mental models. Programming "clicks" when the stuff that you do with the code suddenly turns into a coherent mental model, so that you can even predict the result of an operation that you haven't tried before. I became better at programming after watching a few theoretical computer science classes, because I was more proficient at building mental models of how the different systems worked. Likewise, maths clicks when you move from applying syntactical rules to building mental models of mathematical objects.
It's easier to build mental models with programming, because the models that you're working with are instantiated on a physical substrate that you can interact with. And because it's harder to fool yourself and easier to get feedback. If you screw up, the computer will stop working and tell you. If you screw up with pen and paper, you might not even realize it.
This is not the whole story, but it's a bit closer to what I meant to say.
↑ comment by Данило Глинський (danilo-glinskii) · 2020-10-10T11:09:52.516Z · LW(p) · GW(p)
> We're humans, maths is hard for everyone
This is false; there are a few genius mathematicians who proved early in childhood that it is easy for some humans.
> I think the whole thing revolves around mental models
Exactly! There is an even more specific concept in the psychology of programming, called "notional machines": little machines in your head which interpret things by following rules.
I think those can also transfer to learning math: once the concept of rule-based machines is grasped, all the algorithmic, iterative, replacement and transitivity concepts from math start making sense.
↑ comment by Dale Udall · 2021-06-16T22:37:11.141Z · LW(p) · GW(p)
> This is false; there are a few genius mathematicians who proved early in childhood that it is easy for some humans.
Some outliers are hypernumerate. I'm hyperlexic, so attuned to words that I was able to teach myself to read before my childhood amnesia kicked in, so I never had to learn phonics. This doesn't mean the vast majority of humans are congenitally literate or numerate. OP's statement may be nominally false, but the exception proves the rule.
As for teaching the aesthetic beauty of math, I would give each student their own blank copy of the 10x10 multiplication table (with a zeros row and column, making it 11x11) at the start of grade 2, and teach them how to fill it in themselves. After that, they can use it in any math class that semester, but they have to make a new one at the start of each semester after that.
The inherent laziness of humanity will drive them to "cheat" by copying from lines above: filling in half the 4's from the 2's, half the 8's from the 4's, half the 6's from the 3's, and so on. And while they're doing that, they're learning in an indelible way.
comment by ChristianKl · 2020-09-28T17:11:35.919Z · LW(p) · GW(p)
> The Surreal numbers are useful for broadening our minds, filling us with a sense of awe and marvel at what our own minds are capable of and what things exist in our imagination even if they don't fit in our accidental physical universe
Surreal numbers are the real numbers plus infinity and infinitesimal numbers. Both of those are used by physicists when they reason about our physical universe.
To the extent that infinitesimal numbers are not used, it's because they are ugly. They need additional axioms, and it's cleaner to do calculus without them.
↑ comment by Richard_Kennaway · 2020-09-29T18:24:04.977Z · LW(p) · GW(p)
> Surreal numbers are the real numbers plus infinity and infinitesimal numbers. Both of those are used by physicists when they reason about our physical universe.
I've never seen physics done with any sort of non-standard reals, let alone the surreals, which are a very specific, "biggest possible" extension of the reals.
↑ comment by ChristianKl · 2020-09-29T23:50:25.591Z · LW(p) · GW(p)
I have plenty of times heard of variables being infinite in physics, and I have seen people do calculus with infinitely small numbers.
↑ comment by Richard_Kennaway · 2020-09-30T07:11:43.839Z · LW(p) · GW(p)
Well, there's non-standard analysis, where you actually have infinite and infinitesimal numbers, and there's casual talk of infinite limits, but the latter need not involve the former. Normally it's just a shorthand for the epsilon-delta type of argument that was worked out in the 19th century.
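For example, spelling that shorthand out: the casual statement that 1/x "goes to 0 as x goes to infinity" unpacks into a claim that only ever quantifies over ordinary finite numbers:

$$\lim_{x \to \infty} \frac{1}{x} = 0 \quad \text{means} \quad \forall \varepsilon > 0 \;\; \exists N \;\; \forall x > N : \ \left|\frac{1}{x} - 0\right| < \varepsilon .$$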
↑ comment by TAG · 2020-09-29T19:22:56.827Z · LW(p) · GW(p)
Yeah, infinities are generally disregarded as "unphysical" in physics.
↑ comment by ChristianKl · 2020-09-30T11:28:36.365Z · LW(p) · GW(p)
This basically means that they do appear from time to time, but they are seen as undesirable in the model and thus there's a preference to model things differently.
↑ comment by algon33 · 2020-09-29T17:56:17.394Z · LW(p) · GW(p)
Not really? The axioms (for hyperreals) aren't much different from those of the reals. Yes, it's true that you need some strange constructions to justify that the algebra works as you'd expect. But many proofs in analysis become intuitive with this setup, and that surely aids pedagogy. Admittedly, you need to learn standard analysis anyway, since its tools are used in so many more areas. But I'd hardly call it ugly.
↑ comment by ChristianKl · 2020-09-30T14:40:10.011Z · LW(p) · GW(p)
So why do you think it is that math mostly doesn't get taught in a way where calculus is built on infinitely small numbers?
↑ comment by algon33 · 2020-09-30T16:04:31.283Z · LW(p) · GW(p)
Because of the shift in culture in mathematics, wherein the old proofs were considered unrigorous. Analysis à la Weierstrass put the old statements on firmer footing, everyone migrated there, and infinitesimals were left to languish until a transfer principle was proven to give them a rigorous foundation. But by that time, standard analysis had borne such great fruit that it was deeply intertwined with modern mathematics. And of course, there's been a trend against the infinitary and against the incomputable in the past century.
So there's both institutional inertia due to historical developments, as well as some philosophical objections which really boil down to whether you're fine with infinitary mathematics. I make no arguments concerning the latter; I just note that one can reject infinitary mathematics without believing it's ugly. Now, if you're saying not all infinitary mathematics is ugly, just the hyperreals, that's a different claim. I can get why one might think they're uglier than e.g. the complex numbers, but I don't get why they'd be ugly, period. May I ask why you think so?
↑ comment by ChristianKl · 2020-09-30T17:17:36.177Z · LW(p) · GW(p)
What do you consider "being fine with infinitary mathematics" to be, if it's not an aesthetic preference (and thus the word ugly would apply)?
↑ comment by Richard_Kennaway · 2020-09-30T16:50:36.364Z · LW(p) · GW(p)
Also, to use infinitesimals rigorously takes a fair amount of knowledge of mathematical logic, otherwise what works and what does not is just magic. Epsilon-delta proofs do not need any magic, nor any more logic than that needed to contend with mathematics at all.
↑ comment by algon33 · 2020-09-30T17:32:55.989Z · LW(p) · GW(p)
No, to understand why the transfer principle works requires a fair amount of knowledge of mathematical logic. It doesn't follow that you can't perform rigorous proofs once you've accepted it. Or am I missing something here?
↑ comment by Richard_Kennaway · 2020-10-01T08:14:28.036Z · LW(p) · GW(p)
If you don't understand why the transfer principle works, you would just be accepting it as magic. This is not rigorous.
↑ comment by algon33 · 2020-10-01T10:16:36.428Z · LW(p) · GW(p)
I still disagree. You can use Fermat's last theorem rigorously without understanding why it works. Same for the four colour theorem. And which mathematicians understand why we can classify finite simple groups the way we do? I'd bet fewer than one percent do. Little wonder, if the proof is 3 volumes long! My point is that there are many theorems a mathematician will use without rigorously knowing why they work. Oh sure, you can tell them a rough story outlining the ideas. But could they prove it themselves? Probably not, without a deep understanding of the area. Yet even without that understanding, they can use these theorems in formal proofs. They can get a machine to check it over.
Now, I admit that's unsatisfying. I agree that if they don't, then they don't have a rigorous understanding of the theorem. Eventually, problems will arise which they cannot resolve without understanding what they accepted as magic. But is that really so fatal a flaw for teaching students the hyperreals? One only needs a modest amount of logic, perhaps enough for a course or two, to understand why the transfer principle works. Which seems a pretty good investment, given how much model theory sheds light on what we take for granted.
Now, I suppose if you find infinitary mathematics ugly, then this is all beside the point. And unfortunately, there's not much I can say against that beyond the usual arguments and personal aesthetics.
↑ comment by Richard_Kennaway · 2020-10-01T13:04:00.546Z · LW(p) · GW(p)
You can understand what these theorems say without knowing how they were proved. But non-standard analysis requires a substantial amount of extra knowledge to even understand the transfer principle. In contrast, epsilon-delta requires no such sophistication.
↑ comment by vlad.proex · 2020-10-01T16:47:37.146Z · LW(p) · GW(p)
To stay with computer science analogies, this reminds me of the principle of abstraction. When you call an API, it sort of feels like magic. A task gets done, and you trust that it was done correctly, and that saves you the time of checking the code and rewriting it from scratch. "We have only to think out how this is to be done once, and forget then how it is done." (A. Turing, 1947).