The First Rung: Insights from 'Linear Algebra Done Right'

post by TurnTrout · 2018-04-22T05:23:49.024Z · LW · GW · 12 comments

Contents

  Foreword
    Tight Feedback Loops
  Linear Algebra Done Right
    1: Vector Spaces
    2: Finite-Dimensional Vector Spaces
    3: Linear Maps
      Matrix Redpilling
      Dual Maps
      Grueling Dualing 
    4: Polynomials
    5: Eigenvalues, Eigenvectors, and Invariant Subspaces
      Revisiting Material
      Diagonalizability
      Unclear Exercise
    6: Inner Product Spaces
    7: Operators on Inner Product Spaces
      Adjoints
      The Ghost Theorem
    8: Operators on Complex Vector Spaces
    9: Operators on Real Vector Spaces
    10: Trace and Determinant
    Final Verdict
  Forwards
    Timing
    Fluency
    Proofs
        Impatience
    Hiding Ignorance
      LessWrong
    Calculus
    Lost Calling

Foreword

Linear algebra, my old flame - how I missed you. At my undergraduate institution, linear algebra was my introduction to proof-based mathematics. There are people who shake hands, and then there are people who shake hands. You know the type - you grasp their hand, and they clamp down and pull you in, agitating so wildly you fear for the structural integrity of your joints. My first experience with proofs was an encounter of the latter variety.

I received my first homework grade, and I was not pleased with my performance. I promptly went to the library and vowed not to leave until I learned how to write proofs adequately. The hours passed, and (thankfully for my stomach) I got it. I didn't let up all semester. Immediately before the final exam, I was doing pushups in the hallway, high-fiving my friends, and watching the 'Michael Jordan Top 50 All Time Plays' video while visualizing myself doing that to the test. Do that to the test I did indeed.

This time around, the appropriately-acronymized LADR is the first step on my journey to attain a professional-grade mathematical skillset.

Tight Feedback Loops

In a (possibly maniacal) effort to ensure both mastery of the material and the maturation of my proof skillset, I did nearly every one of the 561 exercises provided. I skipped problems only when I was confident I wouldn't learn anything, or when they required calculus I didn't remember (and the payoff didn't seem worth the time spent relearning it shallowly now, as opposed to learning more calculus thoroughly later). If I could sketch a solid proof in my head, I wouldn't write anything down. Even then, I checked my answers using this site (additional solutions may be found here, although be warned that not all of them are correct).

I also sometimes elected to give myself small hints after being stuck on a problem for a while; the idea was to keep things at the difficulty sweet spot. Specifically, I'd spend 10-20 minutes working on a problem by myself; if I wasn't getting anywhere, I'd find a hint and then backpropagate the correct mental motion instead of what I had been trying to do. I think that focusing on where you went wrong, what insight you should have had, and in which direction you should have looked is more efficient than just reading solutions.

Over time, I needed fewer hints, even as problem difficulty increased.

My approach was in part motivated by the findings of Rohrer and Pashler:

Surprisingly little is known about how long-term retention is most efficiently achieved... Our results suggest that a single session devoted to the study of some material should continue long enough to ensure that mastery is achieved but that immediate further study of the same material is an inefficient use of time.

The point isn't to struggle per se - it's to improve and to win.

Linear Algebra Done Right

This book has been previously reviewed [LW · GW] by Nate Soares; as such, I'll spend time focusing on the concepts I found most difficult. Note that his review was for the second edition, while mine is for the third.

True to my vow in the last post [LW · GW], I have greatly improved my proof-babble; a sampling of my proofs can be found here.

If you zip through a page in less than an hour, you are probably going too fast.

Try me.

1: Vector Spaces

In which the author reviews complex numbers, vector spaces, and subspaces.

I kept having trouble parsing

For $f, g \in \mathbf{F}^S$, the sum $f + g \in \mathbf{F}^S$ is the function defined by $(f+g)(x) = f(x) + g(x)$ for all $x \in S$.

because my brain was insisting there was a type error in the function composition. I then had the stunning (and overdue) realization that my mental buckets for "set-theoretic functions" and "mathematical functions in general" should be merged.

That is, if you define $h := f + g$, then $h$ simply has the definition $h(x) = f(x) + g(x)$. There isn't "online computation"; the composite function simply has a different Platonic lookup table.
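A minimal sketch of that lookup-table view (the example functions below are my own, not the book's):

```python
# Pointwise sum of functions: (f + g)(x) is *defined* to be f(x) + g(x).
# The sum is just another function - a new "lookup table" - not some
# special online computation performed by the + symbol.

def f(x):
    return x + 1

def g(x):
    return 2 * x

def f_plus_g(x):
    # This is the entire definition of the new function.
    return f(x) + g(x)

assert f_plus_g(3) == 10  # f(3) + g(3) = 4 + 6
```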

2: Finite-Dimensional Vector Spaces

In which the author covers topics spanning linear independence, bases, and dimension.

3: Linear Maps

In which the author guides us through the fertile territory of linear maps, introducing null spaces, matrices, isomorphisms, product and quotient spaces, and dual bases.

So far our attention has focused on vector spaces. No one gets excited about vector spaces.

Matrix Redpilling

The author built up to matrix multiplication by repeatedly insinuating that linear maps are secretly just matrix multiplications, teaching you to see the true fabric of the territory you've been exploring. Very well done.

Look no further than here and here for an intuitive understanding of matrix multiplication.
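To make the redpill concrete, here's a small numpy sketch (maps chosen arbitrarily by me) showing that composing linear maps corresponds to multiplying their matrices:

```python
import numpy as np

# Matrices of two arbitrary linear maps on R^3 (standard basis).
A = np.array([[1., 2., 0.],
              [0., 1., 3.],
              [4., 0., 1.]])   # matrix of T
B = np.array([[2., 0., 1.],
              [1., 1., 0.],
              [0., 3., 1.]])   # matrix of S

v = np.array([1., -2., 5.])

# Applying T, then S, to a vector...
composed = B @ (A @ v)
# ...is the same as applying the single map whose matrix is BA.
product = (B @ A) @ v

assert np.allclose(composed, product)
```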

Dual Maps

If $T \in \mathcal{L}(V, W)$, then the dual map of $T$ is the linear map $T' \in \mathcal{L}(W', V')$ defined by $T'(\varphi) = \varphi \circ T$ for $\varphi \in W'$.

This StackExchange post both articulates and answers my initial confusion.
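One concrete way to see the dual map (my own illustration, not from the book): if a functional $\varphi \in W'$ is represented by a row vector, then $T'\varphi = \varphi \circ T$ is represented by that row vector times the matrix of $T$ - which is why the matrix of the dual map is the transpose of the matrix of $T$. A sketch:

```python
import numpy as np

A = np.array([[1., 2.],
              [0., 3.],
              [4., 1.]])          # matrix of T : R^2 -> R^3 (arbitrary)
phi_row = np.array([2., -1., 5.]) # a functional on R^3, as a row vector

def phi(w):
    return phi_row @ w            # phi(w)

def T_dual_phi(v):
    # (T' phi)(v) = phi(T v): a functional on R^2
    return phi(A @ v)

v = np.array([3., -2.])
# T' phi is represented by the row vector phi_row @ A, i.e. by A^T
# acting on phi's column representation.
assert np.isclose(T_dual_phi(v), (phi_row @ A) @ v)
```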

Grueling Dualing

The double dual space of $V$, denoted $V''$, is defined to be the dual space of $V'$. In other words, $V'' = (V')'$. Define $\Lambda: V \to V''$ by $(\Lambda v)(\varphi) = \varphi(v)$ for $v \in V$ and $\varphi \in V'$.

Stay with me, this is dualble.

So $\Lambda$ takes some $v \in V$ and returns the curried function $\Lambda v$. $\Lambda v$, being in $V''$, takes some $\varphi \in V'$ and returns some $\varphi(v) \in \mathbf{F}$. In other words, $\Lambda v$ lets you evaluate the space of evaluation functions with respect to the fixed $v$. That's it!
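In code, $\Lambda$ is literally currying; a quick sketch (names are mine):

```python
# Lam : V -> V''.  Lam(v) is the "evaluate-at-v" map on functionals.
def Lam(v):
    def evaluate_at_v(phi):   # an element of V'': eats a functional, returns a scalar
        return phi(v)
    return evaluate_at_v

# Two arbitrary functionals on R^2 (tuples standing in for vectors).
phi1 = lambda v: 3 * v[0] + v[1]
phi2 = lambda v: v[0] - 2 * v[1]

v = (2, 5)
assert Lam(v)(phi1) == phi1(v) == 11
assert Lam(v)(phi2) == phi2(v) == -8
```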

4: Polynomials

In which the author demystifies the quadratic formula, sharing with the reader those reagents used in its incantation.

Remarkably, mathematicians have proved that no formula exists for the zeros of polynomials of degree 5 or higher. But computers and calculators can use clever numerical methods to find good approximations to the zeros of any polynomial, even when exact zeros cannot be found.
For example, no one will ever be able to give an exact formula for a zero of the polynomial $p$ defined by $p(x) = x^5 - 5x^4 - 6x^3 + 17x^2 + 4x - 7$.

...

There are two cats where I live. Sometimes, I watch them meander around; it's fascinating to think how they go about their lives totally oblivious to the true nature of the world around them. The above incomputability result surprised me so much that I have begun to suspect that I too am a clueless cat (until I learn complex analysis; you'll excuse me for having a few other textbooks to study first).

Edit: daozaich writes [LW · GW] about why this isn't as surprising as it seems.
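The "clever numerical methods" half of the quote is easy to see in practice; here's a sketch using numpy.roots on an arbitrary quintic of my own choosing:

```python
import numpy as np

# Coefficients of an arbitrary degree-5 polynomial, highest power first:
# p(x) = x^5 - 3x^3 + x^2 + 2x - 1
coeffs = [1, 0, -3, 1, 2, -1]

zeros = np.roots(coeffs)   # numerical approximations to all five zeros
print(zeros)

# Sanity check: p evaluated at each approximate zero is numerically tiny,
# even though no radical formula for these zeros exists in general.
assert np.allclose(np.polyval(coeffs, zeros), 0, atol=1e-8)
```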

5: Eigenvalues, Eigenvectors, and Invariant Subspaces

In which the author uses the prefix 'eigen-' so much that it stops sounding like a word.

Revisiting Material

Before starting this book, I watched 3Blue1Brown's video on eigenvectors and came out with a vague "understanding". Rewatching it after reading Ch. 5.A, the geometric intuitions behind eigenvectors didn't seem like useful ways-to-remember an exotic math concept, they felt like a manifestation of how the world works. I knew what I was seeing from the hundreds of proofs I'd done up to that point.

Imagine being blind yet knowing the minute details of each object in your room; one day, a miracle treatment restores your eyesight in full. Imagine then seeing your room for the "first time".

Diagonalizability

Intuitively, the diagonalizability of some operator $T$ on a finite-dimensional vector space $V$ means you can partition $V$ (more precisely, express it as a direct sum) by the eigenspaces of $T$: $V = E(\lambda_1, T) \oplus \dots \oplus E(\lambda_m, T)$.

Another way to look at it is that diagonalization is the mutation of the basis vectors of $V$ so that each column of $\mathcal{M}(T)$ is one-hot; you then rearrange the columns (by relabeling the basis vectors) so that $\mathcal{M}(T)$ is diagonal.
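A concrete numpy sketch of that change-of-basis picture (the matrix is an arbitrary example of mine):

```python
import numpy as np

A = np.array([[4., 1.],
              [2., 3.]])              # operator in the standard basis

eigvals, eigvecs = np.linalg.eig(A)   # columns of eigvecs are eigenvectors

# Re-express the operator in the eigenvector basis: the matrix becomes
# diagonal, with the eigenvalues on the diagonal.
P = eigvecs
D = np.linalg.inv(P) @ A @ P

assert np.allclose(D, np.diag(eigvals))
```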

Unclear Exercise

On page 156, you'll be asked to verify that a matrix is diagonalizable with respect to a provided nonstandard basis. The phrasing of the exercise makes it seem trivial, but the book doesn't specify how to do this until Ch. 10. Furthermore, it isn't core conceptual material. Skip.

6: Inner Product Spaces

In which the author introduces inner products, orthonormal bases, the Cauchy-Schwarz inequality, and a neat solution to minimization problems using orthogonal complements.

7: Operators on Inner Product Spaces

In which the author lays out adjoint, self-adjoint, normal, and isometric operators, proves the (a) Spectral theorem, and blows my mind with the Polar and Singular Value Decompositions.

Adjoints

Consider the linear functional $v \mapsto \langle Tv, w \rangle$ for fixed $w \in W$; this is then a linear functional on $V$ for the chosen $w$. The adjoint produces the corresponding vector $T^*w \in V$: given fixed $w$, we now map $v$ to $\langle v, T^*w \rangle$, so that $\langle Tv, w \rangle = \langle v, T^*w \rangle$. The left-hand side is a linear functional on $V$ written using the inner product of $W$, and the right-hand side is the same functional written using the inner product of $V$.
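In coordinates (over $\mathbb{C}^n$ with the standard inner product) the adjoint is just the conjugate transpose; a quick numpy check on arbitrary data:

```python
import numpy as np

rng = np.random.default_rng(0)

# T : C^2 -> C^3, as a random complex matrix.
T = rng.standard_normal((3, 2)) + 1j * rng.standard_normal((3, 2))
T_adj = T.conj().T                    # the adjoint T* : C^3 -> C^2

v = rng.standard_normal(2) + 1j * rng.standard_normal(2)
w = rng.standard_normal(3) + 1j * rng.standard_normal(3)

def inner(x, y):
    # <x, y>, linear in the first slot (the book's convention).
    return np.vdot(y, x)

# Defining property of the adjoint: <Tv, w> = <v, T*w>.
assert np.isclose(inner(T @ v, w), inner(v, T_adj @ w))
```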

The Ghost Theorem

My brain was unreasonably excited for this chapter because I'd get to learn about "ghosts" (AKA the Spectral theorem). My conscious self-assurances to the contrary completely failed to dampen this ambient anticipation.

8: Operators on Complex Vector Spaces

In which generalized eigenvectors, nilpotent operators, characteristic and minimal polynomials, and the Jordan Form make an appearance, among others.

9: Operators on Real Vector Spaces

In which real vector spaces are complexified and real operators are brought up to speed with their complex counterparts.

10: Trace and Determinant

In which the curtain is finally pulled back.

We proved the basic results of linear algebra before introducing determinants in this final chapter. Although determinants have value as a research tool in more advanced subjects, they play little role in basic linear algebra (when the subject is *done right*).

Sassy partial title drop (emphasis mine).

Final Verdict

Overall, I really liked this book and its clean theoretical approach. By withholding the determinant and trace until the end, the book arrives at many properties in a natural, satisfying, and enlightening manner. The proofs were clean, and the writing was succinct (although I did miss the subtle wit of Russell and Norvig). This book positively, definitely belongs on the MIRI book list.

Forwards

Timing

This review follows the previous [LW · GW] by exactly four weeks; however, I was at CFAR for a week during that time, dedicated a few days to All of Statistics (my next review), and slowed myself considerably by doing five hundred proofs. If I were treating this as a normal textbook, I imagine it would have taken less than half the time.

The most exciting effect of diving into math like this is that when I don't understand a concept, I now eagerly look to the formalization for clarification (previously, I'd barely be able to track all the Greek).

Fluency

An interesting parallel between learning math and learning languages: when I started picking up French, the experience at first was basically always "ugh, now I have to look up 5 things to even have the vocabulary to ask how to turn on my computer". Eventually, it became natural to belt out et comment est-ce que je peux allumer mon ordi ? L'enfoiré refuse de fonctionner, comme d'habitude ; c'est grand temps que j'en achète un de plus ("and how do I turn on my computer? The bastard refuses to work, as usual; it's high time I bought another one"). No checking needed.

And so it went with proofs - "what techniques can I use to translate this statement into the answer" turned into "the proof feels like it's flowing out of my arm?!".

Proofs

I've noticed that when I successfully produce a non-trivial proof, it's nearly always when I have a strong understanding that this is how the world is. The proof is then just translating this understanding to math-ese, pounding away at the shell of the problem with every tool at my disposal to reach this truth.

Imagine a friend of yours fell under the ice. In one situation, you meander, blindfolded and half-deaf, with a vague idea of "I think they were this way?", trying different things and occasionally hearing faint pounding.

Now consider the situation in which you know where they are; it's then a matter of finding the right tools to smash the ice. You strike with everything you have, with every ounce of strength you possess; finally, you break your friend free.

Impatience

My most obvious remaining weak point with proofs is impatience. I have a strong intuition that this impulse is born of my programming experience. When I write my own code, I carefully consider pre- and post-conditions, expected use cases, and the context of the problem. When using an external library, though, things are different: if asked why it's appropriate for use in a (low-stakes) program, it's fine to provide only high-level intuitions.

Similarly, in the few situations in which I have had to prove a novel result, I have found myself being extremely cautious (and rightly so). However, when proving a known result, a strong desire to take shortcuts overtakes me. I'm going to have to keep ironing this out.

Hiding Ignorance

Another aspect of this journey which I greatly enjoy is the methodical elimination of deficiencies and weak points. In my deep learning class, I had great trouble remembering what an eigenvalue was - it was at this moment that I knew I had to get down to business. Working with a surface-level understanding yields superficial results.

I imagine I was not the only person who was somewhat confused. However, being the first to admit confusion feels low-status: "everyone else seems to be following along, so I better be quiet and figure this out on my own time." I've made a point of ignoring this reasoning and asking more questions, and I think it's paid off. Incidentally, everyone else seemed relieved that the question got asked.

LessWrong

I'd like to add that in these posts, I present a somewhat distorted perspective of my academic life; these weak points are the exception, not the norm (ahem). I focus on my weak points because I want to become stronger - to admit them is not necessarily to say "I am weak" (although this may be the case relative to the person I want to become).

Speaking from experience, I feel that this is intimidating to newcomers. The culture can appear highly critical; this has been discussed before. I hope to do my part through these very posts, in which I plainly admit "I forgot eigenvalues. I fixed it - and I'm better off for having done so."

Calculus

The calculus-based exercises in this book and in All of Statistics make me uncomfortable. In the spirit of not hiding ignorance, I'll admit it - I totally forgot how to integrate by parts, among other things. Although MIRI math is mostly discrete, I imagine that I'll still make a quick run through a calculus textbook in the near future.
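For the record, the rule in question is just

$$\int_a^b f(x)\,g'(x)\,dx \;=\; \big[f(x)\,g(x)\big]_a^b \;-\; \int_a^b f'(x)\,g(x)\,dx.$$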

I also find myself curious about real and complex analysis, but I suspect that's more of a luxury (given timelines). Maybe I'll learn it in my free time at some point.

Lost Calling

I have the distinct feeling of having been incredibly silly for many years, one reason being that I pretended I didn't love math. In high school, I did quite well (and was designated the outstanding mathematics student of my class) as a product of my passion for toying with math in my free time.

However, in college, I just wanted to learn computer science. I'd gloss over the low-level math (although I did do some Project Euler for fun). Instead, I preferred learning to find clever high-level solutions and build up an algorithm-centric problem-solving toolkit. Now that I've truly taken the plunge, the water is just so nice.

I'm sorry to have been away for so long.


If you are interested in working with me or others on the task of learning MIRI-relevant math, if you have a burning desire to knock the alignment problem down a peg - I would be more than happy to work with you. Messaging me [LW · GW] may also have the pleasant side effect of your receiving an invitation to the MIRIx Discord server.

For Ch. 8-10, I did a random sampling of 15% of the practice problems, as opposed to 100% (I was reaching steeply-diminishing returns for relevant gears-level gains).

Please let me know if there's a more appropriate linear-algebraic term for this.

Merely admitting ignorance is not virtuous:

The eighth virtue is humility. To be humble is to take specific actions in anticipation of your own errors. To confess your fallibility and then do nothing about it is not humble; it is boasting of your modesty. Who are most humble? Those who most skillfully prepare for the deepest and most catastrophic errors in their own beliefs and plans. Because this world contains many whose grasp of rationality is abysmal, beginning students of rationality win arguments and acquire an exaggerated view of their own abilities. But it is useless to be superior: Life is not graded on a curve. The best physicist in ancient Greece could not calculate the path of a falling apple. There is no guarantee that adequacy is possible given your hardest effort; therefore spare no thought for whether others are doing worse. If you compare yourself to others you will not see the biases that all humans share. To be human is to make ten thousand errors. No one in this world achieves perfection.

The virtue is in shedding ignorance:

The first virtue is curiosity. A burning itch to know is higher than a solemn vow to pursue truth. To feel the burning itch of curiosity requires both that you be ignorant, and that you desire to relinquish your ignorance. If in your heart you believe you already know, or if in your heart you do not wish to know, then your questioning will be purposeless and your skills without direction. Curiosity seeks to annihilate itself; there is no curiosity that does not want an answer. The glory of glorious mystery is to be solved, after which it ceases to be mystery. Be wary of those who speak of being open-minded and modestly confess their ignorance. There is a time to confess your ignorance and a time to relinquish your ignorance.
~ The Twelve Virtues of Rationality

12 comments


comment by TurnTrout · 2019-10-29T17:51:00.640Z · LW(p) · GW(p)

There are two cats where I live. Sometimes, I watch them meander around; it's fascinating to think how they go about their lives totally oblivious to the true nature of the world around them. The above incomputability result (ETA: the insolubility of the quintic) surprised me so much that I have begun to suspect that I too am a clueless cat (until I learn complex analysis; you'll excuse me for having a few other textbooks to study first).

So I finally got around to learning Galois theory / group theory, and I now understand this insolubility result (although my book stated some field theoretic results without proof). It's really, really cool, and I'm feeling a lot of things about it right now.

But also... 2018-Trout was just so adorably ignorant! Not only did I make the goof that daozaich pointed out [LW · GW], but that version of me also called the result an "incomputability" result!

You know the feeling you get when you find a drawing you made your mom when you were five, and it's full of adorable little spelling mistakes? Yeah, that's what I'm feeling right now.

comment by TheMajor · 2018-04-23T14:45:23.705Z · LW(p) · GW(p)

Interesting stuff!

I'm a math grad student and have been an assistant in (amongst other subjects) Linear Algebra courses for almost 5 years now. If you or anybody else here on LessWrong has questions on any math subject, hit me up with a message. While I do not have the time to be some sort of online tutor, I really love teaching; mathematics is very much my area of expertise, and LessWrong readers are (on average) above-average students with more passion for understanding than the average textbook aims for, so I definitely think that investing a non-zero amount of time in this is positive-sum for the community. I'd love to help where I can!

comment by philip_b (crabman) · 2018-04-23T13:49:42.592Z · LW(p) · GW(p)

How many hours did it take you to read the whole book and do all the exercises that you did? I am reading it too; so far I've spent somewhere between 12 and 22 hours, and I'm at the exercises for 2.A. Also, I recommend watching the Essence of Linear Algebra playlist (https://www.youtube.com/channel/UCYO_jab_esuFRV4b17AJtAw/playlists) to get (or remind yourself of) some geometric intuitions.

Replies from: TurnTrout
comment by TurnTrout · 2018-04-23T14:10:57.389Z · LW(p) · GW(p)

I spent 3 hours tops per set of ~20 exercises (allowing myself the hint schedule I detailed), but my overall time was somewhat lower than the 84 hours that implies. On average, my pace was one section + exercises / day.

The hint schedule meant that at first, I was pretty bad and needed help liberally, but that went away quickly. Some exercises took me half an hour or more, but for most I had a proof outline within a minute.

I also was familiar with the content of the first three chapters from my linear algebra course in college.

Hope this is useful!

comment by Dacyn · 2018-04-22T10:53:46.611Z · LW(p) · GW(p)

Typos:

"dualble" -> "dualable" (though to be fair this is not a real word so I guess it is subjective how you spell it)

"the book specify" -> "the book doesn't specify"

Regarding "one-hot", it seems like the linear algebra concept that you want is "an element of the standard basis of R^d". But it also seems that you are thinking about it in a combinatorial rather than linear algebra way, so I am not sure if you really want a linear algebra term for it.

Regarding the insolubility of the quintic, this is certainly an interesting result. But I've always thought it was a little odd to phrase this as saying that there is no "exact formula" for the root of a quintic polynomial. The theorem is just that there is no formula of a certain type (i.e. one built out of the standard four arithmetic operations plus integer roots). By analogy, there is no exact formula for the root of x^2 = 2 if by "exact formula" you mean an expression that just uses the standard four arithmetic operations. It seems to me that the fact that we can efficiently estimate the square root of 2 to arbitrary accuracy is exactly why it is reasonable to say that we understand the solution. But the root of an integer polynomial can be estimated to arbitrary accuracy just as efficiently as a square root can.

Replies from: daozaich, TurnTrout
comment by daozaich · 2018-04-22T19:44:13.457Z · LW(p) · GW(p)

Regarding insolubility of the quintic, I made a top level post with essentially the same point, because it deserves to be common knowledge, in full generality.

comment by TurnTrout · 2018-04-22T13:18:45.164Z · LW(p) · GW(p)

Regarding "dualble", I meant that entirely as a phonetic pun.

Thanks for the explanation of the quintic result! I suppose that’s slightly less mystic than my initial understanding.

comment by John_Maxwell (John_Maxwell_IV) · 2018-04-23T07:19:00.074Z · LW(p) · GW(p)

Fun fact: There's also a book called Linear Algebra Done Wrong. I kinda like it.

comment by [deleted] · 2018-04-23T04:50:24.730Z · LW(p) · GW(p)

Awesome!

I just started going through this textbook! I'm working my way through the proofs in chapter 1 right now, and I expect to progress through it as a supplement to the linear algebra course I'm currently taking.

comment by habryka (habryka4) · 2018-04-22T07:42:31.218Z · LW(p) · GW(p)

This is great! Linear Algebra Done Right is probably my favorite textbook, and this made me excited to read it again sometime soon.

comment by Hazard · 2018-04-23T02:18:11.803Z · LW(p) · GW(p)

Really love these reviews, as well as the general meta-notes on the approach to each bout of learning and how it went.

comment by philip_b (crabman) · 2018-11-27T22:16:04.008Z · LW(p) · GW(p)

What is the point of spending a section on dual maps, I wonder? Is the sole purpose to show that row rank equals column rank? If so, then a lot of my time spent on exercises on dual maps might be wasted.