Book Review: Mathematics for Computer Science (Suggestion for MIRI Research Guide) 2017-07-22T19:26:44.228Z
Writing Collaboratively 2016-06-18T19:47:04.884Z


Comment by richard_reitz on Moloch's Toolbox (1/2) · 2017-11-05T04:53:04.229Z · LW · GW

If I'm understanding you correctly, you don't think people who can't individually affect the equilibrium are evil? Scientists who would be outcompeted, and therefore unable to do science, if they failed to pursue a maximally impressive career seem an example of this. If they're good (in terms of both ability and alignment), there's some wiggle room in there for altruism when it's cheap, but if they err too far from having an impressive career, someone else, probably someone not making small career sacrifices for social benefit, gets the attention and funds instead, thereby reducing the total amount of good being done.

I'd be interested in some concrete examples of appealing to people's conscience getting us less-bad equilibria so I can better understand what you're getting at. Is the scientists who all resigned from the board of an Elsevier-owned journal and started their own an example?

Also interested in your thoughts on cases where we're in a suboptimal equilibrium that we're trying to get out of and there are 2+ optimal ones (in the sense that there's no other equilibrium which is a Pareto improvement), but each competes with the other. For instance, suppose it's several years ago and we agree that it's better that gay couples have the same right to legal marriage as straight couples, but the two better equilibria are (1) elimination of marriage as a legal institution and (2) extension of marriage to gay couples, which is good for gay couples who want the legal benefits of marriage and less good for those who don't want marriage but suffer from e.g. less favorable tax treatment. (I'd really like a better example for this, especially after the examples EY gave, but lack a large, responsive group of fb followers I can get to brainstorm examples for me.)

Comment by richard_reitz on LW 2.0 Strategic Overview · 2017-09-15T12:33:16.260Z · LW · GW

Testing effect.

(At this point, I should really know better than to trust myself to write anything at 1 in the morning.)

Comment by richard_reitz on LW 2.0 Strategic Overview · 2017-09-15T04:47:53.310Z · LW · GW

if you’ve read all of a sequence you get a small badge that you can choose to display right next to your username, which helps people navigate how much of the content of the page you are familiar with.

Idea: give sequence-writers the option to include quizzes because this (1) demonstrates a badgeholder actually understands what the badge indicates they understand (or, at least, are more likely to) and (2) leverages the testing effect.

I await the open beta eagerly.

Comment by richard_reitz on 2017 LessWrong Survey · 2017-09-15T04:13:03.852Z · LW · GW

I have taken the survey.

Comment by richard_reitz on I Want To Live In A Baugruppe · 2017-03-23T15:18:00.587Z · LW · GW

Extremely interested, would move anywhere rationalists would set one of these up.

Comment by richard_reitz on Writing Collaboratively · 2016-06-30T01:46:16.622Z · LW · GW

When I first read In Fire Forged, I really liked it, but saw things I could improve. So, I left some high-quality reviews (that is, reviews that demonstrated I somewhat knew what I was talking about) and then solicited the author. From there, networking (people you've collaborated with can collaborate with you).

Back-engineering, I'd tentatively suggest just posting somewhere with reasonable visibility that selects for writers you'd like to collaborate with, and then asking anyone interested to ping you. Alternatively, you could develop a relationship working on someone else's writing and then ask them to look at yours.

Comment by richard_reitz on Writing Collaboratively · 2016-06-30T01:22:00.759Z · LW · GW

You guys voted to develop Righteous Face Punching Style and add Kagome to your party. What do you need my help in decision-making for? (But, seriously, I probably shouldn't have taken the time to get caught up, much less actively participate. Fun read, though!)

Comment by richard_reitz on Writing Collaboratively · 2016-06-27T03:38:57.809Z · LW · GW

Ha! I give Lighting Up the Dark—also by Velorien—last pass editing.

Thanks for the rec. It looks really good.

Comment by richard_reitz on Writing Collaboratively · 2016-06-21T15:28:38.728Z · LW · GW

Do you have any examples of pieces that were written collaboratively?

In addition to In Fire Forged (in which I did first-round micro, in addition to contributing to worldbuilding), I give a last pass micro to Lighting Up the Dark (rational Naruto fanfic). I contributed a little to the Second Secular Sermon, although verse is really not my thing. I also have a partnership with Gram Stone that includes looking over each other's LW posts.

Do you keep a history of changes and discussions?

In Fire Forged has a Skype group, which keeps an archive of our discussion. Since Google Docs aren't the final publishing form, you can keep comments around, although in practice, once we've resolved an issue, the comment/suggestion usually goes away, so things don't get more cluttered. If you're interested, this is the Google Doc for this piece. But Google Docs doesn't keep a changelog; I have no desire to look back at one, and nobody I've talked to has indicated any desire to look back at one, so there is no history of changes.

How do you determine the direction of the story, is there a single leader who makes the big decisions, or is it more egalitarian?

I more fully discussed this here, but the tl;dr is that experience indicates a single-leader setup usually works best, and is also the only setup I've come across. That said, it's egalitarian in the sense that the primary author doesn't give any special consideration to the words they've written or the ideas they've had; in the end, you want the best ideas expressed by the best words on the page. I can't imagine the author who would pass up improvements to their creative baby just because they weren't the ones to come up with them.

Comment by richard_reitz on Writing Collaboratively · 2016-06-21T15:08:23.965Z · LW · GW

I'm sorry my title misled you.

(Since writing has trouble carrying intent: I genuinely feel bad that the title I chose caused you to believe something that wasn't true. I wish I was smart enough to have come up with a title that more precisely communicated what I was and wasn't discussing.)

This is perhaps a case of different projects being best served by different practices. There's certainly nothing stopping you from making a Google Doc where two (or more) authors have editing permission (as opposed to commenting permission).

But it's absolutely true that I'm writing from the perspective of having one primary author. This is because every piece I've worked on has had one primary author. Paul Graham writes: "Design usually has to be under the control of a single person to be any good." Indeed, almost all books of fiction I'm aware of were published by one author. A quick survey indicates that even most TV shows—which have writing staffs—usually have one author, although it's somewhat more common to have several people collaborate as equals to put together a story, which is then written up by one person. This was more or less how Buffy got written, as described by Jane Espenson.

It would certainly have been a major breakthrough if I'd discovered how to have multiple authors consistently work together to make good work. But that's above my pay grade; if a bunch of professional writers who have been in the business for decades have a strong preference for single authorship, I see that as a strong indication that I should generally prefer single authorship.

Also, if this piece comes off as having collaborators mostly making small edits, that's partly because it's true, but partly my own bias. Certainly, in In Fire Forged, we had one or two people who primarily worked with the author on macro-level issues (plot, characterization, thematic consistency, etc.), while I worked on the micro level. But it's also partly because it's true; outside of two fanfics (plus a poem), I mostly work on nonfiction blog posts. In these, the author knows what they want to say and has said it, and just needs to say it better. They may or may not benefit from a fact check (usually not, at least for the pieces I've worked on), but beyond that, most of the room for improvement comes in the form of little changes.

Lastly, I have to thank you. This is the first thing I've actually published. An earlier draft contained a section discussing what I've just said, but I cut it because I didn't think it contained material that was useful to either author or collaborator. Obviously, I was wrong! So, now I have a slightly better sense of when cutting stuff goes too far.

Comment by richard_reitz on Writing Collaboratively · 2016-06-18T21:56:47.857Z · LW · GW

Yes; Eliezer recommended it in an Author's Note, which is how I got involved.

It's also not dead so much as on a very extended hiatus. Our author started a computer game company and has been prohibitively busy for a while now. There's a blog with updates about the lack of updates.

Comment by richard_reitz on Writing Collaboratively · 2016-06-18T19:49:11.308Z · LW · GW

If anyone would like a collaborator for something they're writing for LessWrong or diaspora, please PM me. Anyone interested in being a collaborator can reply to this comment, thereby creating a collaborator repository.

Comment by richard_reitz on A Second Year of Spaced Repetition Software in the Classroom · 2016-05-02T05:38:30.324Z · LW · GW

my classes continue to perform with increasingly minimal note-taking and homework.

Which homework hasn't been assigned because of Anki? Remembering back to my high school English classes, the only homework I can remember was doing readings and writing essays. I can't see how either could be displaced by Anki.

Comment by richard_reitz on Lesswrong 2016 Survey · 2016-03-26T02:36:43.387Z · LW · GW

I have taken the survey.

Comment by richard_reitz on After Go, what games should be next for DeepMind? · 2016-03-10T23:03:32.998Z · LW · GW

And yet, humans currently have the edge in Brood War. Humans are probably doomed once StarCraft AIs get AlphaGo-level decision-making, but flawless micro—even on top of flawless* macro—won't help you if you only have zealots when your opponent does a muta switch. (Zealots can only attack ground and mutalisks fly, so zealots can't attack mutalisks; mutalisks are also faster than zealots.)

*By flawless, I mean macro doesn't falter because of micro elsewhere; often, even at the highest levels, players won't build new units because they're too busy controlling a big engagement or heavily multitasking (dropping at one point, defending a poke elsewhere, etc). If you look at it broadly, making the correct units is part of macro, but that's not what I'm talking about when I say flawless macro.

Comment by richard_reitz on Learning Mathematics in Context · 2016-01-29T13:35:53.095Z · LW · GW

Excellent points; "rigorous" would have been a better choice. I haven't yet had the time to study any computational fields, but I'm assuming the ones you list aren't built on the "fuzzy notions, and hand-waving" that Tao talks about.

I should also add I don't necessarily agree 100% with everything in Lockhart's Lament; I do think, however, that he does an excellent job of identifying problems in how secondary school math is taught and does a better job than I could of contrasting "follow the instructions" math with "real" math for a layperson.

Comment by richard_reitz on Learning Mathematics in Context · 2016-01-29T10:17:01.001Z · LW · GW

I once took a math course where the first homework assignment involved sending the professor an email that included what we wanted to learn in the course (this assignment was mostly for logistical reasons: the professor's email now autocompletes, eliminating a trivial inconvenience of emailing him questions and such; the professor has all our emails; etc.). I had trouble answering the question, since I was after learning unknown unknowns, which made it difficult to express what exactly I was looking to learn. Most mathematicians I've talked to agree that, more or less, what is taught in secondary school under the heading of "math" is not math, and it certainly bears only a passing resemblance to what mathematicians actually do. You are certainly correct that the thing labelled in secondary schools as "math" is probably better learned differently, but insofar as you're looking to learn the thing that mathematicians refer to as "math" (and the fact you're looking at Spivak's Calculus indicates you, in fact, are), looking at how to better learn the thing secondary schools refer to as "math" isn't actually helpful. So, let's try to get a better idea of what mathematicians refer to as math and then see what we can do.

The two best pieces I've read that really delve into the gap between secondary school "math" and mathematician's "math" are Lockhart's Lament and Terry Tao's Three Levels of Rigour. The common thread between them is that secondary school "math" involves computation, whereas mathematician's "math" is about proof. For whatever reason, computation is taught with little motivation, largely analogously to the "intolerably boring" approach to language acquisition; proof, on the other hand, is mostly taught by proving a bunch of things which, unlike computation, typically takes some degree of creativity, meaning it can't be taught in a rote manner. In general, a student of mathematics learns proofs by coming to accept a small set of highly general proof strategies (to prove a theorem of the form "if P then Q", assume P and derive Q); they first practice them on the simplest problems available (usually set theory) and then on progressively more complex problems. To continue Lockhart's analogy to music, this is somewhat like learning how to read the relevant clef for your instrument and then playing progressively more difficult music, starting with scales. [1] There's some amount of symbol-pushing, but most of the time, there's insight to be gleaned from it (although, sometimes, you just have to say "this is the correct result because the algebra says so", but this isn't overly common).

Proofs themselves are interesting creatures. In most schools, there's a "transition course" that takes aspiring math majors who have heretofore only done computation and trains them to write proofs; any proofy math book written for any other course just assumes this knowledge but, in my experience (both personally and working with other students), trying to make sense of what's going on in these books without familiarity with what makes a proof valid or not just doesn't work; it's not entirely unlike trying to understand a book on arithmetic that just assumes you understand what the + and * symbols mean. This transition course more or less teaches you to speak and understand a funny language mathematicians use to communicate why mathematical propositions are correct; without taking the time to learn this funny language, you can't really understand why the proof of a theorem actually does show the theorem is correct, nor will you be able to glean any insight as to why, on an intuitive level, the theorem is true (this is why I doubt you'd have much success trying to read Spivak, absent a transition course). After the transition course, this funny language becomes second nature, it's clear that the proofs after theorem statements, indeed, prove the theorems they claim to prove, and it's often possible, with a bit of work [2], to get an intuitive appreciation for why the theorem is true.

To summarize: the math I think you're looking to learn is proofy, not computational, in nature. This type of math is inherently impossible to learn in a rote manner; instead, you get to spend hours and hours by yourself trying to prove propositions [3], which isn't dull, but may take some practice to appreciate (as noted below, if you're at the right level, this activity should be flow-inducing). The first step is a transition course, which will teach you how to write proofs and to discriminate correct proofs from incorrect ones; there will probably be some set theory.

So, you want to transition; what's the best way to do it?

Well, super ideally, the best way is to have an experienced teacher explain what's going on, connecting the intuitive with the rigorous, available to answer questions. For most things mathematical, assuming a good book exists, I think the material can be learned entirely from a book, but this is an exception. That said, How to Prove It is highly rated, I had a good experience with it, and others I've recommended it to have done well. If you do decide to take this approach and have questions, PM me your email address and I'll do what I can.

  1. This analogy breaks down somewhat when you look at the arc musicians go through. The typical progression for musicians I know is (1) start playing in whatever grade the music program of the school they're attending starts, (2) focus mainly on ensemble (band, orchestra) playing, (3) after a high (>90%) attrition rate, we're left with three groups: those who are in it for easy credit (orchestra doesn't have homework!); those who practice a little, but are too busy or not interested enough to make a consistent effort; and those who are really serious. By the time they reach high school, everyone in this third group has private instructors and, if they're really serious about getting good, goes back and spends a lot of time practicing scales. Even at the highest level, musicians review scales, often daily, because they're the most fundamental thing: I once had the opportunity to ask Gloria dePasquale what the best way to improve general ability was, and she told me that there are 12 major scales and 36 minor scales and, IIRC, that she practices all of them every day. Getting back to math, there's a lot here that's not analogous. Most notably, there's no analogue to practicing scales, no fundamental-level thing that you can put large amounts of time into practicing and get general returns to mathematical ability: there's just proofs, and once you can tell a valid proof from an invalid proof, there's almost no value that comes from studying set theory proofs very closely. There's certainly an aesthetic sense that can be refined, but studying whatever proofs happen to be at or slightly above your current level is probably the most helpful (as in flow): if it's too easy, you're just bored and learn nothing (there's nothing there to learn), and if it's too hard, you get frustrated and still learn nothing (since you're unable to understand what's going on).

  2. "With a bit of work", used in a math text, means that a mathematically literate reader who has understood everything up until the phrase's invocation should be able to come up with the result themselves, and that it will require no real new insight; "with a bit of work, it can be shown that, for every positive integer n, (1 + 1/n)^n < e < (1 + 1/n)^(n+1)". This does not preclude needing to do several pages of scratch work or spending a few minutes trying various approaches until you figure out one that works; the tendency is for understatement. Relatedly, most math texts will often leave proofs that require no novel insights or weird tricks as exercises for the reader. In Linear Algebra Done Right, for instance, Axler will often state a theorem followed by "as you should verify", which should require some writing on the reader's part; he explicitly spells this out in the preface, but this is standard in every math text I've read (and I only bother reading the best ones). You cannot read mathematics like a novel; as Axler notes, it can often take over an hour to work through a single page of text.

  3. Most math books present definitions, state theorems, and give proofs. In general, you definitely want to spend a bit of time pondering definitions: noticing why they're correct/how they match your intuition, and seeing why other definitions weren't used. When you come to a theorem, you should always take a few minutes to try to prove it before reading the book's proof. If you succeed, you'll probably learn something about how to write proofs better by comparing what you have to what the book has, and if you fail, you'll be better acquainted with the problem and thus have more of an idea as to why the book's doing what it's doing; it's just an empirical result (which I read ages ago and cannot find) that you'll understand a theorem better by trying to prove it yourself, successful or not. It's also good practice. There's some room for Anki (I make cards for definitions—word on front, definition on back—and theorems—for which reviews consist of outlining enough of a proof that I'm confident I could write it out fully if I so desired), but I spend the vast majority of my time trying to prove things.
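Tangentially, the example inequality quoted in footnote 2 is easy to sanity-check numerically before attempting the proof; a minimal Python sketch (my own, purely illustrative):

```python
import math

def bounds(n):
    """Lower and upper bounds on e from footnote 2's inequality."""
    return (1 + 1 / n) ** n, (1 + 1 / n) ** (n + 1)

# Check (1 + 1/n)^n < e < (1 + 1/n)^(n+1) for the first 10,000 n.
for n in range(1, 10_000):
    lower, upper = bounds(n)
    assert lower < math.e < upper
```

Of course, a finite check is no substitute for the "bit of work" the text asks for; it just builds confidence that the statement is worth proving.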

Comment by richard_reitz on Starting University Advice Repository · 2015-12-04T12:52:26.632Z · LW · GW

It has happened more than once that a professor assigned a textbook, which I bought, only to say in the first class that the only reason they assigned a textbook is that they were required to, and that they would never use it. Holding off on buying textbooks until after the first class (or, I guess, emailing the professor to ask if they plan on using the textbook) would have saved me several hundred dollars. (Having textbooks to study from is nice—they are, to me, the most efficient way of getting up to speed in math or science—but the ones professors assign because they need to put something down tend not to be the best ones.)

Comment by richard_reitz on Open thread, Oct. 5 - Oct. 11, 2015 · 2015-10-07T12:45:18.415Z · LW · GW

Lemma: sum of the degrees of the nodes is twice the number of edges.

Proof: We proceed by induction on the number of edges. If a graph has 0 edges, the sum of the degrees of the nodes is 0 = 2(0). Now, by way of induction, assume that, for all graphs with n edges, the sum of the degrees of the nodes is 2n; we wish to show that, for all graphs with n+1 edges, the sum of the degrees of the nodes is 2(n+1). Given a graph with n+1 edges, remove any one edge: the resulting graph has n edges, so by hypothesis its degree sum is 2n. Restoring the edge increases the degree of each of its two endpoints by 1, so the sum of the degrees of the nodes is (2n)+2 = 2(n+1). ∎

The theorem follows as a corollary.
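As a quick sanity check (not a substitute for the proof), the lemma is also easy to verify empirically; a minimal Python sketch, my own and purely illustrative:

```python
import itertools
import random

def degree_sum(n_nodes, edges):
    """Sum of node degrees; each edge contributes 1 to each endpoint."""
    deg = [0] * n_nodes
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return sum(deg)

# Spot-check the lemma on random simple graphs:
# the sum of the degrees should always be twice the number of edges.
random.seed(0)
for _ in range(100):
    n = random.randint(2, 10)
    possible = list(itertools.combinations(range(n), 2))
    edges = random.sample(possible, random.randint(0, len(possible)))
    assert degree_sum(n, edges) == 2 * len(edges)
```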

If you want practice proving things and haven't had much experience so far, I'd recommend Mathematics for Computer Science, a textbook from MIT and distributed under a free license, along with the associated video lectures *. To use Terry Tao's words, Sipser is writing at both level 1 and 3: he's giving arguments an experienced mathematician is capable of filling in the details to form a rigorous argument, but also doing so in such a way that a level 1 mathematician can follow along. Critically, however, from what I understand from reading Sipser's preface, he's definitely not writing a book to move level 1 mathematicians to level 2, which is a primary goal of the MIT book. If you're looking to prove things because you haven't done it much before, I infer you're essentially looking to transition from level 1 to 2, hence the recommendation.

A particular technique I picked up from the MIT book, which I used here, was that, for inductive proofs, it's often easier to prove a stronger theorem, since it gives you stronger assumptions in the inductive step.

PM me if you want someone to look over your solutions (either for Sipser or the MIT book). In the general case, I'm a fan of learning from textbooks and believe that working things out for yourself without being helped by an instructor makes you stronger, but I'm also convinced that you need feedback from a human when you're first learning how to prove things.

* The lectures follow an old version of the book, which is ~350 pages shorter and, crucially, lacks exercises.

Comment by richard_reitz on Two Growth Curves · 2015-10-02T12:18:52.773Z · LW · GW

It helps to explicitly visualize people who I perceive as being skilled in X failing at it over and over again

Some of the greatest value I've gotten out of attending math lectures comes from seeing math Ph.Ds (particularly good ones) make mistakes or even forget exactly how a proof works and have to dismiss class early. It never happened often, but just often enough to keep me from getting discouraged.

Comment by richard_reitz on Open thread, Sep. 28 - Oct. 4, 2015 · 2015-09-30T08:50:27.569Z · LW · GW

Paul Graham writes that studying fields with hard, solved problems (eg mathematics) is useful, because it gives you practice solving hard problems, and the approaches and habits of mind that you develop solving those problems are useful when you set out to tackle new (technical) problems. This claim seems at least plausible to me and seems to line up with my personal experience, but you seem like a person who might know why I shouldn't believe this, so I ask: is there any reason to doubt that the problem-solving approaches and habits of mind I develop studying mathematics will help me as I run into novel technical problems?

Comment by richard_reitz on Open thread, Sep. 28 - Oct. 4, 2015 · 2015-09-30T08:15:24.788Z · LW · GW

If you're after feedback-for-understanding, providing a student with a list of questions they got wrong and a good solutions manual (which you only have to write once) works most of the time (my guess is around 90% of the time, but I have low confidence in my estimates because I'm capable of successfully working through entire textbooks' worth of material and needing no human feedback, which I'm told is not often the case). Doing this should be more effective than having the error explained outright a la generation effect.

Another interesting result is that the best feedback for fostering understanding often comes not from experts, who have such a deep degree of understanding and automaticity that it impairs their ability to simulate and communicate with minds struggling with new material, but from students who just learned the material. There's a risk of students who believe the right thing for the wrong reason propagating their misunderstanding, but I think that pairing up a student who's struggling with some concept (i.e., throwing a solutions manual at them hasn't helped them bridge the conceptual gap that caused them to get the question wrong) with a student who understands it is often helpful. IIRC, Sal Khan described using this technique with some success in his book; a friend/mentor who teaches secondary math and keeps up with the literature tells me this works; and I've used this basic technique doing an enrichment afterschool program for the local Mathcounts team after the season had ended and can only describe its efficacy as "definitely witchcraft".

I think there's a place for graders to give detailed feedback to bad answers, but most of the time, it's better to force students to do the work themselves and locate their own errors/conceptual gaps, and in most of the remaining cases, to pawn off the responsibility to students (this could be construed as teachers being lazy, but it's also what, to my knowledge, produces the best learning outcomes). Since detailed feedback is only desirable after two rounds of other approaches that (in my deeply nonrepresentative experience) usually work, I don't think it makes sense to produce detailed feedback to every wrong answer.

Then again, I don't fully understand what context you're thinking in. In my original post, I was thinking about purely diagnostic math tests given to postsecondary students for employers that wouldn't so much as tell students which questions they got wrong, along the lines of the Royal Statistical Society's Graduate Diploma (five three-hour tests which grant a credential equivalent to a "good UK honours degree"). In writing this, I'm mostly imagining standardized math tests for secondary students in America (which, I'm given to understand, already have written components), which currently don't give per-question feedback, but changing that is much less of a pipe dream than creating tests that effectively test understanding. Come to think of it, I think the above approach applies even better to classroom instructors giving their own tests, at either the secondary or postsecondary level.

Tangentially related: the best professor I ever had would type 3–4 pages of general commentary (common errors and why they were wrong and how to do them better, as well as things the class did well) for the class after every problem set and test, generally by the next class. I found this commentary was extraordinarily helpful, not just because of feedback, but because (a) it helped dispel the misperception that everyone else understood everything and I was struggling because I was stupid, (b) taught us to discriminate between bad, mediocre, and good work, and (c) comments like "most of you did [x], which was suboptimal because of [y], but one of you did [z], which takes a bit more work but is a better approach because [~y]" really drove me to not do the minimum amount of work to get an answer when I could do a bit more work to get a stronger solution. (The course was in numerical methods so, as an example, we once had a problem where we had to use some technique where error exploded (I've now forgotten since I didn't have Anki back then) to locate a typo in some numeric data. A sufficient answer would have been to identify the incorrect entry; a stronger answer was to identify the incorrect entry, figure out the error (two digits typed in the wrong order), and demonstrate that fixing the error caused explosions to not happen.)

Comment by richard_reitz on Open thread, Sep. 28 - Oct. 4, 2015 · 2015-09-30T07:12:15.241Z · LW · GW

If we assume that the questions are designed such that a student can answer them upon initial exposure if and only if they deeply understand the material, then the question of identifying graders turns into the much easier question of identifying people who can discriminate between valid and invalid answers. I'm told that being able to discriminate between valid and invalid responses is a necessary condition for subject expertise, so anyone who's a relevant expert works. One way to demonstrate expertise is by building something that requires expertise. In an extreme example, I'm confident that Grigori Perelman understands topology because he proved the Poincare conjecture, and, for similar reasons, I'm (mostly) confident that Ph.Ds are experts. If we have well-designed tests, we can set the set of people qualified to grade tests as "has built something requiring expertise or has passed a well-designed test graded by someone already in this set."

Comment by richard_reitz on Open thread, Sep. 28 - Oct. 4, 2015 · 2015-09-28T13:59:31.999Z · LW · GW

It seems conventional wisdom that tests are generally gameable in the sense that an (most?) effective way to produce the best scores involves teaching password guessing rather than actually learning material deeply, i.e. such that the student can use it in novel and useful ways. Indeed, I think this is the case for many (most, even) tests, but also think it possible to write tests that are most easily passed by learning the material deeply. In particular, I don't see how to game questions like "state, prove, and provide an intuitive justification for Pascal's combinatorial identity" or "Under what conditions does f(x) = ax^3 + bx^2 + cx + d have only one critical point?'', but that's more a statement about my mind than the gameability of tests. I would greatly appreciate learning how a test consisting of such questions could be gamed, thereby unlearning an untrue thing; and if no one here can (or, at least, is willing to take the time to) explain how such a thing could be done, well, that's useful to know, too.
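For concreteness, here's my own sketch of what the second sample question is after (assuming a ≠ 0, and counting a double root of f′ as one critical point); a password-guesser could memorize the final condition, but reproducing the derivation requires actually understanding it:

```latex
% Critical points of f(x) = ax^3 + bx^2 + cx + d with a \neq 0
% are the real roots of the quadratic f'.
f'(x) = 3ax^2 + 2bx + c, \qquad
\Delta_{f'} = (2b)^2 - 4(3a)c = 4\left(b^2 - 3ac\right).
% Exactly one critical point iff f' has a double root:
f \text{ has exactly one critical point} \iff b^2 = 3ac
\quad (\text{two if } b^2 > 3ac,\ \text{none if } b^2 < 3ac).
```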

Comment by richard_reitz on Open Thread - Aug 24 - Aug 30 · 2015-08-28T14:02:39.197Z · LW · GW

Trying to find the Oxford livestream, I happened across the Saturday Afternoon video.

...And, now it's private.

Comment by richard_reitz on Open Thread - Aug 24 - Aug 30 · 2015-08-24T18:25:44.003Z · LW · GW

For those interested in further reading: Robin Hanson's take, and a popularly-written book.

Comment by richard_reitz on Crazy Ideas Thread, Aug. 2015 · 2015-08-12T12:24:39.655Z · LW · GW

See Appendix B here and a long, rambly, unproofread fb post I don't entirely agree with (it's a stream-of-consciousness, get-an-unrefined-idea-on-paper-so-it-can-get-revised thing) here.

Comment by richard_reitz on A Year of Spaced Repetition Software in the Classroom · 2015-07-05T18:23:24.492Z · LW · GW

In my high school career, I took precisely one non-honors/AP course when alternatives were present. Recalling my classmates... yeah, I'm now as skeptical as you are.

(I had successfully repressed those memories until now. Thanks so much for the reminder ;)

Any chance your success might influence your colleagues?

Comment by richard_reitz on A Year of Spaced Repetition Software in the Classroom · 2015-07-05T09:49:36.889Z · LW · GW

Before we started using SRS I tried to sell my students on it with a heartfelt, over-prepared 20 minute presentation on how it works and the superpowers to be gained from it. It might have been a waste of time. It might have changed someone's life. Hard to say.

I'm less skeptical. You say that you got a few students to use Anki which, while probably not life-changing, is probably significantly life-impacting. If my tenth grade English teacher had introduced Anki to me... well, right now, I'm reteaching myself introductory biology (5 on the AP exam), introductory chemistry (5 on the AP exam), and introductory psychology (A in the college course) because, lacking Anki, I forgot the content of each of those courses. I obviously don't know everything you do in your classroom, but it's entirely plausible that, rather than being a waste of time, introducing your students to Anki might have been (on average) the most impactful 20 minutes of teaching you did all year; you just may not see all the benefit in your classroom.

Comment by richard_reitz on Supporting Effective Altruism through spreading rationality · 2015-06-14T22:34:00.237Z · LW · GW

I'm not entirely sure who the audience of this letter is (I'm given to understand "effective altruists" is a pretty heterogeneous group). This affects how your letter should look so much that I can't give much object-level feedback. For instance, it matters how much of your audience has pre-existing familiarity with things like raising the sanity waterline and rationality as a common interest across causes; if most of them lack this familiarity, I expect they'll read your first sentence, be unable to bridge an inferential gap, and stop reading.

Ideally, I'd like to know how exactly this letter is getting to its recipients: are you posting on EA forum or mailing it to anyone who's donated to GiveWell?

Comment by richard_reitz on June 2015 Media Thread · 2015-06-01T22:14:05.045Z · LW · GW

Introductory discrete math textbook (pdf) courtesy of MIT. I prefer it to Rosen, which is currently recommended in the MIRI research guide, although I think there exist students who would do better with Rosen's book.

(How to tell which book you should choose? Well, since this one is Creative Commons, and therefore free, I'd try this one. If you find it's not saying enough words per theorem, try Rosen. If you think it's saying too many words per theorem, try these lecture notes. A recommendation to LW's list of best textbooks is forthcoming, which will contain a complete discussion.)

An earlier version of the book corresponds to these video lectures, which I find to be excellent, as far as lectures go.

Comment by richard_reitz on Learning Optimization · 2015-05-01T00:23:32.818Z · LW · GW

As another person who's used Anki for quite some time (~2 years), my experience agrees with eeuuah's. I would also add exceptions to "just Google it."

  1. It's easier to maintain knowledge than to reacquire it. The prototypical example here is tying a tie. Having a card that says "tie a four-in-hand knot", and having to do that occasionally, turns out to be a lot easier than Googling how to tie a tie, especially if you do it infrequently enough that you need to re-learn it every time.

  2. You need to maintain working memory. The prototypical example here is math. Sure, I can look up the definition of an affine subset, but if I'm in the middle of a proof and I need to prove X is an affine subset of V and then need to look up the definition of affine subset, then I suffer a break in my working memory, which sets me back quite a bit.

  3. You need to remember that the fact exists. The prototypical example here is theorems. Being able to Google the Law of Total Probability doesn't help if I don't remember that it exists, and it doesn't tell me when I can apply it. Having an Anki card for Law of Total Probability does both these things.

  4. You need knowledge in a context where you can't use Google. The prototypical example here is school. Even outside of school, though, there are situations where it just won't do to pull out your phone to Google something.

Comment by richard_reitz on Learning Optimization · 2015-04-30T22:18:19.470Z · LW · GW

White noise is fine; the irrelevant sound effect operates on anything that sounds like it may be human speech, which turns out to be any sort of fluctuating tone.

Comment by richard_reitz on Learning Optimization · 2015-04-30T03:05:35.726Z · LW · GW

It has been requested that I post my own take on efficient learning. As I spend half a page describing, this is not yet ready for publishing, but I'm putting it out there because there may be (great) benefit to be had. After all, there is low-hanging fruit if you're willing to abandon traditional methods: simply doing practice problems in a different order may improve your test score by 40 points.

Comment by richard_reitz on The Best Textbooks on Every Subject · 2015-03-19T22:31:18.570Z · LW · GW

"Baby Rudin" refers to "Principles of Mathematical Analysis", not "Real and Complex Analysis" (as was currently listed up top.) (Source)

Comment by richard_reitz on Book Review: Linear Algebra Done Right (MIRI course list) · 2015-03-11T18:34:06.042Z · LW · GW

Since this review, Axler has released a third edition. The new edition contains substantial changes (i.e. it's not the same book being released as the "n+1 edition"): though there's little new material, exercises appear at the end of every section instead of at the end of every chapter, and there are many more examples given in the body of the text (a longer list of changes can be found on Dr. Axler's website). I feel these revisions are significant improvements from a pedagogical perspective, as they give the reader more opportunity to practice prerequisite skills before learning the next thing. The changes also lower the requisite mathematical maturity, which is a good thing (insofar as it makes the book more accessible), although it won't push the reader to develop mathematical maturity as much. Overall: the third edition came out when I was halfway through the second edition, and I felt the improvements merited switching books.

Comment by richard_reitz on Open thread, Mar. 2 - Mar. 8, 2015 · 2015-03-03T04:32:48.878Z · LW · GW

Turns out you're not the only one who wants to know this. Seems your best bet is to use C-S-v to paste raw text and then format it in the article editor.

Comment by richard_reitz on Stupid Questions January 2015 · 2015-01-03T04:41:01.635Z · LW · GW

Yeah. I've taught myself several courses just from textbooks, with much more success than in traditional setups that come with individual attention. I am probably unusual in this regard and should probably typical-mind-fallacy less.

However, I will nitpick a bit. While most textbooks won't quite have every answer to every question a student could formulate whilst reading it (although the good ones come very close), answers to these questions are typically 30 seconds away, either on Wikipedia or Google. Point about the importance of having people to talk to still stands.

Also, some textbooks (e.g. the AoPS books) have hints for when a student gets stuck on a problem. Point about the importance of having people to help students when they get stuck still stands, although I believe the people best-suited to do this are their classmates; by happy coincidence, these people don't cost educational organizations anything.

I'm tinkering with a system in which a professor, instead of lecturing, has it as their job to give each of 20 graduate students an hour a week of one-on-one attention (you know, the useful type of individual attention), which the graduate student is expected to prepare for extensively. Similarly, each graduate student is tasked with giving undergraduates 1 hour/week of individual attention. This maintains a professor:student ratio of 1:200 (so MIT needs a grand total of... 57 professors), doesn't overly burden the mentors, and gives the students much more quality individual attention than I sense they're currently getting. (Also, I believe that 1 hour of a grad student's time is going to be more helpful to a student than 1 hour of a professor's time. Graduate students haven't become so well-trained in their field that they're no longer able to simulate a non-understanding undergrad in their head (an inability Dr. Mazur claims is shared among lecturers), and I expect there's benefit from shrinking the age/culture gap. Also, there's no need to worry about appearing to be the class idiot in front of the person assigning your grade, who might not give you the benefit of the doubt on account of your being the class idiot.) (Also, it has not escaped my attention that this falls apart at schools that are small or don't have graduate students. And there are other problems. Just an idea I've had floating around that may be enough in the right direction to effect a positive change.)
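The arithmetic behind the 57-professor figure, sketched out. The 10-undergrads-per-grad-student number and the enrollment figure are my own assumptions (the comment only fixes 20 grad students per professor):

```python
# Mentoring-pyramid arithmetic. GRADS_PER_PROF comes from the comment above;
# UNDERGRADS_PER_GRAD and mit_students are assumed figures added here to make
# the stated ratio and professor count reproducible.
GRADS_PER_PROF = 20
UNDERGRADS_PER_GRAD = 10  # assumption

students_per_prof = GRADS_PER_PROF * UNDERGRADS_PER_GRAD  # 200 students/prof
mit_students = 11_300  # rough assumed enrollment
profs_needed = -(-mit_students // students_per_prof)  # ceiling division
```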

Comment by richard_reitz on Stupid Questions January 2015 · 2015-01-03T04:26:01.711Z · LW · GW

The argument goes "paying 20k camera-people for one year can replace 2M full-time-equivalent jobs next year, which can then go into something more useful without changing anything else (1). Of course, once you're going to do that, you'd do well to look into what elements of anything else could be changed to make it even more awesome."

If we optimize properly, I believe we wind up open-sourcing textbooks, somewhat like Linux. We have a core textbook, which has received enough feedback to ensure that everything is explained well enough that students generally don't come away with misconceptions, but because it's open source, every time you need to write for a particular audience, you have something to work from. LaTeX also supports comments, which makes it easy to include nonconventional perspectives for interested students (i.e. the ones who really need them).

But, yeah, pooling resources. Definitely something we should do more of and WHY HASN'T THE FREE MARKET SOLVED THIS 10 YEARS AGO?

(1) Fermi estimate is as follows: Cursory search indicates Harvard offers a bit over 3k undergraduate classes. Round it up to 5k to include secondary school and the few undergraduate courses not offered at Harvard (for instance, I can't find an equivalent to 8.012.) Multiply by 4 for different levels, and we arrive at 20k camera-people needed to tape all these courses. (It's actually less than that, since most courses are one semester.)

Cursory Googling indicates there are 3.7M teachers in America; adding in other English-speaking countries and eliminating primary- and graduate-level teachers should bring you to 4M teachers (I'm guessing that we add more teachers from English-speaking countries than we lose by not considering primary- and graduate-level teachers, since most classes are at these levels). Assume that half their teaching job is replaceable by the videos we've created, and we've freed up the equivalent of 2M full-time jobs.

This is very much a Fermi estimate, but I feel I was liberal enough with the camera-people portion (we're only hiring them a few hours a week!) to say that the cost of getting high-quality video of all secondary and undergraduate courses is 1% of the savings it should theoretically yield every year in the future. This upper limit goes down once we start writing textbooks instead of taping lectures, especially since most secondary and undergraduate courses already have very good textbooks to work from.
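The estimate above, spelled out as arithmetic (same rough inputs as the comment, so treat the outputs as order-of-magnitude only):

```python
# Fermi estimate: camera-people needed vs. teaching jobs freed.
harvard_courses = 3_000      # cursory search: Harvard's undergrad offerings
all_courses = 5_000          # rounded up for secondary school, gaps like 8.012
levels = 4                   # tape each course at several different levels
camera_people = all_courses * levels          # people needed for one year

us_teachers = 3_700_000
teachers = 4_000_000         # plus other English-speaking countries
replaceable_fraction = 0.5   # assume half the teaching job is replaceable
freed_jobs = int(teachers * replaceable_fraction)

cost_to_savings_ratio = camera_people / freed_jobs  # one-time cost vs. yearly savings
```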

Comment by richard_reitz on Stupid Questions January 2015 · 2015-01-02T17:45:23.427Z · LW · GW

There are two problems here. First, we have duplication of labor, in that we have something like 1% of the population doing essentially the same task, even though the output is fairly straightforward to reproduce and distribute en masse after it's been done once. This encompasses things like lesson plans, lectures, and supplementary materials (e.g. a sheet of practice problems).

This leads into the second problem, which is a resulting quality issue: if you have a large population of diverse talent doing the same task, you expect it to form some sort of a bell curve. As noted above, we can take any lecture, tape it, and broadcast it en masse fairly easily. When we choose a system where each student is subjected to their instructor's particular lecture, a relatively small portion of them get an excellent lecture, a very large portion get an average lecture (rather than an excellent lecture), and a relatively small portion get an execrable lecture (rather than an excellent lecture). If you're really ambitious, you could even get the top, say, ten lecturers together and have them collaborate to make a super-lecture, and then get feedback on that particular unit, so they can improve the superlecture into a super-duperlecture.

(IMO, this is still a suboptimal way to do things. Try that process on textbooks (which are much easier to write collaboratively), and instead of getting feedback on hour-long chunks, get feedback on section-sized chunks (which, depending on the subject, can be something like one-tenth the size). A good textbook is also cheaper to write, cheaper to distribute, more updateable, and better didactic material to begin with.)

It's worth noting that there are still a few wrinkles. Most importantly, there's really no such thing as a "best" lecture, lesson plan, problem set, or textbook; the "goodness" depends not just on the lecture's content but also on the intended audience. Think of this as a calibration issue. For instance:

Last I checked, MIT uses Sadava as their introductory biology textbook. If you dig around the reviews, you will find endorsements of another introductory biology book, by Campbell, that claim it's "SO much easier to understand. It's better organized, more clearly written". When I found myself needing to relearn introductory biology (this time with Anki so I actually retain the knowledge), I tried Campbell, since that's what my high school used, but gave up not halfway through the first chapter, frustrated by the difficulty I had understanding, the poor organization, and the unclear writing; I find Sadava, however, to be much easier to understand, better organized, and more clearly written. Is the quoted reviewer lying, perhaps paid off by Big Textbooks? Perhaps, but a much better explanation is that Sadava is more technical; it's much closer to the "definition-theorem-proof" feel of a math text. This makes it a fantastic text if you're most students at MIT (or a typical LWer), but much less so if you're in the other 99% of the population. This also solves the calibration problem: write two (or more) supertextbooks.

(This also neatly explains why MIT sometimes seems like the only school that uses good textbooks and why SICP only has 3.5 stars on Amazon.)

A second wrinkle is individual attention, which I tend to be dismissive of (if the textbook is good enough, you shouldn't need any individual attention! And it's not like the current education system, with its one-way lectures, is very good at giving very much individual attention), but if we're optimizing education, there probably is more individual attention given to every student. However, because of reasons, I suspect that most of it should come from students in the same class, not staff. Also, it belongs after the reading.

A third wrinkle is a narrowing of perspectives. In any particular domain, there's usually several approaches to solving problems, often coming from different ways of looking at it. In the current system, if you wind up on a team and come across a seemingly intractable problem, there's a good chance that someone else has happened across a nonstandard approach that makes the problem very easy. If we standardize everything, we lose this. This is somewhat mitigated by the solution to the calibration problem, wherein people are going to be reading different texts with different approaches because they're different people, but we still kind of expect most mathematicians to learn their analysis from super!Rudin, meaning that they all lack some trick that Pugh mentions. The best solution I have is to have students learn in the highly standardized manner first and, once they have a firm grasp on that, expose them to nonstandard methods (according to my Memory text, this is an effective way of increasing transfer of learning).

Comment by richard_reitz on Memory Improvement: Mnemonics, Tools, or Books on the Topic? · 2014-12-07T05:35:35.376Z · LW · GW

I'll give you that nutrition/exercise is very high on the list of things to do to optimize memory, but I'm skeptical that it's more important than mnemonics.

Personally, movement from fairly wretched nutrition/exercise to Lifestyle Interventions to Improve Longevity/Optimal Exercise-compliant nutrition/exercise has helped lots and lots, but (for the limited cases it applies), Method of Loci helped more.

Comment by richard_reitz on Memory Improvement: Mnemonics, Tools, or Books on the Topic? · 2014-11-22T15:50:57.850Z · LW · GW

Nootropics Depot. If you dig around the comments of the Reddit link, you'll find that it's the same one as the first one in the OP there.

Comment by richard_reitz on Memory Improvement: Mnemonics, Tools, or Books on the Topic? · 2014-11-22T03:19:07.787Z · LW · GW

Brienne Strohl

Comment by richard_reitz on Memory Improvement: Mnemonics, Tools, or Books on the Topic? · 2014-11-21T23:36:20.859Z · LW · GW

Yes (4 credits).

Comment by richard_reitz on Memory Improvement: Mnemonics, Tools, or Books on the Topic? · 2014-11-21T23:17:59.272Z · LW · GW

There is an easy way of watching the lectures. It involves paying Harvard University $1,250 whenever the class is next offered. Their video streaming is on par with Youtube circa 2007, but at least it works.

There is also a free way of watching the lectures, but it involves me breaking a contract I made with Harvard University, which I'm all manner of unwilling to do. However, they've made the video of the first lecture publicly available in the course description, so there's that.

Comment by richard_reitz on Memory Improvement: Mnemonics, Tools, or Books on the Topic? · 2014-11-21T22:12:06.981Z · LW · GW

Yes. Lots of them. Right now, my memory deck has about 200 cards, and I'm only about 2/3 done with the course. I'll point again to Baddeley Eysenck Anderson. You seem primarily interested in long-term memory (although that may be an artifact of not knowing a lot about memory; a large benefit of having a textbook on memory is that it points out "unknown unknowns"), so here are some big ones off the top of my head.

Implicit and explicit memory (also known as nondeclarative and declarative, respectively).

Episodic and semantic memory (subsets of explicit/declarative memory).

Also procedural memory (a subset of implicit/nondeclarative memory).

You should also be aware of the testing effect and distributed practice, which, along with forgetting curves, form the basis of Spaced Repetition Software. Since many things don't lend themselves to Anki, like riding a bike, it's enormously beneficial to know about these independently.
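For illustration only, here's a toy exponential forgetting curve with a spacing bonus. This is not Anki's actual scheduling algorithm, and all the constants are arbitrary; it just shows why reviewing a memory makes it decay more slowly:

```python
import math

# Toy model: retention decays exponentially with time since the last review;
# each successful review multiplies the memory's "stability" (the decay time
# constant). Numbers are illustrative, not SRS internals.

def retention(days_since_review, stability):
    """Probability-like measure of recall after a delay."""
    return math.exp(-days_since_review / stability)

def review(stability, multiplier=2.5):
    """A successful review makes the memory decay more slowly."""
    return stability * multiplier

s = 1.0                          # freshly learned: forgetting is fast
r_day1 = retention(1, s)         # recall one day later, before any review
s = review(s)                    # one successful review: stability grows
r_day1_after = retention(1, s)   # recall one day later, after the review
```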

Also source monitoring, which leads to my favorite term, cryptomnesia.


Comment by richard_reitz on Memory Improvement: Mnemonics, Tools, or Books on the Topic? · 2014-11-21T21:49:31.510Z · LW · GW

Memory researchers do, in fact, make a distinction between accessibility (can I retrieve a memory?) and availability (does the memory trace exist?).

Comment by richard_reitz on Memory Improvement: Mnemonics, Tools, or Books on the Topic? · 2014-11-21T21:31:14.904Z · LW · GW

The best textbook on memory I'm aware of is Baddeley Eysenck Anderson. It is quite good, but some of the definitions are vague, so you'll need to reference Wikipedia.

Memory palaces, more formally known as Method of Loci, are well-supported by the academic literature. Brienne's presentation is a fantastic introduction, in line with all the academic literature I've read.

I use Anki. It gets the job done quite well, and although other software may be just as good or better, I'm left with no desire to try anything else. See janki method for implementation suggestions.

I'm in the middle of a course on memory; according to my notes, making outlines is a good way of studying for a test and thinking about things in terms of future plans is "perhaps the best way of remembering stuff" (so, if I wanted to remember regular expressions, I might imagine doing this with them).

According to Scott, bacopa is "a memory-enhancing drug that performs very well in studies"—assuming you take it consistently for 3 months. According to my soylent spreadsheet, this is the most cost-effective source. According to Reddit, this is the source with the lowest amounts of heavy metals (which are well within the limits set by the FDA). Reddit also has dosing recommendations. Apparently it's also an anxiolytic, so yay. Note that bacopa tastes nasty, so many people pay a bit extra for pills, although I find the taste trivial to deal with if I have a glass of water to wash the powder down.

Comment by richard_reitz on Financial Effectiveness Repository · 2014-11-18T22:14:31.011Z · LW · GW

Definitely. I wanted to make that point because, until I read Varian, I accepted the naive argument myself. Not everyone here has studied economics, and the less they know, the more this entire "financial effectiveness" post is aimed at them; this is something I found completely nonintuitive before reading about it and transparently obvious afterwards.

Comment by richard_reitz on Financial Effectiveness Repository · 2014-11-18T22:07:01.278Z · LW · GW

Well, if real interest rates are negative, everything reverses, and you should start favoring more expensive things now.

Also, it's possible to be realistic and say things like "if 2 + 2 = 5, then 5 = 2(1+1) and therefore isn't prime".