Posts

What are your favorite examples of distillation? 2020-04-25T17:06:51.393Z · score: 50 (15 votes)
Do you trust the research on handwriting vs. typing for notes? 2020-04-23T20:49:19.731Z · score: 26 (10 votes)
How much motivation do you derive from curiosity alone in your work/research? 2020-04-17T14:04:13.763Z · score: 25 (8 votes)
Decaf vs. regular coffee self-experiment 2020-03-01T19:19:24.832Z · score: 9 (7 votes)
Yet another Simpson's Paradox Post 2019-12-23T14:20:09.309Z · score: 8 (3 votes)
Billion-scale semi-supervised learning for state-of-the-art image and video classification 2019-10-19T15:10:17.267Z · score: 5 (2 votes)
What are your strategies for avoiding micro-mistakes? 2019-10-04T18:42:48.777Z · score: 18 (9 votes)
What are effective strategies for mitigating the impact of acute sleep deprivation on cognition? 2019-03-31T18:31:29.866Z · score: 26 (11 votes)
So you want to be a wizard 2019-02-15T15:43:48.274Z · score: 16 (3 votes)
How do we identify bottlenecks to scientific and technological progress? 2018-12-31T20:21:38.348Z · score: 31 (9 votes)
Babble, Learning, and the Typical Mind Fallacy 2018-12-16T16:51:53.827Z · score: 6 (4 votes)
NaiveTortoise's Short Form Feed 2018-08-11T18:33:15.983Z · score: 14 (3 votes)
The Case Against Education: Why Do Employers Tolerate It? 2018-06-10T23:28:48.449Z · score: 17 (5 votes)

Comments

Comment by an1lam on Why isn't JS a popular language for deep learning? · 2020-10-08T18:15:54.085Z · score: 20 (8 votes) · LW · GW

I haven't researched this extensively but have used the Python data science toolkit for a while now and so can comment on its advantages.

To start, I think it's important to reframe the question a bit. At least in my neck of the woods, very few people use Python only for deep learning. Instead, a lot of people use Python for machine learning, data science, and stats more broadly (although hardcore stats seems to have a historical bias towards R). This leads to two big benefits of using Python: pretty good support for vectorized operations and numerical computing (via calling into lower-level languages, of course, and also Cython), and a toolkit for "full stack" data science and machine learning.

Regarding the numerical computing side of things, I'm not super up-to-date on the JS numerical computing ecosystem, but when I last checked, JS had neither good pre-existing libraries comparable to numpy nor as good a setup for integrating with the lower-level numerical computing ecosystem (though, in fairness, I also didn't look hard for it).

Regarding the full stack ML / DS point: in practice, modeling is a small part of the overall ML / DS workflow, especially once you go outside the realm of benchmark datasets or introduce matters of scale. The full workflow involves handling data processing and analysis (transformation, plotting, aggregation) in addition to building models. Python (and R, for what it's worth) has a suite of battle-hardened libraries and tools for both data processing -- things in the vein of Airflow, Luigi, etc. -- and analysis -- pandas, SciPy, seaborn, matplotlib, etc. -- that, as far as I know, JavaScript lacks.
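To make the vectorization point concrete, here's a toy sketch (my own illustration, not from anyone's production code) of the kind of numerical work numpy makes cheap by pushing loops down into compiled code -- the thing JS historically lacked good library support for:

```python
import numpy as np

def normalize_rows_loop(x):
    """Row-normalize a matrix with explicit Python-level loops."""
    return [[v / sum(row) for v in row] for row in x]

def normalize_rows_vectorized(x):
    """Same computation, delegated to numpy's compiled internals."""
    x = np.asarray(x, dtype=float)
    return x / x.sum(axis=1, keepdims=True)

data = [[1.0, 3.0], [2.0, 2.0]]
assert np.allclose(normalize_rows_loop(data),
                   normalize_rows_vectorized(data))
```

The two versions agree on the result; the vectorized one is the idiom the whole Python DS stack is built around, and it's what you lose when the ecosystem doesn't have a numpy equivalent.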

ETA: To be clear, Python has lots of downsides and doesn't solve any of these problems perfectly, but the question focused on relative to JS so I tried to answer in the same vein.

Comment by an1lam on Thoughts on ADHD · 2020-10-08T01:03:07.758Z · score: 6 (4 votes) · LW · GW

I've never been evaluated for ADHD (or seriously considered it) but some of these -- especially 2, 3, 6, 7, 9 -- feel very familiar to me.

Comment by an1lam on ricraz's Shortform · 2020-09-18T01:02:29.079Z · score: 1 (1 votes) · LW · GW

Yeah, good point - given a generous enough interpretation of the notebook, my rejection doesn't hold. It's still hard for me to imagine that response feeling meaningful in context, but maybe I'm just failing to model others well here.

Comment by an1lam on ricraz's Shortform · 2020-09-17T23:06:41.160Z · score: 1 (1 votes) · LW · GW

I've seen this quote before and always find it funny because when I read Greg Egan, I constantly find myself thinking there's no way I could've come up with the ideas he has even if you gave me months or years of thinking time.

Comment by an1lam on Progress: Fluke or trend? · 2020-09-14T11:56:15.626Z · score: 2 (2 votes) · LW · GW

Sorry I was unclear. I was actually imagining two possible scenarios.

The first would be that deeper investigation reveals recent progress mostly resulted from serendipity and from historical factors that were luckier and more contingent than we expected. For example, maybe it turns out that the creation of industrial labs all hinged on some random quirk of the Delaware C-Corp code (I'm just making this up, to be clear). Even though these factors were a fluke in the past and seem sort of arbitrary, we could still be systematic about bringing them about going forward.

The second scenario is even more pessimistic. Suppose we fail to find any factors that influenced recent progress - it's just all noise. It's hard to give an example of what this would look like because it would look like an absence of examples. Every rigorous investigation of a potential cause of historical progress would find a null result. Even in this pessimistic world, we still could say, "ok, well nothing in the past seemed to make a difference but we're going to experiment to figure out things that do."

That said, writing out this maximally pessimistic case made me realize how unlikely I think it is. It seems like we already know of certain factors which at least marginally increased the rate of progress, so I want to emphasize that I'm providing a line of retreat not arguing that this is how the world actually is.

Comment by an1lam on Progress: Fluke or trend? · 2020-09-13T02:59:46.962Z · score: 2 (2 votes) · LW · GW

Isn't it both possible that it's a fluke and also that going forward we can figure out mechanisms to promote it systematically?

To be clear, I think it's more likely than not that a nontrivial fraction of recent progress has non-fluke causes. I'm just also noting that the goal of enhancing progress seems at least partly disjoint from whether recent progress was a fluke.

Comment by an1lam on [AN #115]: AI safety research problems in the AI-GA framework · 2020-09-02T20:55:36.928Z · score: 3 (2 votes) · LW · GW

Yep, clicking "View this email in browser" allowed me to read it but obviously would be better to have it fixed here as well.

Comment by an1lam on ricraz's Shortform · 2020-08-26T12:48:31.717Z · score: 3 (2 votes) · LW · GW

Thanks for your reply! I largely agree with drossbucket's reply.

I also wonder how much this is an incentives problem. As you mentioned and in my experience, the fields you mentioned strongly incentivize an almost fanatical level of thoroughness that I suspect is very hard for individuals to maintain without outside incentives pushing them that way. At least personally, I definitely struggle and, frankly, mostly fail to live up to the sorts of standards you mention when writing blog posts in part because the incentive gradient feels like it pushes towards hitting the publish button.

Given this, I wonder if there's a way to shift the incentives on the margin. One minor thing I've been thinking of trying for my personal writing is having a Knuth- or Nintil-style "pay for mistakes" policy. Do you have thoughts on other incentive structures for rewarding rigor or punishing the lack thereof?

Comment by an1lam on ricraz's Shortform · 2020-08-21T13:17:59.270Z · score: 1 (1 votes) · LW · GW

I'd be curious what, if any, communities you think set good examples in this regard. In particular, are there specific academic subfields or non-academic scenes that exemplify the virtues you'd like to see more of?

Comment by an1lam on Becoming Unusually Truth-Oriented · 2020-08-15T20:22:28.230Z · score: 1 (1 votes) · LW · GW

Makes sense - would some of the early posts about Focusing and other "lower level" concepts you reference here qualify? If you create a tag, people (including maybe me) could probably help curate!

Comment by an1lam on Becoming Unusually Truth-Oriented · 2020-07-28T15:43:52.970Z · score: 3 (2 votes) · LW · GW

I can make a longer comment there if you'd like but personally I wasn't that bothered by the dreams example because I agreed with you that confabulation in the immediate moments after I woke up didn't seem like a huge issue. As a result, I was definitely interested in seeing more posts from the meditative/introspective angle even if they just expanded upon some of these moment-to-moment habits with more examples and detail. Unfortunately, that would at least partly require writing more posts rather than pure curation.

Comment by an1lam on The Future of Science · 2020-07-28T15:14:45.657Z · score: 6 (4 votes) · LW · GW

Great post (or talk I guess)!

Two "yes, and..." add-ons I'd suggest:

  1. Faster tool development as the result of goal-driven search through the space of possibilities. Think something like Ed Boyden's Tiling Tree Method, semi-automated and combined with powerful search. As an intuition pump, imagine doing search in the latent space of GPT-N's embeddings, maybe fine-tuned on all papers in an area.
  2. Contrary to some of the comments from the talk, I weakly suspect NP-hardness will be less of a constraint for narrow AI scientists than it is for humans. My intuition here comes from what we've seen with protein folding and learned algorithms where my understanding is that hardness results limit how quickly we can do things in general but not necessarily on the distributions we encounter in practice. I think this is especially likely if we assume that AI scientists will be better at searching for complex but fast approximations than humans are. (I'm very uncertain about this one since I'm by no means an expert in these areas.)

Comment by an1lam on Becoming Unusually Truth-Oriented · 2020-07-28T13:26:11.296Z · score: 1 (1 votes) · LW · GW

Did you ultimately decide not to continue this series?

Comment by an1lam on NaiveTortoise's Short Form Feed · 2020-07-23T02:09:41.407Z · score: 1 (1 votes) · LW · GW

Yes I can relate to this!

Comment by an1lam on Open & Welcome Thread - July 2020 · 2020-07-23T01:41:22.890Z · score: 2 (2 votes) · LW · GW

Yep.

Comment by an1lam on NaiveTortoise's Short Form Feed · 2020-07-23T01:26:41.128Z · score: 1 (1 votes) · LW · GW

Thanks, this framing is helpful for me for understanding how these things can be seen to fit together.

Comment by an1lam on NaiveTortoise's Short Form Feed · 2020-07-22T18:29:39.107Z · score: 1 (1 votes) · LW · GW

This is basically my perspective but seems contrary to the perspective in which most problems are caused by internal blockages, right?

Comment by an1lam on NaiveTortoise's Short Form Feed · 2020-07-22T04:24:34.112Z · score: 3 (2 votes) · LW · GW

Thanks for replying and sharing your post. I'd actually read it a while ago but forgotten how relevant it is to the above. To be clear, I totally buy that if you have crippling depression, or even something more mild, fixing that is a top priority. I've also enjoyed recent posts on, and think I understand, the alignment-based models of getting all your "parts" on board.

Where I get confused and where I think there's less evidence is that the unblocking can make it such that doing hard stuff is no longer "hard". Part of what's difficult here is that I'm struggling to find the right words but I think it's specifically claims of effortlessness or fun that seem less supported to me.

Comment by an1lam on NaiveTortoise's Short Form Feed · 2020-07-22T02:42:23.846Z · score: 12 (7 votes) · LW · GW

I keep seeing rationalist-adjacent discussions on Twitter that seem to bottom out in arguments of the (very caricatured, sorry) general form: "stop forcing yourself, get unblocked, and then X effortlessly", where X equals learn, socialize, etc. In particular, a lot of focus seems to be on how children and adults can just pursue what's fun or enjoyable if they get rid of their underlying trauma, and they'll naturally learn fast and gravitate towards interesting (but also useful in the long term) topics, with some inspiration from David Deutsch.

On one hand, this sounds great; on the other, it's so foreign to my experience of learning things and seems to lack the kind of evidence I'd expect before changing my cognitive strategies so dramatically. In fairness, I'm probably too far in the direction of doing things because I "should", but I still don't think going to the other extreme is the right correction.

In particular, having read Mason Currey's Daily Rituals, I have a strong prior that even the most successful artists and scientists are at risk of developing akrasia and need to systematize their schedules heavily to ensure that they get their butts in the chair and work. Given this, what might convince me would be a catalogue of thinkers who did interesting work, with quotes or stories providing evidence that they did what was fun, plus counter-examples checked to show that it's not cherry-picking.

The above is also related to the somewhat-challenged but, I think, still somewhat valid idea that getting better at things requires deliberate practice, which is not "fun". This leads me to a subtle point: I think "fun" may be being used in a non-standard way by people who claim that learning can always be "fun". That is, I can see how practice that isn't necessarily enjoyable in the moment can be something one values on reflection, but calling that "fun" seems like a misuse of the term to me.

Comment by an1lam on TurnTrout's shortform feed · 2020-06-27T14:19:53.202Z · score: 7 (3 votes) · LW · GW

For what it's worth, this is very true for me as well.

I'm also reminded of a story of Robin Hanson from Cryonics magazine:

Robin’s attraction to the more abstract ideas supporting various fields of interest was similarly shown in his approach – or rather, lack thereof – to homework. “In the last two years of college, I simply stopped doing my homework, and started playing with the concepts. I could ace all the exams, but I got a zero on the homework… Someone got scatter plots up there to convince people that you could do better on exams if you did homework.” But there was an outlier on that plot, courtesy of Robin, that said otherwise.

Comment by an1lam on The Indexing Problem · 2020-06-22T19:38:37.423Z · score: 3 (2 votes) · LW · GW

Meta: this project is wrapping up for now. This is the first of probably several posts dumping my thought-state as of this week.

Moving on to other things?

Comment by an1lam on Types of Knowledge · 2020-06-21T01:48:34.890Z · score: 1 (1 votes) · LW · GW

A thing I really struggled to capture was that "i did actual research and had actual models for why masks would help against covid, but it's still not type-3", which is why "know why" doesn't feel right to me.

I share this feeling based on my understanding of the boundaries you're trying to draw.

I tentatively think that some of what you're calling engineering knowledge would fit into what I call scientific (which is a strike against the names), and/or that I didn't do a good enough job explaining why engineering knowledge is useful.

Yeah, like I said, I don't want to bikeshed over terms, but I do think the distinction between science and engineering is an interesting one. Regarding the "and/or", isn't it more like there's often an interplay between the levels? First we get "folk knowledge", then we "do science" to understand it, and then we use that science to "engineer" it?

Also, I found an Overcoming Bias post where Robin quotes Drexler on the distinction:

The essence of science is inquiry; the essence of engineering is design. Scientific inquiry expands the scope of human perception and understanding; engineering design expands the scope of human plans and results. …

Scientists seek unique, correct theories, and if several theories seem plausible, all but one must be wrong, while engineers seek options for working designs, and if several options will work, success is assured. Scientists seek theories that apply across the widest possible range (the Standard Model applies to everything), while engineers seek concepts well-suited to particular domains (liquid-cooled nozzles for engines in liquid-fueled rockets). Scientists seek theories that make precise, hence brittle predictions (like Newton’s), while engineers seek designs that provide a robust margin of safety. In science a single failed prediction can disprove a theory, no matter how many previous tests it has passed, while in engineering one successful design can validate a concept, no matter how many previous versions have failed. ..

Simple systems can behave in ways beyond the reach of predictive calculation. This is true even in classical physics. …. Engineers, however, can constrain and master this sort of unpredictability. A pipe carrying turbulent water is unpredictable inside (despite being like a shielded box), yet can deliver water reliably through a faucet downstream. The details of this turbulent flow are beyond prediction, yet everything about the flow is bounded in magnitude, and in a robust engineering design the unpredictable details won’t matter. …

The reason that aircraft seldom fall from the sky with a broken wing isn’t that anyone has perfect knowledge of dislocation dynamics and high-cycle fatigue in dispersion-hardened aluminum, nor because of perfect design calculations, nor because of perfection of any other kind. Instead, the reason that wings remain intact is that engineers apply conservative design, specifying structures that will survive even unlikely events, taking account of expected flaws in high-quality components, crack growth in aluminum under high-cycle fatigue, and known inaccuracies in the design calculations themselves. This design discipline provides safety margins, and safety margins explain why disasters are rare. …

The key to designing and managing complexity is to work with design components of a particular kind— components that are complex, yet can be understood and described in a simple way from the outside. … Exotic effects that are hard to discover or measure will almost certainly be easy to avoid or ignore. … Exotic effects that can be discovered and measured can sometimes be exploited for practical purposes. …

When faced with imprecise knowledge, a scientist will be inclined to improve it, yet an engineer will routinely accept it. Might predictions be wrong by as much as 10 percent, and for poorly understood reasons? The reasons may pose a difficult scientific puzzle, yet an engineer might see no problem at all. Add a 50 percent margin of safety, and move on.

Comment by an1lam on Types of Knowledge · 2020-06-20T19:21:25.097Z · score: 3 (2 votes) · LW · GW

Good question - yes, I know what you mean. I don't think these are great labels, but to me the categories seem like regurgitated facts, pre-scientific empiricism / folk knowledge, and science and engineering. Admittedly, I know these don't make great section headings, so I'll think more about better names.

Another comment mentions "know what", "know how", and "know why" which I suspect captures some of what you're getting at but not all of it? Only some because there are different types of whys within levels 2 & 3, right?

Comment by an1lam on Types of Knowledge · 2020-06-20T18:47:53.737Z · score: 1 (1 votes) · LW · GW

One more point that may clarify things: I think you're lumping applying science to the problem of making things under science whereas to me that's the essence of engineering. That is, the scientist seeks to understand things for their own sake whereas the engineer asks "what can I build with this?" Again these are obviously idealized, extreme characterizations.

With respect to COVID, we ought to allocate some credit to engineering for building the high-throughput systems that enable rapid testing and experimentation. We also will require some impressive engineering to scale up vaccine production once we hopefully have a viable vaccine.

Comment by an1lam on Types of Knowledge · 2020-06-20T18:41:56.693Z · score: 3 (2 votes) · LW · GW

Not to quibble over words and categories, but this feels like a straw man of engineering knowledge that reinforces a view I've mentioned before: people here often have a bias that causes them to think of engineering as "trivial" or "easy". As I understand it, you're arguing that engineering knowledge only allows local improvements whereas science allows global optimization. I feel like the reality is murkier - something like what Jason Crawford described in his recent post on shuttling between science and invention. I'm also partial to Eric Drexler's framing where science is about universal quantifiers -- the space of mechanisms -- whereas engineering is about existential quantifiers -- if there exists even one way to do something, then let's find it. This is a little abstract, so I'll discuss a few of your examples.

To take your example of hybrid cars: yes, inventing hybrid cars' components certainly required scientific breakthroughs, but their economic feasibility heavily benefited from improved engineering of both batteries and cars. I'm not an expert on this, but as a general statement it's hopefully not that controversial?

Related to this, the thing you mention about car maintenance is discussing mechanics, who are distinct from engineers. Consider an alternative statement, "yeah it's good that cars are 10X cheaper than they used to be, but the important thing was inventing the internal combustion engine in the first place." Isn't this a more balanced comparison for the role of engineering and science in car production?

Happy to go into more detail, but I'll stop here because I'm having trouble anticipating 1) whether my comment will provoke disagreement and 2) which aspects are confusing / most likely to be disagreed with.

Comment by an1lam on Using a memory palace to memorize a textbook. · 2020-06-19T15:03:34.293Z · score: 5 (3 votes) · LW · GW

This reminds me of Mary Carruthers's comparison of characteristics people emphasize when they talk about Einstein vs. Thomas Aquinas and the way in which memory palaces played an integral role in not just recall but composition for medieval thinkers. Unfortunately, it's too much text to copy, but I've taken screenshots of the relevant pages and uploaded them here.

Comment by an1lam on NaiveTortoise's Short Form Feed · 2020-05-24T01:24:31.289Z · score: 1 (1 votes) · LW · GW

This is awesome! I've been thinking I should try out the natural number game for a while because I feel like formal theorem proving will scratch my coding / video game itch in a way normal math doesn't.

Comment by an1lam on Against Dog Ownership · 2020-05-18T18:07:43.268Z · score: 14 (4 votes) · LW · GW

Minor note - having spent significant time in multiple homes with one dog, and more recently in a home with multiple, my anecdotal observation is that even just having two dogs changes the dynamic from a dog obsessed with its humans to dogs that have each other and are maybe still obsessed with their humans as well.

Comment by an1lam on What are your greatest one-shot life improvements? · 2020-05-16T20:12:18.344Z · score: 1 (1 votes) · LW · GW

Are you putting yours on paper or storing it digitally?

Comment by an1lam on Project Proposal: Gears of Aging · 2020-05-10T18:36:52.040Z · score: 7 (4 votes) · LW · GW

(Not the author, obviously.) Part of my personal intuition against this view is that even amongst mammals, lifespans, and the ways in which lives end, seem to vary quite a bit. See, for example, the biological immortality Wikipedia page, this article about sea sponges and bowhead whales, and this one about naked mole rats.

That said, it's still possible we're locked in a very tricky-to-get-out-of local optimum in a high-dimensional space, which makes it very hard for us to make local improvements. But then I suspect OP's response would be that the way to get out of local optima is to understand gears.

Comment by an1lam on TurnTrout's shortform feed · 2020-05-06T19:43:08.114Z · score: 1 (1 votes) · LW · GW

Only related to the first part of your post, I suspect Pearl!2020 would say the coarse-grained model should be some sort of causal model on which we can do counterfactual reasoning.

Comment by an1lam on Insights from Euclid's 'Elements' · 2020-05-05T16:53:44.416Z · score: 8 (6 votes) · LW · GW

FWIW as someone who learned Python first, was exposed to C but didn't really understand it, and then only really learned C later (by playing around with / hacking on the OpenBSD operating system and also working on a project that used C++ with mainly only features from C), I've always found the following argument quite suspect with respect to programming:

(FWIW I've made the same argument in the context of training programmers, preferring that they have to learn to work with assembly, FORTRAN, and C because the difficulty forced me to understand a lot of useful details that help me even when working in higher level languages that can't be fully appreciated if you are, for example, trying to simulate the experience of managing memory or creating loops with JUMPIF in a language where it's not necessary. Not exactly the same as what's going on here but of the same type.)

It's undoubtedly true that I see some difference before & after "grokking" low-level programming in terms of being able to better debug issues with low-level networking code and maybe having a better intuition for performance. Now in fairness, most of my programming work hasn't been super performance focused. But, at the same time, I found learning lower level programming much easier after having already internalized decent programming practices (like writing tests and structuring my code) which allowed me to focus on the unique difficulties of C and assembly. Furthermore, I was much more motivated to understand C & assembly because I felt like I had a reason to do so rather than just doing it because (no snark intended) old-school programmers had to do so when they were learning.

For these reasons, I definitely would not recommend that someone who wants to learn programming start with C and assembly unless they have a goal that requires it. This just seems like going straight to hard mode, primarily because that's what people used to have to do. As I said above, I'm fairly convinced that the lessons you learn from doing so are things you can pick up later, and not so necessary that you'll be handicapped without them.

(Of course, all of this is predicated on the assumption that I have the skills you claim one learns from learning these languages, which I admit you have no reason to believe purely based on my comments / posts.)

Comment by an1lam on NaiveTortoise's Short Form Feed · 2020-05-05T00:05:23.474Z · score: 1 (1 votes) · LW · GW

It seems not that conscious. I suspect it's similar to very scrupulous people who just clean / tidy up by default. That said, I am very curious whether it's cultivatable in a less pathological way.

Comment by an1lam on NaiveTortoise's Short Form Feed · 2020-05-04T23:15:24.312Z · score: 1 (1 votes) · LW · GW

Yeah good idea.

Comment by an1lam on NaiveTortoise's Short Form Feed · 2020-05-03T16:16:38.000Z · score: 1 (1 votes) · LW · GW

I'm interested in reading more about what might've been going on in Ramanujan's head when he did math. So far, the best thing I've found is this.

Comment by an1lam on NaiveTortoise's Short Form Feed · 2020-05-03T02:06:53.042Z · score: 3 (2 votes) · LW · GW

How to remember everything (not about Anki)

In this fascinating article, Gary Marcus (now better known as a deep learning critic, for better or worse) profiles Jill Price, a woman who has an exceptional autobiographical memory. However, unlike others who studied Price, Marcus plays the role of the skeptic and concludes that Price's memory is not exceptional in general, but only for the facts of her own life, which she obsesses over constantly.

Now, obsessing over autobiographical memories is not something I'd recommend to people, but reading this did make me realize that, to the degree it's cultivatable, continuously mulling over stuff you've learned is a viable strategy for remembering it much better.

Comment by an1lam on Are there technical/object-level fields that make sense to recruit to LessWrong? · 2020-05-01T04:11:50.603Z · score: 1 (1 votes) · LW · GW

I would recommend the other writers I linked, though! They are much more insightful than I am, anyway!

Comment by an1lam on Are there technical/object-level fields that make sense to recruit to LessWrong? · 2020-05-01T04:11:06.433Z · score: 1 (1 votes) · LW · GW

Sadly, not much. I wrote this one blog post a few years back about my take on why "reading code" isn't a thing people should do in the same way they read literature but not much (publicly) other than that. I'll think about whether there's anything relevant to stuff I've been doing recently that I could write up.

Comment by an1lam on NaiveTortoise's Short Form Feed · 2020-05-01T02:29:55.192Z · score: 3 (2 votes) · LW · GW

Sometimes there are articles I want to share, like this one, where I don't generally trust the author and they may have quite (what I consider) wrong views overall but I really like some of their writing. On one hand, sharing the parts I like without crediting the author seems 1) intellectually / epistemically dishonest and 2) unfair to the author. On the other hand, providing a lot of disclaimers about not generally trusting the author feels weird because I feel uncomfortable publicly describing why I find them untrustworthy.

Not really sure what to do here but flagging it to myself as an issue explicitly seems like it might be useful.

Comment by an1lam on NaiveTortoise's Short Form Feed · 2020-04-30T02:11:40.393Z · score: 3 (2 votes) · LW · GW

Taking Self-Supervised Learning Seriously as a Model for Learning

It seems like if we take self-supervised learning (plus a sprinkling of causality) seriously as a key human learning mechanism, we can more directly enhance our learning by doing much more prediction / checking of predictions while we learn. (I think this is also what predictive processing implies, but I don't understand that framework as well.)

Comment by an1lam on I do not like math · 2020-04-29T17:49:07.079Z · score: 4 (3 votes) · LW · GW

I appreciate you writing this especially given that the userbase on this site includes a lot of people who really like math.

One thing I'm curious about - it seems like you enjoy physics. Do you enjoy using math to do / understand physics? If so, what do you think the difference is? Is it just that you especially dislike proofs or is it also something related to the concreteness / applied nature of it?

Comment by an1lam on What are your favorite examples of distillation? · 2020-04-26T21:02:21.933Z · score: 5 (3 votes) · LW · GW

Thought I just had after writing this: I think distillation is probably a subset of pedagogy.

Comment by an1lam on What are your favorite examples of distillation? · 2020-04-26T20:49:27.608Z · score: 3 (2 votes) · LW · GW

Good question - frankly, I'm not sure!

I have an intuitive sense that distillation (as defined in the Research Debt article) differs from pedagogy by focusing more on clarity, being more opinionated, and drawing connections between topics/fields while focusing less on comprehensiveness and accessibility. Admittedly though, some of my favorite examples of distillation--Paths Perspective on Value Learning, If correlation doesn't imply causation, then what does?--are also quite accessible as pedagogical examples. That said, I do think these examples illustrate the opinionated point. These authors are writing about the parts of their topics that interest them and from their perspectives, not trying to describe the topics comprehensively.

Let me just reiterate that this is me thinking out loud. I have not yet distilled the difference between distillation and pedagogy.

Comment by an1lam on What are your favorite examples of distillation? · 2020-04-25T23:26:02.547Z · score: 1 (1 votes) · LW · GW

Agree! I too am a big fan.

Comment by an1lam on NaiveTortoise's Short Form Feed · 2020-04-24T22:01:01.248Z · score: 3 (2 votes) · LW · GW

Ah that makes sense, thanks. I was in fact thinking of Newton's method (which is why I didn't see the connection).

Comment by an1lam on Do you trust the research on handwriting vs. typing for notes? · 2020-04-24T18:27:10.549Z · score: 1 (1 votes) · LW · GW

Fair enough - the problem is that, for me, neatness isn't a constant but a function of writing speed. I can write very neatly, but then I also end up writing very slowly.

Comment by an1lam on NaiveTortoise's Short Form Feed · 2020-04-24T18:25:44.701Z · score: 3 (2 votes) · LW · GW

So... I just re-read your brain dump post and realized that you described an issue that I not only encountered but the exact example for which it happened!

so i might remember the intuition behind newton's approximation, but i won't know how to apply it or won't remember that it's useful in proving the chain rule.

I indeed have a card for Newton's approximation but didn't remember this fact! That said, I don't know whether I would have noticed the connection had I tried to re-prove the chain rule, but I suspect not. The one other caveat is that I created cards very sparsely when I reviewed calculus so I'd like to think I might have avoided this with a bit more card-making.
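For reference, the fact on my card can be sketched as follows (my reconstruction of the standard statement, not quoted from the card): Newton's approximation says f is differentiable at a exactly when the error term below vanishes as h goes to 0, and applying it twice is what makes the chain rule proof go through.

```latex
% Newton's approximation at a point a, with error term E_f(h) -> 0 as h -> 0:
\[
  f(a+h) = f(a) + \bigl(f'(a) + E_f(h)\bigr)h .
\]
% Chain rule sketch: apply the same expansion to g at f(a), with
% k = (f'(a) + E_f(h))h, so k -> 0 as h -> 0:
\[
  g\bigl(f(a+h)\bigr)
    = g\bigl(f(a)\bigr)
    + \bigl(g'(f(a)) + E_g(k)\bigr)\bigl(f'(a) + E_f(h)\bigr)h .
\]
% Letting h -> 0 kills both error terms, giving
\[
  (g \circ f)'(a) = g'\bigl(f(a)\bigr)\,f'(a).
\]
```

This is exactly the connection the quoted brain-dump post worries about: remembering the expansion itself without remembering that it's the workhorse in the chain rule proof.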

Comment by an1lam on Do you trust the research on handwriting vs. typing for notes? · 2020-04-24T18:21:24.666Z · score: 1 (1 votes) · LW · GW

Yes! If you haven't seen the following articles already, I recommend at least skimming them:

The first Michael Nielsen link has a part where he discusses using Anki to achieve a goal vs. just remembering for the sake of remembering which seems relevant to your question.

This thread I started on LW about my own observations from using Anki also touches on the question you raise. Personally, I haven't used Readwise but I have had general positive experiences using Anki. That said, similar to Nielsen, my worst Anki experiences have come from trying to remember books for the sake of remembering them vs. using the content for some sort of goal. I use goal broadly here to include writing a blog post/article/book, solving problems/exercises, writing a program, etc.

Comment by an1lam on Do you trust the research on handwriting vs. typing for notes? · 2020-04-24T13:15:36.565Z · score: 1 (1 votes) · LW · GW

Good point! Part of my interest in whether high quality studies exists is that this seems like an example of an information cascade if not.

Comment by an1lam on Do you trust the research on handwriting vs. typing for notes? · 2020-04-24T00:39:31.605Z · score: 3 (2 votes) · LW · GW

Thanks for sharing this! Interestingly, this has not been my experience - when I started taking LaTeX notes in live talks / lectures, I think my retention went down because I would get nerd-sniped trying to fix my formatting. However, I do think a similar thing is true for me, which is that my retention seems proportional to the time I spend interpreting / thinking about the material, which I could see note-taking difficulty being correlated with. I suspect it's not for me because I have an above-average need to format things in the "optimal" way (this is also why I dislike using pens for writing).