Posts

What are your favorite examples of distillation? 2020-04-25T17:06:51.393Z
Do you trust the research on handwriting vs. typing for notes? 2020-04-23T20:49:19.731Z
How much motivation do you derive from curiosity alone in your work/research? 2020-04-17T14:04:13.763Z
Decaf vs. regular coffee self-experiment 2020-03-01T19:19:24.832Z
Yet another Simpson's Paradox Post 2019-12-23T14:20:09.309Z
Billion-scale semi-supervised learning for state-of-the-art image and video classification 2019-10-19T15:10:17.267Z
What are your strategies for avoiding micro-mistakes? 2019-10-04T18:42:48.777Z
What are effective strategies for mitigating the impact of acute sleep deprivation on cognition? 2019-03-31T18:31:29.866Z
So you want to be a wizard 2019-02-15T15:43:48.274Z
How do we identify bottlenecks to scientific and technological progress? 2018-12-31T20:21:38.348Z
Babble, Learning, and the Typical Mind Fallacy 2018-12-16T16:51:53.827Z
NaiveTortoise's Short Form Feed 2018-08-11T18:33:15.983Z
The Case Against Education: Why Do Employers Tolerate It? 2018-06-10T23:28:48.449Z

Comments

Comment by an1lam on TurnTrout's shortform feed · 2020-11-28T19:53:54.950Z · LW · GW

I'm curious what sort of things you're Anki-fying (e.g. a few examples for measure theory).

Comment by an1lam on How can we lobby to get a vaccine distributed faster? · 2020-11-11T21:25:35.224Z · LW · GW

Minor correction: I think you mean Alex Tabarrok (other author on MR).

Comment by an1lam on Probability vs Likelihood · 2020-11-11T16:55:39.699Z · LW · GW

I find it helpful to have more real-world examples to anchor on, so here's another COVID-related example of what I'm pretty sure is likelihood / probability confusion.

Sensitivity and specificity (terrible terms IMO, but common) model $ P(\text{test}+ \mid \text{COVID}) $ and $ P(\text{test}- \mid \neg\text{COVID}) $ respectively, and therefore are likelihoods. If I get a positive test, the likelihood of COVID is high, but it still may not be very probable that I have COVID if I live in, e.g., Taiwan, where the base rate of having COVID is very low.
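To put toy numbers on this (all made up, purely to illustrate the gap between a high likelihood and a low posterior probability):

```python
# Hypothetical test characteristics and base rate, chosen only for illustration.
sensitivity = 0.95   # P(test+ | COVID): the likelihood of a positive test given COVID
specificity = 0.99   # P(test- | no COVID)
base_rate = 0.0001   # P(COVID) in a low-prevalence place like Taiwan

# Bayes' rule for P(COVID | test+)
p_positive = sensitivity * base_rate + (1 - specificity) * (1 - base_rate)
posterior = sensitivity * base_rate / p_positive
print(f"P(COVID | test+) = {posterior:.4f}")  # ~0.0094, despite a 0.95 likelihood
```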

Comment by an1lam on Three Open Problems in Aging · 2020-11-09T17:31:04.958Z · LW · GW

I'm the person starting to work on the senescence-induced senescence problem. Happy to chat more about current thoughts / plan (I am open to trading marginal time for relatively small amounts of $ but also happy to just talk about what I plan to do anyway). Feel free to DM me.

Comment by an1lam on Sunday, Nov 8: Tuning Your Cognitive Algorithms · 2020-11-08T20:07:06.158Z · LW · GW

Now getting this error:

> The Walled Garden is a private virtual space managed by the LessWrong team.
> It is closed right now. Please return on Sunday between noon and 4pm PT, when it is open to everyone. If you have a non-Sunday invite, you may need to log in.

Comment by an1lam on Sunday, Nov 8: Tuning Your Cognitive Algorithms · 2020-11-08T20:01:29.596Z · LW · GW

Seems like the event is scheduled for November 9th at 7:45 AM?

Comment by an1lam on Open & Welcome Thread – November 2020 · 2020-11-08T18:46:17.406Z · LW · GW

The first way that comes to mind to treat this in the DAG paradigm: the "quantitative" question is a question about the magnitude of a causal effect, given a hypothesized diagram.

On the other hand, the "qualitative" question can be framed in two ways, I think. In the first, the question is about which DAG best describes reality, given a choice of different DAGs that represent different sets of species having an effect. But in principle, we could also just construct a larger graph with all possible species as nodes having arrows pointing to $ X $ and try to infer all the different effects jointly, translating the qualitative question into a quantitative one. (The species that don't affect $ X $ will just have a causal effect of $ 0 $ on $ X $.)
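To make this concrete, here's a minimal sketch of what I mean, assuming a linear model with independent noise (the species count, effect sizes, and use of plain least squares are all my own toy assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_species = 10_000, 5
true_effects = np.array([0.0, 2.0, 0.0, -1.5, 0.0])  # only species 1 and 3 affect X

S = rng.normal(size=(n, n_species))        # each species' abundance
X = S @ true_effects + rng.normal(size=n)  # X with independent noise

# With all species as parents of X and no confounding, ordinary least squares
# recovers the causal effects; the "qualitative" question becomes which
# coefficients are (approximately) zero.
est, *_ = np.linalg.lstsq(S, X, rcond=None)
print(np.round(est, 2))  # ≈ [ 0.   2.   0.  -1.5  0. ]
```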

To your point about diversity in the wild, in theoretical causality, our ability to generalize depends on 1) the structure of the DAG and 2) our level of knowledge of the underlying mechanisms. If we only have a blackbox understanding of the graph structure and the size of the average effects (that is, $ P(Y \mid \text{do}(\mathbf{X})) $), then there exist [certain situations](https://ftp.cs.ucla.edu/pub/stat_ser/r372-a.pdf) in which we can "transport" our results from the lab to other situations. If we actually know the underlying mechanisms (the structural causal model equations in causal DAG terminology), then we can potentially apply our results even outside of the situations in which our graph structure and known quantities are "transportable".

Comment by an1lam on Three more stories about causation · 2020-11-03T22:54:35.170Z · LW · GW

Oh I see, yeah, this sounds hard. The causal graph wouldn't be a DAG because it's cyclic; in that case there may be something you can do, but the "standard" machinery (read: what you'd find in Pearl's Causality) won't help you, unless I'm forgetting something.

An apparently real hypothesis that fits this pattern is that people take more risks / do more unhealthy things the more they know healthcare can heal them / keep them alive.

Comment by an1lam on Three more stories about causation · 2020-11-03T19:13:23.087Z · LW · GW

A few minor comments. Regarding I, it's known that the direction of (or lack of) an arrow in a generic two-node causal graph is unidentifiable, although there's some recent work solving this in restricted cases.

Regarding II, if I understand correctly, the second sub-scenario is one in which we'd have a graph that looks like the following DAG.

What I'm confused about is that if we condition on a level of tar in a big population, we'll still see a correlation between smoking and cancer via the trait, assuming there's independent noise feeding into each of these nodes. More concretely, presumably people will smoke different amounts based on some other unobserved factors outside this trait. So at least at certain levels of tar in lungs, we'll have people who do/don't have the trait, meaning there'll be a correlation between smoking and cancer even within tar-level sub-populations. That said, in the purely deterministic simplified scenario, I see your point.

Alternatively, I'm pretty sure applying the front-door criterion (explanation) would properly identify the zero causal effect of smoking on cancer in this scenario (again assuming all the relationships aren't purely deterministic).
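For anyone who wants to see this concretely, here's a minimal simulation sketch (all the probabilities are toy numbers I made up; the assumed structure is trait → smoking, trait → cancer, smoking → tar, and no tar → cancer edge, so the true causal effect of smoking on cancer is zero):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

u = rng.random(n) < 0.5                     # hidden trait (confounder)
x = rng.random(n) < np.where(u, 0.8, 0.2)   # smoking, driven by the trait
z = rng.random(n) < np.where(x, 0.9, 0.1)   # tar, driven only by smoking
y = rng.random(n) < np.where(u, 0.7, 0.1)   # cancer, driven only by the trait

def p(event):
    return event.mean()

def p_given(event, cond):
    return event[cond].mean()

def front_door(x_val):
    # P(Y=1 | do(X=x_val)) = sum_z P(z | X=x_val) * sum_x' P(Y=1 | x', z) P(x')
    total = 0.0
    for z_val in (False, True):
        inner = sum(p_given(y, (x == x2) & (z == z_val)) * p(x == x2)
                    for x2 in (False, True))
        total += p_given(z == z_val, x == x_val) * inner
    return total

print("naive association:", p_given(y, x) - p_given(y, ~x))        # ~0.36, confounded
print("front-door effect:", front_door(True) - front_door(False))  # ~0, correct
```

The naive comparison shows a big smoking-cancer association, but the front-door adjustment through tar estimates a (near-)zero causal effect, as expected.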

Comment by an1lam on AllAmericanBreakfast's Shortform · 2020-10-22T23:08:33.722Z · LW · GW

If you haven't seen Half-assing it with everything you've got, I'd definitely recommend it as an alternative perspective on this issue.

Comment by an1lam on Why isn't JS a popular language for deep learning? · 2020-10-08T18:15:54.085Z · LW · GW

I haven't researched this extensively but have used the Python data science toolkit for a while now and so can comment on its advantages.

To start, I think it's important to reframe the question a bit. At least in my neck of the woods, very few people just do deep learning with Python. Instead, a lot of people use Python to do machine learning, data science, and stats (although hardcore stats seems to have a historical bias towards R). This leads to two big benefits of using Python: pretty good support for vectorized operations and numerical computing (via calling into lower-level languages, of course, and also Cython), and a toolkit for "full stack" data science and machine learning.

Regarding the numerical computing side of things, I'm not super up-to-date on the JS numerical computing ecosystem, but when I last checked, JS had neither good pre-existing libraries comparable to numpy nor as good a setup for integrating with the lower-level numerical computing ecosystem (though in fairness, I also didn't look hard).
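For concreteness, this is the kind of vectorized operation I have in mind (a generic numpy example, not a claim about any particular JS library):

```python
import numpy as np

a = np.random.rand(1_000_000)
b = np.random.rand(1_000_000)

# One vectorized call dispatches the whole loop to optimized C code.
c = a * b + 1.0

# The equivalent pure-Python loop is typically orders of magnitude slower:
# c = [a[i] * b[i] + 1.0 for i in range(len(a))]
```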

Regarding the full stack ML / DS point: in practice, modeling is a small part of the overall ML / DS workflow, especially once you go outside the realm of benchmark datasets or introduce matters of scale. That workflow involves handling data processing and analysis (transformation, plotting, aggregation) in addition to building models. Python (and R, for what it's worth) has a suite of battle-hardened libraries and tools for both data processing -- things in the vein of airflow, luigi, etc. -- and analysis -- pandas, scipy, seaborn, matplotlib, etc. -- that, as far as I know, JavaScript lacks.

ETA: To be clear, Python has lots of downsides and doesn't solve any of these problems perfectly, but the question asked about Python relative to JS, so I tried to answer in the same vein.

Comment by an1lam on Thoughts on ADHD · 2020-10-08T01:03:07.758Z · LW · GW

I've never been evaluated for ADHD (or seriously considered it) but some of these -- especially 2, 3, 6, 7, 9 -- feel very familiar to me.

Comment by an1lam on ricraz's Shortform · 2020-09-18T01:02:29.079Z · LW · GW

Yeah, good point - given a generous enough interpretation of the notebook, my rejection doesn't hold. It's still hard for me to imagine that response feeling meaningful in context, but maybe I'm just failing to model others well here.

Comment by an1lam on ricraz's Shortform · 2020-09-17T23:06:41.160Z · LW · GW

I've seen this quote before and always find it funny because when I read Greg Egan, I constantly find myself thinking there's no way I could've come up with the ideas he has even if you gave me months or years of thinking time.

Comment by an1lam on Progress: Fluke or trend? · 2020-09-14T11:56:15.626Z · LW · GW

Sorry I was unclear. I was actually imagining two possible scenarios.

The first would be that deeper investigation reveals recent progress mostly resulted from serendipity and from historical factors that were lucky and more contingent than we expected. For example, maybe it turns out that the creation of industrial labs all hinged on some random quirk of the Delaware C-Corp code (I'm just making this up, to be clear). Even though these factors were a fluke in the past and seem sort of arbitrary, we could still be systematic about bringing them about going forward.

The second scenario is even more pessimistic. Suppose we fail to find any factors that influenced recent progress - it's just all noise. It's hard to give an example of what this would look like because it would look like an absence of examples. Every rigorous investigation of a potential cause of historical progress would find a null result. Even in this pessimistic world, we still could say, "ok, well nothing in the past seemed to make a difference but we're going to experiment to figure out things that do."

That said, writing out this maximally pessimistic case made me realize how unlikely I think it is. It seems like we already know of certain factors which at least marginally increased the rate of progress, so I want to emphasize that I'm providing a line of retreat, not arguing that this is how the world actually is.

Comment by an1lam on Progress: Fluke or trend? · 2020-09-13T02:59:46.962Z · LW · GW

Isn't it both possible that it's a fluke and also that going forward we can figure out mechanisms to promote it systematically?

To be clear, I think it's more likely than not that a nontrivial fraction of recent progress has non-fluke causes. I'm just also noting that the goal of enhancing progress seems at least partly disjoint from whether recent progress was a fluke.

Comment by an1lam on [AN #115]: AI safety research problems in the AI-GA framework · 2020-09-02T20:55:36.928Z · LW · GW

Yep, clicking "View this email in browser" allowed me to read it but obviously would be better to have it fixed here as well.

Comment by an1lam on ricraz's Shortform · 2020-08-26T12:48:31.717Z · LW · GW

Thanks for your reply! I largely agree with drossbucket's reply.

I also wonder how much of this is an incentives problem. As you mentioned, and in my experience, the fields you listed strongly incentivize an almost fanatical level of thoroughness that I suspect is very hard for individuals to maintain without outside incentives pushing them that way. At least personally, I definitely struggle and, frankly, mostly fail to live up to the sorts of standards you mention when writing blog posts, in part because the incentive gradient feels like it pushes towards hitting the publish button.

Given this, I wonder if there's a way to shift the incentives on the margin. One minor thing I've been thinking of trying for my personal writing is a Knuth- or Nintil-style "pay for mistakes" policy. Do you have thoughts on other incentive structures for rewarding rigor or punishing the lack thereof?

Comment by an1lam on ricraz's Shortform · 2020-08-21T13:17:59.270Z · LW · GW

I'd be curious what, if any, communities you think set good examples in this regard. In particular, are there specific academic subfields or non-academic scenes that exemplify the virtues you'd like to see more of?

Comment by an1lam on Becoming Unusually Truth-Oriented · 2020-08-15T20:22:28.230Z · LW · GW

Makes sense - would some of the early posts about Focusing and other "lower level" concepts you reference here qualify? If you create a tag, people (including maybe me) could probably help curate!

Comment by an1lam on Becoming Unusually Truth-Oriented · 2020-07-28T15:43:52.970Z · LW · GW

I can make a longer comment there if you'd like but personally I wasn't that bothered by the dreams example because I agreed with you that confabulation in the immediate moments after I woke up didn't seem like a huge issue. As a result, I was definitely interested in seeing more posts from the meditative/introspective angle even if they just expanded upon some of these moment-to-moment habits with more examples and detail. Unfortunately, that would at least partly require writing more posts rather than pure curation.

Comment by an1lam on The Future of Science · 2020-07-28T15:14:45.657Z · LW · GW

Great post (or talk I guess)!

Two "yes, and..." add-ons I'd suggest:

  1. Faster tool development as the result of goal-driven search through the space of possibilities. Think something like Ed Boyden's Tiling Tree Method, semi-automated and combined with powerful search. As an intuition pump, imagine doing search in the latent/embedding space of GPT-N, maybe fine-tuned on all papers in an area.
  2. Contrary to some of the comments from the talk, I weakly suspect NP-hardness will be less of a constraint for narrow AI scientists than it is for humans. My intuition here comes from what we've seen with protein folding and learned algorithms where my understanding is that hardness results limit how quickly we can do things in general but not necessarily on the distributions we encounter in practice. I think this is especially likely if we assume that AI scientists will be better at searching for complex but fast approximations than humans are. (I'm very uncertain about this one since I'm by no means an expert in these areas.)
Comment by an1lam on Becoming Unusually Truth-Oriented · 2020-07-28T13:26:11.296Z · LW · GW

Did you ultimately decide not to continue this series?

Comment by an1lam on NaiveTortoise's Short Form Feed · 2020-07-23T02:09:41.407Z · LW · GW

Yes I can relate to this!

Comment by an1lam on Open & Welcome Thread - July 2020 · 2020-07-23T01:41:22.890Z · LW · GW

Yep.

Comment by an1lam on NaiveTortoise's Short Form Feed · 2020-07-23T01:26:41.128Z · LW · GW

Thanks, this framing helps me understand how these things can be seen to fit together.

Comment by an1lam on NaiveTortoise's Short Form Feed · 2020-07-22T18:29:39.107Z · LW · GW

This is basically my perspective but seems contrary to the perspective in which most problems are caused by internal blockages, right?

Comment by an1lam on NaiveTortoise's Short Form Feed · 2020-07-22T04:24:34.112Z · LW · GW

Thanks for replying and sharing your post. I'd actually read it a while ago but forgotten how relevant it is to the above. To be clear, I totally buy that if you have crippling depression, or even something more mild, fixing that is a top priority. I've also enjoyed recent posts on, and think I understand, the alignment-based models of getting all your "parts" on board.

Where I get confused, and where I think there's less evidence, is the claim that unblocking can make doing hard stuff no longer "hard". Part of what's difficult here is that I'm struggling to find the right words, but I think it's specifically the claims of effortlessness or fun that seem less supported to me.

Comment by an1lam on NaiveTortoise's Short Form Feed · 2020-07-22T02:42:23.846Z · LW · GW

I keep seeing rationalist-adjacent discussions on Twitter that seem to bottom out in arguments of the general (very caricatured, sorry) form: "stop forcing yourself, get unblocked, and then X effortlessly", where X equals learn, socialize, etc. In particular, a lot of focus seems to be on how children and adults can just pursue what's fun or enjoyable if they get rid of their underlying trauma, and that they'll naturally learn fast and gravitate towards interesting (but also useful in the long term) topics, with some inspiration from David Deutsch.

On one hand, this sounds great, but it's so foreign to my experience of learning things and seems to lack the kind of evidence I'd expect before changing my cognitive strategies so dramatically. In fairness, I probably am too far in the direction of doing things because I "should", but I still don't think going to the other extreme is the right correction.

In particular, having read Mason Currey's Daily Rituals, I have a strong prior that even the most successful artists and scientists are at risk of developing akrasia and need to systematize their schedules heavily to ensure that they get their butts in the chair and work. Given this, what might convince me would be someone cataloguing thinkers who did interesting work, with quotes or stories providing evidence that they did what was fun, and counter-examples to show that it's not cherry-picking.

The above is also related to the somewhat-challenged but, I think, still somewhat valid idea that getting better at things requires deliberate practice, which is not "fun". This leads me to a subtle point: I think "fun" may be being used in a non-standard way by people who claim that learning can always be "fun". That is, I can see how practice that isn't necessarily enjoyable in the moment can be something one values on reflection, but calling that "fun" seems like a misuse of the term to me.

Comment by an1lam on TurnTrout's shortform feed · 2020-06-27T14:19:53.202Z · LW · GW

For what it's worth, this is very true for me as well.

I'm also reminded of a story of Robin Hanson from Cryonics magazine:

> Robin’s attraction to the more abstract ideas supporting various fields of interest was similarly shown in his approach – or rather, lack thereof – to homework. “In the last two years of college, I simply stopped doing my homework, and started playing with the concepts. I could ace all the exams, but I got a zero on the homework… Someone got scatter plots up there to convince people that you could do better on exams if you did homework.” But there was an outlier on that plot, courtesy of Robin, that said otherwise.

Comment by an1lam on The Indexing Problem · 2020-06-22T19:38:37.423Z · LW · GW

> Meta: this project is wrapping up for now. This is the first of probably several posts dumping my thought-state as of this week.

Moving on to other things?

Comment by an1lam on Types of Knowledge · 2020-06-21T01:48:34.890Z · LW · GW

> A thing I really struggled to capture was that "I did actual research and had actual models for why masks would help against covid, but it's still not type-3", which is why "know why" doesn't feel right to me.

I share this feeling based on my understanding of the boundaries you're trying to draw.

> I tentatively think that some of what you're calling engineering knowledge would fit into what I call scientific (which is a strike against the names), and/or that I didn't do a good enough job explaining why engineering knowledge is useful.

Yeah, like I said, I don't want to bikeshed over terms, but I do think the distinction between science and engineering is an interesting one. Regarding the "and/or": isn't it more like there's often an interplay between the levels? First we get "folk knowledge", then we "do science" to understand it, and then we use that science to "engineer" it?

Also, I found an Overcoming Bias post where Robin quotes Drexler on the distinction:

> The essence of science is inquiry; the essence of engineering is design. Scientific inquiry expands the scope of human perception and understanding; engineering design expands the scope of human plans and results. …
>
> Scientists seek unique, correct theories, and if several theories seem plausible, all but one must be wrong, while engineers seek options for working designs, and if several options will work, success is assured. Scientists seek theories that apply across the widest possible range (the Standard Model applies to everything), while engineers seek concepts well-suited to particular domains (liquid-cooled nozzles for engines in liquid-fueled rockets). Scientists seek theories that make precise, hence brittle predictions (like Newton’s), while engineers seek designs that provide a robust margin of safety. In science a single failed prediction can disprove a theory, no matter how many previous tests it has passed, while in engineering one successful design can validate a concept, no matter how many previous versions have failed. …
>
> Simple systems can behave in ways beyond the reach of predictive calculation. This is true even in classical physics. … Engineers, however, can constrain and master this sort of unpredictability. A pipe carrying turbulent water is unpredictable inside (despite being like a shielded box), yet can deliver water reliably through a faucet downstream. The details of this turbulent flow are beyond prediction, yet everything about the flow is bounded in magnitude, and in a robust engineering design the unpredictable details won’t matter. …
>
> The reason that aircraft seldom fall from the sky with a broken wing isn’t that anyone has perfect knowledge of dislocation dynamics and high-cycle fatigue in dispersion-hardened aluminum, nor because of perfect design calculations, nor because of perfection of any other kind. Instead, the reason that wings remain intact is that engineers apply conservative design, specifying structures that will survive even unlikely events, taking account of expected flaws in high-quality components, crack growth in aluminum under high-cycle fatigue, and known inaccuracies in the design calculations themselves. This design discipline provides safety margins, and safety margins explain why disasters are rare. …
>
> The key to designing and managing complexity is to work with design components of a particular kind— components that are complex, yet can be understood and described in a simple way from the outside. … Exotic effects that are hard to discover or measure will almost certainly be easy to avoid or ignore. … Exotic effects that can be discovered and measured can sometimes be exploited for practical purposes. …
>
> When faced with imprecise knowledge, a scientist will be inclined to improve it, yet an engineer will routinely accept it. Might predictions be wrong by as much as 10 percent, and for poorly understood reasons? The reasons may pose a difficult scientific puzzle, yet an engineer might see no problem at all. Add a 50 percent margin of safety, and move on.

Comment by an1lam on Types of Knowledge · 2020-06-20T19:21:25.097Z · LW · GW

Good question - yes, I know what you mean. I don't think these are great labels, but to me the categories seem like regurgitated facts, pre-scientific empiricism / folk knowledge, and science and engineering. Admittedly, I know these don't make great section headings, so I'll think more about better names.

Another comment mentions "know what", "know how", and "know why", which I suspect captures some of what you're getting at, but not all of it? Only some because there are different types of "why" within levels 2 & 3, right?

Comment by an1lam on Types of Knowledge · 2020-06-20T18:47:53.737Z · LW · GW

One more point that may clarify things: I think you're lumping applying science to the problem of making things under science whereas to me that's the essence of engineering. That is, the scientist seeks to understand things for their own sake whereas the engineer asks "what can I build with this?" Again these are obviously idealized, extreme characterizations.

With respect to COVID, we ought to allocate some credit to engineering for building the high-throughput systems that enable rapid testing and experimentation. We'll also need some impressive engineering to scale up vaccine production once we hopefully have a viable vaccine.

Comment by an1lam on Types of Knowledge · 2020-06-20T18:41:56.693Z · LW · GW

Not to quibble over words and categories, but this feels like a straw man of engineering knowledge, one that reinforces a bias I've mentioned before: people here often think of engineering as "trivial" or "easy". As I understand it, you're arguing that engineering knowledge only allows local improvements whereas science allows global optimization. I feel like the reality is murkier, something like what Jason Crawford described in his recent post on shuttling between science and invention. I'm also partial to Eric Drexler's framing where science is about universal quantifiers -- the space of mechanisms -- whereas engineering is about existential quantifiers -- if there exists even one way to do something, then let's find it. This is a little abstract, so I'll discuss a few of your examples.

To take your example of hybrid cars: yes, inventing hybrid cars' components certainly required scientific breakthroughs, but their economic feasibility heavily benefited from improved engineering of both batteries and cars. I'm not an expert on this, but as a general statement that's hopefully not too controversial?

Related to this, the thing you mention about car maintenance concerns mechanics, who are distinct from engineers. Consider an alternative statement: "yeah, it's good that cars are 10X cheaper than they used to be, but the important thing was inventing the internal combustion engine in the first place." Isn't this a more balanced comparison of the roles of engineering and science in car production?

Happy to go into more detail, but I'll stop here because I'm having trouble anticipating 1) whether my comment will provoke disagreement and 2) which aspects are confusing / most likely to be disagreed with.

Comment by an1lam on Using a memory palace to memorize a textbook. · 2020-06-19T15:03:34.293Z · LW · GW

This reminds me of Mary Carruthers's comparison of characteristics people emphasize when they talk about Einstein vs. Thomas Aquinas and the way in which memory palaces played an integral role in not just recall but composition for medieval thinkers. Unfortunately, it's too much text to copy, but I've taken screenshots of the relevant pages and uploaded them here.

Comment by an1lam on NaiveTortoise's Short Form Feed · 2020-05-24T01:24:31.289Z · LW · GW

This is awesome! I've been thinking I should try out the natural number game for a while because I feel like formal theorem proving will scratch my coding / video game itch in a way normal math doesn't.

Comment by an1lam on Against Dog Ownership · 2020-05-18T18:07:43.268Z · LW · GW

Minor note - having spent significant time in multiple homes with one dog, and more recently in a home with multiple, my anecdotal observation is that even just having 2 dogs changes the dynamic: from a dog obsessed with its humans to dogs that have each other and are maybe still obsessed with their humans as well.

Comment by an1lam on What are your greatest one-shot life improvements? · 2020-05-16T20:12:18.344Z · LW · GW

Are you putting yours on paper or storing it digitally?

Comment by an1lam on Project Proposal: Gears of Aging · 2020-05-10T18:36:52.040Z · LW · GW

(Not the author, obviously.) Part of my personal intuition against this view is that even amongst mammals, lifespans and the ways in which lives end seem to vary quite a bit. See, for example, the biological immortality Wikipedia page, this article about sea sponges and bowhead whales, and this one about naked mole rats.

That said, it's still possible we're locked into a very tricky-to-get-out-of local optimum in a high-dimensional space, one that makes it very hard for us to make local improvements. But then I suspect OP's response would be that the way to get out of local optima is to understand gears.

Comment by an1lam on TurnTrout's shortform feed · 2020-05-06T19:43:08.114Z · LW · GW

Only related to the first part of your post, I suspect Pearl!2020 would say the coarse-grained model should be some sort of causal model on which we can do counterfactual reasoning.

Comment by an1lam on Insights from Euclid's 'Elements' · 2020-05-05T16:53:44.416Z · LW · GW

FWIW, as someone who learned Python first, was exposed to C but didn't really understand it, and then only really learned C later (by playing around with / hacking on the OpenBSD operating system and working on a project that used C++ with mainly only features from C), I've always found the following argument quite suspect with respect to programming:

> (FWIW I've made the same argument in the context of training programmers, preferring that they have to learn to work with assembly, FORTRAN, and C because the difficulty forced me to understand a lot of useful details that help me even when working in higher level languages that can't be fully appreciated if you are, for example, trying to simulate the experience of managing memory or creating loops with JUMPIF in a language where it's not necessary. Not exactly the same as what's going on here but of the same type.)

It's undoubtedly true that I see some difference before & after "grokking" low-level programming, in terms of being able to better debug issues with low-level networking code and maybe having a better intuition for performance. In fairness, most of my programming work hasn't been super performance-focused. But at the same time, I found learning lower-level programming much easier after having already internalized decent programming practices (like writing tests and structuring my code), which allowed me to focus on the unique difficulties of C and assembly. Furthermore, I was much more motivated to understand C & assembly because I felt like I had a reason to do so, rather than doing it just because (no snark intended) old-school programmers had to when they were learning.

For these reasons, I definitely would not recommend that someone who wants to learn programming start with C & assembly unless they have a goal that requires it. This just seems like jumping straight to hard mode primarily because that's what people used to have to do. As I said above, I'm fairly convinced that the lessons you learn from doing so are things you can pick up later, and they're not so essential that you'll be handicapped without them.

(Of course, all of this is predicated on the assumption that I have the skills you claim one learns from learning these languages, which I admit you have no reason to believe purely based on my comments / posts.)

Comment by an1lam on NaiveTortoise's Short Form Feed · 2020-05-05T00:05:23.474Z · LW · GW

It seems not that conscious. I suspect it's similar to very scrupulous people who just clean / tidy up by default. That said, I am very curious whether it's cultivatable in a less pathological way.

Comment by an1lam on NaiveTortoise's Short Form Feed · 2020-05-04T23:15:24.312Z · LW · GW

Yeah good idea.

Comment by an1lam on NaiveTortoise's Short Form Feed · 2020-05-03T16:16:38.000Z · LW · GW

I'm interested in reading more about what might've been going on in Ramanujan's head when he did math. So far, the best thing I've found is this.

Comment by an1lam on NaiveTortoise's Short Form Feed · 2020-05-03T02:06:53.042Z · LW · GW

How to remember everything (not about Anki)

In this fascinating article, Gary Marcus (now better known as a deep learning critic, for better or worse) profiles Jill Price, a woman who has an exceptional autobiographical memory. However, unlike others who studied Price, Marcus plays the role of the skeptic and comes to the conclusion that Price's memory is not exceptional in general, but only for the facts of her own life, which she obsesses over constantly.

Now, obsessing over autobiographical memories is not something I'd recommend to people, but reading this did make me realize that, to the degree it's cultivatable, continuously mulling over stuff you've learned is a viable strategy for remembering it much better.

Comment by an1lam on Are there technical/object-level fields that make sense to recruit to LessWrong? · 2020-05-01T04:11:50.603Z · LW · GW

I would recommend the other writers I linked, though! They are much more insightful than I am, anyway!

Comment by an1lam on Are there technical/object-level fields that make sense to recruit to LessWrong? · 2020-05-01T04:11:06.433Z · LW · GW

Sadly, not much. I wrote one blog post a few years back about my take on why "reading code" isn't something people should do the way they read literature, but not much (publicly) other than that. I'll think about whether there's anything relevant to the stuff I've been doing recently that I could write up.

Comment by an1lam on NaiveTortoise's Short Form Feed · 2020-05-01T02:29:55.192Z · LW · GW

Sometimes there are articles I want to share, like this one, where I don't generally trust the author and they may hold (what I consider) quite wrong views overall, but I really like some of their writing. On one hand, sharing the parts I like without crediting the author seems 1) intellectually / epistemically dishonest and 2) unfair to the author. On the other hand, providing a lot of disclaimers about not generally trusting the author feels weird, because I feel uncomfortable publicly describing why I find them untrustworthy.

Not really sure what to do here but flagging it to myself as an issue explicitly seems like it might be useful.

Comment by an1lam on NaiveTortoise's Short Form Feed · 2020-04-30T02:11:40.393Z · LW · GW

Taking Self-Supervised Learning Seriously as a Model for Learning

It seems like if we take self-supervised learning (plus a sprinkling of causality) seriously as a model of key human cognitive functions, we can more directly enhance our learning by doing much more prediction / checking of predictions while we learn. (I think this is also what predictive processing implies, but I don't understand that framework as well.)