What are your favorite examples of distillation? 2020-04-25T17:06:51.393Z
Do you trust the research on handwriting vs. typing for notes? 2020-04-23T20:49:19.731Z
How much motivation do you derive from curiosity alone in your work/research? 2020-04-17T14:04:13.763Z
Decaf vs. regular coffee self-experiment 2020-03-01T19:19:24.832Z
Yet another Simpson's Paradox Post 2019-12-23T14:20:09.309Z
Billion-scale semi-supervised learning for state-of-the-art image and video classification 2019-10-19T15:10:17.267Z
What are your strategies for avoiding micro-mistakes? 2019-10-04T18:42:48.777Z
What are effective strategies for mitigating the impact of acute sleep deprivation on cognition? 2019-03-31T18:31:29.866Z
So you want to be a wizard 2019-02-15T15:43:48.274Z
How do we identify bottlenecks to scientific and technological progress? 2018-12-31T20:21:38.348Z
Babble, Learning, and the Typical Mind Fallacy 2018-12-16T16:51:53.827Z
NaiveTortoise's Short Form Feed 2018-08-11T18:33:15.983Z


Comment by NaiveTortoise (An1lam) on ask me about technology · 2023-07-07T12:21:50.669Z · LW · GW

What do you think about synthetic biology as a manufacturing technology?

Comment by NaiveTortoise (An1lam) on Petrov Day Retrospective: 2022 · 2022-09-29T20:38:02.995Z · LW · GW

Mainly commenting on your footnote, I generally agree that it's fine to put low amounts of effort into one-off simple events. The caveat here is that this is an event that 1) was treated pretty seriously in past years and 2) is a symbol of a certain mindset that I think typically includes double-checking things and avoiding careless mistakes.

Comment by NaiveTortoise (An1lam) on Petrov Day Retrospective: 2022 · 2022-09-29T01:25:03.239Z · LW · GW

I don't know all the details of what testing was done, but I would not describe code review and then deploying as state-of-the-art as this ignores things like staged deploys, end-to-end testing, monitoring, etc. Again, I'm not familiar with the LW codebase and deploy process so it's possible all these things are in place, in which case I'd be happy to retract my comment!

Comment by NaiveTortoise (An1lam) on Petrov Day Retrospective: 2022 · 2022-09-28T22:59:10.460Z · LW · GW

I know this is going to come off as overly critical no matter how I frame it but I genuinely don't mean it to be.

Another takeaway from this would seem to be an update towards recognizing the difference between knowing something and enacting it or, analogously, being able to identify inadequacy vs. avoid it. People on LW often discuss, criticize, and sometimes dismiss folks who work at companies that fail to implement all security best practices or do things like push to production without going through proper checklists. Yet this is a case where exactly that happened, even though there was not strong financial or (presumably) top-down pressure to act quickly.

Comment by NaiveTortoise (An1lam) on Can we grow cars instead of building them? · 2022-01-30T21:59:48.828Z · LW · GW

EDIT: I now see you research these questions and so want to add a disclaimer that I have not thought about these things nearly as deeply as you probably have...

Epistemic status: very speculative.

Cool post! I've long been fond of the (likely less difficult) thought experiment of whether we can grow a house using synthetic biology.

At first, I was thinking growing vs. building was just about the amount of labor involved to go from raw materials to final product. Then I realized this doesn't work because under this definition a fully automated robot factory would qualify as growing a car.

My next best guess is that "growing" is related to:

  • the system containing its own description,
  • the system maintaining itself given "raw" materials, and
  • an aesthetic component that connects growing to things that look and feel biological.

In terms of KPIs, the first things that come to mind are metrics like:

  • How small a seed can the system bootstrap itself from given raw materials and otherwise little to no outside intervention?
  • Can the system repair itself when damaged?
Comment by NaiveTortoise (An1lam) on Kelly Bet or Update? · 2021-12-22T13:39:28.077Z · LW · GW

This and the linked post have been really helpful for my attempts to better internalize the Kelly Criterion. Thanks!

EDIT: I see you've corrected the mistake with the 12% and 2.7x return that I originally discussed below in a subsequent post so the details below aren't necessary. Maybe consider linking that post in the Addendum?

Mostly unrelated to the above, and this is sort of a nitpick, but between the body and the addendum you (implicitly) switch from representing the odds as a ratio of probabilities to a representation in which the odds are the net fractional payout with the stake normalized to 1. I know this makes sense because you're explicitly talking about bets in the final section, but I'm bringing it up because it might throw off someone who hasn't read as many discussions of Kelly.
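For concreteness, here's a one-line sketch of the net-fractional-odds convention (my own illustration, not taken from the post):

```python
def kelly_fraction(p, b):
    """Optimal bet fraction with win probability p and net fractional odds b
    (stake 1, net winnings b on a win): f* = (b*p - (1 - p)) / b."""
    return (b * p - (1 - p)) / b

# Even-money bet (b = 1) at p = 0.6: bet 20% of bankroll.
even_money = kelly_fraction(0.6, 1)
```

In the ratio-of-probabilities convention the same bet is described by odds like b:1 rather than a net payout, which is exactly the implicit switch noted above.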

Comment by NaiveTortoise (An1lam) on True Stories of Algorithmic Improvement · 2021-11-01T17:12:38.835Z · LW · GW

Another great example of this is Striped Smith-Waterman, which takes advantage of SIMD instructions to achieve a 2-8x speed-up (potentially much more on modern CPUs) for constructing local sequence alignments.
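For readers unfamiliar with the underlying algorithm, here's a minimal scalar sketch of the Smith-Waterman recurrence (scoring parameters and function name are my own, not Farrar's striped formulation) showing what the SIMD versions accelerate:

```python
def smith_waterman(a, b, match=2, mismatch=-1, gap=-1):
    """Scalar local-alignment DP, O(len(a) * len(b)). Striped SIMD
    implementations vectorize the inner loop across query positions."""
    H = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    best = 0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            H[i][j] = max(0,
                          H[i - 1][j - 1] + s,  # match/mismatch
                          H[i - 1][j] + gap,    # gap in b
                          H[i][j - 1] + gap)    # gap in a
            best = max(best, H[i][j])
    return best
```

The striped variant gets its speed-up by computing many cells of `H` per instruction in SIMD registers and reordering the column layout to minimize data dependencies between adjacent cells.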

Comment by An1lam on [deleted post] 2021-09-28T00:50:36.210Z

(I'm the author of the post.) This is a totally reasonable critique, which I tried to make of myself:

Wrapping things up, I find all this analysis still a bit dissatisfying. While I’ve tried to use the commonalities amongst and differences between energetic aliens to understand them better, I feel like all the factors I identified describe rather than explain what’s going on with energetic aliens. A more satisfying understanding would instead at least suggest candidate causal factors which are predictive of energetic alienness warranting further investigation. Of course, this is also why this essay is called the neglected mystery rather than the resolved mystery.

That said, part of the challenge is that coming up with good theories is really hard here and risks touching on controversial issues, so warrants being careful! If you, or anyone else has such theories, I'd love to hear them.

Comment by NaiveTortoise (An1lam) on Let Us Do Our Work As Well · 2021-09-17T13:53:58.703Z · LW · GW

As someone who has also struggled with similar issues, although in a different context than writing papers, I found some of the answers here helpful and could imagine some of them as good "tactical advice" to go along with cultural norms. I also ended up looking through Google's SRE book as recommended in Gwern's answer and benefited from it even though it's focused on software infrastructure. In particular, the idea of treating knowledge production as a complex system helped knock me out of my "just be careful" mindset, which I think is often one of the harder things to scale. Of course, YMMV.

Comment by An1lam on [deleted post] 2021-09-07T17:57:43.912Z

That makes sense.

Comment by An1lam on [deleted post] 2021-09-07T16:24:12.123Z

For what it's worth, I took notes on the event and did not share them publicly because it was pretty clear to me doing so would have been a defection, even though it was implicit. Obviously, just because it was clear to me doesn't mean it was clear to everyone but I thought it still made sense to share this as a data point in favor of "very well known person doing a not recorded meetup" implying "don't post and promote your notes publicly."

I am also disappointed to see that this post is so highly upvoted and positively commented upon despite:

  1. Presumably others were aware of the fact that the meetup was not supposed to be recorded and LW is supposedly characteristically aware of coordination problems/defection/impact on incentives. At least to me, it seems likely that in expectation this post spreading widely would make Sam and people like him less likely to speak at future events and less trustworthy of the community that hosted the event. This seems not worth the benefit of having the notes posted given that people who were interested could have attended the event or asked someone about it privately.
  2. As Sean McCarthy and others pointed out, there were some at best misleading portrayals of what Altman said during his Q&A.
Comment by NaiveTortoise (An1lam) on Framing Practicum: Timescale Separation · 2021-08-22T13:29:02.836Z · LW · GW
  1. Returning to my number-of-muscle-cells-in-an-adult-human-body example (from the initial stable equilibrium post), for the purposes of calculating lean vs. fat mass (or just weight), we don't care about the fact that the distribution shifts as the person ages and experiences sarcopenia.
  2. For predator-prey population size ratios, the ratio fluctuates slightly on a daily basis assuming the predators hunt at certain times of the day and potentially seasonally. Assuming both species live more than a year, neither matters for estimating the carrying capacity of the ecosystem for the predator species.
  3. For calculating the average body temperature of a species, we can mostly ignore real but small fluctuations that occur throughout the day due to circadian rhythms, digestion, etc.
Comment by NaiveTortoise (An1lam) on Framing Practicum: Dynamic Equilibrium · 2021-08-22T13:22:08.070Z · LW · GW
  1. Number of cells in an adult human body. Also, cell type composition in an adult human body (over the timescale of months but not years because aging).
  2. Relative size of predator/prey species population in a mature, mostly otherwise static ecosystem.
  3. Warm-blooded mammal body temperature.
Comment by NaiveTortoise (An1lam) on Framing Practicum: Bistability · 2021-08-22T13:15:24.668Z · LW · GW
  1. Most complex eukaryotic organisms are either dead or alive. Yes, they can be sick, which is sort of in between, but sick is still "alive". In general, going from dead to alive is hard... Going from alive to dead requires disrupting any of several important core sub-equilibria of the living system.
  2. It's snowing out vs. not. Note: didn't use raining because "misting" felt like more of an in between edge case than lightly snowing.
  3. A door is either open or closed. Depending on the door, switching from closed to open or open to closed requires applying force and maybe adding some sort of friction device to keep the system in its new state.
  4. (Cheating because I've seen this before.) Some natural and designed proteins function as switches with multiple stable states of comparable free energies.
Comment by NaiveTortoise (An1lam) on Framing Practicum: Stable Equilibrium · 2021-08-22T13:00:57.993Z · LW · GW

Main exercise:

  1. Amount of muscle a person who doesn't exercise regularly has.
  2. Level of clutter on a person's desk/counter/etc.
  3. Quantity of light that reaches a forest floor at a given time.
  4. (Extra:) Number of organisms in an all-male group.

I recognize 1 and 3 are borderline dynamic equilibria but I think they change on a slow enough timescale that they count.

Bonus exercise:

  1. Watch their diet and physical activity over the timescale of weeks to months. Can ignore incidental activity, like running to catch a bus or lifting lots of stuff for a move. More generally, can ignore any activity that's acute.
  2. Persistent changes to their cleaning behaviors (for reduction) or accumulation patterns (do they switch to computer notetaking?). Can ignore temporary changes or routine cleaning behaviors that have been going on for a while.
  3. Pay attention to introduction of organisms/factors that change the forest to a different type of environment with less (or more) coverage. Can ignore both temporary disturbances like a human walking through and crushing plants and introduction of organisms which only consume one of several types of trees that form the canopy, since others will presumably fill the void.
Comment by NaiveTortoise (An1lam) on Specializing in Problems We Don't Understand · 2021-07-04T20:19:00.189Z · LW · GW

I enjoyed this post a lot but in the weeks since reading it, one unaddressed aspect has been bugging me and I've finally put my finger on it: the recommendation to "Specialize in Things Which Generalize" neglects the all-important question of "how much?" Put a different way, at least in my experience, one can always go deeper into one of these subjects -- probability theory, information theory, etc. -- but doing so takes time away from expanding one's breadth. Therefore, as someone looking to build general knowledge, you're constantly presented with the trade-off of continuing to learn one area deeply vs. switching to the next area you'd like to learn.

If I try to inhabit the mindset of the OP, I can generate two potential answers to this quandary, but neither is super satisfying:

  • Learn enough to be able to leverage what you've learned for novel problems.
  • Learn enough to be able to build gears-level models using what you've learned.
Comment by NaiveTortoise (An1lam) on Sam Altman and Ezra Klein on the AI Revolution · 2021-06-27T14:23:15.920Z · LW · GW

Created some PredictionBook predictions based off of this.

Comment by NaiveTortoise (An1lam) on Selection Has A Quality Ceiling · 2021-06-03T02:21:24.880Z · LW · GW

Nice point. I wanted to note that the converse is also true and seems like an example of Berkson's Paradox. If you only see individuals who passed the test, it will look like teachability is anti-correlated with the other two factors even though this may purely be a result of the selection process.

This may seem pedantic but the point I'm making is that it's equally important not to update in the other direction and assume less alignment between past experience and current skillset is better, since it may not be once you correct for this effect.
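As a quick illustration of the selection effect (a toy simulation with made-up numbers on my part): two independent qualities become anti-correlated once you condition on passing a selection bar on their sum.

```python
import random

random.seed(0)

def corr(xs, ys):
    # Pearson correlation, computed by hand to keep this dependency-free.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys)) / n
    vx = sum((a - mx) ** 2 for a in xs) / n
    vy = sum((b - my) ** 2 for b in ys) / n
    return cov / (vx * vy) ** 0.5

# Two independent qualities; a candidate passes if their sum clears a bar.
skill = [random.gauss(0, 1) for _ in range(20000)]
teach = [random.gauss(0, 1) for _ in range(20000)]
passed = [(a, b) for a, b in zip(skill, teach) if a + b > 1.5]

# Unselected: correlation near zero. Among those who passed: clearly negative.
selected_corr = corr(*zip(*passed))
```

The anti-correlation appears purely because of the selection, even though the underlying traits are independent.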

Comment by NaiveTortoise (An1lam) on Does anyone know this webpage about Machine Learning? · 2021-04-16T02:12:07.151Z · LW · GW

Sounds like Metacademy.

Comment by NaiveTortoise (An1lam) on Daniel Kokotajlo's Shortform · 2021-04-09T11:29:42.261Z · LW · GW

This is basically a souped up version of TagTime (by the Beeminder folks) so you might be able to start with their implementation.

Comment by NaiveTortoise (An1lam) on Core Pathways of Aging · 2021-03-28T18:57:27.833Z · LW · GW

Good point, this also suggests that Genome Project-Write is an important project.

Comment by NaiveTortoise (An1lam) on Core Pathways of Aging · 2021-03-28T15:50:38.797Z · LW · GW

As a funny aside, a few months ago, I had the thought "removing all transposons would be a nice somewhat pointless but impressive demonstration of a civilization's synthetic biology mastery." I guess the "pointless" part may have been very wrong!

Comment by NaiveTortoise (An1lam) on Core Pathways of Aging · 2021-03-28T15:48:42.731Z · LW · GW

As a thought experiment mostly for testing my own understanding, suppose we could do a bulk culling of transposons in all of an elderly human's stem cells (or all cells). If I understand correctly, this post's main hypothesis (DNA damage <-> ROS feedback loop) would imply the following should happen:

  • Senescent cell fraction quickly (within days or months) starts reverting to its healthy level.
  • Atherosclerosis heals on its own because ROS production reduces to its healthy level meaning the plaque equilibrium returns to the young level.
  • Similarly, vascular stiffening reverses for the same reason AG works temporarily.
  • Alzheimer's remains unclear without further understanding but we can guess this might help.
  • Sarcopenia same story as atherosclerosis and stiffening.
  • Lens and elastin fibers continue to build up, so we'll all be blind and wrinkly but otherwise healthy...

The one thing I'm less clear on is where immune system aging fits into this. I feel pretty confident that a treatment like this wouldn't cause the thymus to spontaneously grow for example but am more uncertain about some of the other aged immune system phenotypes. It seems plausible that reducing the load on the immune system would allow it to regain some of its ability to deal with infectious diseases for example.

Does this fit with your understanding?

Comment by NaiveTortoise (An1lam) on Core Pathways of Aging · 2021-03-28T15:28:33.605Z · LW · GW

In principle, we could test it by looking for an age-related increase in transposon count in non-senescent cells, but that turns out to be actually-pretty-difficult in practice. (Modern DNA sequencing involves breaking the DNA into little pieces, sequencing those, then computationally reconstructing which pieces overlap with each other. That’s a lot more difficult when the pieces you’re interested in have millions of near-copies filling most of the genome. Also, the copy-events we’re interested in will vary from cell to cell.)

I wonder if something like single cell ATAC-seq could help here? There's still the problem of aligning near-copies but it seems like there's already some work trying to deal with this problem. (I haven't read either of these papers in detail but the second specifically mentions transposons as a use-case.)

Comment by NaiveTortoise (An1lam) on Core Pathways of Aging · 2021-03-28T15:22:55.535Z · LW · GW

I've seen this claim about naked mole rats thrown around a bunch, but it's left me with the question of what naked mole rats do die of. If their mortality likelihood truly doesn't increase, we'd expect there to be some very long-lived naked mole rats. Is the issue just that we haven't held them in captivity for long enough to see them die of natural causes? I vaguely remember reading somewhere that eventually they stop eating or die in other ways but can't seem to find the reference now.

Comment by NaiveTortoise (An1lam) on If you've learned from the best, you're doing it wrong · 2021-03-11T02:02:06.590Z · LW · GW

I agree that I'd want to learn physics from him; I'm just not sure he was an exceptional physicist. Good, but not von Neumann. He says as much in his biographies (e.g. pointing out that one of his big contributions came from randomly pointing to a valve on a schematic and getting people to think about the schematic).

(Disclaimer: not a physicist.) From what I understand, Feynman was a really, really good physicist. Besides winning a Nobel Prize for his work on quantum electrodynamics, he also contributed to several other areas during his career. Also, if you look at what other eminent scientists of the time say about him, you get the sense that he was exceptional even amongst the exceptional.

For example, Mark Kac, an eminent mathematician of the time, said:

There are two kinds of geniuses: the 'ordinary' and the 'magicians.' An ordinary genius is a fellow whom you and I would be just as good as, if we were only many times better. There is no mystery as to how his mind works. Once we understand what they've done, we feel certain that we, too, could have done it. It is different with the magicians... Feynman is a magician of the highest caliber.

Hans Bethe (another Nobel Prize-winning physicist of the time) shared similar sentiments:

Feynman was a magician. With a magician, you just do not know how he does it.

I don't have the time to find more quotes like this right now but I think there are a bunch more like them if you look for them.

Comment by NaiveTortoise (An1lam) on jp's Shortform · 2021-02-20T19:55:57.749Z · LW · GW

I now use MathJax!

Comment by NaiveTortoise (An1lam) on TurnTrout's shortform feed · 2020-11-28T19:53:54.950Z · LW · GW

I'm curious what sort of things you're Anki-fying (e.g. a few examples for measure theory).

Comment by NaiveTortoise (An1lam) on How can we lobby to get a vaccine distributed faster? · 2020-11-11T21:25:35.224Z · LW · GW

Minor correction: I think you mean Alex Tabarrok (other author on MR).

Comment by NaiveTortoise (An1lam) on Probability vs Likelihood · 2020-11-11T16:55:39.699Z · LW · GW

I find it helpful to have more real world examples to anchor on so here's another COVID-related example of what I'm pretty sure is likelihood / probability confusion.

Sensitivity and specificity (terrible terms IMO, but common) model $P(\text{positive} \mid \text{COVID})$ and $P(\text{negative} \mid \text{no COVID})$ respectively, and therefore are likelihoods. If I get a positive test, I likely have COVID, but it still may not be very probable that I have COVID if I live in, e.g., Taiwan, where the base rate of having COVID is very low.
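To make the distinction concrete, here's a small Bayes' rule sketch (the test characteristics and prevalences are made-up numbers for illustration):

```python
def posterior_positive(prevalence, sensitivity, specificity):
    """P(COVID | positive test) via Bayes' rule."""
    true_pos = prevalence * sensitivity
    false_pos = (1 - prevalence) * (1 - specificity)
    return true_pos / (true_pos + false_pos)

# A good test (95% sensitive, 99% specific) still gives a low posterior
# probability of infection when the base rate is 0.1%:
low_base_rate = posterior_positive(0.001, 0.95, 0.99)   # ~0.087
high_base_rate = posterior_positive(0.10, 0.95, 0.99)   # ~0.91
```

The likelihood of a positive result given infection is high in both cases; only the posterior probability changes with the base rate.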

Comment by NaiveTortoise (An1lam) on Three Open Problems in Aging · 2020-11-09T17:31:04.958Z · LW · GW

I'm the person starting to work on the senescence-induced senescence problem. Happy to chat more about current thoughts / plan (I am open to trading marginal time for relatively small amounts of $ but also happy to just talk about what I plan to do anyway). Feel free to DM me.

Comment by NaiveTortoise (An1lam) on Sunday, Nov 8: Tuning Your Cognitive Algorithms · 2020-11-08T20:07:06.158Z · LW · GW
Comment by NaiveTortoise (An1lam) on Sunday, Nov 8: Tuning Your Cognitive Algorithms · 2020-11-08T20:01:29.596Z · LW · GW
Comment by NaiveTortoise (An1lam) on Open & Welcome Thread – November 2020 · 2020-11-08T18:46:17.406Z · LW · GW

The first way to treat this in the DAG paradigm that comes to mind is that the "quantitative" question is a question about the size of a causal effect given a hypothesized diagram.

On the other hand, the "qualitative" question can be framed in two ways, I think. In the first, the question is about which DAG best describes reality, given the choice of different DAGs that represent different sets of species having an effect. But in principle, we could also just construct a larger graph with all possible species as nodes having arrows pointing to $X$ and try to infer all the different effects jointly, translating the qualitative question into a quantitative one. (The species that don't affect $X$ will just have a causal effect of $0$ on $X$.)

To your point about diversity in the wild, in theoretical causality our ability to generalize depends on 1) the structure of the DAG and 2) our level of knowledge of the underlying mechanisms. If we only have a blackbox understanding of the graph structure and the size of the average effects (that is, $P(Y \mid \text{do}(\mathbf{X}))$), then there exist certain situations in which we can "transport" our results from the lab to other situations. If we actually know the underlying mechanisms (the structural causal model equations, in causal DAG terminology), then we can potentially apply our results even outside of the situations in which our graph structure and known quantities are "transportable".
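As a toy illustration of the "larger graph" framing (an entirely made-up linear model and coefficients on my part): species with a zero coefficient are exactly the ones with no causal effect, and with independent causes each effect can be read off from a covariance.

```python
import random

random.seed(0)

def simulate(n, betas):
    # Linear SCM: X = sum_k beta_k * S_k + noise, where the S_k are
    # independent species abundances. beta_k == 0 encodes "species k
    # has no effect on X".
    rows = []
    for _ in range(n):
        s = [random.gauss(0, 1) for _ in betas]
        x = sum(b * v for b, v in zip(betas, s)) + random.gauss(0, 0.1)
        rows.append((s, x))
    return rows

def estimate_effect(rows, k):
    # With independent causes, the effect of S_k is cov(S_k, X) / var(S_k).
    n = len(rows)
    sk = [r[0][k] for r in rows]
    xs = [r[1] for r in rows]
    ms, mx = sum(sk) / n, sum(xs) / n
    cov = sum((a - ms) * (b - mx) for a, b in zip(sk, xs)) / n
    var = sum((a - ms) ** 2 for a in sk) / n
    return cov / var

rows = simulate(5000, [1.5, 0.0, -0.7])  # species 1 has zero effect
```

Here the "qualitative" question (which species matter?) reduces to the "quantitative" one (which estimated effects are distinguishable from zero?).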

Comment by NaiveTortoise (An1lam) on Three more stories about causation · 2020-11-03T22:54:35.170Z · LW · GW

Oh I see, yeah this sounds hard. The causal graph wouldn't be a DAG because it's cyclic, in which case there may be something you can do, but the "standard" machinery (read: what you'd find in Pearl's Causality) won't help you, unless I'm forgetting something.

An apparently real hypothesis that fits this pattern is that people take more risks / do more unhealthy things the more they know healthcare can heal them / keep them alive.

Comment by NaiveTortoise (An1lam) on Three more stories about causation · 2020-11-03T19:13:23.087Z · LW · GW

A few minor comments. Regarding I, it's known that the direction of (or lack of) an arrow in a generic two-node causal model is unidentifiable, although there's some recent work solving this in restricted cases.

Regarding II, if I understand correctly, the second sub-scenario is one in which we'd have a DAG where the trait causes both smoking and cancer, smoking causes tar, and tar has no effect on cancer.

What I'm confused about is that if we condition on a level of tar in a big population, we'll still see correlation between smoking and cancer via the trait, assuming there's independent noise feeding into each of these nodes. More concretely, presumably people will smoke different amounts based on some other unobserved factors outside this trait. So for at least certain levels of tar in lungs, we'll have people who do/don't have the trait, meaning there'll be a correlation between smoking and cancer even in different tar-level sub-populations. That said, in the purely deterministic simplified scenario, I see your point.

Alternatively, I'm pretty sure applying the front-door criterion would properly identify the zero causal effect of smoking on cancer in this scenario (again assuming all the relationships aren't purely deterministic).
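To check this, here's a toy binary version of the scenario (all parameters are hypothetical numbers I chose): a trait causes both smoking and cancer, smoking causes tar, and tar has no effect on cancer. The front-door formula then recovers the zero causal effect even though smoking and cancer are observationally correlated.

```python
from itertools import product

# P(S=s | T=t) = pS[t][s], etc. Trait T -> smoking S -> tar Z; T -> cancer C.
pT = {1: 0.5, 0: 0.5}
pS = {1: {1: 0.8, 0: 0.2}, 0: {1: 0.2, 0: 0.8}}
pZ = {1: {1: 0.9, 0: 0.1}, 0: {1: 0.1, 0: 0.9}}
pC = {1: {1: 0.7, 0: 0.3}, 0: {1: 0.1, 0: 0.9}}  # cancer depends only on T

joint = {(t, s, z, c): pT[t] * pS[t][s] * pZ[s][z] * pC[t][c]
         for t, s, z, c in product((0, 1), repeat=4)}

def p(pred):
    return sum(v for k, v in joint.items() if pred(*k))

def cond(pred, given):
    return p(lambda *k: pred(*k) and given(*k)) / p(given)

def front_door(s):
    # P(C=1 | do(S=s)) = sum_z P(z|s) * sum_s' P(C=1|s',z) * P(s')
    total = 0.0
    for z in (0, 1):
        pz = cond(lambda t, s_, z_, c: z_ == z,
                  lambda t, s_, z_, c: s_ == s)
        inner = sum(cond(lambda t, s_, z_, c: c == 1,
                         lambda t, s_, z_, c: s_ == sp and z_ == z)
                    * p(lambda t, s_, z_, c: s_ == sp)
                    for sp in (0, 1))
        total += pz * inner
    return total

# Observationally, P(C=1 | S=1) = 0.58 vs the marginal P(C=1) = 0.40,
# yet front_door(1) == front_door(0) == 0.40: no causal effect of smoking.
```

This is only a sanity check of the adjustment formula on exact probabilities, not an estimation procedure.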

Comment by NaiveTortoise (An1lam) on AllAmericanBreakfast's Shortform · 2020-10-22T23:08:33.722Z · LW · GW

If you haven't seen Half-assing it with everything you've got, I'd definitely recommend it as an alternative perspective on this issue.

Comment by NaiveTortoise (An1lam) on Why isn't JS a popular language for deep learning? · 2020-10-08T18:15:54.085Z · LW · GW

I haven't researched this extensively but have used the Python data science toolkit for a while now and so can comment on its advantages.

To start, I think it's important to reframe the question a bit. At least in my neck of the woods, very few people just do deep learning with Python. Instead, a lot of people use Python to do Machine Learning, Data Science, Stats (although hardcore stats seems to have a historical bias towards R). This leads to two big benefits of using Python: pretty good support for vectorized operations and numerical computing (via calling into lower level languages of course and also Cython) and a toolkit for "full stack" data science and machine learning.

Regarding the numerical computing side of things, I'm not super up-to-date on the JS numerical computing ecosystem but when I last checked, JS had neither good pre-existing libraries that compared to numpy nor as good a setup for integrating with the lower level numerical computing ecosystem (but I also didn't look hard for it in fairness).

Regarding the full stack ML / DS point, in practice, modeling is a small part of the overall ML / DS workflow, especially once you go outside the realm of benchmark datasets or introduce matters of scale. This workflow involves handling data processing and analysis (transformation, plotting, aggregation) in addition to building models. Python (and R, for what it's worth) has a suite of battle-hardened libraries and tools for both data processing -- things in the vein of airflow, luigi, etc. -- and analysis -- pandas, scipy, seaborn, matplotlib, etc. -- that, as far as I know, JavaScript lacks.

ETA: To be clear, Python has lots of downsides and doesn't solve any of these problems perfectly, but the question focused on relative to JS so I tried to answer in the same vein.
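As a tiny illustration of the vectorization point above (a generic sketch, not tied to any particular workload):

```python
import numpy as np

a = np.arange(100_000, dtype=np.float64)

# Interpreted loop: one Python-level iteration (and float boxing) per element.
loop_sum = 0.0
for x in a:
    loop_sum += x * x

# Vectorized: a single call into compiled BLAS/C code, typically orders of
# magnitude faster for large arrays.
vec_sum = float(np.dot(a, a))
```

The JS ecosystem has some equivalents, but as noted above, nothing with the same depth of tooling built on top of this numerical core.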

Comment by NaiveTortoise (An1lam) on Thoughts on ADHD · 2020-10-08T01:03:07.758Z · LW · GW

I've never been evaluated for ADHD (or seriously considered it) but some of these -- especially 2, 3, 6, 7, 9 -- feel very familiar to me.

Comment by NaiveTortoise (An1lam) on Richard Ngo's Shortform · 2020-09-18T01:02:29.079Z · LW · GW

Yeah good point - given generous enough interpretation of the notebook my rejection doesn't hold. It's still hard for me to imagine that response feeling meaningful in the context but maybe I'm just failing to model others well here.

Comment by NaiveTortoise (An1lam) on Richard Ngo's Shortform · 2020-09-17T23:06:41.160Z · LW · GW

I've seen this quote before and always find it funny because when I read Greg Egan, I constantly find myself thinking there's no way I could've come up with the ideas he has even if you gave me months or years of thinking time.

Comment by NaiveTortoise (An1lam) on Progress: Fluke or trend? · 2020-09-14T11:56:15.626Z · LW · GW

Sorry I was unclear. I was actually imagining two possible scenarios.

The first would be that deeper investigation reveals that recent progress mostly resulted from serendipity and from historical factors that were lucky and more contingent than we expected. For example, maybe it turns out that the creation of industrial labs all hinged on some random quirk of the Delaware C-Corp code (I'm just making this up, to be clear). Even though these factors were a fluke in the past and seem sort of arbitrary, we could still be systematic about bringing them about going forward.

The second scenario is even more pessimistic. Suppose we fail to find any factors that influenced recent progress - it's just all noise. It's hard to give an example of what this would look like because it would look like an absence of examples. Every rigorous investigation of a potential cause of historical progress would find a null result. Even in this pessimistic world, we still could say, "ok, well nothing in the past seemed to make a difference but we're going to experiment to figure out things that do."

That said, writing out this maximally pessimistic case made me realize how unlikely I think it is. It seems like we already know of certain factors which at least marginally increased the rate of progress, so I want to emphasize that I'm providing a line of retreat not arguing that this is how the world actually is.

Comment by NaiveTortoise (An1lam) on Progress: Fluke or trend? · 2020-09-13T02:59:46.962Z · LW · GW

Isn't it both possible that it's a fluke and also that going forward we can figure out mechanisms to promote it systematically?

To be clear, I think it's more likely than not that a nontrivial fraction of recent progress has non-fluke causes. I'm just also noting that the goal of enhancing progress seems at least partly disjoint from whether recent progress was a fluke.

Comment by NaiveTortoise (An1lam) on [AN #115]: AI safety research problems in the AI-GA framework · 2020-09-02T20:55:36.928Z · LW · GW

Yep, clicking "View this email in browser" allowed me to read it but obviously would be better to have it fixed here as well.

Comment by NaiveTortoise (An1lam) on Richard Ngo's Shortform · 2020-08-26T12:48:31.717Z · LW · GW

Thanks for your reply! I largely agree with drossbucket's reply.

I also wonder how much this is an incentives problem. As you mentioned and in my experience, the fields you mentioned strongly incentivize an almost fanatical level of thoroughness that I suspect is very hard for individuals to maintain without outside incentives pushing them that way. At least personally, I definitely struggle and, frankly, mostly fail to live up to the sorts of standards you mention when writing blog posts in part because the incentive gradient feels like it pushes towards hitting the publish button.

Given this, I wonder if there's a way to shift the incentives on the margin. One minor thing I've been thinking of trying for my personal writing is having a Knuth- or Nintil-style "pay for mistakes" policy. Do you have thoughts on other incentive structures for rewarding rigor or punishing the lack thereof?

Comment by NaiveTortoise (An1lam) on Richard Ngo's Shortform · 2020-08-21T13:17:59.270Z · LW · GW

I'd be curious what, if any, communities you think set good examples in this regard. In particular, are there specific academic subfields or non-academic scenes that exemplify the virtues you'd like to see more of?

Comment by NaiveTortoise (An1lam) on Becoming Unusually Truth-Oriented · 2020-08-15T20:22:28.230Z · LW · GW

Makes sense - would some of the early posts about Focusing and other "lower level" concepts you reference here qualify? If you create a tag, people (including maybe me) could probably help curate!

Comment by NaiveTortoise (An1lam) on Becoming Unusually Truth-Oriented · 2020-07-28T15:43:52.970Z · LW · GW

I can make a longer comment there if you'd like but personally I wasn't that bothered by the dreams example because I agreed with you that confabulation in the immediate moments after I woke up didn't seem like a huge issue. As a result, I was definitely interested in seeing more posts from the meditative/introspective angle even if they just expanded upon some of these moment-to-moment habits with more examples and detail. Unfortunately, that would at least partly require writing more posts rather than pure curation.

Comment by NaiveTortoise (An1lam) on The Future of Science · 2020-07-28T15:14:45.657Z · LW · GW

Great post (or talk I guess)!

Two "yes, and..." add-ons I'd suggest:

  1. Faster tool development as the result of goal-driven search through the space of possibilities. Think something like Ed Boyden's Tiling Tree Method, semi-automated and combined with powerful search. As an intuition pump, imagine doing search in the latent space of GPT-N's embeddings, maybe fine-tuned on all papers in an area.
  2. Contrary to some of the comments from the talk, I weakly suspect NP-hardness will be less of a constraint for narrow AI scientists than it is for humans. My intuition here comes from what we've seen with protein folding and learned algorithms where my understanding is that hardness results limit how quickly we can do things in general but not necessarily on the distributions we encounter in practice. I think this is especially likely if we assume that AI scientists will be better at searching for complex but fast approximations than humans are. (I'm very uncertain about this one since I'm by no means an expert in these areas.)
Comment by NaiveTortoise (An1lam) on Becoming Unusually Truth-Oriented · 2020-07-28T13:26:11.296Z · LW · GW

Did you ultimately decide not to continue this series?