Posts

Mechanistic Interpretability Reading group 2023-09-26T16:26:44.757Z
How to Read Papers Efficiently: Fast-then-Slow Three pass method 2023-02-25T02:56:30.814Z

Comments

Comment by 1stuserhere (firstuser-here) on A Rocket–Interpretability Analogy · 2024-10-22T21:13:11.418Z · LW · GW

on the one hand, mechanistic understanding has historically underperformed as a research strategy,

Are you talking about ML or in general? What are you deriving this from?

Comment by 1stuserhere (firstuser-here) on "Slow" takeoff is a terrible term for "maybe even faster takeoff, actually" · 2024-10-02T16:12:07.751Z · LW · GW

I think it’s even more actively confusing because “smooth/continuous” takeoff not only could be faster in calendar time

We're talking about two different things here: takeoff speed and timelines. All four possibilities are on the table: slow takeoff/long timelines, fast takeoff/long timelines, slow takeoff/short timelines, fast takeoff/short timelines.

A smooth takeoff might actually take longer in calendar time if incremental progress doesn’t lead to exponential gains until later stages.

Honestly I'm surprised people are conflating timelines and takeoff speeds.

Comment by 1stuserhere (firstuser-here) on Are the majority of your ancestors farmers or non-farmers? · 2024-08-30T19:09:58.532Z · LW · GW

That's interesting. On the recent episode of the Dwarkesh Podcast with David Reich, at 1:18:00, there's a discussion I'll quote here:

There was a super interesting series of papers. They made many things clear, but one of them was that the proportion of non-Africans' ancestors who were Neanderthals is actually not 2%.

That’s the proportion of their DNA in our genomes today if you're a non-African person. It's more like 10-20% of your ancestors are Neanderthals. What actually happened was that when Neanderthals and modern humans met and mixed, the Neanderthal DNA was not as biologically fit.

The reason was that Neanderthals had lived in small populations for about half a million years since separating from modern humans (who had lived in larger populations) and had accumulated a large number of slightly bad mutations, thousands of them. In the mixed populations, there was selection to remove the Neanderthal ancestry. That would have happened very, very rapidly after the mixture process.

There's now overwhelming evidence that that must have happened. If you actually count your ancestors, if you're of non-African descent, and ask how many of them were Neanderthals, say, 70,000 years ago, it's not going to be 2%. It's going to be 10-20%, which is a lot.

Now, I don't know which paper this is referring to, but it's interesting nonetheless.

Comment by 1stuserhere (firstuser-here) on what becoming more secure did for me · 2024-08-23T18:06:02.154Z · LW · GW

Good post, thanks for sharing! I found it somewhat relatable to my own prior life experiences too.

Comment by 1stuserhere (firstuser-here) on The ‘strong’ feature hypothesis could be wrong · 2024-08-05T19:52:21.067Z · LW · GW

Great essay!

I found it well written, and it articulates many of the arguments I make in casual conversations. I'll write up a longer comment sometime later with some points I found interesting and concrete questions to accompany them.

Comment by 1stuserhere (firstuser-here) on Can We Predict Persuasiveness Better Than Anthropic? · 2024-08-05T05:39:43.868Z · LW · GW

For each of the resulting 1313 arguments, crowdworkers were first asked to rate their support of the corresponding claim on a Likert scale from 1 (“Strongly Oppose”) to 7 (“Strongly Oppose”).

You probably mean “Strongly Oppose” to “Strongly Support”.

Comment by 1stuserhere (firstuser-here) on Daniel Tan's Shortform · 2024-07-17T16:30:26.957Z · LW · GW

You'll enjoy reading What Causes Polysemanticity? An Alternative Origin Story of Mixed Selectivity from Incidental Causes (link to the paper)

Using a combination of theory and experiments, we show that incidental polysemanticity can arise due to multiple reasons including regularization and neural noise; this incidental polysemanticity occurs because random initialization can, by chance alone, initially assign multiple features to the same neuron, and the training dynamics then strengthen such overlap.
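As a minimal toy sketch of that "by chance alone" point (my own illustration with arbitrary dimensions, not code from the paper): with as many features as neurons, a random initial weight matrix typically already assigns several features to the same best-matching neuron before any training.

```python
import numpy as np
from collections import Counter

# Random initial feature-to-neuron weights; 64 x 64 is an arbitrary toy size.
rng = np.random.default_rng(0)
W = rng.normal(size=(64, 64))  # rows: input features, columns: hidden neurons

# Neuron that each feature is most aligned with at initialization.
assigned = W.argmax(axis=1)

# Count neurons that start out as the best match for more than one feature.
collisions = sum(1 for count in Counter(assigned).values() if count > 1)
print(f"{collisions} neurons begin as the top match for 2+ features, before any training")
```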

Comment by 1stuserhere (firstuser-here) on Daniel Tan's Shortform · 2024-07-17T16:18:15.584Z · LW · GW

If we train several SAEs from scratch on the same set of model activations, are they “equivalent”?

For SAEs of different sizes, at most layers, the smaller SAE's features do show very high similarity with some of the larger SAE's features, but that's not always true. I'm working on an upcoming post on this; a rough sketch of the kind of comparison I mean is below.
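A minimal sketch of one way to measure this, assuming `W_dec_small` and `W_dec_large` stand in for the decoder weight matrices (features x d_model) of two trained SAEs; the names and the random toy data are placeholders, not the actual experiment:

```python
import numpy as np

def best_match_cosine(W_dec_small: np.ndarray, W_dec_large: np.ndarray) -> np.ndarray:
    """For each small-SAE decoder direction, return its max cosine similarity
    with any decoder direction of the larger SAE."""
    small = W_dec_small / np.linalg.norm(W_dec_small, axis=1, keepdims=True)
    large = W_dec_large / np.linalg.norm(W_dec_large, axis=1, keepdims=True)
    sims = small @ large.T          # (n_small, n_large) cosine similarities
    return sims.max(axis=1)         # best match per small-SAE feature

# Toy usage: random matrices standing in for real trained decoder weights.
rng = np.random.default_rng(0)
best = best_match_cosine(rng.normal(size=(512, 768)), rng.normal(size=(4096, 768)))
print(f"median best-match cosine similarity: {np.median(best):.3f}")
```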

Comment by 1stuserhere (firstuser-here) on How to accelerate recovery from sleep debt with biohacking? · 2024-04-18T10:31:27.590Z · LW · GW

This is purely anecdotal, but supplementing sleep debt with cardio-intensive exercise works for me. For example, I usually need 7 hours of sleep. If I sleep for only 5 hours, I'm likely to feel a drop in mental sharpness midway through the next day. However, if I go for an hour-long run, I miss that drop almost completely and feel just as good as I normally would've with a full night of sleep.

Comment by firstuser-here on [deleted post] 2023-11-15T12:42:59.918Z

It's also worth noting that LLMs don't learn directly from the raw input stream but from a compressed version of that data: the models are fed tokenized text, and the tokenizer acts as a compressor. This benefits the models by giving them a more information-rich context.
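As a rough illustration of the compression point (my own sketch, assuming the `tiktoken` package is available; any BPE tokenizer would make the same point):

```python
import tiktoken

text = "Mechanistic interpretability aims to reverse-engineer neural networks."
enc = tiktoken.get_encoding("cl100k_base")  # a BPE vocabulary used by recent OpenAI models

tokens = enc.encode(text)
n_bytes = len(text.encode("utf-8"))

# The model sees several bytes' worth of text per token rather than one byte per step.
print(f"{n_bytes} bytes -> {len(tokens)} tokens (~{n_bytes / len(tokens):.1f} bytes per token)")
```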

Comment by 1stuserhere (firstuser-here) on Are Mixture-of-Experts Transformers More Interpretable Than Dense Transformers? · 2023-09-15T11:38:47.091Z · LW · GW

I think that the answer is no


In this “VRAM-constrained regime,” MoE models (trained from scratch) are nowhere near competitive with dense LLMs.

Curious whether your high-level thoughts on these topics still hold or have changed.

Comment by 1stuserhere (firstuser-here) on How to Think About Activation Patching · 2023-06-05T21:30:07.826Z · LW · GW

On a more narrow distribution this head could easily exhibit just one behaviour and eg seem like a monosemantic inductin head

induction* head

Comment by 1stuserhere (firstuser-here) on What 2026 looks like · 2023-04-23T12:59:32.346Z · LW · GW

The 2023 predictions seem to hold up really well so far, especially the ones about SDM in interactive environments, image synthesis, passing the bar exam, legal NLP systems, the enthusiasm of programmers, and Elon Musk re-entering the space of building AI systems.

Comment by 1stuserhere (firstuser-here) on How to Read Papers Efficiently: Fast-then-Slow Three pass method · 2023-02-27T16:50:08.091Z · LW · GW

Interesting perspective, especially your comments on citations. Agreed that the diagrams/figures/tables are some of the most interesting parts of a paper, but I also try to find the problem that motivated the authors (which, imo, frequently comes through better in the introduction than in the abstract).

Comment by 1stuserhere (firstuser-here) on AI alignment researchers don't (seem to) stack · 2023-02-23T13:46:02.255Z · LW · GW

In this analogy, the trouble is that we do not know whether we're building tunnels in parallel (the same direction), in opposite directions, or in zig-zags. The reason is a lack of clarity about which approaches will turn out to be fundamentally important for building a safe AGI. So it seems to me that, for now, exploring different approaches might be a good thing, so that the next generation of researchers does less digging and is able to stack more on the existing work.

Comment by 1stuserhere (firstuser-here) on AI alignment researchers don't (seem to) stack · 2023-02-23T13:41:37.080Z · LW · GW

I agree. It seems like striking a balance between exploration and exploitation. We're barely entering the 2nd generation of alignment researchers. It's important to generate new directions for approaching the problem, especially at this stage, so that we have a better chance of covering more of the space of possible solutions before deciding to go deeper. The barrier to entry also remains slightly lower for new researchers in this case. When some research directions "outcompete" others, we'll naturally see more interest in those promising directions, and subsequently more exploitation, and researchers will be stacking.