Posts

A List of 45+ Mech Interp Project Ideas from Apollo Research’s Interpretability Team 2024-07-18T14:15:50.248Z
Lucius Bushnaq's Shortform 2024-07-06T09:08:43.607Z
Apollo Research 1-year update 2024-05-29T17:44:32.484Z
Interpretability: Integrated Gradients is a decent attribution method 2024-05-20T17:55:22.893Z
The Local Interaction Basis: Identifying Computationally-Relevant and Sparsely Interacting Features in Neural Networks 2024-05-20T17:53:25.985Z
Charbel-Raphaël and Lucius discuss interpretability 2023-10-30T05:50:34.589Z
Announcing Apollo Research 2023-05-30T16:17:19.767Z
Basin broadness depends on the size and number of orthogonal features 2022-08-27T17:29:32.508Z
What Is The True Name of Modularity? 2022-07-01T14:55:12.446Z
Ten experiments in modularity, which we'd like you to run! 2022-06-16T09:17:28.955Z
Project Intro: Selection Theorems for Modularity 2022-04-04T12:59:19.321Z
Theories of Modularity in the Biological Literature 2022-04-04T12:48:41.834Z
Welcome to the SSC Dublin Meetup 2020-07-30T18:56:36.627Z

Comments

Comment by Lucius Bushnaq (Lblack) on Alexander Gietelink Oldenziel's Shortform · 2024-07-24T17:20:14.085Z · LW · GW

This sounds cool and deep but crashes headlong into the issue that the entropy rate and the excess entropy of any stochastic process is time-symmetric.
 

It's time-symmetric around a starting point $t_0$ of low entropy. The further $t$ is from $t_0$, the more entropy you'll have, in either direction. The absolute value $|t - t_0|$ is what matters.


In this case, $t_0$ is usually taken to be the big bang. So the further in time you are from the big bang, the less the universe is like a dense uniform soup with little structure that needs description, and the higher your entropy will be. That's how you get the subjective perception of temporal causality.

Presumably, this would hold on the other side of $t_0$ as well, if there is one. But we can't extrapolate past $t_0$, because close to $t_0$ everything gets really, really energy dense, so we'd need to know how to do quantum gravity to calculate what the state on the other side might look like. So we can't check that. And the notion of time as we're discussing it here might break down at those energies anyway.

Comment by Lucius Bushnaq (Lblack) on A List of 45+ Mech Interp Project Ideas from Apollo Research’s Interpretability Team · 2024-07-24T12:52:06.912Z · LW · GW

Toy example of what I would consider pretty clear-cut cross-layer superposition: 

We have a residual MLP network. The network implements a single UAND gate (universal AND, calculating all pairwise ANDs of its sparse boolean input features using far fewer neurons than there are output ANDs), as described in Section 3 here.

However, instead of implementing this with a single MLP, the network does this using all the MLPs of all the layers in combination. Simple construction that achieves this:

  1. Cut the residual stream into two subspaces, reserving one subspace for the input features and one subspace for the output features.
  2. Take the construction from the paper, and assign each neuron in it to a random MLP layer in the residual network.
  3. Since the input and output spaces are orthogonal, there's no possibility of one MLP's outputs interfering with another MLP's inputs. So this network will implement UAND, as if all the neurons lived in a single large MLP layer.

Now we've made a network that computes a boolean circuit in superposition, without the boolean gates living in any particular MLP. To read out the current value of one of the circuit outputs in the MLPs, you'll need to look at a direction that's a linear combination of neurons in all of the MLPs. And if you use an SAE to look at a single residual stream position in this network before the very final MLP layer, it'll probably show you a bunch of half-computed nonsense.
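A minimal PyTorch sketch of this construction, for concreteness. The dimensions and the random stand-in weights are hypothetical; it only illustrates the subspace split and the random assignment of neurons to layers, not the actual UAND weight values from the paper:

```python
import torch

# Hypothetical toy dimensions; none of these numbers come from the paper.
d_in, d_out, n_neurons, n_layers = 64, 64, 256, 4
d_resid = d_in + d_out  # residual stream = input subspace ++ output subspace (step 1)

# Stand-in weights: in the real construction these would be the UAND weights
# from the paper. Here they are random, just to show the wiring.
W_read = torch.randn(n_neurons, d_in) / d_in**0.5         # every neuron reads only the input subspace
W_write = torch.randn(d_out, n_neurons) / n_neurons**0.5  # and writes only to the output subspace
layer_of_neuron = torch.randint(0, n_layers, (n_neurons,))  # step 2: random layer assignment


def forward(resid: torch.Tensor) -> torch.Tensor:
    """resid: (batch, d_resid). The neurons of one conceptual MLP are scattered
    across n_layers MLP layers, but the computed function is unchanged (step 3)."""
    for layer in range(n_layers):
        active = (layer_of_neuron == layer).float()  # this layer's share of the neurons
        pre = resid[:, :d_in] @ W_read.T             # reads never see other MLPs' outputs
        acts = torch.relu(pre) * active              # only this layer's neurons fire
        out = acts @ W_write.T
        resid = torch.cat([resid[:, :d_in], resid[:, d_in:] + out], dim=-1)
    return resid
```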

In a real network, the most convincing evidence to me would be a circuit involving sparse coded variables or operations that cannot be localized to any single MLP.

Comment by Lucius Bushnaq (Lblack) on A List of 45+ Mech Interp Project Ideas from Apollo Research’s Interpretability Team · 2024-07-22T21:36:06.617Z · LW · GW

A prior that doesn't assume independence should give you a sparsity penalty that isn't a sum of independent penalties for each activation.

Comment by Lucius Bushnaq (Lblack) on Feature Targeted LLC Estimation Distinguishes SAE Features from Random Directions · 2024-07-19T22:34:18.048Z · LW · GW

Would you predict that SAE features corresponding to input tokens would have low FT-LLCs, since there's no upstream circuits needed to compute them?

It's not immediately obvious to me that we'd expect random directions to have lower FT-LLCs than 'feature directions',  actually. If my random read-off direction is a sum of many features belonging to different circuits, breaking any one of those circuits may change the activations of that random read-off. Whereas an output variable of a single circuit might stay intact so long as that specific circuit is preserved.

Have you also tried this in some toy settings where you know what FT-LLCs you should get out? Something where you'd be able to work out in advance on paper roughly how much the FT-LLC along some direction $v$ should differ from that along another direction $v'$?

Asking because last time I had a look at these numeric LLC samplers, they didn't exactly seem reliable yet, to put it mildly. The numbers they spit out seemed obviously nonsense in some cases. About the most positive thing you could say about them was that they at least appeared to get the ordering of LLC values between different networks right. In a few test cases. But that's not exactly a ringing endorsement. Just counting Hessian zero eigenvalues can often do that too. That was a while ago though.
 

Comment by Lucius Bushnaq (Lblack) on Most smart and skilled people are outside of the EA/rationalist community: an analysis · 2024-07-16T15:27:16.010Z · LW · GW

I think this is particularly incorrect for alignment, relative to a more typical STEM research field. Alignment is very young[1]. There's a lot less existing work worth reading than you have in a field like, say, lattice quantum field theory. Due to this, the time investment required to start contributing at the research frontier is very low, relatively speaking.

This is definitely changing. There's a lot more useful work than there was when I started dipping my toe into alignment three years ago. But compared to something like particle physics, it's still very little. 

  1. ^

    In terms of the total number of smart-person hours invested.

Comment by Lucius Bushnaq (Lblack) on The Standard Analogy · 2024-07-06T12:20:07.303Z · LW · GW

The reason I often bring up human evolution is because that's our only example of an outer optimization loop producing an inner general intelligence

There's also human baby brains training minds from something close to random initialisation at birth into a general intelligence. That example is plausibly a lot closer to how we might expect AGI training to go, because human brains are neural nets too and presumably have strictly-singular flavoured learning dynamics just like our artificial neural networks do. Whereas evolution acts on genes, which to my knowledge don't have neat NN-style loss landscapes heavily biased towards simplicity. 

Evolution is more like if people used classic genetic optimisation to blindly find neural network architectures, optimisers, training losses, and initialisation schemes, that are in turn evaluated by actually training the networks.

Not that I think this ultimately ends up weakening Doomimir's point all that much. Humans don't seem to end up with terminal goals that are straightforward copies of the reward circuits pre-wired into our brains either. I sure don't care much about predicting sensory inputs super accurately, which was probably a very big part of the training signal that built my mind.
 

Comment by Lucius Bushnaq (Lblack) on Lucius Bushnaq's Shortform · 2024-07-06T09:08:43.807Z · LW · GW

Many people in interpretability currently seem interested in ideas like enumerative safety, where you describe every part of a neural network to ensure all the parts look safe. Those people often also talk about a fundamental trade-off in interpretability between the completeness and precision of an explanation for a neural network's behavior and its description length. 

I feel like, at the moment, these sorts of considerations are all premature and beside the point.  

I don't understand how GPT-4 can talk. Not in the sense that I don't have an accurate, human-intuitive description of every part of GPT-4 that contributes to it talking well. My confusion is more fundamental than that. I don't understand how GPT-4 can talk the way a 17th-century scholar wouldn't understand how a Toyota Corolla can move. I have no gears-level model for how anything like this could be done at all. I don't want a description of every single plate and cable in a Toyota Corolla, and I'm not thinking about the balance between the length of the Corolla blueprint and its fidelity as a central issue of interpretability as a field. 

What I want right now is a basic understanding of combustion engines. I want to understand the key internal gears of LLMs that are currently completely mysterious to me, the parts where I don't have any functional model at all for how they even could work. What I ultimately want to get out of Interpretability at the moment is a sketch of Python code I could write myself, without a numeric optimizer as an intermediary, that would be able to talk.

Comment by Lucius Bushnaq (Lblack) on My AI Model Delta Compared To Yudkowsky · 2024-06-13T14:28:23.495Z · LW · GW

I kind of expect that things-people-call-their-values-that-are-not-their-revealed-preferences would be a concept that a smart AI that predicts systems coupled to humans would think in as well. It doesn't matter whether these stated values are 'incoherent' in the sense of not being in tune with actual human behavior, they're useful for modelling humans because humans use them to model themselves, and these self-models couple to their behavior. Even if they don't couple in the sense of being the revealed-preferences in an agentic model of the humans' actions.

Every time a human tries and mostly fails to explain what things they'd like to value if only they were more internally coherent and thought harder about things, a predictor trying to forecast their words and future downstream actions has a much easier time of it if they have a crisp operationalization of the endpoint the human is failing to operationalize. 

An analogy: If you're trying to predict what sorts of errors a diverse range of students might make while trying to solve a math problem, it helps to know what the correct answer is. Or if there isn't a single correct answer, what the space of valid answers looks like.

Comment by Lucius Bushnaq (Lblack) on My AI Model Delta Compared To Yudkowsky · 2024-06-11T21:58:43.197Z · LW · GW

Corrigibility and actual human values are both heavily reflective concepts.  If you master a requisite level of the prerequisite skill of noticing when a concept definition has a step where its boundary depends on your own internals rather than pure facts about the environment -- which of course most people can't do because they project the category boundary onto the environment

Actual human values depend on human internals, but predictions about systems that strongly couple to human behavior depend on human internals as well. I thus expect efficient representations of systems that strongly couple to human behavior to include human values as somewhat explicit variables. I expect this because humans seem agent-like enough that modeling them as trying to optimize for some set of goals is a computationally efficient heuristic in the toolbox for predicting humans. 

At lower confidence, I also think human expected-value-trajectory-under-additional-somewhat-coherent-reflection would show up explicitly in the thoughts of AIs that try to predict systems strongly coupled to humans. I think this because humans seem to change their values enough over time in a sufficiently coherent fashion that this is a useful concept to have. E.g., when watching my cousin grow up, I find it useful and possible to have a notion in advance of what they will come to value when they are older and think more about what they want. 

I do not think there is much reason by default for the representations of these human values and human value trajectories to be particularly related to the AI's values in a way we like. But that they are in there at all sure seems like it'd make some research easier, compared to the counterfactual. For example, if you figure out how to do good interpretability, you can look into an AI and get a decent mathematical representation of human values and value trajectories out of it. This seems like a generally useful thing to have. 

If you separately happen to have developed a way to point AIs at particular goals, perhaps also downstream of you having figured out how to do good interpretability[1], then having explicit access to a decent representation of human values and human expected-value-trajectories-under-additional-somewhat-coherent-reflection might be a good starting point for research on making superhuman AIs that won't kill everyone. 

  1. ^

    By 'good interpretability', I don't necessarily mean interpretability at the level where we understand a forward pass of GPT-4 so well that we can code our own superior LLM by hand in Python like a GOFAI. It might need to be better interpretability than that. This is because an AI's goals, by default, don't need to be explicitly represented objects within the parameter structure of a single forward pass. 

Comment by Lucius Bushnaq (Lblack) on Alexander Gietelink Oldenziel's Shortform · 2024-06-05T14:09:55.754Z · LW · GW

But here we are, and the idea of the USA govt nationalizing OpenAI seems a million miles outside the Overton window.
 

Registering that it does not seem that far outside the Overton window to me anymore. My own advance prediction of how much governments would be flipping out around this capability level has certainly been proven a big underestimate.


 

Comment by Lucius Bushnaq (Lblack) on MIRI 2024 Communications Strategy · 2024-05-30T22:43:43.503Z · LW · GW

If there's a legal ceiling on AI capabilities, that reduces the short term economic incentive to improve algorithms. If improving algorithms gets you categorised as uncool at parties, that might also reduce the short term incentive to improve algorithms.

It is thus somewhat plausible to me that an enforced legal limit on AI capabilities backed by high-status-cool-party-attending-public opinion would slow down algorithmic progress significantly.

Comment by Lucius Bushnaq (Lblack) on Interpretability: Integrated Gradients is a decent attribution method · 2024-05-21T11:56:10.814Z · LW · GW

The issue with single datapoints, at least in the context we used this for, which was building interaction graphs for the LIB papers, is that the answer to 'what directions in the layer were relevant for computing the output?' is always trivially just 'the direction the activation vector was pointing in.'

This then leads to every activation vector becoming its own 'feature', which is clearly nonsense. To understand generalisation, we need to see how the network is re-using a small common set of directions to compute outputs for many different inputs. Which means looking at a dataset of multiple activations.

And basically the trouble a lot of work that attempts to generalize ends up with is that some phenomena are very particular to specific cases, so one risks losing a lot of information by only focusing on the generalizable findings.
 

The application we were interested in here was getting some well founded measure of how 'strongly' two features interact. Not a description of what the interaction is doing computationally. Just some way to tell whether it's 'strong' or 'weak'. We wanted this so we could find modules in the network.

Averaging over data loses us information about what the interaction is doing, but it doesn't necessarily lose us information about interaction 'strength', since that's a scalar quantity. We just need to set our threshold for connection relevance sensitive enough that making a sizeable difference on a very small handful of training datapoints still qualifies.

Comment by Lucius Bushnaq (Lblack) on Interpretability: Integrated Gradients is a decent attribution method · 2024-05-21T09:16:16.274Z · LW · GW

If you want to get attributions between all pairs of basis elements/features in two layers, attributions based on the effect of a marginal ablation will take you a number of forward passes that scales with the number of features in a layer. Integrated gradients will take a similarly scaling number of backward passes, and if you're willing to write custom code that exploits the specific form of the layer transition, it can take less than that.

If you're averaging over a data set, IG is also amenable to additional cost reduction through stochastic source techniques.
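For reference, a minimal sketch (not our exact implementation) of integrated-gradients attribution of one read-off direction in the next layer to the coordinates of the current layer; `layer_fn`, the shapes, and the step count are all placeholder assumptions:

```python
import torch

def integrated_gradient_attribution(layer_fn, x, baseline, read_dir, n_steps=32):
    """Attribute the scalar output feature <read_dir, layer_fn(x)> to the
    coordinates of x, integrating gradients along the straight path from
    baseline to x. x, baseline: 1-D tensors; layer_fn: any differentiable map."""
    total = torch.zeros_like(x)
    for k in range(1, n_steps + 1):
        point = baseline + (k / n_steps) * (x - baseline)
        point = point.detach().requires_grad_(True)
        out = layer_fn(point) @ read_dir              # output feature activation (scalar)
        (grad,) = torch.autograd.grad(out, point)
        total += grad
    # One backward pass per output feature per integration step; each pass yields
    # attributions to every input coordinate/direction at once.
    return (x - baseline) * total / n_steps
```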

Comment by Lucius Bushnaq (Lblack) on Interpretability: Integrated Gradients is a decent attribution method · 2024-05-21T07:58:02.356Z · LW · GW

The same applies with attribution in general (e.g. in decision making).

As in, you're also skeptical of traditional Shapley values in discrete coalition games?

"Completeness" strikes me as a desirable property for attributions to be properly normalized. If attributions aren't bounded in some way, it doesn't seem to me like they're really 'attributions'.

Very open to counterarguments here, though. I'm not particularly confident here either. There's a reason this post isn't titled 'Integrated Gradients are the correct attribution method'.
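For reference, the completeness property I have in mind above is that the attributions of the output to the input coordinates sum to the total change relative to the baseline:

$$\sum_i \mathrm{IG}_i(x) \;=\; f(x) - f(x'),$$

where $f$ is the function being attributed, $x$ the input, and $x'$ the baseline.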

Comment by Lucius Bushnaq (Lblack) on The Local Interaction Basis: Identifying Computationally-Relevant and Sparsely Interacting Features in Neural Networks · 2024-05-21T07:43:00.323Z · LW · GW

I doubt it. Evaluating gradients along an entire trajectory from a baseline gave qualitatively similar results.

A saturated softmax also really does induce insensitivity to small changes. If two nodes are always connected by a saturated softmax, they can't be exchanging more than one bit of information. Though the importance of that bit can be large.

My best guess for why the Interaction Basis didn't work is that sparse, overcomplete representations really are a thing. So in general, you're not going to get a good decomposition of LMs from a Cartesian basis of activation space.

 

Comment by Lucius Bushnaq (Lblack) on Transcoders enable fine-grained interpretable circuit analysis for language models · 2024-05-01T07:22:13.043Z · LW · GW

Nice! We were originally planning to train sparse MLPs like this this week.

Do you have any plans of doing something similar for attention layers? Replacing them with wider attention layers with a sparsity penalty, on the hypothesis that they'd then become more monosemantic?

Also, do you have any plans to train sparse MLP at multiple layers in parallel, and try to penalise them to have sparsely activating connections between each other in addition to having sparse activations?

Comment by Lucius Bushnaq (Lblack) on Superposition is not "just" neuron polysemanticity · 2024-04-27T15:36:06.274Z · LW · GW

Thank you, I've been hoping someone would write this disclaimer post.

I'd add another possible explanation for polysemanticity, which is that the model might be thinking in a limited number of linearly represented concepts, but those concepts need not map onto concepts humans are already familiar with. At least not all of them.

Just because the simple meaning of a direction doesn't jump out at an interp researcher when they look at a couple of activating dataset examples doesn't mean it doesn't have one. Humans probably wouldn't even always recognise the concepts other humans think in on sight.

Imagine a researcher who hasn't studied thermodynamics much looking at a direction in a model that tracks the estimated entropy of a thermodynamic system it's monitoring: 'It seems to sort of activate more when the system is warmer. But that's not all it's doing. Sometimes it also goes up when two separated pockets of different gases mix together, for example. Must be polysemantic.'

Comment by Lucius Bushnaq (Lblack) on Examples of Highly Counterfactual Discoveries? · 2024-04-27T07:07:55.122Z · LW · GW

I would not say that the central insight of SLT is about priors. Under weak conditions the prior is almost irrelevant. Indeed, the RLCT is independent of the prior under very weak nonvanishing conditions.

I don't think these conditions are particularly weak at all. Any prior that fulfils them is a prior that would not be normalised right if the parameter-function map were one-to-one.

It's a kind of prior people like to use a lot, but that doesn't make it a sane choice. 

A well-normalised prior for a regular model probably doesn't look very continuous or differentiable in this setting, I'd guess.

To be sure - generic symmetries are seen by the RLCT. But these are, in some sense, the uninteresting ones. The interesting thing is the local singular structure and its unfolding in phase transitions during training.

The generic symmetries are not what I'm talking about. There are symmetries in neural networks that are neither generic, nor only present at finite sample size. These symmetries correspond to different parametrisations that implement the same input-output map. Different regions in parameter space can differ in how many of those equivalent parametrisations they have, depending on the internal structure of the networks at that point.

The issue of the true distribution not being contained in the model is called 'unrealizability' in Bayesian statistics. It is dealt with in Watanabe's second 'green' book. Nonrealizability is key to the most important insight of SLT contained in the last sections of the second to last chapter of the green book: algorithmic development during training through phase transitions in the free energy.

I know it 'deals with' unrealizability in this sense, that's not what I meant. 

I'm not talking about the problem of characterising the posterior right when the true model is unrealizable. I'm talking about the problem where the actual logical statement we defined our prior and thus our free energy relative to is an insane statement to make and so the posterior you put on it ends up negligibly tiny compared to the probability mass that lies outside the model class. 

But looking at the green book, I see it's actually making very different, stat-mech style arguments that reason about the KL divergence between the true distribution and the guess made by averaging the predictions of all models in the parameter space according to their support in the posterior. I'm going to have to translate more of this into Bayes to know what I think of it.
 

Comment by Lucius Bushnaq (Lblack) on Examples of Highly Counterfactual Discoveries? · 2024-04-25T23:12:32.383Z · LW · GW

The RLCT = first-order term for in-distribution generalization error
 

Clarification: The 'derivation' for how the RLCT predicts generalization error IIRC goes through the same flavour of argument as the derivation of the vanilla Bayesian Information Criterion. I don't like this derivation very much. See e.g. this one on Wikipedia.

So what it's actually showing is just that:

  1. If you've got a class of different hypotheses $M$, containing many individual hypotheses $m \in M$.
  2. And you've got a prior ahead of time that says the chance that any one of the hypotheses in $M$ is true is some number $p(M)$; let's say it's $0.1$ as an example.
  3. And you distribute this total probability $p(M)$ around the different hypotheses in an even-ish way, so $p(m) \approx p(M)/|M|$, roughly.
  4. And then you encounter a bunch of data $D$ (the training data) and find that only one or a tiny handful of hypotheses in $M$ fit that data, so $p(D \mid m)$ is non-negligible for basically only one hypothesis $m^*$...
  5. Then your posterior probability $p(m^* \mid D)$ that the hypothesis $m^*$ is correct will probably be tiny, scaling with $1/|M|$. If we spread your prior $p(M)$ over lots of hypotheses, there isn't a whole lot of prior to go around for any single hypothesis. So if you then encounter data that discredits all hypotheses in $M$ except one, that tiny bit of spread-out prior for that one hypothesis will make up a tiny fraction of the posterior, unless $p(D \mid \neg M)$ is really small, i.e. no hypothesis outside the set $M$ can explain the data either.
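In symbols, step 5 is just Bayes' rule with the evenly spread prior from step 3 plugged in:

$$p(m^* \mid D) \;=\; \frac{p(D \mid m^*)\,p(m^*)}{p(D)} \;\approx\; \frac{p(D \mid m^*)\,\frac{p(M)}{|M|}}{p(D \mid m^*)\,\frac{p(M)}{|M|} + p(D \mid \neg M)\,p(\neg M)},$$

which goes to zero as $|M| \to \infty$ unless $p(D \mid \neg M)\,p(\neg M)$ is comparably tiny.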

So if our hypotheses correspond to different function fits (one for each parameter configuration, meaning we'd have $2^{kN}$ hypotheses if our function fits used $N$ different $k$-bit floating point numbers), the chance we put on any one of the function fits being correct will be tiny. So having more parameters is bad, because the way we picked our prior means our belief in any one hypothesis goes to zero as $N$ goes to infinity.

So the Wikipedia derivation for the original vanilla posterior of model selection is telling us that having lots of parameters is bad, because it means we're spreading our prior around exponentially many hypotheses.... if we have the sort of prior that says all the hypotheses are about equally likely. 

But that's an insane prior to have! We only have $1$ worth of probability to go around, and there's an infinite number of different hypotheses. Which is why you're supposed to assign priors based on K-complexity, or at least something that doesn't go to zero as the number of hypotheses goes to infinity. The derivation is just showing us how things go bad if we don't do that.

In summary: badly normalised priors behave badly

SLT mostly just generalises this derivation to the case where parameter configurations in our function fits don't line up one-to-one with hypotheses.

It tells us that if we are spreading our prior around evenly over lots of parameter configurations, but exponentially many of these parameter configurations are secretly just re-expressing the same hypothesis, then that hypothesis can actually get a decent amount of prior, even if the total number of parameter configurations is exponentially large.

So our prior over hypotheses in that case is actually somewhat well-behaved, in that it can end up normalised properly when we take the number of parameters to infinity. That is a basic requirement a sane prior needs to have, so we're at least not completely shooting ourselves in the foot anymore. But that still doesn't show why this prior, which neural networks sort of[1] implicitly have, is actually good. Just that it's no longer obviously wrong in this specific way.

Why does this prior apparently make decent-ish predictions in practice? That is, why do neural networks generalise well? 

I dunno. SLT doesn't say. It just tells us how the conversion from a prior over parameters to a prior over hypotheses works, and in the process shows us that neural network priors can be at least somewhat sanely normalised for large numbers of parameters. More than we might have initially thought, at least.

That's all though. It doesn't tell us anything else about what makes a Gaussian over transformer parameter configurations a good starting guess for how the universe works.

How to make this story tighter?

If people aim to make further headway on the question of why some function fits generalise somewhat and others don't, beyond 'well, standard Bayesianism suggests you should at least normalise your prior so that having more hypotheses isn't actively bad', then I'd suggest a starting point might be to make a different derivation for the posterior on the fits, one that isn't trying to reason about $p(M)$ defined as the probability that one of the function fits is 'true' in the sense of exactly predicting the data. Of course none of them are. We know that. When we fit a many-billion-parameter transformer to internet data, we don't expect going in that any of its parameter configurations will give zero loss up to quantum noise on any and all text prediction tasks in the universe until the end of time. Under that definition of $p(M)$, which the SLT derivation of the posterior and most other derivations of this sort I've seen seem to implicitly make, we basically have $p(M) \approx 0$ going in! Maybe look at the Bayesian posterior for a set of hypotheses we actually believe in at all before we even see any data.

SLT in three sentences

'You thought your choice of prior was broken because it's not normalised right, and so goes to zero if you hand it too many hypotheses. But you missed that the way you count your hypotheses is also broken, and the two mistakes sort of cancel out. Also, here's a bunch of algebraic geometry that sort of helps you figure out what probabilities your weirdo prior actually assigns to hypotheses, though that part's not really finished.'

SLT in one sentence

'Loss basins with bigger volume will have more posterior probability if you start with a uniform-ish prior over parameters, because then bigger volumes get more prior, duh.'

 

 

  1. ^

    Sorta, kind of, arguably. There's some stuff left to work out here. For example, vanilla SLT doesn't even actually tell you which parts of your posterior over parameters are part of the same hypothesis. It just sort of assumes that everything left with support in the posterior after training is part of the same hypothesis, even though some of these parameter settings might generalise totally differently outside the training data. My guess is that you can patch this up, without having to compare equivalence over all possible inputs, by checking which parameter settings give the same hidden representations over the training data, not just the same outputs.

Comment by Lucius Bushnaq (Lblack) on Examples of Highly Counterfactual Discoveries? · 2024-04-25T20:30:17.110Z · LW · GW

It's measuring the volume of points in parameter space with loss within $\epsilon$ of the minimum, when $\epsilon$ is infinitesimal.

This is slightly tricky because it doesn't restrict itself to bounded parameter spaces,[1] but you can fix it with a technicality by considering how the volume scales with $\epsilon$ instead.
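Concretely, the scaling being invoked here is (roughly) that the volume of near-minimal-loss points behaves like

$$V(\epsilon) \;\propto\; \epsilon^{\lambda}\,(-\log \epsilon)^{m-1} \quad \text{as } \epsilon \to 0,$$

with $\lambda$ the RLCT and $m$ its multiplicity, so the exponent $\lambda$ is what survives the $\epsilon \to 0$ limit.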

In real networks trained with finite amounts of data, you care about the case where $\epsilon$ is small but finite, so this is ultimately inferior to just measuring how many configurations of floating point numbers get loss within $\epsilon$ of the minimum, if you can manage that.

I still think SLT has some neat insights that helped me deconfuse myself about networks.

For example, like lots of people, I used to think you could maybe estimate the volume of basins with loss within $\epsilon$ of the minimum using just the eigenvalues of the Hessian. You can't. At least not in general.

 

  1. ^

    Like the floating point numbers in a real network, which can only get so large. A prior of finite width over the parameters also effectively bounds the space

Comment by Lucius Bushnaq (Lblack) on Why Would Belief-States Have A Fractal Structure, And Why Would That Matter For Interpretability? An Explainer · 2024-04-18T18:30:12.803Z · LW · GW

Right. If I have $n$ fully independent latent variables that suffice to describe the state of the system, each of which can be in one of $k$ different states, then even tracking the probability of every state for every latent with a $b$-bit precision float will only take me about $n \cdot k \cdot b$ bits. That's actually not that bad compared to the roughly $n \log_2 k$ bits for just tracking some max likelihood guess.
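Plugging in some made-up numbers to illustrate: with $n = 1000$ latents, $k = 16$ states each, and $b = 16$-bit floats,

$$n\,k\,b = 1000 \cdot 16 \cdot 16 = 256{,}000 \text{ bits}, \qquad n \log_2 k = 1000 \cdot 4 = 4000 \text{ bits},$$

a constant factor of $k\,b/\log_2 k = 64$, rather than anything exponential in $n$.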

Comment by Lucius Bushnaq (Lblack) on Why Would Belief-States Have A Fractal Structure, And Why Would That Matter For Interpretability? An Explainer · 2024-04-18T12:56:23.199Z · LW · GW

With that in mind, the real hot possibility is the inverse of what Shai and his coresearchers did. Rather than start with a toy model with some known nice latents, start with a net trained on real-world data, and go look for self-similar sets of activations in order to figure out what latent variables the net models its environment as containing. The symmetries of the set would tell us something about how the net updates its distributions over latents in response to inputs and time passing, which in turn would inform how the net models the latents as relating to its inputs, which in turn would inform which real-world structures those latents represent.
 

Thank you, this was very much the paragraph I was missing to understand why comp mech might be useful for interpretability.

How sure are we that models will keep tracking Bayesian belief states, and so allow this inverse reasoning to be used, when they don't have enough space and compute to actually track a distribution over latent states?

Approximating those distributions by something like 'peak position plus spread' seems like the kind of thing a model might do to save space.
 

Comment by Lucius Bushnaq (Lblack) on How to safely use an optimizer · 2024-03-28T23:37:40.645Z · LW · GW

Typo fixed, thanks.

Comment by Lucius Bushnaq (Lblack) on How to safely use an optimizer · 2024-03-28T23:07:35.033Z · LW · GW

Also, one of my points was that even a tiny trace of malicious optimization here can still have large effects because there are so many neutral options which are convergently unsafe, so with such a high base rate, even a bit of optimization can result in a large absolute increase in failure
 

Your example has it be an important bit though. What database to use. Not a random bit. If I'm getting this right, that would correspond to far more than one bit of adversarial optimisation permitted for the oracle in this setup. 

One bit of adversarial optimisation doesn't mean the oracle gets to select one bit of its choice in the string to flip; it means it gets to select one of two strings[1].

  1. ^

    Plus the empty string for not answering.

Comment by Lucius Bushnaq (Lblack) on How to safely use an optimizer · 2024-03-28T18:07:19.862Z · LW · GW

First thought: The Oracle is going to choose to systematically answer or not answer the queries we give it. This represents a causal channel of one bit per query it can use to influence the outside world[1]. Can you conquer the world in one awkwardly delivered kilobyte or less? Maybe.

Maybe we can stop that by scrapping every Oracle that doesn't answer and training a new one with presumably new goals? Or would the newly trained Oracles just cooperate with the former dead ones in one long-term plan to break out, take control, and reward all the dead Oracles created on the way with utility?

Second thought: What kind of optimisation problems can we specify well enough for a formal proof checker to tell whether they've been satisficed? Are they the kind of problems where solving them can save the world? 

It feels to me like the answer is 'yes'.  A lot of core research that would allow e.g. for brain augmentation seems like they'd be in that category. But my inner John Wentworth sim is looking kind of sceptical.

 

  1. ^

    It also gets to choose the timing of its answer, but I assume we are not being idiots about that and are setting the output channel to always deliver results after a set time $t$, no more and no less.

Comment by Lucius Bushnaq (Lblack) on Some costs of superposition · 2024-03-10T21:27:54.551Z · LW · GW

I think the extra factor may be in there because JL is putting an upper bound on the interference, rather than describing the typical interference of two features. As you increase $n$ (more features), it becomes more difficult to choose feature embeddings such that no features have high interference with any other features.

So it's not really the 'typical' noise between any two given features, but it might be the relevant bound for the noise anyway? Not sure right now which one matters more for practical purposes.
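Rough version of the distinction I have in mind: two fixed random unit feature vectors in $d$ dimensions have typical overlap

$$|\langle f_i, f_j \rangle| \sim \frac{1}{\sqrt{d}},$$

while demanding that all $\binom{n}{2}$ pairs simultaneously stay below some $\varepsilon$ (the JL-style guarantee) needs $d \gtrsim \log(n)/\varepsilon^2$, i.e. a worst-case interference of order $\sqrt{\log(n)/d}$.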

Comment by Lucius Bushnaq (Lblack) on story-based decision-making · 2024-03-05T13:46:43.543Z · LW · GW

How does that make you feel about the chances of the rebels destroying the Death Star? Do you think that the competent planning being displayed is a good sign? According to movie logic, it's a really bad sign.

Even in the realm of movie logic, I always thought the lack of backup plans was supposed to signal how unlikely the operation is to work, so as to create at least some instinctive tension in the viewer when they know perfectly well that this isn't the kind of movie that realistically ends with the Death Star blowing everyone up. In fact, these scenes usually have characters directly stating how nigh-impossible the mission is.

To the extent that the presence of backup plans make me worried, it's because so many movies have pulled this cheap trick that my brain now associates the presence of backup plans with the more uncommon kind of story that attempts to work a little like real life, so things won't just magically work out and the Death Star really might blow everyone up.

Comment by Lucius Bushnaq (Lblack) on New LessWrong review winner UI ("The LeastWrong" section and full-art post pages) · 2024-02-28T15:47:17.983Z · LW · GW

I feel like 'LeastWrong' implies a focus on posts judged highly accurate or predictive in hindsight, when in reality I feel like the curation process tends to weigh originality, depth and general importance a lot as well, with posts regarded by the community as 'big if true' often being held in high regard.

Comment by Lucius Bushnaq (Lblack) on The Hidden Complexity of Wishes · 2024-02-23T22:10:08.109Z · LW · GW

I figured the probability adjustments the pump was making were modifying Everett branch amplitude ratios. Not probabilities as in reasoning tools to deal with incomplete knowledge of the world and logical uncertainty that tiny human brains use to predict how this situation might go based on looking at past 'base rates'. It's unclear to me how you could make the latter concept of an outcome pump a coherent thing at all. The former, on the other hand, seems like the natural outcome of the time machine setup described. If you turn back time when the branch doesn't have the outcome you like, only branches with the outcome you like will remain.

I can even make up a physically realisable model of an outcome pump that acts roughly like the one described in the story without using time travel at all. You just need a bunch of high quality sensors to take in data, an AI that judges from the observed data whether the condition set is satisfied, a tiny quantum random noise generator to respect the probability orderings desired, and a false vacuum bomb, which triggers immediately if the AI decides that the condition does not seem to be satisfied. The bomb works by causing a local decay of the metastable[1] electroweak vacuum. This is a highly energetic, self-sustaining process once it gets going, and spreads at the speed of light. Effectively destroying the entire future light-cone, probably not even leaving the possibility for atoms and molecules to ever form again in that volume of space.[2]

So when the AI triggers the bomb or turns back time, the amplitude of earth in that branch basically disappears. Leaving the users of the device to experience only the branches in which the improbable thing they want to have happen happens.

And causing a burning building with a gas supply in it to blow up strikes me as something you can maybe do with a lot less random quantum noise than making your mother phase through the building. Firefighter brains are maybe comparatively easy to steer with quantum noise as well, but that only works if there are any physically nearby enough to reach the building in time to save your mother at the moment the pump is activated. 

This is also why the pump has a limit on how improbable an event it can make happen. If the event has an amplitude of roughly the same size as the amplitude for the pump's sensors reporting bad data or otherwise causing the AI to make the wrong call, the pump will start being unreliable. If the event's amplitude is much lower than the amplitude for the pump malfunctioning, it basically can't do the job at all.

  1. ^

    In real life, it was an open question whether our local electroweak vacuum is in a metastable state last I checked, with the latest experimental evidence I'm aware of, from a couple of years ago, tentatively (ca. 3 sigma, I think?) pointing to yes, though that calculation is probably assuming Standard Model physics, the applicability of which people can argue about to hell and back. But it sure seems like a pretty self-consistent way for the world to be, so we can just declare that the fictional universe works like that. Substitute strangelets or any other conjectured instant-earth-annihilation method of your choice if you like.

  2. ^

    Because the mass terms for the elementary quantum fields would look all different now. Unclear to me that the bound structures of hadronic matter we are familiar with would still be a thing. 

Comment by Lucius Bushnaq (Lblack) on Toward A Mathematical Framework for Computation in Superposition · 2024-02-09T14:16:39.238Z · LW · GW

Thinking the example through a bit further: In a ReLU layer, features are all confined to the positive quadrant. So superposed features computed in a ReLU layer all have positive inner product. So if I send the output of one ReLU layer implementing a batch of AND gates in superposition directly to another ReLU layer implementing a further set of ANDs on a subset of the outputs of that previous layer[1], the assumption that input directions are equally likely to have positive and negative inner products is not satisfied.

Maybe you can fix this with bias setoffs somehow? Not sure at the moment. But as currently written, it doesn't seem like I can use the outputs of one layer performing a subset of ANDs as the inputs of another layer performing another subset of ANDs.

EDIT: Talked it through with Jake. Bias setoff can help, but it currently looks to us like you still end up with AND gates that share a variable systematically having positive sign in their inner product. Which might make it difficult to implement a valid general recipe for multi-step computation if you try to work out the details.

  1. ^

    A very central use case for a superposed boolean general computer. Otherwise you don't actually get to implement any serial computation.

Comment by Lucius Bushnaq (Lblack) on On the Debate Between Jezos and Leahy · 2024-02-06T23:43:43.251Z · LW · GW

Noting out loud that I'm starting to feel a bit worried about the culture-war-like tribal conflict dynamic between AIS/LW/EA and e/acc circles that I feel is slowly beginning to set in on our end as well, centered on Twitter but also present to an extent on other sites and in real life. The potential sanity damage to our own community and possibly future AI policy from this should it intensify is what concerns me most here.

People have tried to suck the rationalist diaspora into culture-war-like debates before, and I think the diaspora has done a reasonable enough job of surviving intact by not taking the bait much. But on this topic, many of us actually really care about both the content of the debate itself and what people outside the community think of it, and I fear it is making us more vulnerable to the algorithms' attempts to infect us than we have been in the past.

I think us going out of our way to keep standards high in memetic public spaces might possibly help some in keeping our own sanity from deteriorating. If we engage on Twitter, maybe we don't just refrain from lowering the level of debate and using arguments as soldiers but try to have a policy of actively commenting to correct the record when people of any affiliation make locally-invalid arguments against our opposition if we would counterfactually also correct the record were such a locally-invalid argument directed against us or our in-group. I think high status and high Twitter/Youtube-visible community members' behavior might end up having a particularly high impact on the eventual outcome here.

Comment by Lucius Bushnaq (Lblack) on Toward A Mathematical Framework for Computation in Superposition · 2024-02-06T12:54:07.497Z · LW · GW

Having digested this a bit more, I've got a question regarding the noise terms, particularly for section 1.3 that deals with constructing general programs over sparse superposed variables.

Unfortunately, since the feature directions are random vectors, their inner products will have a typical size of $1/\sqrt{d}$. So, on an input which has no features connected to neuron $i$, the preactivation for that neuron will not be zero: it will be a sum of these interference terms, one for each feature that is connected to the neuron. Since the interference terms are uncorrelated and mean zero, they start to cause neurons to fire incorrectly when on the order of $d$ features are connected to each neuron. Since each feature is connected to each neuron with probability $p$, this means neurons start to misfire when $p$ times the total number of input features becomes comparable to $d$[13].

It seems to me that the assumption of uncorrelated errors here is rather load-bearing. If you don't get uncorrelated errors over the inputs you actually care about, you are forced to scale back to connecting only on the order of $\sqrt{d}$ features to every neuron, correct? And the same holds for the construction right after this one, and probably most of the other constructions shown here?

And if you only get on the order of $\sqrt{d}$ connected features per neuron, doesn't that scale back the number of arbitrary AND gates you can compute per layer accordingly?
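(Back-of-the-envelope version of the scaling I'm gesturing at: a sum of $K$ interference terms of typical size $1/\sqrt{d}$ has typical size $\sqrt{K}/\sqrt{d}$ if the terms are uncorrelated, but $K/\sqrt{d}$ if they systematically share a sign, so an order-one misfiring threshold is hit at $K \sim d$ in the first case but already at $K \sim \sqrt{d}$ in the second.)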

Now, the reason these errors are 'uncorrelated' is that the features were embedded as random vectors in our layer space. In other words, the distributions over which they are uncorrelated is the distribution of feature embeddings and sets of neurons chosen to connect to particular features. So for any given network, we draw from this distribution only once, when the weights of the network are set, and then we are locked into it.

So this noise will affect particular sets of inputs strongly, systematically, in the same direction every time. If I divide the set of features into two sets, where features in each half are embedded along directions that have a positive inner product with each other[1], I can't connect more than roughly $\sqrt{d}$ of them from the same half to the same neuron without making it misfire, right? So if I want to implement a layer that performs lots of ANDs on exactly those features that happen to be embedded within the same set, I can't really do that. Now, for any given embedding, that's maybe only some particular sets of features which might not have much significance to each other. But then the embedding directions of features in later layers depend on what was computed and how in the earlier layers, and the limitations on what I can wire together apply every time.

I am a bit worried that this and similar assumptions about stochasticity here might turn out to prevent you from wiring together the features you need to construct arbitrary programs in superposition, with 'noise' from multiple layers turning out to systematically interact in exactly such a way as to prevent you from computing too much general stuff. Not because I see a gears-level way this could happen right now, but because I think rounding off things to 'noise' that are actually systematic is one of these ways an exciting new theory can often go wrong and see a structure that isn't there, because you are not tracking the parts of the system that you have labeled noise and seeing how the systematics of their interactions constrain the rest of the system. 

Like making what seems like a blueprint for perpetual motion machine because you're neglecting to model some small interactions with the environment that seem like they ought not to affect the energy balance on average, missing how the energy losses/gains in these interactions are correlated with each other such that a gain at one step immediately implies a loss in another.

Aside from looking at error propagation more, maybe a way to resolve this might be to switch over to thinking about one particular set of weights instead of reasoning about the distribution the weights are drawn from? 

  1. ^

    E.g. pick some hyperplanes and declare everything on one side of all of them to be the first set.

Comment by Lucius Bushnaq (Lblack) on Welcome to the SSC Dublin Meetup · 2024-02-05T18:20:47.016Z · LW · GW

Update February 2024: I left Ireland over a year ago, and the group is probably dead now, unfortunately. There's still an EA group around, which as of this writing seems quite active.

Comment by Lucius Bushnaq (Lblack) on Open Source Sparse Autoencoders for all Residual Stream Layers of GPT2-Small · 2024-02-02T15:15:15.433Z · LW · GW

If the SAEs are not full-distribution competitive, I don't really trust that the features they're seeing are actually the variables being computed on, in the sense of reflecting the true mechanistic structure of the learned network algorithm, and that the explanations they offer are correct[1]. If I pick a small enough sub-distribution, I can pretty much always get perfect reconstruction no matter what kind of probe I use, because e.g. measured over a single token, the network layers will have representation rank $1$, and the entire network can be written as a rank-$1$ linear transform. So I can declare the activation vector at each layer $l$ to be the active "feature", use the single-entry linear maps between SAEs to "explain" how features between layers map to each other, and be done. Those explanations will of course be nonsense and not at all extrapolate out of distribution. I can't use them to make a causal model that accurately reproduces the network's behavior or some aspect of it when dealing with a new prompt.
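(To spell out the trivial 'explanation' in question: if layer $l$ produces the single activation vector $a_l$ on this one token, then the layer-to-layer map is reproduced exactly by the rank-$1$ matrix

$$M_l = \frac{a_{l+1}\,a_l^{\top}}{\lVert a_l \rVert^2}, \qquad M_l\,a_l = a_{l+1},$$

which fits the data perfectly while saying nothing about the actual mechanism.)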

We don't train SAEs on literally single tokens, but I would be worried about the qualitative problem persisting. The network itself doesn't have a million different algorithms to perform a million different narrow subtasks. It has a finite description length. It's got to be using a smaller set of general algorithms that handle all of these different subtasks, at least to some extent. Likely more so for more powerful and general networks. If our "explanations" of the network then model it in terms of different sets of features and circuits for different narrow subtasks that don't fit together coherently to give a single good reconstruction loss over the whole distribution, that seems like a sign that our SAE layer activations didn't actually capture the general algorithms in the network. Thus, predictions about network behaviour made on the basis of inspecting causal relationships between these SAE activations might not be at all reliable, especially predictions about behaviours like instrumental deception which might be very mechanistically related to how the network does well on cross-domain generalisation.

  1. ^

    As in, that seems like a minimum requirement for the SAEs to fulfil. Not that this would be enough to make me trust predictions about generalisation based on stories about SAE activations.

Comment by Lucius Bushnaq (Lblack) on Open Source Sparse Autoencoders for all Residual Stream Layers of GPT2-Small · 2024-02-02T10:16:05.485Z · LW · GW

Our reconstruction scores were pretty good. We found GPT2 small achieves a cross entropy loss of about 3.3, and with reconstructed activations in place of the original activation, the CE Log Loss stays below 3.6. 

Unless my memory is screwing up the scale here, 0.3 CE Loss increase seems quite substantial? A 0.3 CE loss increase on the pile is roughly the difference between Pythia 410M and Pythia 2.8B. And do I see it right that this is the CE increase maximum for adding in one SAE, rather than all of them at the same time? So unless there is some very kind correlation in these errors where every SAE is failing to reconstruct roughly the same variance, and that variance at early layers is not used to compute the variance SAEs at later layers are capturing, the errors would add up? Possibly even worse than linearly? What CE loss do you get then?
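A minimal sketch of the comparison I mean, assuming a TransformerLens-style `HookedTransformer` and a (hypothetical) dict `saes` mapping layer index to a trained SAE whose forward pass returns the reconstructed activations:

```python
import torch
from transformer_lens import HookedTransformer

model = HookedTransformer.from_pretrained("gpt2")
tokens = model.to_tokens("Some held-out text to evaluate the patched model on.")

# `saes` = {layer_index: trained_sae} is assumed to already exist (hypothetical),
# with sae(acts) returning the reconstructed activations.

def splice_sae(sae):
    def hook(resid, hook):   # resid: (batch, seq, d_model)
        return sae(resid)    # replace activations with the SAE reconstruction
    return hook

clean_loss = model(tokens, return_type="loss")
single_loss = model.run_with_hooks(
    tokens, return_type="loss",
    fwd_hooks=[("blocks.6.hook_resid_pre", splice_sae(saes[6]))],
)
all_loss = model.run_with_hooks(
    tokens, return_type="loss",
    fwd_hooks=[(f"blocks.{l}.hook_resid_pre", splice_sae(sae)) for l, sae in saes.items()],
)
print(clean_loss.item(), single_loss.item(), all_loss.item())
```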

Have you tried talking to the patched models a bit and compared to what the original model sounds like? Any discernible systematic differences in where that CE increase is changing the answers?

Comment by Lucius Bushnaq (Lblack) on Making every researcher seek grants is a broken model · 2024-01-26T21:10:25.661Z · LW · GW

Can someone destroy my hope early by giving me the Molochian reasons why this change hasn't been made already and never will be?

Comment by Lucius Bushnaq (Lblack) on This might be the last AI Safety Camp · 2024-01-25T11:47:44.568Z · LW · GW

MATS has steadily increased in quality over the past two years, and is now more prestigious than AISC. We also have Astra, and people who go directly to residencies at OpenAI, Anthropic, etc. One should expect that AISC doesn't attract the best talent.

  • If so, AISC might not make efficient use of mentor / PI time, which is a key goal of MATS and one of the reasons it's been successful.

AISC isn't trying to do what MATS does. Anecdotal, but for me, MATS could not have replaced AISC (spring 2022 iteration). It's also, as I understand it, trying to have a structure that works without established mentors, since that's one of the large bottlenecks constraining the training pipeline.

Also, did most of the past camps ever have lots of established mentors? I thought it was just the one in 2022 that had a lot? So whatever factors made all the past AISCs work and have participants sing their praises could just still be there.

Why does the founder, Remmelt Ellen, keep posting things described as "content-free stream of consciousness", "the entire scientific community would probably consider this writing to be crankery", or so obviously flawed it gets -46 karma? This seems like a concern especially given the philosophical/conceptual focus of AISC projects, and the historical difficulty in choosing useful AI alignment directions without empirical grounding.

He was posting cranky technical stuff during my camp iteration too. The program was still fantastic. So whatever they are doing to make this work seems able to function despite his crankery. With a five year track record, I'm not too worried about this factor.

All but 2 of the papers listed on Manifund as coming from AISC projects are from 2021 or earlier.

In the first link at least, there are only eight papers listed in total, though. With the first camp being in 2018, it doesn't really seem like the rate dropped much? So to the extent you believe your colleagues that the camp used to be good, I don't think the publication record is much evidence that it isn't anymore. Paper production apparently just does not track the effectiveness of the program much. Which doesn't surprise me; I don't think the rate of paper production tracks the quality of AIS research orgs much either.

The impact assessment was commissioned by AISC, not independent. They also use the number of AI alignment researchers created as an important metric. But impact is heavy-tailed, so the better metric is value of total research produced. Because there seems to be little direct research, to estimate the impact we should count the research that AISC alums from the last two years go on to produce. Unfortunately I don't have time to do this.

Agreed on the metric being not great, and that an independently commissioned report would be better evidence (though who would have commissioned it?). But ultimately, most of what this report is apparently doing is just asking a bunch of AISC alumni what they thought of the camp and what they were up to these days. And then noticing that these alumni often really liked it and have apparently gone on to form a significant fraction of the ecosystem. And I don't think they even caught everyone. IIRC our AISC follow-up LTFF grant wasn't part of the spreadsheets until I wrote Remmelt that it wasn't there.

I am not surprised by this. Like you, my experience is that most of my current colleagues who were part of AISC tell me it was really good. The survey is just asking around and noticing the same. 
 

I was the private donor who gave €5K. My reaction to hearing that AISC was not getting funding was that this seemed insane. The iteration I was in two years ago was fantastic for me, and the research project I got started on there is basically still continuing at Apollo now. Without AISC, I think there's a good chance I would never have become an AI notkilleveryoneism researcher. 

It feels like a very large number of people I meet in AIS today got their start in one AISC iteration or another, and many of them seem to sing its praises. I think 4/6 people currently on our interp team were part of one of the camps. I am not aware of any other current training program that seems to me like it would realistically replace AISC's role, though I admittedly haven't looked into all of them. I haven't paid much attention to the iteration that happened in 2023, but I happen to know a bunch of people who are in the current iteration and think trying to run a training program for them is an obviously good idea. 

I think MATS and co. are still way too tiny to serve all the ecosystem's needs, and under those circumstances, shutting down a training program with an excellent five year track record seems like an even more terrible idea than usual. On top of that, the research lead structure they've been trying out for this camp and the last one seems to me like it might have some chance of being actually scalable. I haven't spent much time looking at the projects for the current iteration yet, but from very brief surface exposure they didn't seem any worse on average than the ones in my iteration. Which impressed and surprised me, because these projects were not proposed by established mentors like the ones in my iteration were. A far larger AISC wouldn't be able to replace what a program like MATS does, but it might be able to do what AISC6 did for me, and do it for far more people than anything structured like MATS realistically ever could.

On a more meta point, I have honestly not been all that impressed with the average competency of the AIS funding ecosystem. I don't think it not funding a project is particularly strong evidence that the project is a bad idea. 

Comment by Lucius Bushnaq (Lblack) on Toward A Mathematical Framework for Computation in Superposition · 2024-01-18T23:30:39.112Z · LW · GW

Well. Damn. 

I've been a vocal critic of the whole concept of superposition, and this post has changed my mind a lot. An actual mathematical definition that doesn't depend on any fuzzy notions of what is 'human interpretable', and a start on actual algorithms for performing general, useful computation on overcomplete bases of variables.

Everything I've read on superposition before this was pretty much only outlining how you could store and access lots of variables from a linear space with sparse encoding, which isn't exactly a revelation. Every direction is a float, so of course a d-dimensional space can store about float precision to the d-th power different states, which you can describe as superposed sparse features if you like. But I didn't need to use that lens to talk about the compression. I could just talk about good old non-overcomplete linear algebra bases instead. The d basis vectors in that linear algebra description being the compositional summary variables the sparse inputs got compressed into. If basically all we can do with the 'superposed variables' is make lookup tables of them, there didn't seem to me to be much need for the concept at all to reverse engineer neural networks. Just stick with the summary variables, summarising is what intelligence is all about.
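To make concrete which part I'm calling not-a-revelation, here's a minimal numpy sketch (my own illustration, not taken from the post) of the storage-and-readout story: give each of many sparse boolean features a random direction in a d-dimensional space, superpose the few active ones, and read them back out with a dot product and a threshold.

```python
import numpy as np

rng = np.random.default_rng(0)

d, m, k = 512, 2000, 4  # space dimension, number of features, features active at once

# Give each of the m sparse boolean features a random unit direction in R^d.
directions = rng.normal(size=(m, d))
directions /= np.linalg.norm(directions, axis=1, keepdims=True)

errors = 0
for _ in range(100):
    # Pick k features to be 'on' and superpose their directions.
    active = rng.choice(m, size=k, replace=False)
    residual = directions[active].sum(axis=0)

    # Read every feature back out with a dot product and a threshold.
    readout = directions @ residual > 0.5
    truth = np.zeros(m, dtype=bool)
    truth[active] = True
    errors += int((readout != truth).sum())

print(f"readout errors across 100 trials: {errors}")  # rarely anything but 0 at these settings
```

Storing the features this way works fine, but every readout is just a fixed linear map plus a threshold, which is exactly the lookup-table flavour of 'computation' I mean above.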

If we can do actual, general computation with the sparse variables? Computations with internal structure that we can't trivially describe just as well using d floats forming the non-overcomplete linear basis of a vector space? Well, that would change things. 


As you note, there's certainly work left to do here on the error propagation and checking for such algorithms in real networks. But even with this being an early proof of concept, I do now tentatively expect that better-performing implementations of this probably exist. And if such algorithms are possible, they sure do sound potentially extremely useful for an LLM's job. 

On my previous superposition-skeptical models, frameworks like the one described in this post are predicted to be basically impossible. Certainly way more cumbersome than this looks. So unless these ideas fall flat when more research is done on the error tolerance, I guess I was wrong. Oops.

Comment by Lucius Bushnaq (Lblack) on Are we inside a black hole? · 2024-01-07T11:02:53.721Z · LW · GW

I think the idea expressed in the post is for our entire observable universe to be a remnant of such spaghettification in higher dimensions, with basically no thickness remaining along the direction leading to the singularity. So whatever higher-dimensional bound structure the local quantum fields may or may not usually be arranged in is (mostly) gone, and the merely 3+1 dimensional structures of atoms and pelvises we are used to are the result.

I wouldn't know off the top of my head if you can make this story mathematically self-consistent or not. 

Comment by Lucius Bushnaq (Lblack) on What’s up with LLMs representing XORs of arbitrary features? · 2024-01-04T09:30:00.939Z · LW · GW

Maybe a⊕b is represented “incidentally” because NN representations are high-dimensional with lots of stuff represented by chance

This would be my first guess, conditioned on the observation being real, except strike “by chance”. The model likely wants to form representations that can serve to solve a very wide class of prediction tasks over the data with very few non-linearities used, ideally none, as in a linear probe. That’s pretty much the hallmark of a good general representation you can use for many tasks.

I thus don't think that comparing to a model with randomized weights is a good falsification. I wouldn’t expect a randomly initialized model to have nice general representations. 

My stated hypothesis here would then predict that the linear probes for XOR features get progressively worse if you apply them to earlier layers, because the model hasn't yet had time to make the representation that general so early in the computation. So accuracy should start to drop as you look at layers before fourteen.
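For what it's worth, here's a minimal sketch of the layer-wise check I'm proposing. The activations-per-layer array and the a⊕b labels are hypothetical placeholders for whatever caching setup you already have, not an existing API:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def xor_probe_accuracy(layer_activations: np.ndarray, xor_labels: np.ndarray) -> float:
    """Fit a linear probe for the a XOR b label on one layer's activations."""
    X_train, X_test, y_train, y_test = train_test_split(
        layer_activations, xor_labels, test_size=0.3, random_state=0
    )
    probe = LogisticRegression(max_iter=2000).fit(X_train, y_train)
    return probe.score(X_test, y_test)

# Hypothetical usage: activations_by_layer[l] has shape (n_examples, d_model),
# and xor_labels[i] = a_i XOR b_i for the two features of interest.
# accuracies = [xor_probe_accuracy(activations_by_layer[l], xor_labels)
#               for l in range(n_layers)]
# If the general-representation story is right, these accuracies should fall off
# noticeably for layers well before layer fourteen rather than staying flat.
```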

I'll also say that if you can figure out a pattern in how particular directions get used as components for many different boolean classification tasks, that seems like the kind of thing that might result in an increased understanding of what these directions encode exactly. What does the layer representation contain, in actual practice, that allows it to do this?

Comment by Lucius Bushnaq (Lblack) on Critical review of Christiano's disagreements with Yudkowsky · 2023-12-29T12:53:05.507Z · LW · GW

Even architectures-in-the-narrow-sense don't show overarching scaling laws at current scales, right? IIRC the separate curves for MLPs, LSTMs and transformers do not currently match up into one larger curve. See e.g. figure 7 here.

So a sudden capability jump due to a new architecture outperforming transformers the way transformers outperform MLPs at equal compute cost seems to be very much in the cards?

I intuitively agree that current scaling laws seem like they might be related in some way to a deep bound on how much you can do with a given amount of data and compute, since different architectures do show qualitatively similar behavior even if the y-axes don't match up. But I see nothing to suggest that any current architectures are actually operating anywhere close to that bound.
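To spell out what 'match up into one larger curve' would mean in practice: fit a separate power law per architecture and check whether the fitted constants agree. Here's a sketch with made-up illustrative numbers, not real measurements from any paper:

```python
import numpy as np

# Made-up illustrative points, NOT real measurements.
compute = np.array([1e15, 1e16, 1e17, 1e18, 1e19])
loss_by_arch = {
    "transformer": np.array([4.0, 3.4, 2.9, 2.5, 2.2]),
    "lstm": np.array([4.8, 4.3, 3.9, 3.6, 3.4]),
}

# A pure power law L(C) = a * C^(-b) is a straight line in log-log space,
# so fitting it is just linear regression on the logs.
for name, loss in loss_by_arch.items():
    slope, intercept = np.polyfit(np.log10(compute), np.log10(loss), deg=1)
    a, b = 10 ** intercept, -slope
    print(f"{name}: L(C) ≈ {a:.1f} * C^(-{b:.3f})")

# If the fitted prefactors and exponents differ a lot between architectures,
# their curves don't collapse onto one overarching scaling law at these scales.
```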

Comment by Lucius Bushnaq (Lblack) on Some biases and selection effects in AI risk discourse · 2023-12-13T08:23:59.116Z · LW · GW

If it only requires a simple hack to existing public SOTA, many others will have already thought of said hack and you won't have any additional edge.

I don't recall assuming the edge to be unique? That seems like an unneeded condition for Tamsin's argument. It's enough to believe the field consensus isn't completely efficient by default, with all relevant actors aware of all currently deducible edges at all times.



Progress in DL is completely smooth.

Right, if you think it's completely smooth and thus basically not meaningfully influenced by the actions of individual researchers whatsoever, I see why you would not buy Tamsin's argument here. But then the reason you don't buy it would seem to me to be that you think meaningful new ideas in ML capability research basically don't exist, not because you think there is some symmetric argument to Tamsin's for people to stay quiet about new alignment research ideas.

Comment by Lucius Bushnaq (Lblack) on Some biases and selection effects in AI risk discourse · 2023-12-13T03:00:15.351Z · LW · GW

I don't see why this would be ridiculous. To me, e.g. "Superintelligence only requires [hacky change to current public SOTA] to achieve with expected 2025 hardware, and OpenAI may or may not have realised that already" seems like a perfectly coherent way the world could be, and is plenty of reason for anyone who suspects such a thing to keep their mouth shut about gears-level models of [] that might be relevant for judging how hard and mysterious the remaining obstacles to superintelligence actually are.

Comment by Lucius Bushnaq (Lblack) on Some biases and selection effects in AI risk discourse · 2023-12-13T02:42:27.636Z · LW · GW

It's not that hard to build an AI that saves everyone: you just need to solve [some problems] and combine the solutions. Considering how easy it is compared to what you thought, you should decrease your P(doom) / shorten your timelines.

I'm not sure what you're saying here exactly. It seems to me like you're pointing to a symmetric argument favoring low doom, but if someone had an idea for how to do AI alignment right, why wouldn't they just talk about it? Doesn't seem symmetrical to me.

Comment by Lucius Bushnaq (Lblack) on Speaking to Congressional staffers about AI risk · 2023-12-06T23:07:01.508Z · LW · GW

(I disagree. Indeed, until recently governance people had very few policy asks for government.)

Did that change because people finally finished doing enough basic strategy research to know what policies to ask for? 

It didn't seem like that to me. Instead, my impression was that it was largely triggered by ChatGPT and GPT4 making the topic more salient, and AI safety feeling more inside the Overton window. So there were suddenly a bunch of government people asking for concrete policy suggestions.

Comment by Lucius Bushnaq (Lblack) on Why Yudkowsky is wrong about "covalently bonded equivalents of biology" · 2023-12-06T22:21:27.394Z · LW · GW

"Pandemics" aren't a locally valid substitute step in my own larger argument, because an ASI needs its own manufacturing infrastructure before it makes sense for the ASI to kill the humans currently keeping its computers turned on.

When people are highly skeptical of the nanotech angle yet insist on a concrete example, I've sometimes gone with a pandemic, coupled with limited access to medications that temporarily stave off but don't cure it. That combination forces a small workforce of humans, preselected to cause few problems, to maintain the AI's hardware and build it the seed of a new infrastructure base while the rest of humanity dies.

I feel like this has so far maybe been more convincing and perceived as "less sci-fi" than Drexler-style nanotech by the people I've tried it on (small sample size, n<10).

Generally, I suspect not basing the central example on a position on one side of yet another fierce debate in technology forecasting trumps making things sound less like a movie where the humans might win. In my experience with these conversations so far, people grasp that something sounding like a movie does not mean the humans have a realistic chance of winning in real life more often than they get on board with scenarios that involve any hint of Drexler-style nanotech.

Comment by Lucius Bushnaq (Lblack) on How useful is mechanistic interpretability? · 2023-12-02T12:42:27.954Z · LW · GW

For example, if an SAE gives us 16x as many dimensions as the original activations, and we find that half of those are interpretable, to me this seems like clear evidence of superposition (8x as many interpretable directions!). How would you interpret that phenomenon?
 

I don't have the time and energy to do this properly right now, but here are a few thought experiments that might help communicate part of what I mean:

Say you have a transformer model that draws animals.  As in, you type “draw me a giraffe”,  and then it draws you a giraffe. Unknown to you, the way the model algorithm works is that the first thirty layers of the model perform language processing to figure out what you want drawn, and output a summary of fifty scalar variables that the algorithms in the next thirty layers of the model use to draw the animals. And these fifty variables are things like “furriness”, “size”, “length of tail” and so on.

The latter half of the model then does not, in any real sense, think of the concept “giraffe” while it draws the giraffe. It is just executing purely geometric algorithms that use these fifty variables to figure out what shapes to draw. 

If you then point a sparse autoencoder at the residual stream in the latter half of the model, over a data set of people asking the network to draw lots of different animals (far more animal types than fifty, or than the network width), I’d guess the “sparse features” the SAE finds might be the individual animal types: “giraffe”, “elephant”, and so on.

Or, if you make the encoder dictionary larger, more specific sparse features like “fat giraffe” would start showing up. 

And then, some people may conclude that the model was doing a galaxy-brained thing where it was thinking about all of these animals using very little space, compressing a much larger network in which all these animals are variables. This is kind of true in a certain sense if you squint, but pretty misleading. The model at this point in the computation no longer “knows” what a giraffe is. It just “knows” what the settings of furriness, tail length, etc. are right now. If you manually go into the network and set the fifty variables to something that should correspond to a unicorn, the network will draw you a unicorn, even if there were no unicorns in the training data and the first thirty layers in the network don’t know how to set the fifty variables to draw one. So in a sense, this algorithm is more general than a cleverly compressed lookup table of animals would be. And if you want to learn how the geometric algorithms that do the drawing work, what they do with the fifty scalar summary statistics is what you will need to look at.

Just because we can find a transformation that turns an NNs activations into numbers that correlate with what a human observer would regard as separate features of the data, does not mean the model itself is treating these as elementary variables in its own computations in any meaningful sense. 

The only thing the SAE is showing you is that the information present in the model can be written as a sum of some sparsely activating generators of the data. This does not mean that the model is processing the problem in terms of these variables. Indeed, SAE dictionaries are almost custom-selected not to give you variables that a well-generalizing algorithm would use to think about problems with big, complicated state spaces. Good summary variables are highly compositional, not sparse. They can all be active at the same time in any setting, letting you represent the relevant information from a large state space with just a few variables, because they factorise. Temperature and volume are often good summary variables for thinking about thermodynamic systems because the former tells you nothing about the latter and they can co-occur in any combination of values. Variables with strong sparsity conditions on them instead have high mutual information, making them partially redundant, and ripe for compressing away into summary statistics.
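Here's a toy numerical illustration of that redundancy point (my own sketch, with a made-up furriness/size example): two independent compositional variables carry zero information about each other, while the sparse indicator variables derived from them are collectively redundant, which shows up as nonzero total correlation.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

def entropy(labels: np.ndarray) -> float:
    """Empirical entropy in nats of a discrete variable (or of rows, if 2D)."""
    _, counts = np.unique(labels, return_counts=True, axis=0)
    p = counts / counts.sum()
    return float(-(p * np.log(p)).sum())

# Two independent compositional summary variables.
furriness = rng.integers(0, 4, size=n)
size = rng.integers(0, 4, size=n)

# Sixteen sparse 'animal type' indicators derived from them, one per combination.
animals = np.stack([(furriness == f) & (size == s)
                    for f in range(4) for s in range(4)], axis=1)

joint = entropy(np.stack([furriness, size], axis=1))
compositional_redundancy = entropy(furriness) + entropy(size) - joint
sparse_redundancy = sum(entropy(animals[:, i]) for i in range(16)) - entropy(animals)

print(f"redundancy among compositional variables: {compositional_redundancy:.3f} nats")  # ~0
print(f"redundancy among sparse indicators:       {sparse_redundancy:.3f} nats")         # ~1
```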

If an NN (artificial or otherwise) is, say, processing images coming in from the world, it is dealing with an exponentially large state space. Every pixel can take one of several values. Luckily, the probability distribution of pixels is extremely peaked. The supermajority of pixel settings are TV static that never occurs, and thermal noise that doesn't matter for the NN's task. One way to talk about this highly peaked pixel distribution may be to describe it as a sum of a very large number of sparse generators. The model then reasons about this distribution by compressing the many sparse generators into a small set of pretty non-sparse, highly compositional variables. For example, many images contain one or a few brown branchy structures of a certain kind, which come in myriad variations. The model summarises the presence or absence of any of these many sparse generators with the state of the variable “tree”, which tracks how much the input is “like a tree”.

If the model has a variable “tree” and a variable “size”, the myriad brown, branchy structures in the data might, for example, show up as sparsely encoded vectors in a two-dimensional (“tree”,“size”) manifold. If you point an SAE at that manifold, you may get out sparse activations like “bush” (mid tree, low size), “house” (low tree, high size), “fir” (high tree, high size). If you increase the dictionary size, you might start getting more fine-grained sparse data generators. E.g. “Checkerberry bush” and “Honeyberry bush” might show up as separate, because they have different sizes.
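If someone wants to poke at this, here's a minimal sketch of the toy experiment (my own construction, with made-up cluster prototypes): sample activations from a 2D (tree, size) plane that the data only visits in a few sparse clusters, train a small overcomplete SAE on them, and look at what the dictionary elements decode to. Whether training actually lands on the cluster prototypes rather than on the tree/size axes will depend on the details, but that's the prediction above.

```python
import torch

torch.manual_seed(0)

# Made-up sparse data generators living on a 2D compositional (tree, size) plane.
prototypes = torch.tensor([[0.5, 0.1],   # 'bush':  mid tree, low size
                           [0.1, 0.9],   # 'house': low tree, high size
                           [0.9, 0.9]])  # 'fir':   high tree, high size
labels = torch.randint(0, 3, (10_000,))
data = prototypes[labels] + 0.02 * torch.randn(10_000, 2)

# A small overcomplete sparse autoencoder: 2 -> 8 -> 2 with an L1 penalty.
d_in, d_dict = 2, 8
encoder = torch.nn.Linear(d_in, d_dict)
decoder = torch.nn.Linear(d_dict, d_in, bias=False)
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-2)

for step in range(2_000):
    acts = torch.relu(encoder(data))
    loss = ((decoder(acts) - data) ** 2).mean() + 3e-3 * acts.abs().mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

with torch.no_grad():
    # Which dictionary elements fire for each cluster prototype?
    print("activations per cluster (rows) and dictionary element (columns):")
    print(torch.relu(encoder(prototypes)).round(decimals=2))
    # Where do the dictionary elements point in (tree, size) coordinates?
    print("decoder directions (columns of the decoder weight):")
    print(decoder.weight.round(decimals=2))
```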

Humans, I expect, work similarly. So the human-like abstractions the model may or may not be thinking in and that we are searching for will not come in the form of sparse generators of layer activations, because human abstractions are the summary variables you would be using to compress these sparse generators. They are the type-of-thing you use to encode a sparse world, not the type-of-thing being encoded. That our SAE is showing us some activations that correlate with information in the input humans regard as meaningful just tells us that the data contains sparse generators humans have conceptual descriptions for, not that the algorithms of the network themselves are encoding the sparse generators using these same human conceptual descriptions. We know it hasn't thrown away the information needed to compute that there was a bush in the image, but we don't know it is thinking in bush. It probably isn't, else bush would not be sparse with respect to the other summary statistics in the layer, and our SAE wouldn't have found it.

 

Comment by Lucius Bushnaq (Lblack) on How useful is mechanistic interpretability? · 2023-12-02T10:44:03.947Z · LW · GW

The causal graph is large in general, but IMO that's just an unavoidable property of models and superposition.

This is a discussion that would need to be its own post, but I think superposition is basically not real and a confused concept. 

Leaving that aside, the vanilla reading of this claim also seems kind of obviously false for many models; otherwise optimising them for inference through e.g. low-rank approximation of weight matrices would never work. You are throwing away at least one floating point number's worth of description bits there.
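Concretely, the sort of thing I mean, as a generic SVD-truncation sketch on a stand-in matrix rather than any particular paper's method:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a trained weight matrix; real ones often have decaying spectra too,
# which is what makes this kind of compression work at all.
W = rng.normal(size=(768, 768)) @ np.diag(1.0 / np.arange(1, 769)) @ rng.normal(size=(768, 768))

U, S, Vt = np.linalg.svd(W, full_matrices=False)
k = 64  # keep only the top-k singular directions
W_low_rank = (U[:, :k] * S[:k]) @ Vt[:k]

rel_error = np.linalg.norm(W - W_low_rank) / np.linalg.norm(W)
params_kept = k * (W.shape[0] + W.shape[1]) / W.size
print(f"kept {params_kept:.1%} of the parameters, relative Frobenius error {rel_error:.3f}")
```

If the model's behaviour survives a truncation like that, the thrown-away description bits evidently weren't load-bearing, which is the point.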

I'm confused by why you don't consider "only a few neurons being non-zero" to be a "low dimensional summary of the relevant information in the layer"

A low-dimensional summary of a variable vector x of size d is a fixed set of k ≪ d random variables that suffice to summarise the state of x. To summarise the state of x using the activations in an SAE dictionary, I have to describe the state of more than d variables. That these variables are sparse may sometimes let me define an encoding scheme for describing them that takes less than d variables, but that just corresponds to undoing the autoencoding and then performing some other compression.

Comment by Lucius Bushnaq (Lblack) on How useful is mechanistic interpretability? · 2023-12-02T08:40:43.438Z · LW · GW

SAEs are almost the opposite of the principle John is advocating for here. They deliver sparsity in the sense that only a few neurons of the dictionary you get are out of the zero state at the same time; they do not deliver sparsity in the sense of a low-dimensional summary of the relevant information in the layer, or whatever other causal cut you deploy them on. Instead, the dimensionality of the representation gets blown up to be even larger.

Comment by Lucius Bushnaq (Lblack) on OpenAI: Facts from a Weekend · 2023-11-20T20:06:00.099Z · LW · GW

If actually enforcing the charter leads to them being immediately disempowered, it's not worth anything in the first place. We were already in the "worst case scenario". Better to be honest about it. Then at least, the rest of the organisation doesn't get to keep pointing to the charter and the board as approving their actions when they don't.

The charter it is the board's duty to enforce doesn't say anything about how the rest of the document doesn't count if investors and employees make dire enough threats, I'm pretty sure.