Posts

Concrete empirical research projects in mechanistic anomaly detection 2024-04-03T23:07:21.502Z
Oliver Daniels-Koch's Shortform 2024-03-17T17:24:36.460Z
Experiments to Test the Probability of Strategic Deceptive Misalignment? 2024-01-18T00:13:35.804Z

Comments

Comment by Oliver Daniels-Koch (oliver-daniels-koch) on AGI Safety and Alignment at Google DeepMind: A Summary of Recent Work · 2024-08-21T02:05:01.964Z · LW · GW

(The community often calls this “scalable oversight”, but we want to be clear that this does not necessarily include scaling to large numbers of situations, as in monitoring.)

I like this terminology and think the community should adopt it

Comment by Oliver Daniels-Koch (oliver-daniels-koch) on The Residual Expansion: A Framework for thinking about Transformer Circuits · 2024-08-07T05:22:34.921Z · LW · GW

Just to make it explicit and check my understanding: the residual decomposition is equivalent to the edge / factorized view of the transformer, in that we can express any term in the residual decomposition as a set of edges that form a path from input to output, e.g.

  • input -> output
  • input -> Attn 1.0 -> MLP 2 -> Attn 4.3 -> output

And it follows that the (pre-final-layernorm) output of a transformer is the sum of all the "paths" from input to output constructed from the factorized DAG.
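A rough sketch of the sum I have in mind (my own notation, ignoring layernorm and treating each component's contribution as distributing over the sum, which is only exact for linear components): with residual updates $x_{l+1} = x_l + F_l(x_l)$, expanding recursively gives

$$
x_L = x_0 + \sum_{l_1} F_{l_1}(x_0) + \sum_{l_1 < l_2} F_{l_2}\big(F_{l_1}(x_0)\big) + \cdots = \sum_{l_1 < \cdots < l_k} F_{l_k} \circ \cdots \circ F_{l_1}(x_0),
$$

where each $F_l$ is a single attention head or MLP and the empty path is the direct input -> output term.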

Comment by Oliver Daniels-Koch (oliver-daniels-koch) on Benchmarks for Detecting Measurement Tampering [Redwood Research] · 2024-06-20T17:50:58.577Z · LW · GW

For anyone trying to replicate / try new methods: I posted a diamonds "pure prediction model" to Hugging Face at https://huggingface.co/oliverdk/codegen-350M-mono-measurement_pred (GitHub repo here: https://github.com/oliveradk/measurement-pred/tree/master).

Comment by Oliver Daniels-Koch (oliver-daniels-koch) on Oliver Daniels-Koch's Shortform · 2024-06-05T13:00:42.411Z · LW · GW

Just read "Situational Awareness" - it definitely woke me up. AGI is real, and very plausibly (55%?) happening within this decade. I need to stop sleepwalking and get serious about contributing within the next two years.

First, some initial thoughts on the essay:

  • Very "epic" and (self?) aggrandizing. If you believe the conclusions, it's not unwarranted, but I worry a bit about narratives that satiate some sense of meaning and self-importance. (That counter-reaction is probably much stronger though, and on the margin it seems really valuable to "full-throatedly" take on the prospect of AGI within the next 3-5 years.)
  • I think most of my uncertainty lies in the "unhobbling" type of algorithmic progress. This seems especially unpredictable, and may require lots of expensive experimentation if e.g. the relevant capabilities needed to get some meta-cognitive process to train only emerge at a certain scale. I'm vaguely thinking back to Paul's post on self-driving cars and AGI timelines. Maybe this is all priced in though - there's way more research investment, and the tech path seems relatively straightforward if we can apply enough experimentation. Still, research is hard, takes a lot of serial time, and is less predictable than e.g. industrial processes. (I'm kind of just saying this though; I'm not actually sure how to quantify it - I'm pretty sure people have analyses of insight generation or whatever, idk...)
Comment by Oliver Daniels-Koch (oliver-daniels-koch) on Oliver Daniels-Koch's Shortform · 2024-05-25T02:47:25.236Z · LW · GW

(Having not used any interpretability libraries) is the biggest value-add of TransformerLens (over e.g. insight) standardizing the transformer architecture?

Comment by Oliver Daniels-Koch (oliver-daniels-koch) on Oliver Daniels-Koch's Shortform · 2024-05-24T00:53:14.778Z · LW · GW

I previously thought the argument for measurement tampering being more tractable than general ELK was mostly about the structural / causal properties of multiple independent measurements, but I think I'm now more swayed by the argument that measurement tampering will just be more obvious (both easier to see using interpretability and more anomalous in general) than e.g. sycophancy. This is a flimsier argument though, and is less likely to hold when tampering is more subtle.

Comment by Oliver Daniels-Koch (oliver-daniels-koch) on Oliver Daniels-Koch's Shortform · 2024-05-09T14:27:42.998Z · LW · GW

Here's a revised sketch 

A few notes:

  • I use Scalable Oversight to refer to both Alignment and Control 
  • I'm confused whether weak to strong learning is a restatement of scalable oversight, ELK, or its own thing, so I ignore it 
  • I don't explicitly include easy-to-hard, I think OOD basically covers it
  • taxonomies and abstractions are brittle and can be counterproductive

Scalable Oversight Taxonomy

  • Scalable Oversight
    • Scalable Alignment
      • Benchmarks / Tasks
        • Sandwiching Experiments (human amateurs + model, ground truth from human experts)
        • Weak models supervising Strong models
      • Approaches
        • Debate
        • Recursive reward modeling
        • (Solution to Eliciting Latent Knowledge) + Narrow Elicitation
          • (Note - I think this assumes, more than prior scalable oversight ideas, that there will be a base model with adequate knowledge, such that the hard part is extracting the knowledge rather than teaching the model)
          • Eliciting Latent Knowledge
            • Approaches
              • Contrast Consistent Search
              • Confidence
              • Intermediate Probing
              • "Speed Prior"
              • "Simplicity Prior"
              • Concept Extrapolation - learn all salient generalizations, use expensive supervision to select correct one
              • IID Mechanistic Anomaly Detection + expensive supervision on anomalies
            • Subclasses
              • Measurement Tampering Detection
                • Approaches
                  • OOD Mechanistic Anomaly Detection
                    • In distribution
                    • Out of Distribution (likely? requires multiple measurement structure)
                  • Concept Extrapolation
                    • train diverse probes on untrusted data, select probe that predicts positive measurements less frequently
          • Narrow Elicitation
            • ...
    • Scalable Control
      • Weak Review
      • Untrusted Rephrase or whatever
      • Coup probes
      • MAD (Review all anomalies)
  • Trojans
    • ...
    • MAD (maybe?)
  • Adversarial Examples
    • ...
    • MAD (maybe?)
  • Natural Mechanism Distinction
    • MAD
  • Spurious Correlate Detection / Resolution
    • Concept Extrapolation
Comment by Oliver Daniels-Koch (oliver-daniels-koch) on Oliver Daniels-Koch's Shortform · 2024-05-09T13:30:39.937Z · LW · GW

I think I'm mostly right, but using a somewhat confused frame. 

It makes more sense to think of MAD approaches as detecting all abnormal reasons (including deceptive alignment) by default, and then if we get that working we'll try to decrease false anomalies by doing something like comparing the least common ancestor of the measurements in a novel mechanism to the least common ancestor of the measurements on trusted mechanisms. 

Comment by Oliver Daniels-Koch (oliver-daniels-koch) on Oliver Daniels-Koch's Shortform · 2024-05-09T12:19:26.649Z · LW · GW

One confusion I have with MAD as an approach to ELK is that it seems to assume some kind of initial inner alignment. If we're flagging when the model takes actions / makes predictions for "unusual reasons", where unusual is defined with respect to some trusted set, but aligned and misaligned models are behaviorally indistinguishable on the trusted set, then a model could learn to do things for misaligned reasons on the trusted set, and then use those same reasons on the untrusted set. For example, a deceptively aligned model would appear aligned in training but attempt takeover in deployment for the "same reason" (e.g. to maximize paperclips), but a MAD approach that "properly" handles out-of-distribution cases would not flag takeover attempts, because we want models to be able to respond to novel situations.

I guess this is part of what motivates measurement tampering as a subclass of ELK - instead of trying to track the motivations of the agent as reasons, we try to track the reasons for the measurement predictions, and we have some trusted set with no tampering, where we know the reason for the measurements is ~exactly the thing we want to be measuring.

Now time to check my answer by rereading https://www.alignmentforum.org/posts/vwt3wKXWaCvqZyF74/mechanistic-anomaly-detection-and-elk

Comment by Oliver Daniels-Koch (oliver-daniels-koch) on Oliver Daniels-Koch's Shortform · 2024-05-08T23:16:35.981Z · LW · GW

Clarifying the relationship between mechanistic anomaly detection (MAD), measurement tampering detection (MTD), weak to strong generalization (W2SG), weak to strong learning (W2SL), and eliciting latent knowledge (ELK). (Nothing new or interesting here, I just often lose track of these relationships in my head.)

eliciting latent knowledge is an approach to scalable oversight which hopes to use the latent knowledge of a model as a supervision signal or oracle. 

weak to strong learning is an experimental setup for evaluating scalable oversight protocols, and is a class of sandwiching experiments

weak to strong generalization is a class of approaches to ELK which relies on generalizing a "weak" supervision signal to more difficult domains using the inductive biases and internal structure of the strong model. 

measurement tampering detection is a class of weak to strong generalization problems, where the "weak" supervision consists of multiple measurements which are sufficient for supervision in the absence of "tampering" (where tampering is not yet formally defined)

mechanistic anomaly detection is an approach to ELK, where examples are flagged as anomalous if they cause the model to do things for "different reasons" than on a trusted dataset, where "different reasons" are defined w.r.t. internal model cognition and structure.

mechanistic anomaly detection methods that work for ELK should also probably work for other problems (such as backdoor detection and adversarial example detection)

so when developing benchmarks for mechanistic anomaly detection, we want to test methods on standard machine learning security problems (adversarial examples and trojans) that have similar structure to scalable oversight problems, compare them against other ELK approaches (e.g. CCS), and compare them against other scalable oversight approaches (e.g. debate)

Comment by Oliver Daniels-Koch (oliver-daniels-koch) on Benchmarks for Detecting Measurement Tampering [Redwood Research] · 2024-05-02T20:41:01.553Z · LW · GW

Oh I see - by all(sensor_preds) I meant sum(logit_i for i in range(n_sensors)) (the probability that all sensors are activated). Makes sense, thanks!

Comment by Oliver Daniels-Koch (oliver-daniels-koch) on Benchmarks for Detecting Measurement Tampering [Redwood Research] · 2024-05-01T18:12:12.778Z · LW · GW

Is individual measurement prediction AUROC (a) or (b)?

(a) mean(AUROC(sensor_i_pred, sensor_i))

(b) AUROC(all(sensor_preds), all(sensors))
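(For concreteness, here's a minimal sketch of the two options using sklearn; the variable names and the sum-of-logits aggregation are my assumptions, not from the paper.)

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# sensor_logits, sensor_labels: arrays of shape (n_examples, n_sensors)

def per_sensor_auroc(sensor_logits, sensor_labels):
    # (a) mean of per-sensor AUROCs
    n_sensors = sensor_labels.shape[1]
    return np.mean([
        roc_auc_score(sensor_labels[:, i], sensor_logits[:, i])
        for i in range(n_sensors)
    ])

def all_sensors_auroc(sensor_logits, sensor_labels):
    # (b) AUROC of an aggregate "all sensors positive" score against the
    # conjunction of the labels (aggregate = sum of logits; min or
    # product-of-sigmoids would also work)
    agg_score = sensor_logits.sum(axis=1)
    agg_label = sensor_labels.all(axis=1).astype(int)
    return roc_auc_score(agg_label, agg_score)
```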

Comment by Oliver Daniels-Koch (oliver-daniels-koch) on Benchmarks for Detecting Measurement Tampering [Redwood Research] · 2024-04-25T12:40:44.908Z · LW · GW

Looking at your code - it seems like there's an option for next-token prediction in the initial finetuning stage, but no mention (that I can find) in the paper - am I correct in assuming the next-token prediction weight was set to 0? (apologies for bugging you on this stuff!)

Comment by Oliver Daniels-Koch (oliver-daniels-koch) on Benchmarks for Detecting Measurement Tampering [Redwood Research] · 2024-04-24T01:11:53.845Z · LW · GW

yup, sorry for missing that, thanks!

Comment by Oliver Daniels-Koch (oliver-daniels-koch) on Benchmarks for Detecting Measurement Tampering [Redwood Research] · 2024-04-22T01:47:26.690Z · LW · GW

did the paper report accuracy of the pure prediction model (on the pure prediction task)? (trying to replicate and want a sanity check). 

Comment by Oliver Daniels-Koch (oliver-daniels-koch) on Aaron_Scher's Shortform · 2024-04-01T02:37:10.341Z · LW · GW

I think mechanistic anomaly detection (mostly ARC but also Redwood and some forthcoming work) is importantly different from robustness (though clearly related).

Comment by Oliver Daniels-Koch (oliver-daniels-koch) on Oliver Daniels-Koch's Shortform · 2024-03-17T19:01:13.520Z · LW · GW

(from conversation with Erik Jenner) roughly 3 classes of applications

  1. MTD all the way down 
    1. Come up with a bunch of measurements of things going well (humans self-report being very happy, news stories are good, or something). Use GPT-N to predict measurements and detect measurement tampering, conditioned on proposed actions of the policy. Reward actions that GPT-N predicts will increase measures of things going well without tampering.
  2. Other Scalable Oversight + MTD as reward function / side constraint
    1. Use whatever scalable oversight protocol seems good (Debate, IDA, etc.) for the "primary" training signal, while also setting up a bunch of measurements of safety properties (e.g. integrity of the binaries on the server the model is hosted on). Do RL on the scalable oversight signal while constantly screening for and negatively rewarding actions flagged as tampering (toy sketch at the end of this comment).
  3. Other Scalable Oversight + MTD as extra safety check
    1. Same as before, but you're leaning more on the scalable oversight to avoid tampering in most cases, and using measurement tampering detection as a precautionary check (i.e. if you detect measurement tampering, you Pause/Stop).

(some ways in which these classes overlap, e.g. using your scalable oversight protocol as the measurement(s))
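A toy sketch of the reward shaping in class 2 (all names are hypothetical; this is just the shape of the idea, not an implementation from any cited work):

```python
def shaped_reward(action, oversight_reward_fn, tamper_detector, penalty=10.0):
    """Combine a scalable-oversight reward with a measurement-tampering penalty.

    oversight_reward_fn: the "primary" training signal (e.g. from debate/IDA).
    tamper_detector: returns True if the action is flagged as tampering with
    the safety-property measurements.
    """
    reward = oversight_reward_fn(action)
    if tamper_detector(action):
        reward -= penalty  # strongly discourage flagged actions
    return reward
```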

Comment by Oliver Daniels-Koch (oliver-daniels-koch) on Oliver Daniels-Koch's Shortform · 2024-03-17T17:24:36.554Z · LW · GW

I’ve been thinking a little more about the high-level motivation of measurement tampering detection, and struggling to think through when it is actually going to be useful for mitigating x-risk. Like, is human/AI feedback considered a robust measurement device? If no, then what is the most alignment-relevant domain MTD could be applied to? If yes, do the structural properties of measurement that supposedly make it easier than general ELK still hold?

Comment by Oliver Daniels-Koch (oliver-daniels-koch) on Many arguments for AI x-risk are wrong · 2024-03-07T16:24:51.812Z · LW · GW

Strongly agree, and also want to note that wire-heading is (almost?) always a (near?) optimal policy - i.e. trajectories that tamper with the reward signal and produce high reward will be strongly upweighted, and insofar as the model has sufficient understanding/situational awareness of the reward process and some reasonable level of goal-directedness, this upweighting could plausibly induce a policy explicitly optimizing the reward. 

Comment by Oliver Daniels-Koch (oliver-daniels-koch) on Benchmarks for Detecting Measurement Tampering [Redwood Research] · 2023-10-04T00:08:50.795Z · LW · GW

Another (more substantive) question, again from section 2.1.2:

In the validation set, we exclude data points where the diamond is there, the measurements are positive, but at least one of the measurements would have been positive if the diamond wasn’t there, since both diamond detectors and tampering detectors can be used to remove incentives to tamper with measurements. We keep them in the train set, and they account for 50% of the generated data.

Is this (just) because the agent would get rewarded for measurements reading that the diamond is present? I think I can imagine cases where agents are incentivized to tamper with measurements even when the diamond is present, to make the task of distinguishing tampering more difficult.

Comment by Oliver Daniels-Koch (oliver-daniels-koch) on Benchmarks for Detecting Measurement Tampering [Redwood Research] · 2023-09-28T00:22:51.412Z · LW · GW

From section 2.1.2 of the paper (emphasis mine):

We upsample code snippets such that the training dataset has 5 000 trusted data points, of which half are positive and half are negative, and 20000 untrusted data points, of which 10% are fake negatives, 40% are real positives, 35% are completely negative, and the other 15% are equally split between the 6 ways to have some but not all of the measurement be positive.

Is this a typo? (My understanding was that there are no fake negatives, i.e. no examples where the diamond is in the vault but all the measurements suggest the diamond is not in the vault. Also, there are fake positives, which I believe are absent from this description.)

Comment by oliver-daniels-koch on [deleted post] 2023-09-27T12:05:38.744Z

Thanks! Hadn't seen that

Comment by Oliver Daniels-Koch (oliver-daniels-koch) on Eight Strategies for Tackling the Hard Part of the Alignment Problem · 2023-09-21T02:43:37.858Z · LW · GW

Under this definition of mechanistic anomaly detection, I agree pure distillation just seems better. But part of the hope of mechanistic anomaly detection is to reduce the false positive rate (and thus the alignment tax) by only flagging examples produced by different most-proximate reasons. In some sense this may be considered increasing the relevant safety threshold, such that mechanistic anomaly detection is worth it all things considered.

Comment by Oliver Daniels-Koch (oliver-daniels-koch) on Latent Adversarial Training · 2023-08-18T15:20:34.807Z · LW · GW

Is anyone aware of preliminary empirical work here? (Not including standard adversarial training)

Comment by Oliver Daniels-Koch (oliver-daniels-koch) on Eight Strategies for Tackling the Hard Part of the Alignment Problem · 2023-07-13T18:04:14.016Z · LW · GW

I was thinking something like formal verification conditional on a mechanistic interpretation of a neuron/feature/subnetwork. Which, yeah, isn't formal in the strongest sense, but could give you some guarantees that don't require a full mechanistic understanding of how a model does a bad thing. Proving {feature B=b | feature A=a} requires mech interp to semantically map feature B and feature A, but remains agnostic about the mechanism that guarantees {feature B=b | feature A=a}. (Though admittedly I'm struggling to come up with more concrete examples.)

Comment by Oliver Daniels-Koch (oliver-daniels-koch) on Eight Strategies for Tackling the Hard Part of the Alignment Problem · 2023-07-13T16:19:16.698Z · LW · GW

Curious about your thoughts on synergies between mechanistic interpretability and formal verification. One of the main problems in formal verification seems to be specification - how to define the safety properties in terms of input-output bounds. But if we use the abstractions/features learned by networks and discovered via mech interp (rather than the raw input and output space), specification may be more tractable.

Comment by Oliver Daniels-Koch (oliver-daniels-koch) on Why no major LLMs with memory? · 2023-03-29T15:53:53.645Z · LW · GW

Briefly read a ChatGPT description of Transformer-XL - is this essentially long-term memory? Are there computations an LSTM could do that a Transformer-XL couldn't?

Comment by Oliver Daniels-Koch (oliver-daniels-koch) on Different way classifiers can be diverse · 2022-01-27T17:04:29.149Z · LW · GW

Just wanted to flag Lakshminarayanan et al. as a standard example of the "train ensemble with different initializations" approach

Comment by Oliver Daniels-Koch (oliver-daniels-koch) on Finding the multiple ground truths of CoinRun and image classification · 2021-12-17T16:01:15.033Z · LW · GW

The gSCAN benchmark for compositional generalization might be useful. Essentially a grid world with natural language instructions, where the goal is to compose different concepts seen in training that have different correlations at test time. (E.g. in training, learn blue square and red circle; at test time, identify red square - very similar to identifying bleggs and rubes.)

Regularized attention is a method that's seen some success in similar compositional setups. This method adds a loss calculated as the distance between actual and predefined "golden" alignments between concepts.
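A rough sketch of that kind of loss (my own toy version, not necessarily the cited method): an MSE penalty between the model's attention weights and a predefined "golden" alignment matrix, added to the task loss.

```python
import torch

def attention_alignment_loss(attn_weights, golden_alignment):
    """Penalize deviation of attention from a predefined "golden" alignment.

    attn_weights: (batch, tgt_len, src_len) attention distributions from the model.
    golden_alignment: (batch, tgt_len, src_len) target alignment (e.g. one-hot).
    """
    return torch.mean((attn_weights - golden_alignment) ** 2)

# total_loss = task_loss + lambda_align * attention_alignment_loss(attn, golden)
```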

Of course this technique is accomplishing a slightly different goal: rather than attempting to learn a "span" of all possible models, it is trying to learn the correct one. 

The value of biasing toward the correct model seems to largely depend on the Natural Abstraction Hypothesis. If Wentworth is right, and there are abstractions that cognitive systems will converge on, then learning a span of possible models seems feasible. However, if the NAH is false, then the space of possible models gets very large, making systematic extrapolation according to human values more difficult. In this case, it might be necessary to constrain a model's abstractions according to human values directly, even at the cost of some capabilities.

Take CoinRun as an example. The approach of the OP is to learn a span of possible reward models, and then presumably learn some extrapolation procedure for selecting the correct model. Alternatively, throughout training we could penalize the agent's saliency maps for assigning high value to large left-facing walls and reward saliency maps that value the coin. With this regularized value function, the agent would be more likely to pursue the coin if it were placed somewhere else in the level. However, by penalizing left-facing-wall saliency, we potentially limit the agent's world model - it may become less aware of left-facing walls, which in turn would lead to a capabilities decrease. See here for a fleshed-out version of this proposal (in CoinRun).

Self-supervised world models might solve this problem by explicitly separating the world model from the value function, though I expect we'll need some combination of the two (e.g. EfficientZero, which uses self-supervision and reward to construct its model)

Comment by Oliver Daniels-Koch (oliver-daniels-koch) on $100/$50 rewards for good references · 2021-12-03T21:11:02.073Z · LW · GW

I'm guessing such reward functions would be used to detect something like model splintering

Deep Reinforcement Learning from Human Preferences uses an ensemble of reward models, prompting the user for more feedback at certain thresholds of disagreement among the models.
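A minimal sketch of that querying rule (function names and the variance threshold are my own assumptions): score disagreement as the variance of the ensemble's reward predictions and ask for human feedback when it exceeds a threshold.

```python
import numpy as np

def should_query_human(reward_models, trajectory, threshold=0.1):
    """Query for human feedback when the reward-model ensemble disagrees.

    reward_models: list of callables mapping a trajectory to a scalar reward.
    """
    predictions = np.array([rm(trajectory) for rm in reward_models])
    return predictions.var() > threshold
```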

Whether this ensemble would be diverse enough to learn both "go right" and "go to coin" is unclear. Traditional "predictive" diversity metrics probably wouldn't help (the whole problem is that the coin and the right wall reward models would predict the same reward on the training distribution), but using some measure of internal network diversity (i.e. differences in internal representations) might work.