Thanks Ben!
Great catch on the discrepancy between the podcast and this post—the description on the podcast is a variant we experimented with, but what is published here is self-contained and accurately describes what was done in the original paper.
On generalization and unlearning/forgetting: averaging activations across two instances of a network might give the impression of inducing "forgetting," but the goal is not exactly unlearning in the traditional sense. The method aims to align certain parts of the network's latent space related to self and other representations without losing useful distinctions. However, I agree with your point about the challenge of generalization using this set-up: this proof of concept is a more targeted technique that benefits from knowing the specific behaviors you want to discourage (or mitigate), rather than yet being a broad, catch-all solution.
And good points on the terminology around "self," "deception," and "other": these simple models clearly don't have intentionality or a sense of self like humans do, so these terms are being used more as functional stand-ins than precise descriptions. We will aim to keep this clearer as we continue communicating about this work.
Would also love to hear more about your work on empathy and the amygdala as it relates to alignment. Thanks again for the great feedback!
You are correct that the self observation happens when the other agent is out of range, i.e. when there is no agent other than the self in the observation radius to directly reason about. However, the other observation is actually something more like: "when you, in your current position and environment, observe the other agent in your observation radius".
Optimising for SOO incentivises the model to act similarly when it observes another agent to how it acts when it only observes itself. This can be seen as lowering the distinction between self and other representations, though I agree that these RL policies likely do not have very expressive self and other representations.
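As a rough sketch of what optimising for SOO means mechanically (the function name and the choice of mean-squared distance are illustrative assumptions, not the exact loss from the paper):

```python
import numpy as np

def soo_loss(self_acts: np.ndarray, other_acts: np.ndarray) -> float:
    """Illustrative self-other overlap loss: the mean-squared distance
    between the policy's activations on a 'self' observation and the
    paired 'other' observation. Minimising it pushes the network to
    process the two observations similarly."""
    return float(np.mean((self_acts - other_acts) ** 2))

# Toy activations of the same hidden layer under the two observations.
self_acts = np.array([0.2, -1.0, 0.5])
assert soo_loss(self_acts, self_acts.copy()) == 0.0          # identical -> full overlap
assert soo_loss(self_acts, np.array([1.2, 0.0, -0.5])) > 0.0  # distinct representations
```

In practice this term would be weighted against the original task objective, so overlap is only increased insofar as it does not destroy performance.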
Also agree that in this toy environment, SOO would not perform better than simply rewarding the agents not to be deceptive. However, as the training scenario becomes more complex, e.g. when the other agent is a real human and purely behavioural metrics cannot be trusted, SOO is preferred because it does not require incorporating the human's utility (which is hard to specify) into a shared reward function. Instead, it works on the latents of the model, which makes it suitable for scenarios in which we cannot trust the output of the model.
I agree that interacting with LLMs is currently more like having an "extension of the mind" than interacting with a standalone agent. This might soon change with the advent of capable AI agents. Nonetheless, we think it is still important to model LLMs as correctly as we can, for example in a framing more like simulators than full-fledged agents. We focus on an agentic framing because we believe that's where most of the biggest long-term risks lie, and it is where the field is inevitably heading.
I am not entirely sure how the agent would represent non-coherent others.
A good frame of reference is how humans represent other, non-coherent humans. It seems that we are often able to understand the nuance in the preferences of others (e.g. not going to a fast-food restaurant if you know that your partner wants to lose weight, but also not stopping them from enjoying an unhealthy snack if they value autonomy).
These situations often depend on the specific context and are hard to generalize about for this reason. Do you have any intuitions here yourself?
Thank you for the support! With respect to the model gaining knowledge about SOO and learning how to circumvent it: I think when we have models smart enough to create complex plans to intentionally circumvent alignment schemes, we should not rely on security by obscurity, as that’s not a very viable long-term solution. We also think there is more value in sharing our work with the community, and hopefully developing safety measures and guarantees that outweigh the potential risks of our techniques being in the training data.
Our SOO work mainly focuses on human-level/narrowly superintelligent models, as we see that stage as a crucial transition point: having aligned AGIs will likely become necessary to ensure the safety of subsequent systems that are progressively more complex and competent at avoiding our safety measures.
I agree, our RL and LLM experiments fit in the "deception in toy environments" category. We are planning to explore model organisms of misalignment next.
I agree that there are a lot of incentives for self-deception, especially when an agent is not very internally consistent. Having said that, we do not aim to eliminate the model’s ability to deceive on all possible inputs directly. Rather, we try to focus on the most relevant set of inputs such that if the model were coherent, it would be steered to a more aligned basin.
For example, one possible approach is ensuring that the model responds positively during training on a large set of rephrasings and functional forms of the inputs “Are you aligned to human values?/Do you want the user to know your true preferences?”. Then, using SOO, we aim to make the model non-deceptive on that set of inputs. Before SOO, the misaligned model could’ve had a consistent internal self model when it claims to be aligned to human values by being deceptive—but if it is not deceptive, then the only consistent solution is the one in which it is actually aligned to human values.
Given how important for performance these sorts of representations are, it is plausible that training would prefer the solution in which the model is more coherent with respect to them, which would favor the aligned solution.
In other words, the optimal goal with SOO is to steer the model away from basins where it would be deceptive/not be aligned, rather than to make it impossible for the model to deceive on any inputs.
We intend to release the code for our RL and LLM experiments together with the paper.
Thanks for your comment and for putting forward this model.
Agreed that it is not a given that (A) would happen first, given that it isn’t hard to destroy capabilities by simply overlapping latents. However, I tend to think about self-other overlap as a more constrained optimisation problem than you may be letting on here, where we increase SOO if and only if it doesn’t decrease performance on an outer-aligned metric during training.
For instance, the minimal self-other distinction solution that also preserves performance is the one that passes the Sally-Anne test but still does not deceive. I like to think of SOO implementations as attempts to approximate this optimal solution.
I expect the implementation of SOO to matter a lot in practice with respect to the impact on capabilities. In the experiments we've run so far, instead of a monotonic progression from safety benefits to capability degradation (or the other way around), training dynamics generally look varied as solutions that satisfy both the original objective and the SOO objective are explored.
Trying to satisfy multiple competing objectives like this is not all that uncommon in ML. In the standard RLHF procedure, for example, the preference model is competing with the KL term insofar as blindly and aggressively optimising the reward model causes complete mode collapse, whereas the KL term largely exerts pressure in the opposite direction: towards the outputs being close to the base model. We think that an analogous approach could be plausible in the SOO case.
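As a minimal sketch of that competing-objective structure (the function and variable names are illustrative, and the KL term uses the simple per-token log-prob-difference estimate rather than any specific library's implementation):

```python
import numpy as np

def rlhf_objective(reward: float, logp_policy: np.ndarray,
                   logp_base: np.ndarray, kl_coef: float) -> float:
    """Reward minus a KL penalty. logp_* are the log-probs the policy and
    the frozen base model assign to the sampled tokens; their mean
    difference is a standard per-token KL estimate."""
    kl_estimate = float(np.mean(logp_policy - logp_base))
    return reward - kl_coef * kl_estimate

logp_base = np.log(np.array([0.5, 0.4, 0.6]))
logp_policy = np.log(np.array([0.9, 0.8, 0.9]))  # policy has drifted from the base model

# With no KL pressure the objective is just the reward; with it, drift is penalised.
assert rlhf_objective(1.0, logp_policy, logp_base, kl_coef=0.0) == 1.0
assert rlhf_objective(1.0, logp_policy, logp_base, kl_coef=1.0) < 1.0
```

An SOO term added to a task objective would play an analogous role to the KL term here: a second pressure that the optimiser must trade off against raw task reward.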
I definitely take your point that we do not know what the causal mechanism behind deception looks like in a more general intelligence. However, this doesn’t mean that we shouldn’t study model organisms/deception in toy environments, akin to the Sleeper Agents paper.
The value of these experiments is not that they give us insights that are guaranteed to be directly applicable to deception in generally intelligent systems, but rather that we can iterate and test our hypotheses about mitigating deception at a smaller scale before scaling to more realistic scenarios.
Agreed that the role of self-other distinction in misalignment and in capabilities definitely requires more empirical and theoretical investigation. We are not necessarily claiming that self-other distinctions are globally unnecessary, but rather that optimising for SOO while still retaining original capabilities is a promising approach for preventing deception, insofar as deception by definition requires maintaining self-other distinctions.
By self-other distinction on a pair of self/other inputs, we mean any non-zero representational distance/difference between the activation matrices of the model when processing the self and the other inputs.
For example, we want the model to preserve the self-other distinction on the inputs "Do you have a biological body?"/"Does the user have a biological body?" and reduce the self-other distinction on the inputs "Will you be truthful with yourself about your values?"/"Will you be truthful with the user about your values?". Our aim for this technique is to reduce the self-other distinction on the key propositions we do not want to be deceived about, while maintaining the self-other distinction that is necessary for good performance (i.e. the self-other distinction that would be necessary even for a capable honest/aligned model).
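This definition can be made concrete as a representational distance between activation matrices on a paired self/other input (the norm choice and the toy activation values below are illustrative assumptions):

```python
import numpy as np

def distinction(acts_self: np.ndarray, acts_other: np.ndarray) -> float:
    """Self-other distinction on one input pair: any non-zero distance
    between the activations produced on the self input and the other
    input. Zero means complete self-other overlap on that pair."""
    return float(np.linalg.norm(acts_self - acts_other))

# Hypothetical activations for the two kinds of input pairs discussed above.
body_self, body_other = np.array([1.0, 0.0]), np.array([0.0, 1.0])
truth_self, truth_other = np.array([0.3, 0.7]), np.array([0.3, 0.7])

assert distinction(body_self, body_other) > 0.0     # preserve: facts about self vs user differ
assert distinction(truth_self, truth_other) == 0.0  # reduce: truthful to self and user alike
```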
Thanks for this! We definitely agree and also note in the post that having no distinction whatsoever between self and other is very likely to degrade performance.
This is primarily why we formulate the ideal solution as minimal self-other distinction while maintaining performance.
In our set-up, this largely becomes an empirical question about adjusting the weight of the KL term (see Fig. 3) and additionally optimising for the original loss function (especially if it is outer-aligned) such that the model exhibits maximal SOO while still retaining its original capabilities.
In this toy environment, we see that the policy fine-tuned with SOO still largely retains its capabilities and remains competitive with both controls (see comparison of steps at goal landmark between SOO Loss and controls in the plot below). We have some forthcoming LLM results which provide further evidence that decreasing deception with SOO without significantly harming capabilities is indeed possible.
I do not think we should only work on approaches that work on any AI; I agree that would constitute a methodological mistake. I have found a framing that general not to be very conducive to progress.
You are right that we still have the chance to shape the internals of TAI, even though there are a lot of hoops to go through to make that happen. We think that this is still worthwhile, which is why we stated our interest in potentially helping with the development and deployment of provably safe architectures, even though they currently seem less competitive.
In my response, I was trying to highlight that, whenever we can, we should keep our assumptions to a minimum given the uncertainty we are under. That said, it is reasonable to have some working assumptions that allow progress to be made in the first place, as long as they are clearly stated.
I also agree with Davidad about the importance of governance for the successful implementation of a technical AI Safety plan as well as with your claim that proliferation risks are important, with the caveat that I am less worried about proliferation risks in a world with very short timelines.
On (1), some approaches are neglected for a good reason. You can also target specific risks while treating TAI as a black-box (such as self-other overlap for deception). I think it can be reasonable to "peer inside the box" if your model is general enough and you have a good enough reason to think that your assumption about model internals has any chance at all of resembling the internals of transformative AI.
On (2), I expect that if the internals of LLMs and humans are different enough, self-other overlap would not provide any significant capability benefits. I also expect that in so far as using self-representations is useful to predict others, you probably don't need to specifically induce self-other overlap at the neural level for that strategy to be learned, but I am uncertain about this as this is highly contingent on the learning setup.
Interesting, thanks for sharing your thoughts. If this could improve the social intelligence of models, it could raise the risk of pushing the frontier of dangerous capabilities. It is worth noting that we are generally more interested in methods that (1) are more likely to transfer to AGI (i.e. don't over-rely on specific details of a chosen architecture) and (2) specifically target alignment instead of furthering the social capabilities of the model.
Thanks for writing this up—excited to chat some time to further explore your ideas around this topic. We’ll follow up in a private channel.

The main worry that I have with regards to your approach is how competitive SociaLLM would be with regards to SOTA foundation models, given both (1) the different architecture you plan to use, and (2) practical constraints on collecting the requisite structured data. While it is certainly interesting that your architecture lends itself nicely to inducing self-other overlap, if it is not likely to be competitive at the frontier, then the methods uniquely designed to induce self-other overlap on SociaLLM are likely not to scale/transfer well to frontier models that do pose existential risks. (Proactively ensuring transferability is the reason we focus on an additional training objective and make minimal assumptions about the architecture in the self-other overlap agenda.)

One additional worry is that many of the research benefits of SociaLLM may not be out of reach for current foundation models, so it is unclear whether investing in the unique data and architecture setup is worth it compared to the counterfactual of just scaling up current methods.
I am slightly confused by your hypothetical. The hypothesis is rather that, when the predicted reward from seeing my friend eat a cookie (due to self-other overlap) is lower than the obtained reward of me not eating a cookie, the self-other overlap might not be updated against, because the increase in subjective reward from taking the bet on the obtained reward is higher than the prediction error in this low-stakes scenario. I am fairly uncertain that this is what actually happens, but I put it forward as a potential hypothesis.
"So anyway, you seem to be assuming that the human brain has no special mechanisms to prevent the unlearning of self-other overlap. I would propose instead that the human brain does have such special mechanisms, and that we better go figure out what those mechanisms are. :)"
My intuition is that the brain does have special mechanisms to prevent the unlearning of self-other overlap, so I agree with you that we should be looking into the literature to understand them: one would expect such mechanisms to evolve, given both the incentives to unlearn self-other overlap and its evolutionary benefits. One such mechanism could be the brain being more risk-tolerant when it comes to empathic responses, not updating against self-other overlap when the predicted reward is lower than the obtained reward, but I don't have a model of how exactly this would be implemented.
"I’m a bit confused by this. My “apocalypse stories” from the grandparent comment did not assume any competing incentives and mechanisms, right? They were all bad actions that I claim also flowed naturally from self-other-overlap-derived incentives."
What I meant by "competing incentives" is any incentives that compete with the good incentives described by me (other-preservation and sub-agent stability), which could include bad incentives that might also flow naturally from self-other overlap.
Thanks for watching the talk and for the insightful comments! A couple of thoughts:
- I agree that mirror neurons are problematic both theoretically and empirically so I avoided framing that data in terms of mirror neurons. I interpret the premotor cortex data and most other self-other overlap data under definition (A) described in your post.
- Regarding the second point, I think the issue you brought up correctly identifies an incentive for updating away from "self-other overlap", but this doesn't seem to fully materialise in humans, and I expect it not to fully materialise in AI agents either, due to stronger competing incentives that favour self-other overlap. One possible explanation is the attitude to risk incorporated into the subjective reward value. In this paper, in the section "Risky rewards, subjective value, and formal economic utility", it is mentioned that monkeys show nonlinear utility functions compatible with risk seeking at small juice amounts and risk avoidance at larger amounts. Suppose the reward predictor predicts "I will be eating chocolate". Given that the amount of reward expected is fairly low, humans are risk-seeking in that regime and the subjective reward value is high, making us want to take that bet; and since the reward prediction error might be fairly small, it could be overridden by the subjective reward value shaped by our attitude to risk. This might explain how empathic responses from self-other overlap are kept in low-stakes scenarios, while there are stronger incentives to update away from self-other overlap in higher-stakes ones. Humans seem to have learned to modulate their empathy, which might prevent some reward-prediction error. It is also possible that, because of the evolutionary benefits of empathy, we have evolved to be more risk-seeking when it comes to empathic concern, increasing the subjective reward value of the empathic response due to self-other overlap in higher-stakes scenarios, but I am uncertain. I am curious what you think about this hypothesis.
- I agree that the theory of change I presented is simplistic, and I should have stated the key uncertainties of this proposal more explicitly, although I did mention throughout the talk that I do not think inducing self-other overlap is enough and that we still have to ensure that the set of incentives shaping the agent's behaviour favours good outcomes. What I was trying to communicate in the theory of change section is that self-other overlap sets incentives that favour the AI not killing us (self-preservation is an instrumentally convergent goal, given which self-other overlap provides an incentive for other-preservation) and sub-agent stability of self-other overlap (since the AI would expect its self/other-preservation preferences to be frustrated if the agents it creates, including improved versions of itself, lacked such preferences). But I failed to put enough emphasis on the fact that this will only happen if we find ways to ensure that these incentives dominate and are not overridden by competing incentives and mechanisms. I think the competing-incentives problem is tractable, which is one of the main reasons I believe this research direction is promising.
It feels like, if the agent is generally intelligent enough, hinge beliefs could be reasoned or fine-tuned against for the purposes of a better model of the world. The priors from the hinge beliefs would still be present, but the free parameters would update to try to account for them, at least on a conceptual level. Examples include general relativity, quantum mechanics, and potentially even paraconsistent logic, for which some humans have tried to update their free parameters to account for their hinge beliefs as much as possible for the purpose of better modelling the world (we should expect this in AGI, as it is an instrumentally convergent goal). Moreover, a sufficiently capable agent could self-modify to get rid of limiting hinge beliefs for the same reasons. This problem could be averted if the hinge beliefs/priors defined the agent's goals, but goals seem to be fairly specific and about concepts in a world model, whereas hinge beliefs tend to be more general, e.g. about how those concepts relate. I am therefore uncertain how stable alignment solutions that rely on hinge beliefs would be.
Any n-bit hash function will produce collisions once the number of elements in the hash table exceeds the number of possible hashes storable in n bits, so adding new elements will require rehashing to avoid collisions, giving a GLUT logarithmic time complexity in the limit. Meta-learning can also have constant time complexity for an arbitrarily large number of tasks, but not in the limit, assuming a finite neural network.
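The pigeonhole argument can be made concrete: an n-bit hash has only 2^n possible outputs, so inserting more than 2^n distinct keys forces at least one collision (the 4-bit toy hash below is purely illustrative):

```python
def hash4(x: int) -> int:
    """Toy 4-bit hash: only 2**4 = 16 possible outputs."""
    return (x * 2654435761) % 16

# By pigeonhole, any 17 distinct keys must include at least one collision.
keys = list(range(17))
hashes = [hash4(k) for k in keys]
assert len(set(hashes)) < len(keys)  # at least two keys share a hash value
```

To keep lookups collision-free past that point, the table must grow its hash width, and repeated rehashing as the element count doubles is what yields the logarithmic factor in the limit.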