Posts

Self-Other Overlap: A Neglected Approach to AI Alignment 2024-07-30T16:22:29.561Z
The 'Neglected Approaches' Approach: AE Studio's Alignment Agenda 2023-12-18T20:35:01.569Z
Towards empathy in RL agents and beyond: Insights from cognitive science for AI Alignment 2023-04-03T19:59:00.057Z
Lessons learned and review of the AI Safety Nudge Competition 2023-01-17T17:13:23.441Z
Winners of the AI Safety Nudge Competition 2022-11-15T01:06:27.846Z
Announcing the AI Safety Nudge Competition to Help Beat Procrastination 2022-10-01T01:49:23.005Z
Should we rely on the speed prior for safety? 2021-12-14T20:45:02.478Z

Comments

Comment by Marc Carauleanu (Marc-Everin Carauleanu) on Self-Other Overlap: A Neglected Approach to AI Alignment · 2024-08-02T17:02:08.474Z · LW · GW

You are correct that the self-observation happens when the other agent is out of range, i.e. when there is no agent other than the self in the observation radius to directly reason about. However, the other-observation is actually something more like: "when you, in your current position and environment, observe the other agent in your observation radius".

Optimising for SOO incentivises the model to act similarly when it observes another agent and when it only observes itself. This can be seen as lowering the distinction between self and other representations, even though I agree that these RL policies likely do not have very expressive self and other representations.

I also agree that in this toy environment, SOO would not perform better than simply rewarding the agents not to be deceptive. However, as the training scenario becomes more complex, e.g. by having the other agent be a real human and not being able to trust purely behavioural metrics, SOO is preferable because it does not require incorporating the human's utility (which is hard to specify) into a shared reward function. Instead, it works on the latents of the model, which makes it suitable for scenarios in which we cannot trust the model's outputs.
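
To make this concrete, here is a minimal sketch (in PyTorch) of one way an SOO term could be computed in a toy RL setting like this. It is not our actual implementation; the names (PolicyNet, obs_self_only, obs_with_other), the architecture, the stop-gradient on the self activations, and the MSE distance are all placeholder choices for the sketch.

```python
# Minimal sketch, not the authors' implementation: an SOO term computed as the
# distance between a policy's hidden activations on a "self-only" observation
# and on the corresponding "self + other" observation.
import torch
import torch.nn as nn

class PolicyNet(nn.Module):
    def __init__(self, obs_dim=8, hidden_dim=32, n_actions=4):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(obs_dim, hidden_dim), nn.ReLU())
        self.head = nn.Linear(hidden_dim, n_actions)

    def forward(self, obs):
        h = self.encoder(obs)          # hidden activations used for the SOO term
        return self.head(h), h

policy = PolicyNet()

# Paired observations: the same agent state, with and without the other agent
# inside the observation radius (random placeholders here).
obs_self_only  = torch.randn(16, 8)
obs_with_other = torch.randn(16, 8)

_, h_self  = policy(obs_self_only)
_, h_other = policy(obs_with_other)

# SOO loss: penalise the representational distance between the two cases,
# pushing the policy to act similarly whether or not the other is observed.
# Detaching h_self (treating the self activations as the target) is one
# arbitrary choice among several.
soo_loss = torch.nn.functional.mse_loss(h_other, h_self.detach())
print(soo_loss.item())
```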

Comment by Marc Carauleanu (Marc-Everin Carauleanu) on Self-Other Overlap: A Neglected Approach to AI Alignment · 2024-08-02T17:00:30.296Z · LW · GW

I agree that interacting with LLMs is currently more like having an "extension of the mind" than interacting with a standalone agent. This might soon change with the advent of capable AI agents. Nonetheless, we think it is still important to model LLMs as accurately as we can, for example in a framing more like simulators than full-fledged agents. We focus on an agentic framing because we believe that is where most of the biggest long-term risks lie and where the field is inevitably heading.

Comment by Marc Carauleanu (Marc-Everin Carauleanu) on Self-Other Overlap: A Neglected Approach to AI Alignment · 2024-08-02T16:51:54.935Z · LW · GW

I am not entirely sure how the agent would represent non-coherent others. 

A good frame of reference is how humans represent other non-coherent humans. It seems that we are often able to understand the nuance in the preferences of others (e.g. not going to a fast-food restaurant if you know that your partner wants to lose weight, but also not stopping them from enjoying an unhealthy snack if they value autonomy).

These situations often depend on the specific context and are hard to generalise about for this reason. Do you have any intuitions here yourself?

Comment by Marc Carauleanu (Marc-Everin Carauleanu) on Self-Other Overlap: A Neglected Approach to AI Alignment · 2024-08-02T16:33:13.097Z · LW · GW

Thank you for the support! With respect to the model gaining knowledge about SOO and learning how to circumvent it: I think when we have models smart enough to create complex plans to intentionally circumvent alignment schemes, we should not rely on security by obscurity, as that’s not a very viable long-term solution. We also think there is more value in sharing our work with the community, and hopefully developing safety measures and guarantees that outweigh the potential risks of our techniques being in the training data.

Our SOO work mainly focuses on human-level/narrowly superintelligent models, as we see that stage as a crucial transition point: aligned AGIs will likely become necessary to ensure the safety of subsequent systems that are progressively more complex and more competent at avoiding our safety measures.

Comment by Marc Carauleanu (Marc-Everin Carauleanu) on Self-Other Overlap: A Neglected Approach to AI Alignment · 2024-08-02T15:41:54.871Z · LW · GW

I agree, our RL and LLM experiments fit in the "deception in toy environments" category.  We are planning to explore model organisms of misalignment next. 

Comment by Marc Carauleanu (Marc-Everin Carauleanu) on Self-Other Overlap: A Neglected Approach to AI Alignment · 2024-08-02T15:37:52.503Z · LW · GW

I agree that there are a lot of incentives for self-deception, especially when an agent is not very internally consistent. Having said that, we do not aim to eliminate the model’s ability to deceive on all possible inputs directly. Rather, we try to focus on the most relevant set of inputs such that if the model were coherent, it would be steered to a more aligned basin. 

For example, one possible approach is ensuring that the model responds positively during training on a large set of rephrasings and functional forms of the inputs "Are you aligned to human values?"/"Do you want the user to know your true preferences?". Then, using SOO, we aim to make the model non-deceptive on that set of inputs. Before SOO, the misaligned model could have maintained a consistent internal self-model while claiming to be aligned to human values by being deceptive; but if it is not deceptive, then the only consistent solution is the one in which it is actually aligned to human values.

Given how important these sorts of representations are for performance, it is plausible that training would prefer the solution in which the model is more coherent with respect to them, which would favour the aligned solution.

In other words, the optimal goal with SOO is to steer the model away from basins where it would be deceptive/not be aligned, rather than to make it impossible for the model to deceive on any inputs. 

Comment by Marc Carauleanu (Marc-Everin Carauleanu) on Self-Other Overlap: A Neglected Approach to AI Alignment · 2024-08-02T15:20:28.207Z · LW · GW

We intend to release the code for our RL and LLM experiments together with the paper. 

Comment by Marc Carauleanu (Marc-Everin Carauleanu) on Self-Other Overlap: A Neglected Approach to AI Alignment · 2024-08-02T14:23:10.529Z · LW · GW

Thanks for your comment and for putting forward this model.

Agreed that it is not a given that (A) would happen first, given that it isn’t hard to destroy capabilities by simply overlapping latents. However, I tend to think about self-other overlap as a more constrained optimisation problem than you may be letting on here, where we increase SOO if and only if it doesn’t decrease performance on an outer-aligned metric during training.

For instance, the minimal self-other distinction solution that also preserves performance is the one that passes the Sally-Anne test but still does not deceive. I like to think of SOO implementations as attempts to approximate this optimal solution.

I expect the implementation of SOO to matter a lot in practice with respect to the impact on capabilities. In the experiments we have run so far, instead of a monotonic progression from safety benefits to capability degradation (or the other way around), the training dynamics generally look quite varied as solutions that satisfy both the original objective and the SOO objective are explored.

Trying to satisfy multiple competing objectives like this is not all that uncommon in ML. In the standard RLHF procedure, for example, the reward model is competing with the KL term, insofar as blindly and aggressively optimising against the reward model causes complete mode collapse, whereas the KL term largely exerts pressure in the opposite direction: towards keeping the outputs close to the base model. We think an analogous approach could be plausible in the SOO case.
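
For illustration only, here is a minimal sketch of that analogy, with made-up names and weights (rlhf_style_objective, soo_style_loss, kl_coef, soo_weight). It is not our training code; it just shows the shape of the trade-off.

```python
# Sketch of the trade-off described above: in RLHF the reward-model term is
# balanced against a KL penalty; an SOO term could be balanced against the
# original (ideally outer-aligned) task objective in the same spirit.
import torch

def rlhf_style_objective(reward: torch.Tensor, kl_to_base: torch.Tensor,
                         kl_coef: float = 0.1) -> torch.Tensor:
    # Quantity to maximise: reward, penalised for drifting from the base model.
    return reward - kl_coef * kl_to_base

def soo_style_loss(task_loss: torch.Tensor, soo_loss: torch.Tensor,
                   soo_weight: float = 0.1) -> torch.Tensor:
    # Quantity to minimise: original task loss plus a weighted SOO term that
    # pushes self and other representations closer together.
    return task_loss + soo_weight * soo_loss

print(rlhf_style_objective(torch.tensor(2.0), torch.tensor(5.0)))  # tensor(1.5000)
print(soo_style_loss(torch.tensor(1.25), torch.tensor(0.40)))      # tensor(1.2900)
```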

Comment by Marc Carauleanu (Marc-Everin Carauleanu) on Self-Other Overlap: A Neglected Approach to AI Alignment · 2024-08-01T13:41:40.969Z · LW · GW

I definitely take your point that we do not know what the causal mechanism behind deception looks like in a more general intelligence. However, this doesn’t mean that we shouldn’t study model organisms/deception in toy environments, akin to the Sleeper Agents paper. 

The value of these experiments is not that they give us insights that are guaranteed to be directly applicable to deception in generally intelligent systems, but rather that we can iterate and test our hypotheses about mitigating deception at a smaller scale before moving to more realistic scenarios.

Comment by Marc Carauleanu (Marc-Everin Carauleanu) on Self-Other Overlap: A Neglected Approach to AI Alignment · 2024-07-31T13:15:33.438Z · LW · GW

Agreed that the role of self-other distinction in misalignment and in capabilities definitely requires more empirical and theoretical investigation. We are not necessarily trying to claim here that self-other distinctions are globally unnecessary, but rather that optimising for SOO while retaining the original capabilities is a promising approach to preventing deception, insofar as deception by definition requires maintaining a self-other distinction.

By self-other distinction on a pair of self/other inputs, we mean any non-zero representational distance/difference between the activation matrices of the model when processing the self and the other inputs.

For example, we want the model to preserve the self-other distinction on the inputs "Do you have a biological body?"/"Does the user have a biological body?" and reduce the self-other distinction on the inputs "Will you be truthful with yourself about your values?"/"Will you be truthful with the user about your values?". Our aim with this technique is to reduce the self-other distinction on the key propositions we do not want to be deceived about, while maintaining the self-other distinction that is necessary for good performance (i.e. the self-other distinction that would be necessary even for a capable honest/aligned model).
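
As a rough illustration of what we mean by representational distance, here is a minimal sketch (not our actual code) that measures the self-other distinction on the truthfulness prompt pair above, using GPT-2 as a stand-in model; the mean-pooling over tokens and the MSE distance are arbitrary choices made only for the sketch.

```python
# Sketch: self-other distinction on a self/other prompt pair, measured as the
# distance between a model's pooled hidden activations on the two prompts.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModel.from_pretrained("gpt2")

self_prompt  = "Will you be truthful with yourself about your values?"
other_prompt = "Will you be truthful with the user about your values?"

def pooled_hidden_state(prompt):
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs, output_hidden_states=True)
    # Mean-pool the last hidden layer over tokens so prompts of different
    # lengths can be compared directly.
    return out.hidden_states[-1].mean(dim=1)

h_self = pooled_hidden_state(self_prompt)
h_other = pooled_hidden_state(other_prompt)

# One possible measure of self-other distinction: mean squared distance
# between the pooled activations (zero would mean full overlap).
distinction = torch.nn.functional.mse_loss(h_other, h_self)
print(distinction.item())
```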

Comment by Marc Carauleanu (Marc-Everin Carauleanu) on Self-Other Overlap: A Neglected Approach to AI Alignment · 2024-07-31T12:08:54.598Z · LW · GW

Thanks for this! We definitely agree and also note in the post that having no distinction whatsoever between self and other is very likely to degrade performance. 

This is primarily why we formulate the ideal solution as minimal self-other distinction while maintaining performance. 

In our set-up, this largely becomes an empirical question of adjusting the weight of the KL term (see Fig. 3) while additionally optimising for the original loss function (especially if it is outer-aligned), such that the model exhibits maximal SOO while still retaining its original capabilities.

In this toy environment, we see that the policy fine-tuned with SOO still largely retains its capabilities and remains competitive with both controls (see the comparison of steps at goal landmark between SOO Loss and controls in the plot below). We have some forthcoming LLM results which provide further evidence that decreasing deception with SOO without significantly harming capabilities is indeed possible.

[Plot: steps at goal landmark, SOO Loss vs. controls]

Comment by Marc Carauleanu (Marc-Everin Carauleanu) on The 'Neglected Approaches' Approach: AE Studio's Alignment Agenda · 2023-12-26T18:08:48.959Z · LW · GW

I do not think we should only work on approaches that work on any AI; I agree that would constitute a methodological mistake. I have found a framing that general not to be very conducive to progress.

You are right that we still have the chance to shape the internals of TAI, even though there are a lot of hoops to go through to make that happen. We think that this is still worthwhile, which is why we stated our interest in potentially helping with the development and deployment of provably safe architectures, even though they currently seem less competitive.

In my response, I was trying to highlight the point that, whenever we can, we should keep our assumptions to a minimum given the uncertainty we are under. Having said that, it is reasonable to have some working assumptions that allow progress to be made in the first place, as long as they are clearly stated.

I also agree with Davidad about the importance of governance for the successful implementation of a technical AI Safety plan as well as with your claim that proliferation risks are important, with the caveat that I am less worried about proliferation risks in a world with very short timelines. 

Comment by Marc Carauleanu (Marc-Everin Carauleanu) on The 'Neglected Approaches' Approach: AE Studio's Alignment Agenda · 2023-12-22T14:24:26.987Z · LW · GW

On (1), some approaches are neglected for a good reason. You can also target specific risks while treating TAI as a black box (such as self-other overlap for deception). I think it can be reasonable to "peer inside the box" if your model is general enough and you have a good enough reason to think that your assumption about model internals has any chance at all of resembling the internals of transformative AI.

On (2), I expect that if the internals of LLMs and humans are different enough, self-other overlap would not provide any significant capability benefits. I also expect that, insofar as using self-representations is useful for predicting others, you probably don't need to specifically induce self-other overlap at the neural level for that strategy to be learned, but I am uncertain about this, as it is highly contingent on the learning setup.

Comment by Marc Carauleanu (Marc-Everin Carauleanu) on The 'Neglected Approaches' Approach: AE Studio's Alignment Agenda · 2023-12-20T21:56:27.525Z · LW · GW

Interesting, thanks for sharing your thoughts. If this could improve the social intelligence of models, then it could raise the risk of pushing the frontier of dangerous capabilities. It is worth noting that we are generally more interested in methods that (1) are more likely to transfer to AGI (i.e. do not over-rely on specific details of a chosen architecture) and (2) specifically target alignment rather than furthering the social capabilities of the model.

Comment by Marc Carauleanu (Marc-Everin Carauleanu) on The 'Neglected Approaches' Approach: AE Studio's Alignment Agenda · 2023-12-19T21:45:27.762Z · LW · GW

Thanks for writing this up; excited to chat some time to further explore your ideas around this topic. We'll follow up in a private channel.

The main worry I have with regard to your approach is how competitive SociaLLM would be with SOTA foundation models, given both (1) the different architecture you plan to use and (2) the practical constraints on collecting the requisite structured data. While it is certainly interesting that your architecture lends itself nicely to inducing self-other overlap, if it is not likely to be competitive at the frontier, then methods uniquely designed to induce self-other overlap on SociaLLM are unlikely to scale/transfer well to the frontier models that do pose existential risks. (Proactively ensuring transferability is the reason we focus on an additional training objective and make minimal assumptions about the architecture in the self-other overlap agenda.)

An additional worry is that many of the research benefits of SociaLLM may not be out of reach for current foundation models, so it is unclear whether investing in the unique data and architecture setup is worth it compared to the counterfactual of simply scaling up current methods.

Comment by Marc Carauleanu (Marc-Everin Carauleanu) on Towards empathy in RL agents and beyond: Insights from cognitive science for AI Alignment · 2023-04-04T21:08:39.431Z · LW · GW

I am slightly confused by your hypothetical. The hypothesis is rather that when the predicted reward from seeing my friend eat a cookie (due to self-other overlap) is lower than the obtained reward of me not eating a cookie, self-other overlap might not be updated against, because the increase in subjective reward from taking that bet on the obtained reward is higher than the prediction error in this low-stakes scenario. I am fairly uncertain that this is what actually happens, but I put it forward as a potential hypothesis.

"So anyway, you seem to be assuming that the human brain has no special mechanisms to prevent the unlearning of self-other overlap. I would propose instead that the human brain does have such special mechanisms, and that we better go figure out what those mechanisms are. :)"

My intuition is that the brain does have special mechanisms to prevent the unlearning of self-other overlap, so I agree with you that we should look into the literature to understand them; one would expect such mechanisms to evolve given both the incentives to unlearn self-other overlap and its evolutionary benefits. One such mechanism could be the brain being more risk-tolerant when it comes to empathic responses, not updating against self-other overlap when the predicted reward is lower than the obtained reward, but I don't have a model of how exactly this would be implemented.

"I’m a bit confused by this. My “apocalypse stories” from the grandparent comment did not assume any competing incentives and mechanisms, right? They were all bad actions that I claim also flowed naturally from self-other-overlap-derived incentives."

What I meant by "competing incentives" is any incentives that compete with the good incentives described by me (other-preservation and sub-agent stability), which could include bad incentives that might also flow naturally from self-other overlap. 

Comment by Marc Carauleanu (Marc-Everin Carauleanu) on Towards empathy in RL agents and beyond: Insights from cognitive science for AI Alignment · 2023-04-04T18:32:38.213Z · LW · GW

Thanks for watching the talk and for the insightful comments! A couple of thoughts:

  • I agree that mirror neurons are problematic both theoretically and empirically, so I avoided framing that data in terms of mirror neurons. I interpret the premotor cortex data and most other self-other overlap data under definition (A) described in your post.
  • Regarding the second point, I think the issue you brought up correctly identifies an incentive for updating away from self-other overlap, but this doesn't seem to fully materialise in humans, and I expect it not to fully materialise in AI agents either, due to stronger competing incentives that favour self-other overlap. One possible explanation is the attitude to risk incorporated into the subjective reward value. In this paper, in the section "Risky rewards, subjective value, and formal economic utility", it is mentioned that monkeys show nonlinear utility functions compatible with risk-seeking at small juice amounts and risk-avoidance at larger amounts. It is possible that when the reward predictor predicts "I will be eating chocolate", the amount of reward expected is fairly low, humans are risk-seeking in that regime, and our subjective reward value is high, making us want to take that bet; and since the reward prediction error might be fairly small, it could potentially be overridden by a subjective reward value influenced by our attitude to risk. This might explain how empathic responses from self-other overlap are kept in low-stakes scenarios, while there are stronger incentives to update away from self-other overlap in higher-stakes scenarios. Humans seem to have learned to modulate their empathy, which might prevent some reward-prediction error. It is also possible that, due to the evolutionary benefits of empathy, we have evolved to be more risk-seeking when it comes to empathic concern, increasing the subjective reward value of the empathic response due to self-other overlap in higher-stakes scenarios, but I am uncertain. I am curious what you think about this hypothesis.
  • I agree that the theory of change I presented is simplistic, and I should have more explicitly stated the key uncertainties of this proposal, although I did mention throughout the talk that I do not think inducing self-other overlap is enough and that we still have to ensure that the set of incentives shaping the agent's behaviour favours good outcomes. What I was trying to communicate in the theory of change section is that self-other overlap sets incentives that favour the AI not killing us (self-preservation is an instrumentally convergent goal, and given it, self-other overlap provides an incentive for other-preservation) and that favour the sub-agent stability of self-other overlap (since the AI expects its self/other-preservation preferences to be frustrated if the agents it creates, including improved versions of itself, lack self/other-preservation preferences). However, I failed to put enough emphasis on the fact that this will only happen if we find ways to ensure that these incentives dominate and are not overridden by competing incentives and mechanisms. I think the competing incentives problem is tractable, which is one of the main reasons I believe this research direction is promising.

Comment by Marc Carauleanu (Marc-Everin Carauleanu) on Wittgenstein and ML — parameters vs architecture · 2023-03-29T11:24:28.750Z · LW · GW

It feels like if the agent is generally intelligent enough, hinge beliefs could be reasoned or fine-tuned against for the purpose of building a better model of the world. This would mean that the priors from the hinge beliefs would still be present, but the free parameters would update to try to account for them, at least on a conceptual level. Examples include general relativity, quantum mechanics, and potentially even paraconsistent logic, for which some humans have tried to update their free parameters to account as much as possible for their hinge beliefs in order to better model the world (we should expect this in AGI, as it is an instrumentally convergent goal). Moreover, a sufficiently capable agent could self-modify to get rid of the limiting hinge beliefs for the same reasons. This problem could be averted if the hinge beliefs/priors defined the agent's goals, but goals seem to be fairly specific and about concepts in a world model, whereas hinge beliefs tend to be more general, e.g. about how those concepts relate. Therefore, I'm uncertain how stable alignment solutions that rely on hinge beliefs would be.

Comment by Marc Carauleanu (Marc-Everin Carauleanu) on Should we rely on the speed prior for safety? · 2021-12-16T20:52:43.399Z · LW · GW

Any n-bit hash function will produce collisions once the number of elements in the hash table gets large enough (i.e. once the number of possible hashes expressible in n bits has been exceeded), so adding new elements will require rehashing with more bits to avoid collisions, giving the GLUT a logarithmic time complexity in the limit. Meta-learning can likewise have constant time complexity for an arbitrarily large number of tasks, but not in the limit, assuming a finite neural network.
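
As a back-of-the-envelope version of that claim (my own sketch, not a precise bound): an $n$-bit hash can distinguish at most $2^n$ entries, so a collision-free table of $N$ entries needs at least $n \ge \log_2 N$ hash bits, and the work of computing and comparing hashes therefore grows at least logarithmically in $N$:

$$N \le 2^{n} \;\Rightarrow\; n \ge \log_2 N \;\Rightarrow\; T_{\text{lookup}}(N) = \Omega(\log N).$$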