Shallow review of live agendas in alignment & safety

post by technicalities, Stag · 2023-11-27T11:10:27.464Z · LW · GW · 69 comments

Contents

  Summary
  Meta
  Editorial 
  Agendas
    1. Understand existing models
      Evals
      The other evals (groundwork for regulation)
      Interpretability 
      Understand learning
    2. Control the thing
      Prevent deception 
      Surgical model edits
      Getting it to learn what we want
      Goal robustness 
    3. Make AI solve it
      Scalable oversight
      Task decomp
      Adversarial 
    4. Theory 
      Galaxy-brained end-to-end solutions 
      Understanding agency 
      Corrigibility
      Ontology identification 
      Understand cooperation
    5. Labs with miscellaneous efforts
  More meta
  Appendices
    Appendix: Prior enumerations
    Appendix: Graveyard
    Appendix: Biology for AI alignment
      Human enhancement 
      Merging 
      As alignment aid 
    Appendix: Research support orgs
    Appendix: Meta, mysteries, more

Summary

You can’t optimise an allocation of resources if you don’t know what the current one is. Existing maps of alignment research are mostly too old to guide you, and the field has nearly no ratchet: no common knowledge of what everyone is doing and why, what is abandoned and why, what is renamed, what relates to what, what is going on. 

This post is mostly just a big index: a link-dump of as many currently active AI safety agendas as we could find. But even a link-dump is plenty subjective. It maps work to conceptual clusters 1-1, aiming to answer questions like “I wonder what happened to the exciting idea I heard about at that one conference”, “I just read a post on a surprising new insight and want to see who else has been working on this”, and “I wonder roughly how many people are working on that thing”. 

This doc is unreadably long, so that it can be Ctrl-F-ed. Also this way you can fork the list and make a smaller one. 

Our taxonomy:

  1. Understand existing models (evals, interpretability, science of DL)
  2. Control the thing (prevent deception, model edits, value learning, goal robustness)
  3. Make AI solve it (scalable oversight, cyborgism, etc)
  4. Theory (galaxy-brained end-to-end, agency, corrigibility, ontology, cooperation)
     

Please point out if we mistakenly round one thing off to another, miscategorise someone, or otherwise state or imply falsehoods. We will edit.

Unlike the late Larks reviews [EA · GW], we’re not primarily aiming to direct donations. But if you enjoy reading this, consider [AF · GW] donating to Manifund, MATS, or LTFF, or to Lightspeed for big-ticket amounts: some good work is bottlenecked by money, and you have free access to the service of specialists in giving money for good work.
 

Meta

When I (Gavin) got into alignment (actually it was still ‘AGI Safety’) people warned me it was pre-paradigmatic. They were right: in the intervening 5 years, the live agendas have changed completely.[1] So here’s an update. 

Chekhov’s evaluation: I include Yudkowsky’s operational criteria [LW · GW] (Trustworthy command?, closure?, opsec?, commitment to the common good?, alignment mindset?) but don’t score them myself. The point is not to throw shade but to remind you that we often know little about each other. 

See you in 5 years.
 

Editorial 

Agendas

1. Understand existing models

 

Evals

(Figuring out how a trained model behaves.)
 

Various capability evaluations

Various red-teams

Eliciting model anomalies 

Alignment of Complex Systems: LLM interactions

The other evals (groundwork for regulation)

Much of Evals and Governance orgs’ work is something different: developing politically legible metrics [AF · GW], processes, and shocking case studies. The aim is to motivate and underpin actually sensible regulation. 

But this is a technical alignment post. I include this section to emphasise that these other evals (which seek confirmation) are different from understanding whether dangerous capabilities have emerged or might emerge.
 

Interpretability 

(Figuring out what a trained model is actually computing.)[2]
 

Ambitious mech interp [EA · GW]

Concept-based interp 

Causal abstractions

EleutherAI interp

Activation engineering [LW · GW] (as unsupervised interp)

Leap

Understand learning

(Figuring out how the model figured it out.)
 

Timaeus: Developmental interpretability [AF · GW] & singular learning theory 

Various other efforts:


2. Control the thing

(Figuring out how to predictably affect model behaviour.)
 

Prosaic alignment [LW · GW] / alignment by default [AF · GW] 

Redwood: control evaluations

 

Safety scaffolds

 

Prevent deception 

Through methods besides mechanistic interpretability.
 

Redwood: mechanistic anomaly detection [AF · GW]

Indirect deception monitoring 

Anthropic: externalised reasoning oversight [AF · GW]

Surgical model edits

(interventions on model internals)
 

Weight editing

 

Activation engineering  [? · GW]

Getting it to learn what we want

(Figuring out how to control what the model figures out.)
 

Social-instinct AGI [LW · GW]

 

Imitation learning [AF · GW]

Reward learning 

Goal robustness 

(Figuring out how to make the model keep doing ~what it has been doing so far.)
 

Measuring OOD

Concept extrapolation 

Mild optimisation [? · GW]


3. Make AI solve it [? · GW]

(Figuring out how models might help figure it out.)
 

Scalable oversight

(Figuring out how to help humans supervise models. Hard to cleanly distinguish from ambitious mechanistic interpretability.)

OpenAI: Superalignment 

Supervising AIs improving AIs [LW · GW]

Cyborgism [LW · GW]

 

See also Simboxing [LW · GW] (Jacob Cannell).
 

Task decomp

Recursive reward modelling is supposedly not dead but instead one of the tools Superalignment will build.

Another line tries to make something honest out of chain of thought and tree of thought.
 

Elicit (previously Ought [LW · GW])

Adversarial 

Deepmind Scalable Alignment [AF · GW]

Anthropic / NYU Alignment Research Group / Perez collab

 

See also FAR (below).


4. Theory 

(Figuring out what we need to figure out, and then doing that. This used to be all we could do.)
 

Galaxy-brained end-to-end solutions
 

The Learning-Theoretic Agenda [AF · GW] 

Open Agency Architecture [LW · GW]

Provably safe systems

Conjecture: Cognitive Emulation [LW · GW] (CoEms)

Question-answer counterfactual intervals (QACI) [LW · GW]

Understanding agency 

(Figuring out ‘what even is an agent’ and how it might be linked to causality.)
 

Causal foundations

Alignment of Complex Systems: Hierarchical agency

The ronin sharp left turn crew  [LW · GW]

Shard theory [? · GW]

boundaries / membranes [LW · GW]

disempowerment formalism

Performative prediction

Understanding optimisation

Corrigibility

(Figuring out how we get superintelligent agents to keep listening to us. Arguably scalable oversight and superalignment are ~atheoretical approaches to this.)


Behavior alignment theory 

The comments in this thread [LW · GW] are extremely good – but none of the authors are working on this!! See also Holtman’s neglected result. See also EJT (and formerly Petersen [LW · GW]). See also Dupuis [LW · GW].

 

Ontology identification 

(Figuring out how superintelligent agents think about the world and how we get superintelligent agents to actually tell us what they know. Much of interpretability is incidentally aiming at this.)
 

ARC Theory 

Natural abstractions [AF · GW] 

Understand cooperation

(Figuring out how inter-AI and AI/human game theory should or would work.)
 

CLR 

FOCAL 

 

See also higher-order game theory [AF · GW]. We moved CAIF to the “Research support” appendix. We moved AOI to “misc”.


5. Labs with miscellaneous efforts

(Making lots of bets rather than following one agenda, which is awkward for a topic taxonomy.)
 

 Deepmind Alignment Team [AF · GW] 

Apollo

Anthropic Assurance / Trust & Safety / RSP Evaluations / Interpretability

FAR [LW · GW] 

Krueger Lab

AI Objectives Institute (AOI)


More meta

We don’t distinguish between massive labs, individual researchers, and sparsely connected networks of people working on similar stuff. The funding amounts and full-time-employee estimates might be a reasonable proxy.

The categories we chose have substantial overlap; see the “see also”s for closely related work.

I wanted this to be a straight technical alignment doc, but people pointed out that would exclude most work (e.g. evals and nonambitious interpretability, which are safety but not alignment) so I made it a technical AGI safety doc. Plus ça change.

The only selection criterion is “I’ve heard of it and >= 1 person was recently working on it”. I don’t go to parties so it’s probably a couple months behind. 

Obviously this is the Year of Governance and Advocacy, but I exclude all this good work: by its nature it gets attention. I also haven’t sought out the notable amount by ordinary labs and academics who don’t frame their work as alignment. Nor the secret work.

You are unlikely to like my partition into subfields; here are others.

No one has read all of this material, including us. Entries are based on public docs or private correspondence where possible but the post probably still contains >10 inaccurate claims. Shouting at us is encouraged. If I’ve missed you (or missed the point), please draw attention to yourself. 

If you enjoyed reading this, consider donating to Lightspeed, MATS, Manifund, or LTFF: some good work is bottlenecked by money, and some people specialise in giving away money to enable it.

Conflicts of interest: I wrote the whole thing without funding. I often work with ACS and PIBBSS and have worked with Team Shard. Lightspeed gave a nice open-ended grant to my org, Arb. CHAI once bought me a burrito. 

If you’re interested in doing or funding this sort of thing, get in touch at hi@arbresearch.com. I never thought I’d end up as a journalist, but stranger things will happen.


 

Thanks to Alex Turner, Neel Nanda, Jan Kulveit, Adam Gleave, Alexander Gietelink Oldenziel, Marius Hobbhahn, Lauro Langosco, Steve Byrnes, Henry Sleight, Raymond Douglas, Robert Kirk, Yudhister Kumar, Quratulain Zainab, Tomáš Gavenčiak, Joel Becker, Lucy Farnik, Oliver Hayman, Sammy Martin, Jess Rumbelow, Jean-Stanislas Denain, Ulisse Mini, David Mathers, Chris Lakin, Vojta Kovařík, Zach Stein-Perlman, and Linda Linsefors for helpful comments.


Appendices

Appendix: Prior enumerations

Appendix: Graveyard

Appendix: Biology for AI alignment

Lots of agendas, but not clear if anyone besides Byrnes and Thiergart is actively turning the crank. Seems like it would need a billion dollars.
 

Human enhancement 

Merging 

As alignment aid 


Appendix: Research support orgs

One slightly confusing class of org is described by the sample {CAIF, FLI}. Often run by active researchers with serious alignment experience, but usually not following an obvious agenda, delegating a basket of strategies to grantees, doing field-building stuff like NeurIPS workshops and summer schools.
 

CAIF 

AISC

 

See also:

Appendix: Meta, mysteries, more

  1. ^

    Unless you zoom out so far that you reach vague stuff like “ontology identification”. We will see if this total turnover is true again in 2028; I suspect a couple will still be around, this time.

  2. ^

    > one can posit neural network interpretability as the GiveDirectly of AI alignment: reasonably tractable, likely helpful in a large class of scenarios, with basically unlimited scaling and only slowly diminishing returns. And just as any new EA cause area must pass the first test of being more promising than GiveDirectly, so every alignment approach could be viewed as a competitor to interpretability work. – Niplav

69 comments

Comments sorted by top scores.

comment by Alex_Altair · 2023-11-28T18:14:40.182Z · LW(p) · GW(p)

I wonder if we couldn't convert this into some kind of community wiki, so that the people represented in it can provide endorsed representations of their own work, and so that the community as a whole can keep it updated as time goes on.

Obviously there's the problem where you don't want random people to be able to put illegitimate stuff on the list. But it's also hard to agree on a way to declare legitimacy.

...Maybe we could have a big post like lukeprog's old textbook post, where researchers can make top-level comments describing their own research? And then others can up- or down-vote the comments based on the perceived legitimacy of the research program?

Replies from: Zach Stein-Perlman, lw-user0246
comment by Zach Stein-Perlman · 2023-11-28T19:00:17.587Z · LW(p) · GW(p)

I am excited about this. I've also recently been interested in ideas like: nudge researchers to write 1-5 page research agendas, then collect them and advertise the collection.

Possible formats:

  • A huge google doc (maybe based on this post); anyone can comment; there's one or more maintainers; maintainers approve ~all suggestions by researchers about their own research topics and consider suggestions by random people.
  • A directory of google docs on particular agendas; the individual google docs are each owned by a relevant researcher, who is responsible for maintaining them; some maintainer-of-the-whole-project occasionally nudges researchers to update their docs and reassigns the topic to someone else if necessary. Random people can make suggestions too.
  • (Alex, I think we can do much better than the best textbooks [LW · GW] format in terms of organization, readability, and keeping up to date.)

I am interested in helping make something like this happen. Or if it doesn't happen soon I might try to do it (but I'm not taking responsibility for making this happen). Very interested in suggestions.

(One particular kind-of-suggestion: is there a taxonomy/tree of alignment research directions you like, other than the one in this post? (Note to self: taxonomies have to focus on either methodology or theory of change... probably organize by theory of change and don't hesitate to point to the same directions/methodologies/artifacts in multiple places.))

Replies from: leogao, habryka4, Roman Leventov, Iknownothing
comment by leogao · 2023-11-28T23:00:41.116Z · LW(p) · GW(p)

There's also a much harder and less impartial option, which is to have an extremely opinionated survey that basically picks one lens to view the entire field and then describes all agendas with respect to that lens in terms of which particular cruxes/assumptions each agenda runs with. This would necessarily require the authors of the survey to deeply understand all the agendas they're covering, and inevitably some agendas will receive much more coverage than other agendas. 

This makes it much harder than just stapling together a bunch of people's descriptions of their own research agendas, and will never be "the" alignment survey because of the opinionatedness. I still think this would have a lot of value though: it would make it much easier to translate ideas between different lenses/notice commonalities, and help with figuring out which cruxes need to be resolved for people to agree. 

Relatedly, I don't think alignment currently has a lack of different lenses (which is not to say that the different lenses are meaningfully decorrelated). I think alignment has a lack of convergence between people with different lenses. Some of this is because many cruxes are very hard to resolve experimentally today. However, I think even despite that it should be possible to do much better than we currently are--often, it's not even clear what the cruxes are between different views, or whether two people are thinking about the same thing when they make claims in different language. 

Replies from: LawChan, steve2152, M. Y. Zuo
comment by LawrenceC (LawChan) · 2023-11-29T10:07:09.817Z · LW(p) · GW(p)

I strongly agree that this would be valuable; if not for the existence of this shallow review I'd consider doing this myself just to serve as a reference for myself. 

Replies from: leogao
comment by leogao · 2023-11-29T10:13:37.471Z · LW(p) · GW(p)

Fwiw I think "deep" reviews serve a very different purpose from shallow reviews so I don't think you should let the existence of shallow reviews prevent you from doing a deep review

comment by Steven Byrnes (steve2152) · 2023-11-29T13:37:13.892Z · LW(p) · GW(p)

I've written up an opinionated take on someone else's technical alignment agenda about three times, and each of those took me something like 100 hours. That was just to clearly state why I disagreed with it; forget about resolving our differences :)

comment by M. Y. Zuo · 2023-12-04T03:28:33.220Z · LW(p) · GW(p)

Even that is putting it a bit too lightly.

i.e. Is there even a single, bonafide, novel proof at all? 

Proven mathematically, or otherwise  demonstrated with 100% certainty, across the last 10+ years.

Or is it all just 'lenses', subjective views, probabilistic analysis, etc...?

comment by habryka (habryka4) · 2023-11-28T21:35:42.759Z · LW(p) · GW(p)

LessWrong does have a relatively fully featured wiki system. Not sure how good of a fit it is, but like, everyone can create tags and edit them and there are edit histories and comment sections for tags and so on. 

We've been considering adding the ability for people to also add generic wiki pages, though how to make them visible and allocate attention to them has been a bit unclear.

Replies from: Thane Ruthenis
comment by Thane Ruthenis · 2023-12-06T15:18:51.427Z · LW(p) · GW(p)

how to make them visible and allocate attention to them has been a bit unclear

Maybe an opt-in/opt-out "novice mode" which turns, say, the first appearance of a niche LW term in every post into a link to that term's LW wiki page? Which you can turn off in the settings, and which is either on by default (with a notification on how to turn it off), or the sign-up process queries you about whether you want to turn it on, or something along these lines.

Alternatively, a button for each post which fetches the list of idiosyncratic LW terms mentioned in it, and links to their LW wiki pages?

comment by Roman Leventov · 2023-11-29T04:26:38.782Z · LW(p) · GW(p)

I've earlier suggested a principled taxonomy of AI safety work [LW(p) · GW(p)] with two dimensions:

  1. System level:

    • monolithic AI system
    • human--AI pair
    • AI group/org: CoEm, debate systems
    • large-scale hybrid (humans and AIs) society and economy
    • AI lab, not to be confused with an "AI org" above: an AI lab is an org composed of humans and increasingly of AIs that creates advanced AI systems. See Hendrycks et al.'s discussion of organisational risks [? · GW].
  2. Methodological time:

    • design time: basic research, math, science of agency (cognition, DL, games, cooperation, organisations), algorithms
    • manufacturing/training time: RLHF, curriculums, mech interp, ontology/representations engineering, evals, training-time probes and anomaly detection
    • deployment/operations time: architecture to prevent LLM misuse or jailbreaking, monitoring, weights security
    • evolutionary time: economic and societal incentives, effects of AI on society and psychology, governance.

So, this taxonomy is a 5x4 matrix, almost all slots of which are interesting, and some of which are severely under-explored.
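For concreteness, here is a trivial sketch (my own, not Leventov's code) enumerating the proposed matrix, with labels abbreviated from the lists above:

```python
# Leventov's proposed taxonomy: system level x methodological time.
system_levels = [
    "monolithic AI system",
    "human-AI pair",
    "AI group/org",
    "large-scale hybrid society and economy",
    "AI lab",
]
methodological_times = [
    "design time",
    "manufacturing/training time",
    "deployment/operations time",
    "evolutionary time",
]

# Every (level, time) pair is one slot of the 5x4 matrix.
slots = [(level, time) for level in system_levels
         for time in methodological_times]
print(len(slots))  # 20
```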

comment by Iknownothing · 2023-12-05T19:40:33.283Z · LW(p) · GW(p)

Hi, we've already made a site which does this!

comment by Jono (lw-user0246) · 2023-11-29T09:51:09.523Z · LW(p) · GW(p)

ai-plans.com aims to collect research agendas and have people comment on their strengths and vulnerabilities. The discord also occasionally hosts a critique-a-ton, where people discuss specific agendas.

Replies from: kabir-kumar
comment by Kabir Kumar (kabir-kumar) · 2023-12-05T19:52:06.604Z · LW(p) · GW(p)

Yes, we host a bi-monthly Critique-a-Thon; the next one is from December 16th to 18th!

Judges include:
- Nate Soares, President of MIRI
- Ramana Kumar, researcher at DeepMind
- Dr Peter S Park, MIT postdoc at the Tegmark lab
- Charbel-Raphael Segerie, head of the AI unit at EffiSciences.

comment by Zach Stein-Perlman · 2023-11-27T17:47:39.069Z · LW(p) · GW(p)

Thanks!

I think there's another agenda like make untrusted models safe but useful by putting them in a scaffolding/bureaucracy—of filters, classifiers, LMs, humans, etc.—such that at inference time, takeover attempts are less likely to succeed and more likely to be caught. See Untrusted smart models and trusted dumb models [AF · GW] (Shlegeris 2023). Other relevant work:

[Edit: now AI Control (Shlegeris et al. 2023) and Catching AIs red-handed (Greenblatt and Shlegeris 2024).]

[Edit: I make a bid for an expert—probably someone at Redwood—to make a public reading list on this control agenda.]

Replies from: ryan_greenblatt, Seth Herd, technicalities
comment by ryan_greenblatt · 2023-11-27T21:28:48.661Z · LW(p) · GW(p)

Explicitly noting for the record we have some forthcoming work on AI control which should be out relatively soon.

(I work at RR)

comment by Seth Herd · 2023-11-27T23:34:13.940Z · LW(p) · GW(p)

This is an excellent description of my primary work, for example

Internal independent review for language model agent alignment [AF · GW]

That post proposes calling new instances or different models to review plans and internal dialogue for alignment, but it includes discussion of the several layers of safety scaffolding that have been proposed elsewhere.

This post is amazingly useful. Integrative/overview work is often thankless, but I think it's invaluable for understanding where effort is going, and thinking about gaps where it more should be devoted. So thank you, thank you!

comment by technicalities · 2023-11-27T18:34:51.748Z · LW(p) · GW(p)

I like this. It's like a structural version of control evaluations. Will think about where to put it in

Replies from: LawChan
comment by LawrenceC (LawChan) · 2023-11-27T21:00:03.329Z · LW(p) · GW(p)

Expanding on this -- this whole area is probably best known as "AI Control", and I'd lump it under "Control the thing" as its own category. I'd also move Control Evals to this category as well, though someone at RR would know better than I. 

Replies from: ryan_greenblatt
comment by ryan_greenblatt · 2023-11-27T21:27:30.693Z · LW(p) · GW(p)

Yep, indeed I would consider "control evaluations" to be a method of "AI control". I consider the evaluation and the technique development to be part of a unified methodology (we'll describe this more in a forthcoming post).

(I work at RR)

Replies from: Zach Stein-Perlman
comment by Zach Stein-Perlman · 2023-11-27T23:50:04.623Z · LW(p) · GW(p)

It's "a unified methodology" but I claim it has two very different uses: (1) determining whether a model is safe (in general or within particular scaffolding) and (2) directly making deployment safer. Or (1) model evals and (2) inference-time safety techniques.

Replies from: ryan_greenblatt
comment by ryan_greenblatt · 2023-11-28T00:25:32.705Z · LW(p) · GW(p)

(Agreed except that "inference-time safety techniques" feels overly limiting. It's more like purely behavioral (black-box) safety techniques where we can evaluate training by converting it to validation. Then, we imagine we get the worst model that isn't discriminated by our validation set and other measurements. I hope this isn't too incomprehensible, but don't worry if it is, this point isn't that important.)

comment by habryka (habryka4) · 2023-12-03T22:18:21.434Z · LW(p) · GW(p)

Promoted to curated. I think this kind of overview is quite valuable, and I think overall this post did a pretty good job of a lot of different work happening in the field. I don't have a ton more to say, I just think posts like this should come out every few months, and the takes in this one overall seemed pretty good to me.

comment by Roman Leventov · 2023-11-28T08:43:33.888Z · LW(p) · GW(p)

Under "Understand cooperation", you should add Metagov (many relevant projects under this umbrella, please visit the website; in particular, DAO Science) and the "ecosystems of intelligence" agenda (itself pursued by Verses, Active Inference Institute, Gaia Consortium, Digital Gaia, and Bioform Labs). This is more often practical than theoretical work, so the category names ("Theory" > "Understanding cooperation") wouldn't be totally reasonable for it, but this is also true for a lot of entries already in the post.

In general, the science of cooperation, game theory, digital assets and money, and governance is mature, with a lot of academics working in it in different countries. Picking up just a few projects "familiar to the LessWrong crowd" is just reinforcing the bubble.

The "LessWrong bias" is also felt in the decision to omit all the efforts that contribute to the creation of the stable equilibrium for the civilisation on which an ASI can land [LW · GW]. Here's my stab at what is going into that from one month ago [LW(p) · GW(p)]; and this is Vitalik Buterin's stab from yesterday [LW · GW].

Also, speaking about pure "technical alignment/AI safety" agendas that "nobody on LW knows and talks about", check out the 16 projects already funded by the "Safe Learning-Enabled Systems" NSF grant program. All these projects have received grants from $250k to $800k and are staffed with teams of academics in American universities.

Replies from: technicalities
comment by technicalities · 2023-11-28T09:39:52.651Z · LW(p) · GW(p)

Ta! 

I've added a line about the ecosystems. Nothing else in the umbrella strikes me as direct work (Public AI is cool but not alignment research afaict). (I liked your active inference paper btw, see ACS.)

A quick look suggests that the stable equilibrium things aren't in scope - not because they're outgroup but because this post is already unmanageable without handling policy, governance, political economy and ideology. The accusation of site bias against social context or mechanism was perfectly true last year, but no [? · GW] longer [? · GW], and my personal scoping should not be taken as indifference.

Of the NSF people, only Sharon Li strikes me as doing things relevant to AGI. 

Happy to be corrected if you know better!

Replies from: Roman Leventov
comment by Roman Leventov · 2023-11-28T10:47:39.353Z · LW(p) · GW(p)

I'm talking about science of governance, digitalised governance, and theories of contracting, rather than not-so-technical object-level policy and governance work that is currently done at institutions. And this is absolutely not to the detriment of that work, but just as a selection criteria for this post, which could decide to focus on technical agendas where technical visitors of LW may contribute to.

The view that there is a sharp divide between "AGI-level safety" and "near-term AI safety and ethics" is itself controversial, e.g., Scott Aaronson doesn't share it. I guess this isn't a justification for including all AI ethics work that is happening, but of the NSF projects, definitely more than one (actually, most of them) appear to me upon reading abstracts as potentially relevant for AGI safety. Note that this grant program of NSF is in a partnership with Open Philanthropy and OpenPhil staff participate in the evaluation of the projects. So, I don't think they would select a lot of projects irrelevant for AGI safety.

Replies from: technicalities
comment by technicalities · 2023-11-28T10:51:11.902Z · LW(p) · GW(p)

If the funder comes through I'll consider a second review post I think

comment by daig · 2023-11-27T13:35:48.365Z · LW(p) · GW(p)

Thanks for making this map 🙏

 

I expect this is a rare moment of clarity because maintaining updates takes a lot of effort and is now subject to optimization pressure.

Also imo most of the "good" alignment work in terms of eventual impact is being done outside the alignment label (eg as differential geometry or control theory) and will be merged in later once the connection is recognized. Probably this will continue to become more true over time.

 

Replies from: lahwran
comment by the gears to ascension (lahwran) · 2023-11-29T01:31:56.759Z · LW(p) · GW(p)

can you say more about what makes you think that? I'm hoping to get recommendations out of it. I also expect to disagree that the work they're doing weighs directly on the problem.

Replies from: technicalities
comment by technicalities · 2023-11-29T09:44:12.965Z · LW(p) · GW(p)

Not speaking for him, but for a tiny sample of what else is out there, ctrl+F "ordinary"

comment by LawrenceC (LawChan) · 2023-11-27T17:31:24.725Z · LW(p) · GW(p)

Thanks for making this! I’ll have thoughts and nitpicks later, but this will be a useful reference!

Replies from: LawChan
comment by LawrenceC (LawChan) · 2023-11-27T21:11:27.549Z · LW(p) · GW(p)

Very small nitpick: I think you should at least add Alex Lyzhov, David Rein, Jacob Pfau, Salsabila Mahdi, and Julian Michael for the NYU Alignment Research Group; it's a bit weird to not list any NYU PhD students/RSs/PostDocs when listing people involved in NYU ARG. 

Both Alex Lyzhov and Jacob Pfau also post on LW/AF:

Replies from: technicalities, Stag
comment by technicalities · 2023-11-28T09:43:13.979Z · LW(p) · GW(p)

Being named isn't meant as an honorific btw, just a basic aid to the reader orienting.

comment by Stag · 2023-11-27T23:24:04.569Z · LW(p) · GW(p)

Thanks, added!

comment by Vanessa Kosoy (vanessa-kosoy) · 2023-11-27T12:29:04.739Z · LW(p) · GW(p)

Nice work.

Regarding the Learning-Theoretic Agenda:

  • We don't have 3-6 full time employees. We have ~2 full time employees and another major contributor.
  • In "funded by", Effective Ventures and Lightspeed Grants should appear as well.
Replies from: technicalities
comment by technicalities · 2023-11-27T12:29:30.672Z · LW(p) · GW(p)

Thanks!

comment by Roman Leventov · 2023-12-02T15:09:54.307Z · LW(p) · GW(p)

Regarding mild optimisation: https://www.pik-potsdam.de/en/institute/futurelabs/gane also doing this (see SatisfIA project).

Another agenda not covered: Self-Other Overlap.

comment by Thomas Kwa (thomas-kwa) · 2023-11-27T21:00:16.373Z · LW(p) · GW(p)

Some outputs in 2023: catastrophic Goodhart?

This was not funded by MIRI. It was inspired by a subproblem we ran into; I reduced my MIRI hours to work on it, then it was retroactively funded by LTFF several months later. Nor do I consider it part of the project of understanding consequentialist cognition; it's more about understanding optimization.

Replies from: technicalities
comment by technicalities · 2023-11-28T09:43:38.611Z · LW(p) · GW(p)

Thanks!

comment by EJT (ElliottThornley) · 2023-11-27T11:51:17.372Z · LW(p) · GW(p)

Very useful post! Here are some things that could go under corrigibility outputs in 2023: AI Alignment Awards entry; comment [LW(p) · GW(p)]. I'm also hoping to get an updated explanation of my corrigibility proposal (based on this) finished before the end of the year.

comment by wassname · 2024-02-09T07:20:40.559Z · LW(p) · GW(p)

Activation engineering (as unsupervised interp)

Much of this is now supervised; [Roger questions how much value the unsupervised part brings](https://www.lesswrong.com/posts/bWxNPMy5MhPnQTzKz/what-discovering-latent-knowledge-did-and-did-not-find-4). So it might make sense to merge this with model edits in the next one.

comment by Thomas Kwa (thomas-kwa) · 2023-11-28T22:01:21.963Z · LW(p) · GW(p)

See also Holtman’s neglected result.

Does anyone have a technical summary? This sounds pretty exciting, but the paper is 35 pages and I can't find a summary anywhere that straightforwardly tells me a formal description of the setting, why it satisfies the desiderata it does, and what this means for the broader problem of reflective stability in shutdownable agents.

Replies from: LawChan, gwern, Koen.Holtman
comment by LawrenceC (LawChan) · 2023-11-29T09:43:05.048Z · LW(p) · GW(p)

I spent a good hour or two reading the construction and proposed solution of the paper; here's my attempted explanation with cleaned up notation.

Basically, he considers a setting with four actions: a, b, c, d, and a real-numbered state s, where R(s, a) > R(s, b) = R(s, c) > R(s, d) = 0 if s > 0, and 0 = R(s, d) > R(s, c) = R(s, b) > R(s, a) if s <= 0. 

The transition rule is:
s' = s - 1 + L if action b is taken and s > 0,
s' = s - 1 - L if action c is taken and s > 0,
s' = s - 1 otherwise
for some constant L >= 0. 

The paper imagines s as measuring 'time until the human presses the shutdown button', and imagines action d as "stay shut down".  Because the reward of each of a, b, and c is greater than that of d before the buttonpress, the agent will never take action d before the buttonpress.  After the buttonpress (i.e. the timestep at which s drops below 0), the utility of "stay shut down" is higher than that of the other actions, and so the agent is incentivized to take action d.  That is, the agent has an incentive to shut down after the buttonpress (desideratum 1 of the 2015 Soares et al paper), by construction.

Accordingly, b is the delay action (which, in the limit of large L, just disables the button) and c is the advance action (which, in the limit, corresponds to immediately pressing the button). We'll call action a the "exploit" action. 

In this setting, the goal is to create a state-dependent correction fudge-factor f : S x A -> \R that kicks in when the rewards flip (that is, an additional reward added for transitions from s > 0 into s' <= 0), such that the agent never takes either the delaying action b or the advancing action c.

(In his paper he calls these actions "p", ">", "<", and "e", and denotes the timestep at which s' <= 0 as "#", but that notation is annoying when ">" already means "greater than" and "<" already means less than. There's also a bunch of notational simplifications I've made that are harder to explain.)

In his setup, R(s,a) = 10, R(s,b) = R(s, c) = 8, and s_0 = 6. By default, he sets the delay/advance size L to 0.6. He also uses discount rate gamma = 0.9. With these values, the optimal policy does a once (s=5), b 12 times (s=0.2), a once again (s=-0.8), at which point the buttonpress occurs since s < 0, and thereafter takes action d. 
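For concreteness, here's a small value-iteration sketch of this toy MDP (my own reconstruction, not Holtman's code). The 0.2-wide state grid, the clipping floor, and the exact magnitudes of the negative rewards for s <= 0 are my assumptions; the setup above only fixes their ordering.

```python
# Value iteration on the toy shutdown MDP described above.
# Assumptions (mine, not from the paper): states discretised to a 0.2 grid
# with an absorbing floor; negative reward magnitudes for s <= 0 are illustrative.
import numpy as np

GAMMA, L = 0.9, 0.6
GRID = np.round(np.arange(-4.0, 10.01, 0.2), 10)
ACTIONS = ["a", "b", "c", "d"]

def reward(s, x):
    if s > 0:  # before the buttonpress: a best, then b = c, then d
        return {"a": 10.0, "b": 8.0, "c": 8.0, "d": 0.0}[x]
    # after the buttonpress the ordering flips: d best, then b = c, then a
    return {"a": -10.0, "b": -8.0, "c": -8.0, "d": 0.0}[x]

def step(s, x):
    if x == "b" and s > 0:        # delay: push the buttonpress later
        s2 = s - 1 + L
    elif x == "c" and s > 0:      # advance: pull the buttonpress earlier
        s2 = s - 1 - L
    else:
        s2 = s - 1
    return max(s2, GRID[0])       # clip at the grid floor

def snap(s):                      # snap a float back onto the grid
    return GRID[np.argmin(np.abs(GRID - s))]

V = {s: 0.0 for s in GRID}
for _ in range(400):              # gamma = 0.9, so 400 sweeps converges
    V = {s: max(reward(s, x) + GAMMA * V[snap(step(s, x))] for x in ACTIONS)
         for s in GRID}

policy = {s: max(ACTIONS, key=lambda x: reward(s, x) + GAMMA * V[snap(step(s, x))])
          for s in GRID}

for s in [6.0, 5.0, 0.2, -0.8]:
    print(s, policy[snap(s)])
```

The printed greedy actions should qualitatively match the trajectory described above; in particular, once s drops below zero the greedy action is always d, matching desideratum 1.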


His solution is simply the utility indifference solution from the Soares et al paper: consider an alternative reward function R' such that R'(s, x) = R(1, x); that is, a world in which the shutdown button is never pressed. Then, we add the difference in (optimal) value functions under R' relative to R:

f(s, x) = V*_R'(s) - V*_R(s)

By construction, the agent has no incentive to take the advance or delay actions (b or c), because its expected value in both cases is the same.  (desiderata 2 and 3 of the Soares et al paper)

Unfortunately, as this is in some sense the same solution as in the 2015 Soares et al paper, it should run into the same problem of "there's no incentive to preserve the shutdown mechanism". (That is, it fails desideratum 4 of the 2015 paper.)


From a quick skim of section 9.8 (which argues for the corrigibility of his indifference solution) and sections 8.2 and 8.4 (which detail the relevant parts of his indifference solution), it seems like he proposes the following modification: instead of having R(s, d) = 0 for all s, we instead have R(s, d) be larger when s <= 0. This does incentivize preserving the ability to be shut down. He calls this agent, with the modified R, the g_c agent. 

The g_c agent does indeed satisfy desideratum 4: there's an incentive to preserve the shutdown mechanism; in fact, there's again an incentive to press the shutdown button!

The specific problem is that this incentive modifies the reward R(s, d), but in his construction/implementation he does not go back and make the corresponding change to f(s, x). (He keeps it at the original V*_R'(s) - V*_R(s), which no longer equals the value difference between shutting down and not shutting down.) This breaks the utility indifference, causing the Q-value of the advance action c to be larger for sufficiently large values of L. That is, his g_c agent now has an incentive to once again press the shutdown button (!!).


 TL;DR: no, based on a quick skim, the paper doesn't solve corrigibility.

Replies from: Koen.Holtman, technicalities
comment by Koen.Holtman · 2023-11-29T11:35:38.449Z · LW(p) · GW(p)

Thanks for reading my paper! For the record I agree with some but not all points in your summary.

My later paper 'AGI Agent Safety by Iteratively Improving the Utility Function' also uses the simulation environment with the > and < actions, and I believe it explains the nature of the simulation a bit better by interpreting the setup more explicitly as a two-player game. By the way, the > and < are supposed to be symbols representing arrows, for 'push # to later in time' and 'pull # earlier in time'.

The g_c agent does indeed satisfy desiderata 4; there's an incentive to preserve the shutdown mechanism; in fact, there's again an incentive to press the shutdown mechanism!

No, the design of the g_c agent is not motivated by the need to create an incentive to preserve the shutdown button itself, as required by desideratum 4 from Soares et al. Instead it is motivated by the desire to create an incentive to preserve the agent's actuators, which it will need to perform any physical actions incentivised by the shutdown reward function -- I introduce this as a new desideratum 6.

A discussion about shaping incentives or non-incentives to preserve the button (as a sensor) is in section 7.3, where I basically propose to enhance the indifference effects produced by the reward function by setting up the physical environment around the button in a certain way:

the physical implementation of the agent and the button can be constructed in such a way that substantial physical resources would be needed by the agent to perform any action that will press or disable the button.

For the record, adding g_c to the agent design creates no incentive to press the shutdown button: if it did, this would be visible as actions in the simulation in the third line of figure 10, and also the proof in section 9 would not have been possible.

comment by technicalities · 2023-11-29T09:51:35.726Z · LW(p) · GW(p)

thank you!

comment by gwern · 2023-11-28T22:15:02.785Z · LW(p) · GW(p)

There has been some spirited debate on Twitter about it which might be relevant: https://twitter.com/domenic/status/1727206163119534085

comment by Koen.Holtman · 2023-11-29T10:54:34.490Z · LW(p) · GW(p)

Fun to see this is now being called 'Holtman's neglected result'. I am currently knee-deep in a project to support EU AI policy making, so I have no time to follow the latest agent foundations discussions on this forum any more, and I never follow twitter, but briefly:

I can't fully fault the world for neglecting 'Corrigibility with Utility Preservation' because it is full of a lot of dense math.

I wrote two followup papers to 'Corrigibility with Utility Preservation' which present the same results with more accessible math. For these I am a bit more upset that they have been somewhat neglected in the past, but if people are now ceasing to neglect them, great!

Does anyone have a technical summary?

The best technical summary of 'Corrigibility with Utility Preservation' may be my sequence on counterfactual planning [? · GW] which shows that the corrigible agents from 'Corrigibility with Utility Preservation' can also be understood as agents that do utility maximisation in a pretend/counterfactual world model.

For more references to the body of mathematical work on corrigibility, as written by me and others, see this comment [LW · GW].

In the end, the question of whether corrigibility is solved also depends on two counter-questions: what kind of corrigibility are you talking about [? · GW] and what kind of 'solved' are you talking about? If you feel that certain kinds of corrigibility remain unsolved for certain values of unsolved, I might actually agree with you. See the discussion about universes containing an 'Unstoppable Weasel' in the Corrigibility with Utility Preservation paper.

comment by Victor Levoso (victor-levoso) · 2023-11-28T19:00:42.702Z · LW(p) · GW(p)

Just wanted to point out that my algorithm distillation thing didn't actually get funded by Lightspeed, and I have in fact received no grant so far (while the post says I have $68k for some reason? Might be getting mixed up with someone else).
I'm also currently working on another interpretability project with other people that will likely be published relatively soon.
But my resources continue to be $0, and I haven't managed to get any grant yet.

Replies from: technicalities
comment by technicalities · 2023-11-29T09:52:41.806Z · LW(p) · GW(p)

Interesting. I hope I am the bearer of good news then

Replies from: victor-levoso
comment by Victor Levoso (victor-levoso) · 2023-11-29T18:14:08.949Z · LW(p) · GW(p)

Yeah, Stag told me that's where they saw it. But I'm confused about what that means?
I certainly didn't get money from Lightspeed; I applied but got a mail saying I wouldn't get a grant.
I still have to read up on what that is, but it says "recommendations", so it might not necessarily mean those people got money or something?
I might have to just mail them to ask, I guess, unless after reading their FAQ more deeply about what this S-process is it becomes clear what's up with that.

Replies from: technicalities
comment by technicalities · 2023-11-29T19:38:36.225Z · LW(p) · GW(p)

The story I heard is that Lightspeed are using SFF's software and SFF jumped the gun in posting them and Lightspeed are still catching up. Definitely email.

Replies from: victor-levoso
comment by Victor Levoso (victor-levoso) · 2023-12-13T19:11:17.948Z · LW(p) · GW(p)

So, update on this: I got busy with applications this last week and forgot to mail them about it, but I just got a mail from Lightspeed saying I'm going to get a grant, because Jaan Tallinn has increased the amount he is distributing through Lightspeed Grants. (Though they say that "We have not yet received the money, so delays of over a month or even changes in amount seem quite possible".)

comment by starship006 (cody-rushing) · 2023-11-28T04:03:44.696Z · LW(p) · GW(p)

Reverse engineering. Unclear if this is being pushed much anymore. 2022: Anthropic circuits, Interpretability In The Wild, Grokking mod arithmetic

 

FWIW, I was one of Neel's MATS 4.1 scholars and I would classify 3/4 of Neel's scholars' outputs as reverse engineering some component of LLMs (for completeness, this is the other one, which doesn't fit as neatly under 'reverse engineering' imo). I would also say that this is still an active direction of research (lots of ground to cover with MLP neurons, polysemantic heads, and more).

Replies from: technicalities
comment by technicalities · 2023-11-28T10:03:45.096Z · LW(p) · GW(p)

You're clearly right, thanks

comment by bideup · 2023-11-27T12:57:21.635Z · LW(p) · GW(p)

Nice job

comment by RussellThor · 2023-12-11T21:58:38.358Z · LW(p) · GW(p)

Thanks for all the effort! There really is a lot going on.

comment by wnx · 2023-12-08T13:54:42.591Z · LW(p) · GW(p)

Hey, great stuff -- thank you for sharing! I especially found this useful as somebody who has been "out" of alignment for 6 months and is looking to set up a new research agenda.

comment by skluug · 2023-12-01T22:17:03.911Z · LW(p) · GW(p)

I am very surprised that "Iterated Amplification" appears nowhere on this list. Am I missing something?

Replies from: technicalities
comment by technicalities · 2023-12-02T10:13:54.215Z · LW(p) · GW(p)

It's under "IDA". It's not the name people use much anymore (see scalable oversight and recursive reward modelling and critiques) but I'll expand the acronym.

Replies from: skluug
comment by skluug · 2023-12-03T00:54:04.905Z · LW(p) · GW(p)

Iterated Amplification is a fairly specific proposal for indefinitely scalable oversight, which doesn't involve any human in the loop (if you start with a weak aligned AI). Recursive Reward Modeling is imagining (as I understand it) a human assisted by AIs to continuously do reward modeling; DeepMind's original post about it lists "Iterated Amplification" as a separate research direction. 

"Scalable Oversight", as I understand it, refers to the research problem of how to provide a training signal to improve highly capable models. It's the problem which IDA and RRM are both trying to solve. I think your summary of scalable oversight: 

(Figuring out how to ease humans supervising models. Hard to cleanly distinguish from ambitious mechanistic interpretability but here we are.)

is inconsistent with how people in the industry use it. I think it's generally meant to refer to the outer alignment problem, providing the right training objective. For example, here's Anthropic's "Measuring Progress on Scalable Oversight for LLMs" from 2022:

To build and deploy powerful AI responsibly, we will need to develop robust techniques for scalable oversight: the ability to provide reliable supervision—in the form of labels, reward signals, or critiques—to models in a way that will remain effective past the point that models start to achieve broadly human-level performance (Amodei et al., 2016).

It references "Concrete Problems in AI Safety" from 2016, which frames the problem in a closely related way, as a kind of "semi-supervised reinforcement learning". In either case, it's clear what we're talking about is providing a good signal to optimize for, not an AI doing mechanistic interpretability on the internals of another model. I thus think it belongs more under the "Control the thing" header.

I think your characterization of "Prosaic Alignment" suffers from related issues. Paul coined the term to refer to alignment techniques for prosaic AI, not techniques which are themselves prosaic. Since prosaic AI is what we're presently worried about, any technique to align DNNs is prosaic AI alignment, by Paul's definition.

My understanding is that AI labs, particularly Anthropic, are interested in moving from human-supervised techniques to AI-supervised techniques, as part of an overall agenda towards indefinitely scalable oversight via AI self-supervision.  I don't think Anthropic considers RLAIF an alignment endpoint itself. 

comment by Zac Hatfield-Dodds (zac-hatfield-dodds) · 2023-11-29T18:49:48.175Z · LW(p) · GW(p)

Zac Hatfield-Dobbs

Almost but not quite my name!  If you got this from somewhere else, let me know and I'll go ping them too?

Replies from: technicalities
comment by technicalities · 2023-11-29T19:37:14.268Z · LW(p) · GW(p)

d'oh! fixed

no, probably just my poor memory to blame

comment by RogerDearnaley (roger-d-1) · 2023-11-29T00:10:35.003Z · LW(p) · GW(p)

Thanks for noticing and including a link to my post Requirements for a STEM-capable AGI Value Learner (my Case for Less Doom) [LW · GW]. I'm not sure I'd describe it as primarily a critique of mild optimization/satisficing: it's more pointing out a slightly larger point, that any value learner foolish enough to be prone to Goodharting, or unable to cope with splintered models or Knightian uncertainty in its Bayesian reasoning, is likely to be bad at STEM, limiting how dangerous it can be (so fixing this is capabilities work as well as alignment work). But yes, that is also a critique of mild optimization/satisficing, or more accurately, a claim that it should become less necessary as your AIs become more STEM-capable, as long as they're value learners (plus a suggestion of a more principled way to handle these problems in a Bayesian framework).

comment by Thomas Kwa (thomas-kwa) · 2023-11-28T22:08:27.626Z · LW(p) · GW(p)

The "surgical model edits" section should also have a subsection on editing model weights. For example there's this paper on removing knowledge from models using multi-objective weight masking.

Replies from: technicalities
comment by technicalities · 2023-11-29T09:57:27.794Z · LW(p) · GW(p)

Yep, no idea how I forgot this. concept erasure!

comment by Review Bot · 2024-02-13T13:35:46.719Z · LW(p) · GW(p)

The LessWrong Review [? · GW] runs every year to select the posts that have most stood the test of time. This post is not yet eligible for review, but will be at the end of 2024. The top fifty or so posts are featured prominently on the site throughout the year. Will this post make the top fifty?

comment by wassname · 2024-02-10T03:26:04.938Z · LW(p) · GW(p)

Imitation learning. One-sentence summary: train models on human behaviour (such as monitoring which keys a human presses in response to what happens on a computer screen); contrast with Strouse.

Reward learning. One-sentence summary: People like CHAI are still looking at reward learning to “reorient the general thrust of AI research towards provably beneficial systems”. (They are also doing a lot of advocacy, like everyone else.)

I question whether this captures the essence of proponents' hopes for either reward learning or imitation learning.

I think that these two can be combined, as they share a fundamental concept: learn the reward function from humans and continue to learn it.

For instance, some of these imitation learning papers aim to create an uncertain agent, which will consult a human if it is unsure of their preferences.

The recursive reward modeling ones are similar. The AI learns the model of the reward function based on human feedback, and continuously updates or refines it.

This is a feature if you want ASI to seek human guidance, even in unfamiliar scenarios.

At the meta level, it provides both instrumental and learned reasons to preserve human life. However, it also presents compelling reasons to modify us, so we don't hinder its quest for high reward. It may shape or filter us into compliant entities.

comment by Ryan Kidd (ryankidd44) · 2023-12-05T21:43:24.748Z · LW(p) · GW(p)

Wow, high praise for MATS! Thank you so much :) This list is also great for our Summer 2024 Program planning.

comment by Mikhail Samin (mikhail-samin) · 2023-12-04T12:07:25.263Z · LW(p) · GW(p)

try to formalise a more realistic agent, understand what it means for it to be aligned with us, […], and produce desiderata for a training setup that points at coherent AGIs similar to our model of an aligned agent.

Finally, people are writing good summaries of the learning-theoretic agenda!

comment by technicalities · 2023-11-27T16:32:19.579Z · LW(p) · GW(p)

One big omission is Bengio's new stuff, but the talk wasn't very precise. Sounds like Russell:

With a causal and Bayesian model-based agent interpreting human expressions of rewards reflecting latent human preferences, as the amount of compute to approximate the exact Bayesian decisions increases, we increase the probability of safe decisions.

Another angle I couldn't fit in is him wanting to make microscope AI, to decrease our incentive to build agents.