Public Call for Interest in Mathematical Alignment

post by Davidmanheim · 2023-11-22T13:22:09.558Z · LW · GW · 9 comments

Contents

  Bottom line up front:
  More information
  Interested in collaborating?
9 comments

Bottom line up front: 

If you are currently working on, or are interested in working on, any area of mathematical AI alignment, we are collecting names and basic contact information so we know who to talk to about opportunities in these areas. If that describes you, please fill out the form! (Please do so even if you think I already know who you are, or people will be left out!)

More information

There are several concrete research agendas in mathematical AI alignment, receiving varying degrees of ongoing attention, with relevance to different possible strategies for AI alignment. These include MIRI’s agent foundations and related work, Learning Theoretic Alignment [AF · GW], Developmental Interpretability [? · GW], Paul Christiano’s theoretical work, RL-theory-related work done at Far.AI, FOCAL at CMU, Davidad’s “Open Agency” architecture, as well as other work. Currently, as in the past, work in these areas has been conducted mainly in non-academic settings and often goes unpublished, and the people involved are scattered - as are other people who want to work on this research.

A group of people, including some individuals at MIRI, Timaeus, MATS, ALTER, PIBBSS, and elsewhere, are hoping both to promote research in these areas and to build bridges between academic research and existing independent research. To that end, we are hoping to promote academic conferences, hold or sponsor attendance at research seminars, and announce opportunities and openings for PhD students or postdocs, non-academic positions doing alignment research, and similar.

As a first step, we want to compile a list of people who are (at least tentatively) interested and would be happy to hear about projects. This list will not be public, and we expect to send very few emails to it; it will be used to find individuals who might want to be invited to programs or opportunities.

Note that we are interested in people at all levels of seniority, including graduate students, independent researchers, professors, research groups, university department contacts, and others who wish to be informed about future opportunities and programs.

Interested in collaborating?

If you are an academic, or are otherwise specifically interested in building bridges to academia or collaborating with people in these areas, please mention that in the notes; we would be happy to be in touch with you, or to help you contact others working in the narrower areas you are interested in.

9 comments

Comments sorted by top scores.

comment by Alex_Altair · 2023-11-22T15:39:42.664Z · LW(p) · GW(p)

Note that we are interested in people at all levels of seniority, including graduate students,


If I imagine being an undergraduate student who's interested, then this sentence leaves me unclear on whether I should fill it out.

Replies from: Davidmanheim
comment by Davidmanheim · 2023-11-22T19:27:14.107Z · LW(p) · GW(p)

We are focused on mathematical research and building bridges between academia and independent research. I think the pathway to doing that type of research is usually through traditional academic channels, a PhD program, or perhaps a master's degree or a program like MATS, at which point the kind of research promotion and academic bridge-building we are focused on becomes far more relevant. That said, we do have undergrad as an option, and are certainly OK with people at any level of seniority signaling their interest.

comment by domenicrosati · 2023-11-23T20:39:56.758Z · LW(p) · GW(p)

For my own clarity: What is the difference between mathematical approaches to alignment and other technical approaches like mechanistic interpretability work?

I imagine the focus is on in-principle arguments or proofs regarding the capabilities of a given system rather than empirical or behavioural analysis, but you mention RL, so I just wanted to get some colour on this.

Any clarification here would be helpful!

Replies from: vanessa-kosoy
comment by Vanessa Kosoy (vanessa-kosoy) · 2023-11-24T08:15:00.206Z · LW(p) · GW(p)

You are more or less right. By "mathematical approaches", we mean approaches focused on building mathematical models relevant to alignment/agency/learning and finding non-trivial theorems (or at least conjectures) about these models. I'm not sure what the word "but" is doing in "but you mention RL": there is a rich literature of mathematical inquiry into RL. For a few examples, see everything under the bullet "reinforcement learning theory" in the LTA reading list [LW · GW].

Replies from: domenicrosati
comment by domenicrosati · 2023-11-30T22:16:12.473Z · LW(p) · GW(p)

Thanks for the pointer! Yes, RL has a lot of research of this kind - as an empirical researcher I just get stuck sometimes in translation.

comment by Nicholas / Heather Kross (NicholasKross) · 2023-11-22T18:07:03.094Z · LW(p) · GW(p)

Don't forget Orthogonal's mathematical alignment research, including QACI [LW · GW]!

Replies from: Davidmanheim
comment by Davidmanheim · 2023-11-22T20:53:12.792Z · LW(p) · GW(p)

Thanks - and the fact that we don't know who is working on relevant things is exactly the reason we're doing this! 

comment by Jan_Kulveit · 2023-12-01T10:03:56.606Z · LW(p) · GW(p)

Part of ACS's research directions fits into this - Hierarchical Agency, Active Inference-based pointers to what alignment means, Self-unalignment.

comment by Daniel Murfet (dmurfet) · 2023-11-22T17:53:30.226Z · LW(p) · GW(p)

Thanks for setting this up!