New roles on my team: come build Open Phil's technical AI safety program with me!

post by Ajeya Cotra (ajeya-cotra) · 2023-10-19T16:47:59.701Z · LW · GW · 6 comments


Open Phil announced [EA · GW] two weeks ago that we’re hiring for over 20 roles across our teams working on global catastrophic risk reduction — and we’ll answer questions at our AMA [EA · GW] starting tomorrow. Ahead of that, I wanted to share some information about the roles I’m hiring for on my team (Technical AI Safety). This team is aiming to think through what technical research could most help us understand and reduce AI x-risk, and build thriving fields in high priority research areas by making grants to great projects and research groups.

First of all — since we initially listed roles on Sep 29, we’ve added three new roles in Technical AI Safety that you might not have seen if you only saw the original announcement! In addition to the (Senior) Program Associate role that was there originally, we added an Executive Assistant role last week — and yesterday we added a (Senior) Research Associate role and a role for a Senior Program Associate specializing in a particular subfield of AI safety research (e.g. interpretability, alignment theory, etc.). Check those out if they seem interesting! The Executive Assistant role in particular requires a very different, less technical skill set.

Secondly, before starting to answer AMA questions, I wanted to highlight that our technical AI safety giving is far away from where it should be at equilibrium, there is considerable room to grow, and hiring more people is likely to lead quickly to more and better grants. My estimate is that last year we recommended roughly $25M in grants to technical AI safety,[1] and so far this year I’ve recommended a similar amount. With more capacity for grant evaluation, research, and operations, we think this could pretty readily double or more.

All of our GCR teams (Technical AI Safety led by me, Capacity Building led by Claire Zabel, AI Governance and Policy led by Luke Muehlhauser, and Biosecurity led by Andrew Snyder-Beattie) are heavily capacity constrained right now — especially the teams that do work related to AI, given the recent boom in interest and activity in that area. I think my team currently faces even more severe constraints than other program teams. Compared to other teams, my team:

If you join the technical AI safety team in this round, you could help relieve some severe bottlenecks while building this new iteration of the program area from the ground up. If this sounds exciting to you, I strongly encourage you to apply!
 
  1. ^

    Interestingly, these figures are actually considerably larger than annual technical AI safety giving in the several years before that, even though we had fewer full-time-equivalent staff working in the area in 2022 and 2023 compared to 2015-2021.

  2. ^

    Initially, our program was led by Daniel Dewey. By around 2019, Catherine Olsson had joined the team, and eventually (I think by 2020-2021) it transitioned to being a team of three run by Nick Beckstead, who managed Catherine and Daniel, as well as Asya Bergal at half her time. In 2021, all three of Daniel, Catherine, and Nick left for other roles. For an interim period, there was no single point person: Holden was personally handling bigger grants (e.g. Redwood Research), and Asya was handling smaller grants (e.g. an RFP that Nick originally started and our PhD fellowship). Holden then moved on to direct work [EA · GW] and Asya went full-time on capacity building. I began doing grantmaking in Oct 2022, and quickly ended up full-time handling FTXFF bailout grants [EA · GW]. Since late January 2023 or so, I’ve been presiding over a more normal program area.

6 comments

Comments sorted by top scores.

comment by mesaoptimizer · 2023-10-20T15:36:27.527Z · LW(p) · GW(p)

Here's some (hopefully useful) context on why I (SERI MATS 4.0, independent alignment researcher) feel a sense of helplessness at the idea of applying: I don't expect to actually make a difference by working as part of your team, because I don't expect my model of the alignment problem [which is essentially that of MIRI and John Wentworth] to be shared by you or the OpenPhil leadership.

From your updated timelines post [LW · GW]:

I don’t expect a discontinuous jump in AI systems’ generality or depth of thought from stumbling upon a deep core of intelligence; I’m not totally sure I understand it but I probably don’t expect a sharp left turn [LW · GW].

This is probably our biggest crux. To me it seems pretty clear that "capabilities generalize farther than alignment". The existence of a "deep core of intelligence" also seems very obvious to me, although behaviorally I'm currently uncertain as to whether we could see a discontinuous jump in AI systems' generality.

Overall, I sense that what is being selected for may be close to "be as epistemically accurate in decisions and communications as possible, given our constraints of moving fast," and that makes sense. But I expect this also selects for people who are less comfortable with the sort of non-verbal epistemic reasoning heuristics that seem crucial for not sliding your attention away from noticing that you are confused, and by extension, away from the "hard parts of the problem." I think the former is very useful when dealing with problems in domains where we have a clear idea of the problem (rocket engineering), but probably net negative when dealing with a domain we are still confused about.

Replies from: daniel-kokotajlo
comment by Daniel Kokotajlo (daniel-kokotajlo) · 2023-10-20T17:09:52.987Z · LW(p) · GW(p)

Can you say more about what you mean by "capabilities generalize further than alignment?"

Replies from: mesaoptimizer
comment by mesaoptimizer · 2023-10-20T18:52:06.731Z · LW(p) · GW(p)

Sure. I've made two attempts to point at what I mean: one Yudkowsky-like, and the other Nate-like. I'm hoping the combination will at least convey what I'm pointing at.

Attempt 1

There is a 'ground truth' for capabilities, and that is our universe. Our universe is coherent and Lawful -- 2+2=4, and <the fundamental physics laws governing our universe> hold at every moment, everywhere. Every piece of data given to an optimizer tells the optimizer about these things. You can learn arithmetic from a thousand different examples of data drawn from the real world, none of which need to be explicitly about arithmetic. You can detect the shape of the physics laws constraining our universe in a myriad of ways, none of which would make it obvious to you or me how these laws can be inferred from the data. Combine that with highly focused optimization pressure, and what you get is a system that is incredibly capable.

There are infinitely many paths to the truth of reality, and that is reflected in the data we provide an optimizer. This is not the case for our values. Human values are very complex and arbitrary -- a result of the specific brain architecture we seem to have evolved.

Every data point an optimizer is provided from the real world tells it the same thing about 'capabilities': 2+2 = 4, for example. Even inaccurate datapoints are causally upstream of a coherent universe, and therefore provide the optimizer information about the causes of these inaccurate data points. If an optimizer uses 'proxy correlates' of reality, it shall soon start to converge to understanding the actual structure of reality.

In contrast, it does not seem to be the case that we know how to get an optimizer to converge to understanding the actual goal (even if it is something as simple as "maximize the amount of diamonds in the universe"). All we seem to know how to do is to train proxy correlates into a model. These proxy correlates do not generalize out of distribution, and once an optimizer 'groks' reality, it shall see the ways it can achieve the outcomes it is meant to achieve using paths other than the ones it was shaped to follow by the stupider optimizers that built it.
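The narrow claim above -- that a predictor trained on proxy correlates can look near-perfect in distribution and then fail once the proxy decouples from the true target -- can be illustrated with a toy numerical sketch. This is my own illustrative construction, not part of the original discussion; the "proxy" feature and the distribution shift are made up for the example:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# In-distribution: a proxy feature tracks the true cause almost perfectly.
true_cause = rng.normal(size=n)
proxy = true_cause + 0.01 * rng.normal(size=n)  # tightly correlated proxy
y = true_cause                                  # the actual target

# Fit a predictor that only ever sees the proxy (as training often does).
w = np.linalg.lstsq(proxy[:, None], y, rcond=None)[0]

in_dist_mse = np.mean((proxy * w[0] - y) ** 2)

# Out of distribution: the proxy decouples from the true cause.
true_cause_ood = rng.normal(size=n)
proxy_ood = rng.normal(size=n)                  # no longer tracks the cause

ood_mse = np.mean((proxy_ood * w[0] - true_cause_ood) ** 2)

print(in_dist_mse, ood_mse)  # tiny in distribution, large out of distribution
```

The predictor is "aligned" with the target only as long as the training-time correlation holds; nothing in the training signal distinguished the proxy from the true cause, so the learned behavior carries no guarantee once that correlation breaks.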


Attempt 2

So far, all SOTA AI systems we can see are limited-domain consequentialists (or approximations thereof). None of them are truly general in the sense of being able to chain actions across multiple wildly differing domains (social, programming, cognitive heuristics improvement, maintenance and upgrade of the infrastructure the AI system is running on -- to give a few examples) to achieve whatever outcomes they could be perceived as aiming towards[1]. GPT-4 is a predictor that can be prompted to simulate a consequentialist (such as a human being), but GPT-4 is not capable enough to simulate the cross-domain capabilities of such an agent, at least as far as I know.

When your 'alignment' techniques involve training an AI system to behave in ways you like while the AI system is restricted to these isolated domains, all you are doing is teaching your system decision-making influences that are proxies of the actual values you wish the system had. These decision-making influences will not hold across all the domains an AI might chain its actions through -- and this will especially be true for the domains that enable cross-domain action-chaining in the first place, such as abstract reasoning. Since the specific ontology an AI system uses itself changes the inputs and outputs of its abstract reasoning algorithms, you cannot use external behavioral outputs as evidence with which to shape an AI's reasoning[2].


  1. Note, this does not necessarily mean that we shall see systems with cleanly describable internals that contain a concrete 'outcome' the AI is 'intentionally' trying to achieve! I'm describing what we can infer based on the observed behavior of such an AI system -- it seems far more likely that such systems will not have clean 'outcomes' in mind that they are deliberately aiming towards, even if one can easily imagine evidence of easily detectable convergent instrumental goals (which do not provide us much evidence for whether or not a model is aligned). ↩︎

  2. Which is why, it seems, a lot of people working on AGI alignment are converging on ontology identification as the goal of their research agendas. ↩︎

Replies from: daniel-kokotajlo
comment by Daniel Kokotajlo (daniel-kokotajlo) · 2023-10-21T16:12:33.831Z · LW(p) · GW(p)

Thanks!

I wasn't expecting such a detailed answer, I guess I should have asked a more specific question. This is great though. The thing I was confused about was: "Capabilities generalize further than alignment" makes it sound like capabilities-properties or capabilities-skills (such as accurate beliefs, heuristics for arriving at accurate beliefs, useful convergent instrumental strategies, etc.) will work in a wider range of environments than alignment-properties like honesty, niceness, etc. But I don't think what you've said establishes that. 

But I think what you mean is different -- you mean "If you train an AI using human feedback on diverse tasks, hoping it'll acquire both general-purpose capabilities and also robust alignment properties, what'll happen by default is that it DOES acquire the former but not the latter." (And the reason for this is basically that capabilities properties are more simple/natural/universal/convergent than alignment properties; for alignment properties there are all sorts of other similarly-simple properties that perform just as well in training, but for capabilities properties there generally aren't -- at least not in sufficiently diverse, challenging environments; in simple environments they just 'memorize' or otherwise learn 'simple heuristics that don't generalize'.)

Is this an accurate summary of your view?

Replies from: mesaoptimizer
comment by mesaoptimizer · 2023-10-22T13:27:08.277Z · LW(p) · GW(p)

Yes, as far as I can tell. "Alignment properties" do not seem to me to be convergent or universal in any way.

comment by Nathan Helm-Burger (nathan-helm-burger) · 2023-10-19T19:00:28.275Z · LW(p) · GW(p)

I would definitely apply if I didn't feel that my personal priority should be focusing on my object-level Safety Eval Red Teaming work and pursuing my AI Alignment Research Agenda.