Open Philanthropy Technical AI Safety RFP - $40M Available Across 21 Research Areas

post by jake_mendel, maxnadeau, and Peter Favaloro · 2025-02-06

This is a link post for https://www.openphilanthropy.org/request-for-proposals-technical-ai-safety-research/


Open Philanthropy is launching a new Request for Proposals (RFP) for technical AI safety research. We plan to fund roughly $40M in grants over the next five months, with substantially more funding available depending on application quality.

Applications (here) start with a simple 300-word expression of interest (EOI) and are open until April 15, 2025.

Overview

We're seeking proposals across 21 different research areas, organized into five broad categories:

  1. Adversarial Machine Learning
    • *Jailbreaks and unintentional misalignment
    • *Control evaluations
    • *Backdoors and other alignment stress tests
    • *Alternatives to adversarial training
    • Robust unlearning
  2. Exploring sophisticated misbehavior of LLMs
    • *Experiments on alignment faking
    • *Encoded reasoning in CoT and inter-model communication
    • Black-box LLM psychology
    • Evaluating whether models can hide dangerous behaviors
    • Reward hacking of human oversight
  3. Model transparency
    • Applications of white-box techniques
    • Activation monitoring
    • Finding feature representations
    • Toy models for interpretability
    • Externalizing reasoning
    • Interpretability benchmarks
    • More transparent architectures
  4. Trust from first principles
    • White-box estimation of rare misbehavior
    • Theoretical study of inductive biases
  5. Alternative approaches to mitigating AI risks
    • Conceptual clarity about risks from powerful AI
    • New moonshots for aligning superintelligence

We're willing to make a range of types of grants. The full RFP provides much more detail on each research area, including eligibility criteria, example projects, and nice-to-haves. You can find it here.

We want the bar for submitting an expression of interest to be low: even if you're unsure whether your project fits perfectly, we encourage you to submit an EOI anyway. This RFP is partly an experiment to gauge the demand for funding in AI safety research.

Please email aisafety@openphilanthropy.org with questions, or just submit an EOI.
