Schmidt Sciences Technical AI Safety RFP on Inference-Time Compute – Deadline: April 30
post by Ryan Gajarawala (ryan-gajarawala) · 2025-03-18
This is a link post for https://www.schmidtsciences.org/ai-safety-science-inferencetimecompute/
Schmidt Sciences has launched an RFP for technical AI safety research in the inference-time compute paradigm.
This paradigm is poorly understood and will require significant new research – not least in AI safety. To advance this research, we seek to fund work on the most critical technical AI safety challenges and opportunities emerging from the inference-time compute paradigm.
We plan to fund teams that can start quickly and deliver significant research outputs within 12-18 months, with budgets of up to $500K.
Applications (here) start with a 500-word description of the research idea and are open until April 30, 2025.
Core question
We are interested in funding the crucial work needed both to understand the implications of this paradigm for model safety and to learn how the inference-time compute paradigm can be used to actively make LLMs safer. (For a detailed discussion of how Schmidt Sciences thinks about safety, see our website and research agenda.)
Our core RFP question: What is the most critical technical AI safety challenge or opportunity that has emerged as a result of the inference-time compute paradigm? How would you address it?
Illustrative examples of project ideas
These examples are not designed to direct or constrain your creative thinking about the hardest safety problems in inference-time compute. Rather, they illustrate problems that might be considered both challenging and worthy of study.
- Example 1: Enduring Problems and New Risks. Issues like adversarial robustness, contamination, and scalable oversight remain prominent and worthwhile areas of safety research, but recent breakthroughs in inference-time compute also raise new issues that warrant rigorous investigation, such as chain-of-thought faithfulness, new forms of reward gaming, and safe exploration. We encourage applications that investigate longstanding risks, emerging challenges in this paradigm, or both.
- Example 2: Understanding Safety and Designing Safely. Research can focus on scientifically understanding the risks and implications of inference-time compute for model safety (e.g., evaluations science), or on making models safer through intentional design changes or by harnessing inference-time compute as an active safety tool. All of these directions are valuable for improving AI safety.
We encourage applications for research that:
- discover novel failure modes emerging from inference-time compute
- replicate recently surfaced problems to establish their validity
- design robust evaluations that quantify the associated risks
- construct targeted interventions that actively enhance model safety
Projects should aim to produce tangible research outcomes that advance the scientific understanding of inference-time compute safety—such as theoretical analyses, rigorously validated evaluation designs, mitigation strategies, functional prototype implementations, or reproducible experimental results.
The RFP
The full RFP provides more information about the valid scope of projects, application process, eligibility, project duration & resources, and more. You can find it here.
If you’re unsure about whether your idea is in scope, sign up for office hours or email us.
Email aisafety@schmidtsciences.org with questions!