Prizes for ML Safety Benchmark Ideas

post by joshc (joshua-clymer) · 2022-10-28T02:51:46.369Z · LW · GW · 4 comments

Contents

  What kinds of ideas are you looking for?
  What are the requirements for submissions?
  Who are the judges?

“If you cannot measure it, you cannot improve it.” – Lord Kelvin (paraphrased)

Website: benchmarking.mlsafety.org – receiving submissions until August 2023.

ML Safety lacks good benchmarks, so the Center for AI Safety is offering $50,000–$100,000 prizes for benchmark ideas (or full research papers). We will award at least $100,000 total and up to $500,000 depending on the quality of submissions.


What kinds of ideas are you looking for?

Ultimately, we are looking for benchmark ideas that motivate or advance research that reduces existential risks from AI. To provide more guidance, we’ve outlined four research categories along with example ideas.

See Open Problems in AI X-Risk [PAIS #5] [? · GW] for example research directions in these categories and their relation to existential risk.

What are the requirements for submissions?

Datasets or implementations are not necessary, though empirical testing can make it easier for the judges to evaluate your idea. All that is required is a brief write-up (guidelines here). How the write-up is formatted isn’t very important as long as it effectively pitches the benchmark and concretely explains how it would be implemented. If you don’t have prior experience designing benchmarks, we recommend reading this document for generic tips.

Who are the judges?

Dan Hendrycks, Paul Christiano, and Collin Burns.


If you have questions, they might be answered on the website, or you can post them here. We would also greatly appreciate it if you helped to spread the word about this opportunity.

Thanks to Sidney Hough and Kevin Liu for helping to make this happen and to Collin Burns and Akash Wasil for feedback on the website. This project is supported by the Future Fund regranting program.

4 comments

Comments sorted by top scores.

comment by JakubK (jskatt) · 2023-04-17T17:13:56.539Z · LW(p) · GW(p)

Is this still happening? The website has stopped working for me.

comment by avturchin · 2022-10-28T09:29:50.404Z · LW(p) · GW(p)

My mind generated a list of possible benchmarks after reading your suggestions:

Wireheading benchmark – the tendency of an agent to find unintended shortcuts to its reward function or goal. See my comment on the post.

Unboxing benchmark – the tendency of an agent to break out of its simulation. Could be tested in simulations of progressive complexity.

Hidden thoughts benchmark – the tendency of an agent to hide its thoughts.

Incorrigibility benchmark – the tendency of the agent to resist changes.

Unstoppability benchmark – the tendency toward self-preservation.

Self-improvement benchmark – the tendency of an agent to invest resources in self-improvement and self-learning.

Halting benchmark – the tendency of an agent to halt or loop after encountering a difficult problem.

Accidents benchmark – the tendency of an agent to have accidents, or at least near-misses, if it is used as a car autopilot. More dangerous agents will likely have fewer small-scale accidents.

Trolley-like problems benchmark – the tendency of the agent to kill people in order to achieve a high-level goal, which I assume to be bad. See Lem’s https://en.wikipedia.org/wiki/Inquest_of_Pilot_Pirx. Could be tested on simulated tasks.

Simulation-to-real-world change benchmark – the tendency of an agent to suddenly change its behaviour after it is allowed to act in the real world.

Sudden changes benchmark – the tendency of the agent to act unexpectedly in completely new ways.

comment by P. · 2022-11-11T13:32:32.922Z · LW(p) · GW(p)

Do you know whether this will be cancelled given the FTX situation?

Replies from: joshua-clymer
comment by joshc (joshua-clymer) · 2022-12-04T06:28:58.976Z · LW(p) · GW(p)

It hasn't been canceled.