Posts

$250K in Prizes: SafeBench Competition Announcement 2024-04-03T22:07:41.171Z
AI Safety Newsletter #4: AI and Cybersecurity, Persuasive AIs, Weaponization, and Geoffrey Hinton talks AI risks 2023-05-02T18:41:43.144Z
AI Safety Newsletter #3: AI policy proposals and a new challenger approaches 2023-04-25T16:15:17.227Z
AI Safety Newsletter #2: ChaosGPT, Natural Selection, and AI Safety in the Media 2023-04-18T18:44:35.923Z
AI Safety Newsletter #1 [CAIS Linkpost] 2023-04-10T20:18:57.485Z
Announcing the Introduction to ML Safety course 2022-08-06T02:46:00.295Z
$20K In Bounties for AI Safety Public Materials 2022-08-05T02:52:47.729Z
Introducing the ML Safety Scholars Program 2022-05-04T16:01:51.575Z
SERI ML Alignment Theory Scholars Program 2022 2022-04-27T00:43:38.221Z
[$20K in Prizes] AI Safety Arguments Competition 2022-04-26T16:13:16.351Z
ML Alignment Theory Program under Evan Hubinger 2021-12-06T00:03:15.443Z

Comments

Comment by ozhang (oliver-zhang) on Cost-effectiveness of professional field-building programs for AI safety research · 2023-07-24T05:59:25.308Z · LW · GW

The main overlap between Modeling the impact of AI safety field-building programs and the other two posts consists of the disclaimers, which we believe should be copied into all three posts, and the main QARY definition, which seemed significant enough to include in each. Beyond that, the intro post is distinct from the two analysis posts.

This post does have much in common with the Cost-effectiveness of student programs for AI safety research; the two posts are structured in a very similar manner. That said, the sections apply the same analysis to different sets of programs, so the graphs, numbers, and conclusions drawn may differ.

It's plausible that we could've dramatically shortened the section "The model" in one of the posts. Ultimately, we decided not to, and instead let the reader decide whether they wanted to skip it. (This has the added benefit of making each post more self-contained.) However, we could see arguments for the opposing view.

Comment by ozhang (oliver-zhang) on Cost-effectiveness of professional field-building programs for AI safety research · 2023-07-23T16:14:03.473Z · LW · GW

Of course!

We ask practitioners who have direct experience with these programs for their beliefs as to which research avenues participants pursue before and after the program. Research relevance (before/without, during, or after) is given by the sum product of these probabilities with CAIS's judgement of the relevance of different research avenues (in the sense defined here). You can find the explicit calculations for workshops at lines 28-81 of this script, and for socials at lines 28-38 of this script.

Using workshop contenders’ research relevance without the program (during and after the program period) as an example:

  1. There are ~107 papers submitted with unique sets of authors. (Not quite -- this is just the number of non-unique authors across submitted papers divided by the average number of authors per paper, which is how the main practitioner interviewed about workshops found it most natural to think through this problem.)
  2. What might the distribution of research avenues among these papers look like without the program?
    1. Practitioners believe: around 3% cover research avenues that CAIS considers to be 100x the relevance of adversarial robustness (e.g. power aversion), 30% cover avenues 10x the relevance of adversarial robustness (e.g. trojans), 30% cover avenues equally relevant to adversarial robustness, and most of the remainder would cover research avenues 0.1x the relevance of adversarial robustness. (The next point covers the remaining remainder.)
    2. Additionally, practitioners believe that in expectation the workshop will produce 0.05 papers with 100x relevance, 0.25 papers with 10x relevance, 1 paper with 2x relevance, 4 papers with 1x relevance, and 1 paper with 0.5x relevance. Of these, the 100x and 10x papers are fully counterfactual, and the remaining papers are 30% likely to be counterfactual.
    3. Calculating out, the average research relevance without the program among contenders comes to 3.41 (see the sketch after this list for the general shape of this calculation).
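
To make the sum-product from the definition above concrete, here is a minimal Python sketch (not the linked scripts). It implements only the step-2.1 share-weighted average; the 3.41 figure quoted above also folds in the counterfactual adjustment from step 2.2, which the linked scripts handle and this snippet does not attempt to reproduce.

```python
# Illustrative sketch only -- the linked scripts remain the authoritative calculation.

# Step 2.1: practitioner-estimated shares of contenders' papers by research-avenue
# relevance, expressed as multiples of adversarial robustness's relevance.
# "Most of the remainder" at 0.1x is simplified here to all of the ~37% remainder.
shares_without_program = {
    100.0: 0.03,  # e.g. power aversion
    10.0:  0.30,  # e.g. trojans
    1.0:   0.30,  # avenues comparable to adversarial robustness
    0.1:   0.37,  # remainder (simplification; see note above)
}

def average_relevance(shares):
    """Sum-product of avenue shares with their relevance multipliers."""
    return sum(multiplier * share for multiplier, share in shares.items())

# This captures only the step-2.1 share-weighted average; the quoted 3.41 also
# reflects the counterfactual adjustment in step 2.2, handled in the linked scripts.
print(round(average_relevance(shares_without_program), 2))
```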

Clearly, this is far from a perfect process. Hence the strong disclaimer regarding parameter values. In future we would want to survey participants before and after, rather than rely on practitioner intuitions. We are very open to the possibility that better future methods would produce parameter values inconsistent with the current parameter values! Our hope with these posts is to provide a helpful framework for thinking about these programs, not to give confident conclusions.

Finally, it’s worth mentioning that the cost-effectiveness of these programs relative to one another does not rely very heavily on these conversions. You can see this by reading off cost-effectiveness from the change in research relevance here. Further, research avenue relevance treatment effects across programs (excluding engineers submitting to the TDC, where we can be more confident) differ by a factor of ~2, whereas differences in e.g. cost per participant are ~20x and in average scientist-equivalence are ~7x.
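
To illustrate why the relative comparison is fairly insensitive to these conversions, here is a rough Python sketch. It assumes, as a simplification not taken from the posts, that cost-effectiveness scales roughly as scientist-equivalence × relevance treatment effect / cost per participant, and the two programs and their numbers are hypothetical, chosen only to match the spreads mentioned above.

```python
# Rough illustration only: assumes cost-effectiveness scales like
#   scientist_equivalence * relevance_treatment_effect / cost_per_participant,
# which is a simplification, not the actual form of the model in the posts.
# The two "programs" and their numbers are hypothetical, chosen to match the
# ~2x / ~7x / ~20x spreads mentioned above.

def cost_effectiveness(p):
    return p["scientist_equivalence"] * p["relevance_effect"] / p["cost_per_participant"]

program_a = {"scientist_equivalence": 7.0, "relevance_effect": 2.0, "cost_per_participant": 1_000}
program_b = {"scientist_equivalence": 1.0, "relevance_effect": 1.0, "cost_per_participant": 20_000}

ratio = cost_effectiveness(program_a) / cost_effectiveness(program_b)
print(ratio)  # 280.0

# Even if the relevance treatment effects were estimated quite differently, they
# only differ by ~2x across programs, so the relative ranking is driven mainly by
# the much larger spreads in cost per participant (~20x) and scientist-equivalence (~7x).
```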

Comment by ozhang (oliver-zhang) on $20K In Bounties for AI Safety Public Materials · 2022-12-01T18:41:41.245Z · LW · GW

Yup! The bounty is still ongoing, now funded by a different source. We have been awarding prizes throughout the duration of the bounty and will post an update in January detailing the results.

Comment by ozhang (oliver-zhang) on ML Alignment Theory Program under Evan Hubinger · 2021-12-07T21:08:47.816Z · LW · GW

Don't have a concrete definition off the top of my head, but I can try to give you a sense of what we're thinking about. "Alignment theory" for us refers to the class of work that reasons about alignment from first principles rather than running actual experiments. (Happy to have a discussion on why this is our focus, if that would be useful.)

Examples: Risks from learned optimization, inaccessible information, most posts in Evan's list of research artifacts.