Retrospective: PIBBSS Fellowship 2023

post by DusanDNesic, Nora_Ammann · 2024-02-16T17:48:32.151Z · LW · GW · 1 comment

Contents

  Background
    Fellowship Theory of Change
      PIBBSS overall
      Zooming in on the fellowship
    Brief overview of the program
  Reflections
    How did it go according to fellows?
    How did it go according to mentors? 
    How did it go according to organizers? 
  Appendix
    A complete list of research projects

Between June and September 2023, we (Nora and Dusan) ran the second iteration of the PIBBSS Summer Fellowship. In this post, we share some of our main reflections about how the program went, and what we learnt about running it. 

We first provide some background information about (1) The theory of change behind the fellowship [LW · GW], and (2) A summary of key program design features [LW · GW]. In the second part, we share our reflections on (3) how the 2023 program went [LW(p) · GW(p)], and (4) what we learned from running it [LW · GW]. 

This post builds on an extensive internal report we produced back in September. We focus on information we think is most likely to be relevant to third parties, in particular:

Also see our reflections on the 2022 fellowship program [LW · GW]. If you have thoughts on how we can improve, you can use this name-optional feedback form.

Background

Fellowship Theory of Change

Before focusing on the fellowship specifically, we will give some context on PIBBSS as an organization. 

PIBBSS overall

PIBBSS is a research initiative focused on leveraging insights and talent from fields that study intelligent behavior in natural systems to help make progress on questions in AI risk and safety. To this aim, we run several programs focusing on research, talent and field-building. 

The focus of this post is our fellowship program - centrally a talent intervention. We ran the second iteration of the fellowship program in summer 2023, and are currently in the process of selecting fellows for the 2024 edition. 

Since PIBBSS' inception, our guesses for what is most valuable to do have evolved. Since the latter half of 2023, we have started taking steps towards focusing on more concrete and more inside-view-driven research directions. To this end, we started hosting several full-time research affiliates in January 2024. We are currently working on a more comprehensive update to our vision, strategy, and plans, and will share these developments in an upcoming post. 

PIBBSS also pursues a range of other efforts aimed more broadly at field-building, including (co-)organizing a range of topic-specific AI safety workshops and hosting semi-regular speaker events [LW · GW] which feature research from a range of fields studying intelligent behavior and exploring their connections to the problem of AI Risk and Safety.

Zooming in on the fellowship

The Summer Research Fellowship pairs fellows (typically PhDs or postdocs) from disciplines studying complex and intelligent behavior in natural and social systems with mentors from AI alignment. Over the course of the 3-month program, fellows and mentors work on a collaborative research project, and fellows are supported in developing proficiency in skills relevant to AI safety research. 

One of the driving rationales behind our decision to run the program is that (a) we believe there are many areas of expertise (beyond computer science and machine learning) with useful, if not critical, insights, perspectives, and methods to contribute to mitigating AI risk, and (b) to the best of our knowledge, no other program specifically aims to provide an entry point into technical AI safety research for people from such fields.

What we think the program can offer: 

In terms of secondary effects, the fellowship has significantly helped us cultivate a thriving and growing research network that cuts across typical disciplinary boundaries and combines more theoretical and more empirically driven approaches to AI safety research. This has synergized well with other endeavors already present in the AI risk space, and continuously provides us with surface area for new ideas and opportunities. 

Brief overview of the program

The fellowship started in mid-June with an opening retreat, and ended in mid-September with a final retreat and the delivery of Symposium presentations. Leading up to that, fellows participated in reading groups [LW · GW] (developed by TJ [LW · GW]) aimed at bringing them up to speed on key issues in AI risk. For the first half of the fellowship, fellows worked remotely; during the second half, we all worked from a shared office space in Prague (FixedPoint).

Visual representation of the PIBBSS program in 2023


We accepted 18 fellows in total, paired with 11 mentors. (You can find the full list of fellows and mentors on our website.) Most mentors were paired up with a single fellow, some mentors worked with two fellows, and a handful of fellows pursued their own research without a mentor. We have a fairly high bar for fellows working on their own project without mentorship; these were cases where we were both sufficiently excited about the suggested research direction and had enough evidence about the fellows' ability to work independently. Ex post, we think this essentially worked well, and that this format can partially alleviate the mentorship bottleneck experienced by the field. 

Beyond mentorship, fellows are supported in various ways: 

We made some changes to the program structure compared to 2022: 

Organizing the fellowship has taken ~1.5 FTE split between two people, as well as various smaller bits of work from external collaborators, e.g. help with evaluating applications, facilitating reading groups, and developing a software solution for managing applications.  

Reflections

How did it go according to fellows?

Overall, fellows reported being satisfied with being part of the program, and having made useful connections.

Some (anonymized) testimonials from fellows (taken from our final survey): 

How did it go according to mentors? 

Mentors overall find the fellowship a good use of their time, and strongly think the fellowship should happen again.

Some (anonymized) testimonials from mentors (taken from our final survey): 

How did it go according to organizers? 

Appendix

A complete list of research projects

1 comment

Comments sorted by top scores.

comment by Linda Linsefors · 2024-03-08T17:52:59.560Z · LW(p) · GW(p)

Did you forget to provide links to research project outputs in the appendix? Or is there some other reason for this?