Sydney AI Safety Fellowship Review

post by Chris_Leong · 2022-04-02T07:11:45.130Z · LW · GW · 0 comments

Contents

  Inspiration:
  Target Audience:
  Last Minute Organising:
  Impacts of Covid:
  Team:
  Projects:
  Selectivity
  Speakers
  Working fellows:
  Food:
  Impact:
  Miscellaneous things:
  Acknowledgments:

The Sydney AI Safety Fellowship was a 7-week program with a coworking space, speakers, mentors and social activities. The idea was for each participant to have a "project", but only one had a research project, whilst the others dedicated their time to figuring out their plans for the future. More specifically, they were thinking about whether they wanted to shift their careers or prospective PhDs towards AI safety and, if so, what they should focus on and what their next step would be.

Inspiration:

There were four main influences:

Target Audience:

Given that it was a 7-week program, the natural target audience was students or people in the middle of a career transition. I'm thinking of complementing this program with a retreat later in the year to suit busy professionals so that our local activities cover different market segments.

Last Minute Organising:

Unfortunately, this idea came to me quite late and since the university summer break is fixed, I couldn't have pushed it back without having to wait until next year. We also didn't know until very late whether we were going to be funded by the Long Term Future Fund (LTFF) or not. The result was that this fellowship was organised at the last minute.

I expect this suppressed applications because many potential candidates would already have lined up something else, but also because our initial advertising was very vague about how much funding we'd be able to provide to participants. I made a mistake here by being too reluctant to send the LTFF a follow-up email until it really was the last minute.

The impact of organising things at the last minute was worse than I expected. Often I'd send a message to the advisor chat, wait maybe six hours, then make a decision before anyone responded because I didn't think there was time to wait any further. I'm still somewhat uncomfortable about how many decisions I made unilaterally. There was also much more of a burden on me because as soon as I finished one task, I had to work on the next one. Sometimes I delayed completing tasks by a few days as I was focusing on work. Normally this wouldn't have mattered, but it's more significant when organising things at the last minute. I'm really hoping that we can avoid this next year by starting to organise much earlier.

Impacts of Covid:

Covid had a much greater impact on the program than I was expecting. When I was initially planning the program, it didn't look like it was going to be an issue. Pretty much anyone who wanted to be double vaccinated was double vaccinated, and while there might have been some talk about Omicron, it was far from clear whether it'd be an issue in Australia as we'd weathered the other waves well. Unfortunately, the fellowship was scheduled to start right around the peak of Omicron in Australia and it significantly disrupted the program.

For a start, I was stuck in isolation for the first few weeks of the program. This placed substantially more load on my co-organiser than he'd been expecting. This also created issues for participants from New Zealand who would have had to quarantine when returning home. We only received one application from New Zealand - from a participant who wouldn't have to quarantine as they were planning to head to the US afterwards anyway.

Team:

James Dao was my co-organiser for this project and I would have struggled to manage it without him. James indicated that he might be interested in applying, but his project was more focused on applying AI to biosecurity than AI Safety. This was out of scope, but instead of just saying no, I asked if he wanted to be a co-organiser as I knew him and I thought he'd do a good job. I'm really glad I made this decision in retrospect, especially because I was stuck in isolation for the first few weeks, but also because I would have struggled with the load if it had just been me.

At first, I was a bit reluctant to have a co-organiser because it added significant overhead given the small size of the program. In retrospect, this was silly. Even without Covid, it's important to have redundancy as unexpected things can happen. I did have JJ as my main advisor, but originally he was going to be overseas during the program, so if his plans hadn't changed, he wouldn't have been available to step in if things had gone wrong. Beyond this, the main expected value from pilot programs comes from their potential to lead to future programs, and the chance of this is greater with a co-organiser.

Neil Ferro was social director. The offer I made to him was that he'd be able to attend any socials for free if he helped organise them. In general, I think this is a good way of bringing people on board.

JJ Hepboin and Ivan Burduk provided significant advice, with Greg Colbourn and John Maxwell also providing assistance. One thing I want to be more careful with in future is making sure that I take up less of people's time.

Projects:

The list of projects included on the application form was as follows:

  • By far the project that we would be most excited about would be two people spending the time thinking deeply about the AI safety ecosystem in Australia and NZ and how to improve it, especially if they then organised something as a result of this.
  • Planning an event such as a retreat or conference or mentorship program related to AI Safety. Or AI Safety related event programming for EAGx.
  • Someone who has already demonstrated research capability (such as by gaining a PhD or first class honours) pursuing an independent research project
  • Someone doing an internship (paid or unpaid) or volunteering for an AI safety org (if you are doing proper paid work for an AI Safety org, it seems that you or your employer should cover your main costs; however, you can apply anyway if you want to join the other activities)
  • Someone doing research through another program like SERI/AI Safety Camp Virtual or with a pre-existing mentor
  • People training up in ML if they had a strong enough history of involvement in the EA/AI safety space that it seemed likely they would use it for AI Safety purposes
  • Someone spending a bunch of time improving LW wiki entries related to AI Safety
  • Someone volunteering as a research assistant to an already established researcher
  • Someone working on this project: https://www.lesswrong.com/posts/zRn6cLtxyNodudzhw/visible-thoughts-project-and-bounty-announcement

The idea was to try to find useful ways for people to spend their time given that we couldn't make guarantees about mentorship. I was most excited about finding two people to analyse what needed to be done in terms of local movement-building, but unfortunately, the applicants who wrote this down mostly included it because I wrote this as the top priority, rather than because this was something they were particularly excited about or had relevant experience in. This was a significant update for me. Michael Aird had suggested that it'd be harder to find people to do this than I thought and he turned out to be right[2].

As you may have noticed, the most common activity engaged in by fellows wasn't even on the list. When I talked to the applicants, I realised that two of the fellows had just completed their honours years and were on track to begin PhDs. It seemed silly to distract them with a research project from the much more important task of trying to ensure that their PhDs went well. Further, another two participants were looking to make a career change.

In the future, I think I'll ask any participant engaging in a reflection project to produce a short document during the fellowship, as this will provide extra visibility into their progress and make it easier for me or other participants to offer assistance. My plan is to ask them to produce a set of key questions at the start of the fellowship, post an update on their progress in the middle, and then post a final document at the end.

Selectivity

We accepted every application we received[1]. This isn't to say that we would have accepted anyone into the program, just that we felt that every application was worth accepting. I was somewhat anxious about this as I think it would have looked better to potential funders if we'd had more applicants than capacity and had therefore been forced to reject some. On the other hand, given that I was happy with the quality of the fellows, it feels like it would have been a bit silly to be disappointed that we didn't have a bunch of bad applications merely for the purpose of rejecting them.

The big lesson for me here is that self-selection effects are strong. If you advertise a 7-week fellowship at the last minute with uncertain funding and uncertain mentorship, it shouldn't be surprising that the only applicants would be people who are pretty serious about it.

Speakers

I generally feel that talks are overrated. People often organise talks at these kinds of things because they feel a fellowship has to have talks, rather than because they actually want to hear the speakers speak[3]. So I really wanted to find a way to deliver more value to participants.

Given the small scale of the fellowship, I felt it made sense to focus on Q&A so that participants could learn about the things they were interested in. However, I also wanted the participants to have context about a speaker's knowledge and experience, so I encouraged each speaker to give a lightning talk first.

I think the term "lightning talk" may have been too formal, so next time I'll probably ask speakers to provide some "introductory remarks" and mention that this could take the form of a summary of their project and areas of interest, or of a lightning talk.

Working fellows:

One idea I had was to award working fellowships: subsidised coworking memberships for up to two skilled AI professionals who would then share their knowledge with the participants just by being around. We didn't receive any applications, so I offered a full coworking subsidy instead of half, but we still didn't receive any. In the end, we only managed to find a working fellow because I asked a friend to volunteer (as a favour to me). But then the Omicron wave happened and he decided to stay home to avoid catching it[4].

So this idea didn't work out, but it wasn't a complete failure as he did give a lightning talk and organise one group discussion.

Food:

Impact:

I expect most of the impact will come from integrating into the community two people who were interested in AI Safety but hadn't really taken action on it. For example, they talked to 80,000 Hours and Effective Thesis, attended the Stanford Existential Risk Conference, and are both now applying for funding from EA sources. The other, more experienced participants also played a vital role in the process of integrating them into the community. There may also have been some additional impact on two other participants who were already looking at changing careers, but who may have been nudged into taking more action.

I suspect that we'll have more applicants in future years as we'll be more established and we'll advertise the fellowship further ahead of time. However, if I'm running other local movement-building activities the counterfactual impact may be reduced as these activities may already be engaging people that the fellowship would have otherwise engaged.

Miscellaneous things:

Acknowledgments:

James Dao - Assistant Organiser
Neil Ferro - Social Director
JJ Hepboin, Ivan Burduk - Main advisors
John Maxwell, Greg Colbourn - Additional advisors
Hunter Jay, Greg Colbourn, Eugene Lubarsky - Backstop funding
Everyone who commented on the Google Document

  1. ^

    We ended up with high-quality mentors, so we likely could have accepted people who were more junior.

  2. ^

    I was already leaning in this direction, but his comments really solidified it for me.

  3. ^

    Scott Alexander writes: "I remember in college some friends set up an Atheist Club. There was a Christian Club, and a Buddhist Club, so why shouldn’t the atheists get a club too? So they wrote the charter, they set a meeting time, and then we realized none of us knew what exactly the Atheist Club was supposed to do. The Christian Club prayed and did Bible study; the Buddhist club meditated, the atheist club…sat around and tried to brainstorm Atheist Club activities. Occasionally we came up with some, like watching movies relevant to atheism, or having speakers come in and talk about how creationism was really bad. But we weren’t doing this because we really wanted to watch movies relevant to atheism, or because we were interested in what speakers had to say about creationism. We were doing this because we’d started an Atheist Club and now we had to come up with a purpose for it."

  4. ^

    A decision I totally respect.
