Experience Report - ML4Good AI Safety Bootcamp
post by Kieron Kretschmar · 2024-04-11T18:03:41.040Z
In this post I reflect on my experience participating in the ML4Good bootcamp (UK, March 2024). I am writing this mainly for my own benefit, both to reflect and to start making a habit of engaging with the community on LessWrong, but also in the hope of helping future participants. If even one person finds it helpful for deciding whether to apply to a future iteration of ML4Good or a similar program, I'm more than happy.
Opinions are my own and relate to this iteration of ML4Good (the program may change in the future). I am not affiliated with ML4Good beyond having participated in this program.
My expectations
I applied for three main reasons. First, I wanted to connect with more AI-safety-minded people in Europe and get a better feel for the EU landscape of AI safety initiatives. My second goal was to train some best practices in PyTorch, which I would simply not set aside time for at home. Lastly, I saw this as a fun opportunity to break out of my routine of writing my thesis.
The program
ML4Good is a 10-day bootcamp on topics in ML and AI safety, aimed at participants from a variety of backgrounds who have at least some familiarity with Python.
For this iteration, the topics fell into three broad categories: technical topics (including optimization, transformers, RL, RLHF, interpretability, and adversarial attacks); conceptual and strategic topics (capabilities, AI risks, AI safety agendas, theory of change, and career planning); and governance and policy (including international governance, compute governance, and evaluations).
The schedule was intense, with activities planned from 9am to 9pm almost every day, mixing lectures, discussions, and opportunities for self-study. A particular focus was placed on generating, getting feedback on, and refining ideas that participants might pursue after the camp.
A typical day might have looked like this:
| Time | Activity |
|---|---|
| 08:30 - 09:00 | Breakfast |
| 09:00 - 10:20 | Lecture, e.g. on transformers or RLHF |
| 10:20 - 10:30 | Short break |
| 10:30 - 12:00 | Work on Jupyter notebooks, alone or in small teams |
| 12:00 - 13:00 | Workshop with discussions, e.g. on forecasting or theory of change |
| 13:00 - 14:00 | Lunch |
| 14:00 - 18:30 | Small group projects, e.g. a literature review |
| 18:30 - 19:30 | Dinner |
| 19:30 - 21:00 | Presentation of group projects |
| After 21:00 | Leisure time |
The teachers were highly motivated and competent, each with their own specialization (e.g. interpretability, governance, evals), and always eager to help with specific questions in 1-on-1s. During discussions, they took care not to impose their own opinions, staying in the background and letting participants discuss amongst themselves.
The event took place at CEEALAR in Blackpool (UK) from March 28th to April 7th. Participation was free of charge, including accommodation, several (delicious) meals and snacks, as well as travel support in some cases. Most participants came from different countries in Europe.
The culture
The organizers did a great job of selecting a heterogeneous crowd of participants: some had full-time jobs in the EA sphere, while others had never heard of EA or the alignment problem before; some were undergraduate students, while others had years of postdoctoral research experience. I believe this diversity made it easier for people to feel comfortable and voice their own opinions once it became clear that not every participant was affiliated with EA and not everyone had a background in computer science. Personally, I would love to see more events with such a variety of participants, as it helps combat groupthink and makes AI safety more accessible. It also made the conversations during breaks and meals much more interesting.
While the long days were exhausting, the intensity of the program also brought everybody closer together. We had communal fits of laughter every day, and somebody was always up to something in the few free hours, e.g. playing games, going to a nearby pub, or making music. I met many people whom I'd love to meet again or even collaborate with in the future. I had not expected to develop new friendships when I initially applied, but was pleasantly surprised.
My personal learnings
Being close to graduating from a two-year AI programme and having read BlueDot's AI Safety Fundamentals and Alignment 201 curricula, I was already familiar with many of the topics taught and practiced in the more technical workshops. Initially, I was a bit afraid I would not learn much during these sessions. However, the teachers encouraged me to use that time to focus on adjacent topics, e.g. coming up with critiques of particular papers, or practicing with the einops library. This was super helpful and is one of the reasons I feel the time and effort were a good investment.
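To give a flavour of that einops practice, here is a minimal sketch of the kind of tensor rearrangement the library makes readable (a hypothetical example of my own, not taken from the camp materials):

```python
import torch
from einops import rearrange

# A batch of images: (batch, channels, height, width)
images = torch.randn(8, 3, 32, 32)

# Split each image into 4x4 patches and flatten each patch,
# similar to how inputs are prepared for a vision transformer.
patches = rearrange(
    images,
    "b c (h p1) (w p2) -> b (h w) (p1 p2 c)",
    p1=4, p2=4,
)
print(patches.shape)  # torch.Size([8, 64, 48])
```

The pattern string doubles as documentation of the tensor layout, which is what makes this a nice habit to build.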
My biggest updates came from the activities related to AI governance. Even prior to the camp, governance seemed to me one of the most important aspects of AI safety, but I had never really engaged with the field, thinking my background was too technical to contribute. Boy, was I wrong! The workshops made topics in AI governance accessible. Beyond that, the teachers highlighted the need for more technical expertise in the field and showcased a variety of ways to contribute, ranging from technical research aimed at supporting policymakers (e.g. on compute governance) to field-building (e.g. educating people who want to contribute to AI governance but are less familiar with the relevant technologies). I left the bootcamp with multiple ideas for how to learn more about the field and where to look for opportunities to contribute.
Conclusion
ML4Good was a fun and valuable experience for me. I would recommend applying to future iterations even to people who feel under- or overqualified, as the team did a good job of encouraging participants to study at their own pace. Personally, I could have done with a slightly less intense schedule, as the later sessions of each day often became less productive, but I also see the upsides of the intensity. Just try to make sure you have one or two days to recover after leaving the camp.
Thanks to the organizers and participants who made this event happen the way it did, and especially to Lovkush A., Monika J. and Emily F. for giving feedback on this post.