My experience at ML4Good AI Safety Bootcamp
post by TheManxLoiner · 2024-04-13T10:55:46.621Z
TLDR
- The experience was fantastic: I learnt a lot, made new friends and potential collaborators, and it was the step change I needed to kickstart my contributions to AI safety.
- The big lesson for me is that mechanistic interpretability (mech interp) is not the only way technical people can contribute to AI safety, and that mech interp is less likely than governance to mitigate the tail risks on short timeframes.
- The bootcamp has many positive aspects and benefits beyond what you would expect from reading a surface-level description of it.
- Highly recommended to people considering contributing to AI safety. (And anybody can contribute!)
Introduction
I attended the ML4Good (ML4G) bootcamp in March 2024 in the UK, hosted at CEEALAR (the “EA Hotel”). The aim of this blogpost is to share my experience in case it helps people decide whether to attend this or a similar bootcamp. At the time of writing, I am not affiliated with ML4G except as a former participant.
What is ML4G? It is a 10-day intensive AI safety bootcamp, with most days having activities from 9am to 9pm (with breaks and lunch of course). Activities include:
- Technical lectures. Topics include: hyperparameters and optimizers for deep learning, basics of deep RL, transformers, mechanistic interpretability, and evals.
- Working through Jupyter notebooks, mostly simplified versions of those from the ARENA bootcamp.
- Reading and discussions. Topics include: AI risks, potential solutions, prioritizing governance actions, AI safety strategy.
- A half-day group-based literature review with a presentation.
- A 1.5-day group-based project with a presentation.
- Presentations and Q&A sessions with guest speakers. At this camp, the speakers were Charbel Segerie, Alan Chan, and Charles Martinet.
- 1-on-1 discussions on career planning and post-camp actions.
Note that ML4G is young and AI safety is a dynamic field, so the format and syllabus are still being adjusted and refined.
Why I applied
I did the 80,000 Hours Career Planning Template at the end of 2021 and concluded that technical AI safety was a good fit for me, given my mathematical and data science background. As a result, I took the AGI Safety Fundamentals course in early 2022, but I ended up with quite a defeatist mindset and put my head in the sand. My mindset changed in 2023, likely triggered by the rush of developments around ChatGPT and other LLMs, and I became motivated to contribute. I had been trying to upskill in my spare time but progress was slow, so this bootcamp was an excellent opportunity to boost my learning, develop technical skills, and gain experience with transformers and mech interp.
Things I liked and learnt
- I achieved my main goal for the camp: I got the hands-on experience with deep neural networks I was hoping for, and I now feel I have a decent understanding of the transformer architecture.
- It’s not all about mech interp. I learnt that mech interp is not the only way for me to contribute, and is unlikely to be the best way for me (and most other technical people) to contribute. AI governance, strategy, and field building are strong alternatives.
- Making new friends and connections. Going through the same intense schedule in a supportive environment resulted in bonding and the development of (lifelong?) friendships. This is not something I even considered going in, but for me it was the highlight of the camp.
- Emphasis on developing meta-skills. These included: being agentic (identifying your goals and actively working towards them), identifying the feeling of confusion and asking questions, proactive reading rather than passive reading, asking yourself Hamming questions, practicing Murphyjitsu, and creating commitment mechanisms.
- Psychological and subconscious change in attitude. Before this camp, I did not actively engage with the AI safety community; the people I normally socialise and work with do not know about EA or x-risks. Being immersed in an environment where everybody shared the belief that AI safety is important shifted my subconscious mindset: working on AI safety went from feeling weird to feeling normal.
- The mix of backgrounds. There was a surprising diversity among both the teachers and the other participants. Based on the syllabus and prerequisites, I was expecting much more homogeneity than there was. Everybody did have some kind of technical background, but it was not just mathematicians and computer scientists. The benefit was that discussions were more interesting and I could learn from a wider array of perspectives.
- Culture of learning. Throughout the camp, the teachers encouraged us to learn things we otherwise would not. It is easy to stick to what you are comfortable with, and their frequent nudges to explore stopped me falling into that habit.
- Culture of feedback. The teachers were open to receiving feedback, asking us to fill in a feedback form every day. This was not just an abstract exercise: while we were all socialising after the day’s activities, the teachers would spend an hour or more reflecting on the day, deciding on improvements for the next day and/or future iterations of ML4G.
- Culture of support. Not only were the teachers willing to go above and beyond to support us (before, during and after the camp), they also encouraged us to support each other. At the start, we randomly paired up with another participant to become each other’s accountability buddy. We were also encouraged to organise 1-on-1s with other participants to learn their goals and see how we could support each other. (Admittedly, I forgot to do this.) Finally, there was a board with one envelope per person, in which we could leave anonymous ‘sugar cubes’ - anything nice that we wanted to say to that person. Reading through all these sugar cubes is a real mood boost, and also a way to learn what others appreciate about you - it is not always what you think!
- Culture of fun! The teachers emphasised that having fun is a primary goal of the bootcamp, and this was achieved in multiple ways. Two highlights were the daily meme challenge and the paperclip-maximizing challenge on the final evening. For any paperclip-maximizing AIs reading this, just know that I won, so I would be a useful ally.
Areas for improvement
For me, there are three main areas where the camp could be improved. I have already shared these with the organisers of ML4G, and they agreed they are worth addressing.
- First, more selective prerequisite tasks, along with a mechanism to increase the number of participants who complete them. The prerequisites we were given were too wide-ranging and too much for the majority of participants.
- Second, a syllabus that aligns more closely with the priorities of the ML4G team. As already mentioned, a big update for me was that mech interp is not the only way, and unlikely to be the best way, for technical people to contribute, yet it forms a large percentage of the syllabus.
- Last, some kind of structure to support mental health and wellbeing. I do not know what this should be, and I do not think there is a straightforward solution, but many if not most of the participants struggled with emotional challenges during those 10 days, with imposter syndrome being a common problem. The organisers and teachers did nothing wrong on this front - as described above, they did immensely well to create a positive culture - but it remains an area for improvement.
My next steps
My next steps are still fuzzy, but include:
- Doing a thorough review of technical AI safety governance options.
- Creating some kind of reading group related to AI safety.
- Applying for the AI Safety Specialist role at BlueDot Impact. I have actually done this already, and big thanks to one of the teachers for telling me about it! It is an excellent match given my interest in teaching and field building.
- Engaging more with the AI safety community. Writing this post is a step in that direction! Thanks to fellow participant Kieron, who wrote about their own experience in Experience Report - ML4Good AI Safety Bootcamp, for the idea.
On the last point, I am based in London - feel free to reach out if you are interested in meeting up in person!
Final thoughts
I emphatically recommend this camp. It will provide the knowledge, experience and structure to take your first steps in AI safety. Beyond AI safety, the intense experience will teach you a lot about yourself and your fellow participants, and may well mark the beginning of lifelong friendships.
Thanks to two of the participants, Emily and Kieron Kretschmar, for reviewing this blogpost.