LASR Labs Spring 2025 applications are open!
post by Erin Robertson, charlie_griffin (cjgriffin), joehardie, Justin Olive · 2024-10-04T13:44:20.524Z
Edit: Applications for this round are now closed! If you are interested in future rounds, you can express interest here.
TL;DR: apply by October 27th to join a 13-week research programme in AI safety. You’ll write a technical paper in a team of 3-4 with supervision from an experienced researcher. The programme is full-time in London.
Apply to be a participant here. We’re also looking for a programme manager, and you can read more about the role here.
London AI Safety Research (LASR) Labs (previously run as AI Safety Hub Labs) is an AI safety research programme focussed on reducing the risk of loss of control to advanced AI. We focus on action-relevant questions tackling concrete threat models.
LASR participants are matched into teams of 3-4 and will work with a supervisor to write an academic-style paper, with support and management from LASR. We expect LASR Labs to be a good fit for applicants looking to join technical AI safety teams in the next year. Alumni from previous cohorts have gone on to work at UK AISI, OpenAI’s dangerous capabilities evals team, Leap Labs, and def/acc. Many more have continued working with their supervisors, are doing independent research, or are doing AI safety research in their PhD programmes. LASR will also be a good fit for someone hoping to publish in academia; four out of five groups in 2023 had papers accepted to workshops (at NeurIPS) or conferences (ICLR). All of the 2024 cohort’s groups have submitted papers to workshops or conferences.
Participants will work full-time and in person from the London Initiative for Safe AI (LISA) co-working space, a hub for researchers from organisations such as Apollo Research, Bluedot Impact, ARENA, and the MATS extension programme. The office will host various guest sessions, talks, and networking events.
Programme details:
The programme will run from the 10th of February to the 9th of May (13 weeks). You will receive an £11,000 stipend to cover living expenses in London, and we will also provide food, office space and travel.
In week 0, you will learn about and critically evaluate a handful of technical AI safety research projects with support from LASR. Developing an understanding of which projects might be promising is difficult and often takes many years, but is essential for producing useful AI safety work. Week 0 aims to give participants space to develop their research prioritisation skills and learn about various agendas and their respective routes to value. At the end of the week, participants will express their project preferences, and we will match them into teams.
In the remaining 12 weeks, you will write and then submit an AI safety research paper (as a preprint, workshop paper, or conference paper).
During the programme, flexible and comprehensive support will be available, including:
- Reading groups
- Talks from leading AI safety researchers
- Career coaching
- Accountability and productivity assistance
Who should apply?
We are looking for applicants with the following skills:
- Technical ability: Strong quantitative skills and at least some machine learning experience.
- Practicality: Willingness to experiment, iterate, and dive into execution under uncertainty.
- Communication skills: Proficient in clearly explaining the results and implications of experiments. Reasoning transparency.
There are no specific requirements for experience, but we anticipate successful applicants will have done some of these things:
- Conducted research in a domain relevant to the topics below or research at the intersection of your domain and frontier AI systems.
- Gained experience working with LLMs.
- Worked on research or machine learning in industry.
- Completed (or be in the process of completing) a PhD in a relevant field like Computer Science, Physics, Maths, etc.
Research shows that people from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work.
Note: this programme takes place in London. Participants without an existing right to work in the UK will be given support with visas. Please get in touch at erin[at]lasrlabs.org if you have any visa-related questions.
Topics and supervisors:
The supervisors for the Spring 2025 round will be announced by early December. Previous LASR groups have published on important areas in AI safety, focused on reducing risks from advanced AI. We’ve had supervisors from Apollo Research, Decode Research, and top UK universities. We have just released our research outputs from the Summer 2024 programme:
- [Paper] A is for Absorption: Studying Feature Splitting and Absorption in Sparse Autoencoders
- [Paper] Hidden in Plain Text: Emergence and Mitigation of Steganographic Collusion in LLMs
- Evaluating Synthetic Activations composed of SAE Latents in GPT-2
- Characterizing stable regions in the residual stream of LLMs
- Honesty to Subterfuge: In-Context Reinforcement Learning Can Make Honest Models Reward Hack
In earlier rounds, participants have worked on projects relating to: the science of deep learning, multi-agent systems and collusion, theory of alignment in RL, deception in LLMs, interpretability probes and concept extrapolation.
For Spring, we’re excited about a range of areas, including automated interpretability, scalable oversight, capability evals and AI control. If you’re interested in being a supervisor for the Spring programme, send us an email at erin[at]lasrlabs.org.
Timeline:
Application deadline: October 27th at 23:59 UK time (GMT+1)
Offers will be sent in mid-December, following a work test and interview.
How is this different from other programmes?
There are many similar programmes in AI safety, including MATS, PIBBSS, Pivotal Research Fellowship and ERA. We expect all of these programmes to be excellent opportunities to gain relevant skills for a technical AI safety career. LASR Labs might be an especially good option if:
- You’re open to learning in-depth about many different kinds of projects
- You want to focus on producing an academic-style paper
- You like working in a team, with an emphasis on group accountability
How did the programme go last time?
In our feedback from the last round, the average likelihood-to-recommend score for LASR Labs was 9.25/10, and the net promoter score (NPS) was +75.
Testimonials from our recent cohort:
“LASR gave me a lot of confidence to do productive research in the field, and helped me to realize that I am capable and competent. I learned a ton from working with a team of talented collaborators and having a supervisor who was very hands-on and made sure that we succeeded. I feel like my future work will be a lot more productive as a result of LASR!”
“Participating in the LASR Labs program has been an incredible experience and a key opportunity during the early stages of my career transition into AI safety. Erin and Charlie did a fantastic job of securing exceptional research mentors with well-scoped projects, connecting participants with necessary resources, and introducing key topics and ideas during the first week of the program. They created a friendly and helpful environment full of passionate and driven co-workers that I felt incredibly grateful to be a part of. Additionally, working within the LISA offices in London provided an invaluable sense of community, with an abundance of inspiring ideas, presentations, and future career opportunities. I now have a far deeper understanding of the state of AI safety, what it means to produce high-value research, and the engineering skills required.”
“I would highly recommend LASR Labs to anyone looking to move into AI Safety research. The program provides an excellent structure to facilitate upskilling in AI Safety and the production of a high-quality research output. The proposed projects are promising and well-scoped. Working in a team has been enjoyable and allows for faster progress on our research. The LISA offices are an exciting environment to work in. I've found the program highly engaging, feel I've improved as a researcher, and now intend to work full-time on AI safety research in the future.”