LASR Labs Summer 2025 applications are open!
post by Erin Robertson, Brandon Riggs (brandon-riggs) · 2025-04-02
TL;DR: apply by April 26th to join a 13-week research programme in AI safety. You'll write a technical paper in a team of 3-4 with supervision from an experienced researcher. The programme is full-time in London.
London AI Safety Research (LASR) Labs is an AI safety research programme dedicated to reducing the risk of loss of control to advanced AI. We focus on action-relevant questions that tackle concrete threat models.
LASR participants are matched into teams of 3-4 and work with a supervisor to write an academic-style paper, with support and management from LASR. We expect LASR Labs to be a good fit for applicants looking to join technical AI safety teams in the next year. Alumni from previous cohorts have gone on to work at UK AISI, Apollo, OpenAI's dangerous capabilities evaluations team, and Open Philanthropy. Many more have continued working with their supervisors, or are doing AI safety research in their PhD programmes. LASR will also be a good fit for someone hoping to publish in academia: all five papers in 2024 were accepted to NeurIPS workshops, and four of the five groups in 2023 had papers accepted to workshops (at NeurIPS) or conferences (ICLR).
Participants will work full-time and in person from the London Initiative for Safe AI (LISA) co-working space, a hub for researchers from organisations such as Apollo Research, BlueDot Impact, ARENA, and the MATS extension programme. The office will host various guest sessions, talks, and networking events.
Programme details:
The programme will run from the 28th of July to the 24th of October (13 weeks). You will receive an £11,000 stipend to cover living expenses in London, and we will also provide food, office space and travel.
In week 0, you will learn about and critically evaluate a handful of technical AI safety research projects with support from LASR. Developing an understanding of which projects might be promising is difficult and often takes many years, but is essential for producing useful AI safety work. Week 0 aims to give participants space to develop their research prioritisation skills and learn about various agendas and their respective routes to value. At the end of the week, participants will express their project preferences, and we will match them into teams.
In the remaining 12 weeks, you will write and then submit an AI safety research paper (as a preprint, workshop paper, or conference paper).
During the programme, flexible and comprehensive support will be available, including:
- Workshops on writing, engineering and research
- Talks from leading AI safety researchers
- Career coaching
- Accountability and productivity assistance
Who should apply?
We are looking for applicants with the following skills:
- Technical ability: Machine learning engineering experience and strong quantitative skills.
- Research ability: Willingness to experiment, iterate, and dive into execution under uncertainty. An ability to develop a theory of change for a project focussed on impact.
- Communication skills: An ability to clearly articulate the outcomes and implications of experiments, coupled with transparent reasoning.
For more detail on how we think about and measure technical and research ability, refer to “tips for empirical alignment research” by Ethan Perez, which outlines in detail the specific skills valued within an AI safety research environment.
There are no specific requirements for experience, but we anticipate successful applicants will have done some of the following:
- Conducted research in a domain relevant to the topics below, or at the intersection of your domain and frontier AI systems.
- Gained experience working with LLMs.
- Worked on research or machine learning in industry.
- Completed (or be partway through) a PhD in a relevant field such as Computer Science, Physics, or Maths.
Research shows that people from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work.
Note: this programme takes place in London. Participants without an existing right to work in the UK will be given support with visas. Please get in touch if you have any visa-related questions: erin[at]lasrlabs.org
Topics and supervisors:
The supervisors for the Summer 2025 round will be announced in the next couple of months. Previous LASR groups have published on important areas in AI safety, focused on reducing risks from advanced AI. We’ve had supervisors from Google DeepMind, the UK AI Security Institute, and top UK universities. These are our outputs from the Summer 2024 programme:
- [Paper] A is for Absorption: Studying Feature Splitting and Absorption in Sparse Autoencoders
- [Paper] Hidden in Plain Text: Emergence and Mitigation of Steganographic Collusion in LLMs
- Evaluating Synthetic Activations composed of SAE Latents in GPT-2
- Characterizing stable regions in the residual stream of LLMs
- Honesty to Subterfuge: In-Context Reinforcement Learning Can Make Honest Models Reward Hack
Timeline:
Application deadline: April 26th at 23:59 BST (GMT+1)
Offers will be sent in early June, following a skills assessment and an interview.
How is this different from other programmes?
There are many similar programmes in AI safety, including MATS, PIBBSS, the Pivotal Research Fellowship, and ERA. We expect all of these programmes to be excellent opportunities to gain relevant skills for a technical AI safety career. LASR Labs might be an especially good option if:
- You’re open to learning in-depth about many different kinds of projects
- You want to focus on producing an academic-style paper
- You like working in a team, with an emphasis on group accountability